arXiv ID: 2307.16887
Title: Data-Based MHE for Agile Quadrotor Flight
Authors: Wonoo Choo, Erkan Kayacan
Published: 2023-07-31T17:52:54Z
Link: http://arxiv.org/abs/2307.16887v1

# Data-Based MHE for Agile Quadrotor Flight
###### Abstract
This paper develops a data-based moving horizon estimation (MHE) method for agile quadrotors. Accurate state estimation of the system is paramount for precise trajectory control of agile quadrotors; however, the high level of aerodynamic forces experienced by the quadrotors during high-speed flights makes this task extremely challenging. These complex turbulent effects are difficult to model and the unmodelled dynamics introduce inaccuracies in the state estimation. In this work, we propose a method to model these aerodynamic effects using Gaussian Processes, which we integrate into the MHE to achieve efficient and accurate state estimation with minimal computational burden. Through extensive simulation and experimental studies, this method has demonstrated significant improvement in state estimation performance, displaying superior robustness to poor state measurements.
## I Introduction
The agility of quadrotors makes them an ideal platform for many tasks and applications that require high speeds and maneuverability, such as exploration, inspection, light shows, drone filming, or search and rescue missions [1, 2, 3, 4, 5]. The ability to execute high-speed agile flights makes quadrotors versatile and desirable for many applications; however, acquiring the full state information of the system is a challenging task, which is further encumbered by their restricted payload capacity [1, 6]. Quadrotors are generally equipped with a GPS and an IMU. This combination of sensors measures the position, angular velocity and linear acceleration of the system, requiring the angular position and the linear velocity to be estimated. Moreover, due to the limited capability of the sensors, the measured states also need to be estimated to handle the measurement noise.
The difficulty of designing an accurate state estimator lies in modeling the dynamics of the system. The aerodynamic effects that quadrotors experience during agile flights are challenging to model, as they are a combination of propeller lift, interaction between the propellers, and drag from the rotors and the fuselage [7, 8]. Furthermore, the model accuracy and complexity that can be utilized are constrained by the limited computing capabilities of the onboard execution platform. Using a kinematic model can be an advantageous option over a dynamic model due to the high levels of unmodelled aerodynamic effects. However, low-quality sensors can degrade the performance of a kinematic-model-based estimator [9]. A dynamic-model-based estimator with an accurate model can be more robust to poor state measurements and can estimate unknown system parameters online, such as the payload mass, which can improve the state estimation and trajectory tracking performance [10, 11, 12].
For nonlinear systems, the most commonly used state estimation method is the Extended Kalman Filter (EKF) [1, 10]. However, the highly nonlinear behavior of agile quadrotors can lead to poor performance, as the EKF linearizes the nonlinear model around the current state estimates using Taylor expansions [3, 13]. The authors of [14] utilize a Convolutional Neural Network (CNN) to learn the IMU kinematic properties and the dynamics of the quadrotor, which is integrated with the EKF to improve the state estimation. Though this method improves the performance of the EKF, it requires a large dataset of ground truth measurements to train the CNN, which can be difficult to obtain [14].
Moving Horizon Estimation (MHE) is an alternative to the EKF that is able to handle nonlinear models and inequality constraints on the system without linearization, which distinguishes it from other state estimation methods [11, 15]. MHE is an optimization-based method that utilizes the system model and the past measurements to estimate the system states. It is robust to poor initial guesses and guarantees local stability, in contrast to the EKF, which cannot guarantee convergence in general and may fail with poor initialization [10, 15].
MHE and Model Predictive Control (MPC) are often referred to as dual optimization problems of each other, as they share the same optimal control structure [3, 16]. The authors of [7] incorporated Gaussian Processes (GPs) into a simple nominal model of the system to improve the closed-loop tracking control of the quadrotor by enhancing the dynamic model of the system. The GP models were trained to learn the unmodelled aerodynamic effects of the system that are challenging to model. As the performance of the MHE is also heavily dependent on the accuracy of the system model, we propose extending this approach to state estimation.
In this paper, we develop an MHE pipeline augmented with GP regression models that have learned the residual dynamics using the onboard sensor measurements. We extend the data-driven approach for MPC presented in [7] to MHE by combining a simple dynamic model with GPs to improve the state estimation performance of agile quadrotors. This method significantly reduces the learning problem by only considering the unmodelled dynamics of the system, and the data collection process is substantially simplified by removing the requirement of ground truth odometry. Moreover, the additional computation from the GPs is minimized by only requiring single-point predictions from the current state measurements at each time step. We conduct comparative simulation and experimental studies of the developed method against MHEs with a kinematic model and a dynamic model, with and without an unknown system parameter, at varying noise levels on the measurements. At higher noise levels, the GP augmented MHE displayed improved robustness to poor state measurements over the other MHEs, with the added capability of estimating a varying payload mass.

Fig. 1: Architecture of the GP based MHE. The offline computations are denoted in red and online computations are denoted in black.
The paper is organized as follows: The dynamic model of the quadrotor is presented in Section II. The formulation of the MHEs investigated in this study is described in Section III. The traditional formulation of GPs and the developed methodology for data collection and model learning are presented in Section IV. The simulation studies are presented in Section V and the experimental studies in Section VI. Finally, a brief conclusion is drawn in Section VII.
## II Preliminaries
### _Notation_
In this paper, we denote scalars with lowercase, vectors with bold lowercase, and matrices with bold uppercase letters. The Euclidean vector norm is denoted as \(\|\cdot\|\). The World and Body frame axes are shown in Fig. 2. The Body frame is at the center of mass of the quadrotor and the rotors are assumed to be on the \(xy\)-plane of the Body frame. A vector \(\mathbf{v}\) pointing from \(\mathbf{p}_{1}\) to \(\mathbf{p}_{2}\) in the World frame is denoted as \({}_{W}\mathbf{v}_{12}\). If \(\mathbf{p}_{1}\) is the origin of the frame it is described in, then the pre-subscript is omitted. The orientation of the quadrotor is represented using a unit quaternion, \(\mathbf{q}_{WB}=(q_{w},q_{x},q_{y},q_{z})\) with \(\|\mathbf{q}_{WB}\|=1\). The quaternion frame rotation is given by the quaternion-vector product, \(\odot\), such that \(\mathbf{q}\odot\mathbf{v}=\mathbf{qv}\mathbf{\bar{q}}\), where \(\mathbf{\bar{q}}\) is the conjugate of \(\mathbf{q}\).
### _Quadrotor Dynamics_
The quadrotor is modeled as a 6 degree-of-freedom rigid body with mass \(m\). The quadrotor's state, comprising position, orientation, linear velocity and angular velocity, is denoted as \(\mathbf{x}=[\mathbf{p}_{WB},\,\mathbf{q}_{WB},\,\mathbf{v}_{WB},\,\mathbf{\omega}_{B}]^{\intercal}\). The control input is given by the collective thrust \(\mathrm{f}_{thrust}\), and the linear acceleration in the body frame, \(\mathbf{a}_{B}\), is given by (1).
\[\mathbf{a}_{B}=\begin{bmatrix}0\\ 0\\ \mathrm{f}_{thrust}/m\end{bmatrix} \tag{1}\]
Here the angular velocity dynamics were ignored as the individual thrusts of the rotors were unknown. The dynamics of the angular velocity were set to 0 so that the angular velocity can be formulated as a measurement state in the MHE. The nonlinear state-space model of the system is 13-dimensional, and its dynamics are given by (2), where \(\mathbf{g}_{W}\) denotes the Earth's gravity.
\[\dot{\mathbf{x}} = \mathbf{f}_{dyn}(\mathbf{x},\mathbf{u})=\begin{bmatrix}\mathbf{v}_{WB}\\ \mathbf{q}_{WB}\cdot\begin{bmatrix}0\\ \mathbf{\omega}_{B}/2\end{bmatrix}\\ \mathbf{q}_{WB}\odot\mathbf{a}_{B}+\mathbf{g}_{W}\\ 0\end{bmatrix} \tag{2}\]
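To make the nominal model concrete, the following is a minimal NumPy sketch of (1)-(2); the function and variable names, the default mass value and the gravity vector are illustrative choices added here, not taken from the paper.

```python
import numpy as np

def quat_mult(q, p):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    qw, qx, qy, qz = q
    pw, px, py, pz = p
    return np.array([qw*pw - qx*px - qy*py - qz*pz,
                     qw*px + qx*pw + qy*pz - qz*py,
                     qw*py - qx*pz + qy*pw + qz*px,
                     qw*pz + qx*py - qy*px + qz*pw])

def quat_rotate(q, v):
    """Frame rotation q ⊙ v = q v q̄ for a unit quaternion q and a 3-vector v."""
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    return quat_mult(quat_mult(q, np.concatenate(([0.0], v))), q_conj)[1:]

def f_dyn(x, f_thrust, m=1.0):
    """Nominal dynamics (2); x = [p(3), q(4), v(3), w(3)]."""
    g_w = np.array([0.0, 0.0, -9.81])
    q, v, w = x[3:7], x[7:10], x[10:13]
    a_body = np.array([0.0, 0.0, f_thrust / m])             # eq. (1)
    p_dot = v
    q_dot = 0.5 * quat_mult(q, np.concatenate(([0.0], w)))  # quaternion kinematics
    v_dot = quat_rotate(q, a_body) + g_w                    # thrust rotated to world frame + gravity
    w_dot = np.zeros(3)                                     # body rates treated as measurement states
    return np.concatenate([p_dot, q_dot, v_dot, w_dot])
```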
## III MHE Formulation
MHE is an optimization-based state estimation method that utilizes the system model and the past measurements [13]. Theoretically, the MHE solves for an infinite horizon optimization problem [10, 11]. However, as this is computationally intractable in real-time, MHE reconciles only a finite number of recent measurements in the estimation horizon of length \(N\) and the measurements collected before the estimation horizon are summarized by the arrival cost \(\bar{\mathbf{x}}_{k-N}\)[10, 13, 15]. The MHE problem is formulated as follows:
\[\begin{aligned}\min_{\mathbf{x},\mathbf{w}}\quad&\sum_{i=k-N}^{k}\|\mathbf{y}_{i}-\hat{\mathbf{y}}_{i}\|_{\mathbf{R}_{k}^{-1}}^{2}+\|\mathbf{w}_{i}\|_{\mathbf{Q}_{k}^{-1}}^{2}+\|\mathbf{x}_{k-N}-\bar{\mathbf{x}}_{k-N}\|_{\mathbf{Q}_{k-N}^{-1}}^{2}&&\text{(3a)}\\ \text{subject to}\quad&\mathbf{x}_{k+1}=\mathbf{f}_{RK4}(\mathbf{x}_{k},\mathbf{u}_{k})+\mathbf{w}_{k}&&\text{(3b)}\\ &\mathbf{y}_{k}=\mathbf{h}(\mathbf{x}_{k},\mathbf{u}_{k})+\boldsymbol{\nu}_{k}&&\text{(3c)}\\ &\mathbf{x}_{k}=\mathbf{x}(t_{k})&&\text{(3d)}\end{aligned}\]
where \(\mathbf{Q}\in\mathbb{R}^{n_{x}\times n_{x}}\) and \(\mathbf{R}\in\mathbb{R}^{n_{y}\times n_{y}}\) are symmetric positive semi-definite weighting matrices and the system uncertainty is denoted by \(\mathbf{w}_{k}\sim\mathcal{N}(0,\mathbf{Q}_{k})\). The output function \(\mathbf{h}(\cdot)\) maps the system states to the measurements \(\mathbf{y}_{k}\) where \(\mathbf{\nu}_{k}\sim\mathcal{N}(0,\mathbf{R}_{k})\) is the measurement noise. The measurement noise is an independent Gaussian noise with diagonal covariance given by:
\[\mathbf{R}_{k}=\mathrm{diag}(\sigma_{p_{x}}^{2},\sigma_{p_{y}}^{2},\sigma_{p_{z}}^{2},\sigma_{\omega_{x}}^{2},\sigma_{\omega_{y}}^{2},\sigma_{\omega_{z}}^{2},\sigma_{a_{x}}^{2},\sigma_{a_{y}}^{2},\sigma_{a_{z}}^{2}) \tag{4}\]
where \(\sigma_{p}\), \(\sigma_{\omega}\) and \(\sigma_{a}\) are the standard deviation of the noise on the position, body rate and linear acceleration measurements respectively.
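As an illustration of how the horizon cost (3a) and the covariance (4) fit together, here is a schematic NumPy evaluation of the objective over one window; the function names, the measurement map `h` and the example noise values are placeholders for whatever the actual solver formulation uses.

```python
import numpy as np

def weighted_sq_norm(e, W_inv):
    """Computes ||e||^2_{W^{-1}}."""
    return float(e @ W_inv @ e)

def mhe_cost(x_traj, w_traj, y_meas, x_arrival, h, R_inv, Q_inv, Q_arr_inv):
    """Cost (3a) over an estimation horizon of N+1 nodes.

    x_traj, w_traj : (N+1, nx) candidate states and process-noise decision variables
    y_meas         : (N+1, ny) measurements y_{k-N}, ..., y_k
    h              : measurement function mapping a state to an expected measurement
    """
    cost = 0.0
    for x_i, w_i, y_i in zip(x_traj, w_traj, y_meas):
        cost += weighted_sq_norm(y_i - h(x_i), R_inv)   # measurement residuals
        cost += weighted_sq_norm(w_i, Q_inv)            # process-noise penalty
    cost += weighted_sq_norm(x_traj[0] - x_arrival, Q_arr_inv)   # arrival cost
    return cost

# measurement covariance (4), e.g. with the Noise Level I values from Table I
sigma_p, sigma_w, sigma_a = 0.007, 0.40, 0.007
R_inv = np.linalg.inv(np.diag([sigma_p**2]*3 + [sigma_w**2]*3 + [sigma_a**2]*3))
```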
To investigate the performance of the GP augmented MHE we compare its state estimation performance to MHEs with a kinematic model and a dynamic model. We extend the studies to analyze the performance of these MHEs with a varying payload mass, where the GP augmented MHE and the MHE with the dynamic model estimate this unknown parameter online. The formulations of these MHEs differ only in their respective models, which are presented in the following subsections.

Fig. 2: Visualization of the quadrotor model and its reference frames.
These models were realized in discrete time steps \(\delta t\) utilizing the explicit \(4^{th}\) order Runge-Kutta method. The MHE was formulated using ACADOS [17] and CasADi [18] as a multiple shooting problem, solved as a sequential quadratic program in a real-time iteration scheme.
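A minimal sketch of the explicit 4th-order Runge-Kutta step that discretizes any of the continuous-time models above into the prediction model \(\mathbf{f}_{RK4}\) used in (3b); in practice the quaternion part of the state would additionally be re-normalized after each step.

```python
def f_rk4(f, x, u, dt):
    """One explicit RK4 step for the continuous-time model x_dot = f(x, u)."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```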
### _MHE with Kinematic Model_
MHE with a kinematic model of the system is denoted as Kinematic-MHE (K-MHE). It is formulated such that \(\mathbf{p}_{WB}\), \(\mathbf{\omega}_{B}\) and \(\mathbf{a}_{B}\) are modeled as the measurements in the MHE. The kinematic model utilized in this MHE is given by the following equation.
\[\begin{bmatrix}\dot{\mathbf{p}}_{WB}\\ \dot{\mathbf{q}}_{WB}\\ \dot{\mathbf{v}}_{WB}\\ \dot{\boldsymbol{\omega}}_{B}\\ \dot{\mathbf{a}}_{B}\end{bmatrix}=\mathbf{f}_{kin}(\mathbf{x},\mathbf{u})=\begin{bmatrix}\mathbf{v}_{WB}\\ \mathbf{q}_{WB}\cdot\begin{bmatrix}0\\ \boldsymbol{\omega}_{B}/2\end{bmatrix}\\ \mathbf{q}_{WB}\odot\mathbf{a}_{B}+\mathbf{g}_{W}\\ 0\\ 0\end{bmatrix} \tag{5}\]
\[\begin{split}\mathbf{x}&=\begin{bmatrix}\mathbf{p}_{WB}&\mathbf{q}_{ WB}&\mathbf{v}_{WB}&\mathbf{\omega}_{B}&\mathbf{a}_{B}\end{bmatrix}^{\intercal}\\ \mathbf{y}&=\begin{bmatrix}\mathbf{p}_{WB}&\mathbf{\omega}_{B}&\mathbf{a}_{B}\end{bmatrix}^{ \intercal}\end{split}\]
### _MHE with Dynamic Model_
MHE with a dynamic model of the system is denoted as Dynamic-MHE (D-MHE). It is formulated such that \(\mathbf{p}_{WB}\) and \(\mathbf{\omega}_{B}\) are given as measurements and \(\mathrm{f}_{thrust}\) as an input to the system to calculate \(\mathbf{a}_{B}\) as formulated in (1). The model utilized in this MHE is given by the following equation.
\[\begin{bmatrix}\dot{\mathbf{p}}_{WB}\\ \dot{\mathbf{q}}_{WB}\\ \dot{\mathbf{v}}_{WB}\\ \dot{\mathbf{\omega}}_{B}\end{bmatrix}=\mathbf{f}_{dyn}(\mathbf{x},\mathbf{u})=\begin{bmatrix} \mathbf{v}_{WB}\\ \mathbf{q}_{WB}\cdot\begin{bmatrix}0\\ \mathbf{\omega}_{B}/2\end{bmatrix}\\ \mathbf{q}_{WB}\odot\mathbf{a}_{B}+\mathbf{g}_{W}\\ 0\end{bmatrix} \tag{6}\]
\[\begin{split}\mathbf{x}&=\begin{bmatrix}\mathbf{p}_{WB}&\mathbf{q}_{WB}&\mathbf{v}_{WB}&\boldsymbol{\omega}_{B}\end{bmatrix}^{\intercal}\\ \mathbf{u}&=\begin{bmatrix}\mathrm{f}_{thrust}\end{bmatrix}\qquad\mathbf{y}=\begin{bmatrix}\mathbf{p}_{WB}&\boldsymbol{\omega}_{B}&\mathbf{u}\end{bmatrix}^{\intercal}\end{split}\]
### _Data-Based MHE_
MHE with GPs complementing the nominal thrust model, to improve the state estimates of the agile quadrotor, is denoted as GP Augmented MHE (GP-MHE). It is formulated similarly to the D-MHE, with an additional measurement state \({}_{B}\mathbf{a}_{e}\) computed using the GPs to predict the acceleration error of the nominal model. The model error is assumed to lie in the subspace spanned by \(\mathbf{B}_{d}\). The GP augmented model of the system is given by the following equation.
\[\begin{bmatrix}\dot{\mathbf{p}}_{WB}\\ \dot{\mathbf{q}}_{WB}\\ \dot{\mathbf{v}}_{WB}\\ \dot{\boldsymbol{\omega}}_{B}\\ {}_{B}\dot{\hat{\mathbf{a}}}_{e}\end{bmatrix}=\mathbf{f}_{GP}(\mathbf{x},\mathbf{u})=\begin{bmatrix}\mathbf{v}_{WB}\\ \mathbf{q}_{WB}\cdot\begin{bmatrix}0\\ \boldsymbol{\omega}_{B}/2\end{bmatrix}\\ \mathbf{q}_{WB}\odot\mathbf{a}_{B}+\mathbf{g}_{W}\\ 0\\ 0\end{bmatrix}+\mathbf{B}_{d}\,{}_{B}\hat{\mathbf{a}}_{e}\]
\[\begin{split}\mathbf{x}&=\begin{bmatrix}\mathbf{p}_{WB}&\mathbf{q}_{WB}&\mathbf{v}_{WB}&\boldsymbol{\omega}_{B}&\mathbf{a}_{e}\end{bmatrix}^{\intercal}\\ \mathbf{u}&=\begin{bmatrix}\mathrm{f}_{thrust}\end{bmatrix}\qquad\mathbf{y}=\begin{bmatrix}\mathbf{p}_{WB}&\boldsymbol{\omega}_{B}&\mathbf{u}&{}_{B}\hat{\mathbf{a}}_{e}\end{bmatrix}^{\intercal}\end{split} \tag{7}\]
As MHE keeps a record of the previous measurements, repeated computation of the model errors in the estimation horizon is unnecessary. Only a single point prediction of the current measurements is required, minimizing the computational burden of the GP.
### _MHE with Payload Mass Estimation_
MHE with parameter estimation for D-MHE and GP-MHE are formulated with their original respective models, with the addition of the payload mass term \(m_{p}\) in the acceleration formulation:
\[\mathbf{a}_{B}=\begin{bmatrix}0\\ 0\\ \mathrm{f}_{thrust}/(m+m_{p})\end{bmatrix} \tag{8}\]
The unknown parameter \(m_{p}\) is formulated as a single degree of freedom in the optimization problem and constrained from below by 0. In this formulation the parameter uncertainty is only considered in the lower diagonal block of the new arrival cost weighting matrix \(\mathbf{P}_{k-N}^{-1}\) such that
\[\mathbf{P}_{k-N}^{-1}=\begin{bmatrix}\mathbf{Q}_{k-N}^{-1}&\mathbf{0}\\ \mathbf{0}&\mathbf{Q}_{p}^{-1}\end{bmatrix} \tag{9}\]
where \(\mathbf{Q}_{p}\) is the weighting matrix of the unknown parameter. The new MHE problem with parameter estimation is formulated as follows:
\[\begin{aligned}\min_{\mathbf{x},\mathbf{w},m_{p}}\quad&\sum_{i=k-N}^{k}\left\lVert\mathbf{y}_{i}-\hat{\mathbf{y}}_{i}\right\rVert_{\mathbf{R}_{k}^{-1}}^{2}+\left\lVert\mathbf{w}_{i}\right\rVert_{\mathbf{Q}_{k}^{-1}}^{2}+\left\lVert\mathbf{x}_{k-N}-\bar{\mathbf{x}}_{k-N}\right\rVert_{\mathbf{P}_{k-N}^{-1}}^{2}&&\text{(10a)}\\ \text{subject to}\quad&\mathbf{x}_{k+1}=\mathbf{f}_{RK4}(\mathbf{x}_{k},m_{p},\mathbf{u}_{k})+\mathbf{w}_{k}&&\text{(10b)}\\ &\mathbf{y}_{k}=\mathbf{h}(\mathbf{x}_{k},\mathbf{u}_{k})+\boldsymbol{\nu}_{k}&&\text{(10c)}\\ &m_{p_{min}}\leq m_{p}\leq m_{p_{max}}&&\text{(10d)}\\ &\mathbf{x}_{k}=\mathbf{x}(t_{k})&&\text{(10e)}\end{aligned}\]
## IV Gaussian Process Regression
Like the majority of supervised machine learning algorithms, GP regression attempts to define the relationship between the inputs and the outputs of a given set of training points [19, 20, 21, 22]. Here we utilize GPs to identify the unknown dynamics of the system \(\mathbf{d}:\mathbb{R}^{n_{z}}\rightarrow\mathbb{R}^{n_{c}}\) from a set of inputs \(\mathbf{z}\in\mathbb{R}^{n_{z}}\) and outputs \(\mathbf{c}\in\mathbb{R}^{n_{c}}\):
\[\mathbf{c}=\mathbf{d}(\mathbf{z})+\mathbf{w}_{d} \tag{11}\]
where the process noise \(\mathbf{w}_{d}\sim\mathcal{N}(\mathbf{0},\mathbf{\Sigma})\) is independent and identically distributed Gaussian noise with diagonal covariance \(\mathbf{\Sigma}=\mathrm{diag}([\sigma_{1}^{2},...,\sigma_{n_{d}}^{2}])\). This allows each dimension of \(\mathbf{c}\) to be modeled independently with individual 1-dimensional GPs [23]. Given the training set \(\{\mathbf{z}\), \(\mathbf{c}\}\) and the test point \(z_{*}\), the mean and the variance function of the GP is given by:
\[\mu(z_{*})=\mathbf{k}_{*}^{\intercal}\mathbf{K}^{-1}\mathbf{c}\,,\qquad\Sigma_{\mu}(z_{*})=k_{**}-\mathbf{k}_{*}^{\intercal}\mathbf{K}^{-1}\mathbf{k}_{*} \tag{12}\]
with
\[\mathbf{K}=\kappa(\mathbf{z},\mathbf{z})+\sigma_{n}^{2}\mathbf{I}\,,\qquad\mathbf{k}_{*}=\kappa(\mathbf{z},z_{*})\,,\qquad k_{**}=\kappa(z_{*},z_{*})\]
where \(k_{**}\) denotes the variance of the test point, \(\mathbf{k}_{*}\) denotes the covariance between the training samples and the test point, and \(\mathbf{K}\) denotes the covariance matrix between the training points, also known as the Gram matrix. In this paper, we compute these (co)variances using the Radial Basis Function (RBF) kernel given by:
\[\kappa(\mathbf{z}_{i},\mathbf{z}_{j})=\sigma_{f}^{2}\text{exp}\bigg{(}-\frac{1}{2}\left( \mathbf{z}_{i}-\mathbf{z}_{j}\right)^{\intercal}\mathbf{L}^{-2}\left(\mathbf{z}_{i}-\mathbf{z}_{j} \right)\bigg{)}+\sigma_{n}^{2} \tag{13}\]
where \(\mathbf{z}_{i}\) and \(\mathbf{z}_{j}\) represent the data features, \(\mathbf{L}\) denotes the diagonal length scale matrix, and \(\sigma_{f}\) and \(\sigma_{n}\) represent the data and prior noise variance respectively [24]. The variables \(\mathbf{L}\), \(\sigma_{f}\) and \(\sigma_{n}\) shape the response of the GP regression model. During the model training process, these variables are optimized to identify the regression of the training samples. The computational complexity of the GP prediction is given by \(\mathcal{O}(n^{3})\) where \(n\) is the number of training samples [20, 23, 25]. This motivates the use of approximation methods to handle the memory requirements and the computational demands of the traditional GP formulations [22, 23, 25].
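The following is a small self-contained sketch of the exact GP prediction (12) with the RBF kernel (13) for one output dimension; hyperparameter optimization and the sparse approximation discussed next are omitted, and the noise variance is added once to the Gram matrix rather than inside the kernel.

```python
import numpy as np

def rbf_kernel(z1, z2, length_scale, sigma_f):
    """RBF kernel (13) for 1-D inputs z1 (n,), z2 (m,)."""
    d = z1[:, None] - z2[None, :]
    return sigma_f**2 * np.exp(-0.5 * (d / length_scale)**2)

def gp_predict(z_train, c_train, z_test, length_scale, sigma_f, sigma_n):
    """Posterior mean and variance (12) of a 1-D GP at the test inputs."""
    K = rbf_kernel(z_train, z_train, length_scale, sigma_f) \
        + sigma_n**2 * np.eye(len(z_train))                 # Gram matrix with noise
    k_star = rbf_kernel(z_train, z_test, length_scale, sigma_f)
    k_ss = rbf_kernel(z_test, z_test, length_scale, sigma_f)
    alpha = np.linalg.solve(K, c_train)
    mean = k_star.T @ alpha
    cov = k_ss - k_star.T @ np.linalg.solve(K, k_star)
    return mean, np.diag(cov)
```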
In this paper, we implement the sparse approximation method introduced in [25], where the computational complexity is minimized by reducing the number of training points. This approach analyzes the posterior to compute the _effective prior_ using another GP to encapsulate the behavior of the training samples into a subset of \(m\) inducing points [25]. Therefore, rather than approximating the inference, the GP is reinterpreted as an exact inference with an approximated prior. The computational complexity of the sparse GP is now given by \(\mathcal{O}(m^{3})\) where \(m\ll n\)[25].
### _Data Collection and Model Learning_
The training data points were collected using a quadrotor with a K-MHE. At each sample time the acceleration measurement \(\mathbf{a}_{B}\), the estimated orientation \(\mathbf{\hat{q}}_{WB}\), and the collective thrust \(\mathrm{f}_{thrust}\) were recorded to calculate \(\mathbf{\hat{a}}_{B}\) using (1). The acceleration error of the dynamic model was computed by:
\[{}_{B}\mathbf{a}_{e}=\mathbf{a}-\mathbf{\hat{a}}\] (14a) where \[\mathbf{a}=\mathbf{a}_{B}+\mathbf{\hat{q}}_{WB}^{-1}\odot\mathbf{g}_{W} \tag{14b}\] \[\mathbf{\hat{a}}=\mathbf{\hat{q}}_{WB}^{-1}\odot\left(\mathbf{\hat{q}}_{WB} \odot\mathbf{\hat{a}}_{B}+\mathbf{g}_{W}\right) \tag{14c}\]
The GP models were trained such that the acceleration measurements \(\mathbf{a}_{B}\) were mapped to the body frame acceleration disturbances \({}_{B}\mathbf{a}_{e}\). The unmodelled dynamics in each axis were modeled independently to minimize the required inducing points of the GP models. Therefore, three individual GP models \(\mu_{ax}\), \(\mu_{ay}\), \(\mu_{az}\) were developed. The GP predictions are denoted as follows
\[{}_{B}\mathbf{\hat{a}}_{e}= \mathbf{\mu}_{a}({}_{B}\mathbf{a}_{m})=\begin{bmatrix}\mu_{ax}({}_{B}a_{x })\\ \mu_{ay}({}_{B}a_{y})\\ \mu_{az}({}_{B}a_{z})\end{bmatrix} \tag{15}\] \[\mathbf{\Sigma}_{\mu}(\mathbf{a}_{B})=\mathrm{diag}\left(\begin{bmatrix} \Sigma_{ax}({}_{B}a_{x})\\ \Sigma_{ay}({}_{B}a_{y})\\ \Sigma_{az}({}_{B}a_{z})\end{bmatrix}\right)\]
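A sketch of the label computation (14) used to build the training set: from a logged accelerometer measurement, the estimated orientation and the commanded collective thrust, it returns the body-frame acceleration error. The quaternion-to-rotation-matrix helper and the gravity constant are assumptions added for illustration.

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix (body -> world) of a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
                     [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
                     [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def acceleration_error(a_meas, q_est, f_thrust, m):
    """Body-frame acceleration error (14) used as a GP training label."""
    g_w = np.array([0.0, 0.0, -9.81])
    R = quat_to_rot(q_est)
    a_nom_body = np.array([0.0, 0.0, f_thrust / m])   # nominal model, eq. (1)
    a = a_meas + R.T @ g_w                            # (14b)
    a_hat = R.T @ (R @ a_nom_body + g_w)              # (14c)
    return a - a_hat                                  # (14a): one label per body axis
```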
## V Simulation Studies
### _Simulation Setup_
We first evaluate the performance of the state estimators in a Robot Operating System (ROS) Gazebo [26] environment utilizing the RotorS [27] simulator package to simulate the AscTec Hummingbird quadrotor. The simulated sensors were a GPS and an IMU, and the measurements were received at 100 Hz with cascaded zero-mean Gaussian distributed noise at varying levels as indicated in Table I. The simulation studies were conducted on a laptop with 16 GB of RAM, a 10th Generation Intel Core i7-10750H and an NVIDIA GeForce RTX 2070 8GB GDDR6.
In the simulation studies, we compare the state estimation performance of the MHEs with varying models as described in Section III: K-MHE, D-MHE and GP-MHE. To investigate the performance of the proposed estimation method, we analyze the quadrotor executing two different trajectories illustrated in Fig. 3. The lemniscate trajectory is defined by \([x(t)=5\cos\left(\sqrt{2}t\right)-5,\,y(t)=5\sin\left(\sqrt{2}t\right)\cos\left(\sqrt{2}t\right),\,z(t)=2.5]\). The slanted circle trajectory is defined by \([x(t)=5\cos(t)\,,\,y(t)=5\sin(t)\,,\,z(t)=-\cos(t)+2.5]\). Both trajectories accelerate from a hovering state, reaching peak velocities of \(11.3\,ms^{-1}\) and \(8.7\,ms^{-1}\) respectively, and decelerate back down to \(0\,ms^{-1}\). The slanted circle trajectory was utilized to imitate a quadrotor, weighing \(m=1\,\)kg, transporting an object weighing 300 g from point A to point B, to investigate the state estimation performance with a varying parameter.
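For reference, the two trajectories can be sampled directly from the expressions above; a minimal sketch (the hover-accelerate-decelerate velocity profile used in the studies is not reproduced here):

```python
import numpy as np

def lemniscate(t):
    """Lemniscate reference used in the simulation studies."""
    return np.array([5*np.cos(np.sqrt(2)*t) - 5,
                     5*np.sin(np.sqrt(2)*t)*np.cos(np.sqrt(2)*t),
                     2.5])

def slanted_circle(t):
    """Slanted circle reference used in the simulation studies."""
    return np.array([5*np.cos(t), 5*np.sin(t), -np.cos(t) + 2.5])

# e.g. sample both references at the 100 Hz measurement rate for 10 s
ts = np.arange(0.0, 10.0, 0.01)
lem = np.stack([lemniscate(t) for t in ts])
circ = np.stack([slanted_circle(t) for t in ts])
```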
### _Trained GP Regression Models_
Each GP model was trained for every corresponding noise level and trajectory. The GP models mapping the accelerometer measurements to the acceleration error at measurement noise level III are presented in Fig. 4. These models are sparse GPs trained with 50 inducing points derived from dense GPs. The GPs in the left column are trained from data points collected by a quadrotor executing the lemniscate trajectory. The GPs in the right column are trained from data points collected by a quadrotor executing the slanted circle trajectory.

\begin{table}
\begin{tabular}{c c c c} \hline \hline & \(\sigma_{p}\) & \(\sigma_{\omega}\) & \(\sigma_{a}\) \\ \hline Noise Level I & 0.007 & 0.40 & 0.007 \\ Noise Level II & 0.5 & 0.86 & 0.01 \\ Noise Level III & 1 & 1.72 & 0.1 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Standard deviation of the measurement noise at varying levels

Fig. 3: Quadrotor trajectories utilized in the simulation studies.
It was found that within the range of 20 to 60 inducing points, the sample size of the GPs had little to no effect on the performance and the required computation time of the GP-MHE. This can be explained by the near linear relationship of the GPs in the \(x\) and \(y\) axes and by the use of sparse GPs: since the general trend of the dataset is learned by a larger GP and the sparse GP is trained using effective priors, a GP trained using 20 effective priors can produce a regression similar to one trained from 60. Furthermore, as only a single point prediction is required at every time step, the differences in the additional computational time of these GPs are negligible.
### _Simulation Results of the GP augmented MHE_
We first investigate the trade-off between the optimization time and the estimation performance with respect to the number of estimation nodes in the MHE. The Root Mean Squared Error (RMSE) of the \(\mathbf{q}\) and \(\mathbf{v}\) state estimates and the optimization time while executing the lemniscate trajectory at measurement noise level III are visualized in Fig. 5. The computational complexity follows a near linear relationship with respect to the estimation horizon length, while the estimation performance plateaus around 45 nodes. From this we chose the MHEs to be formulated with an estimation horizon of 0.5 seconds with 50 nodes, corresponding to \(3-4.5\,\)ms of optimization time. It can also be noted that the GP only adds approximately 2 ms of additional computational time to the D-MHE regardless of the number of estimation nodes.
The stability of the MHEs with respect to the measurement noise was investigated on the lemniscate trajectory at three levels of measurement noise. Table II summarizes the state estimation performance of the three estimators.
Due to the limited knowledge of the system, the D-MHE performed poorly at all noise levels. It is to be noted that the external disturbances in real-world applications will exceed the aerodynamic effects simulated in these studies, resulting in even less accurate estimates by the D-MHE. With the GP model corrections the performance was significantly improved: the orientation state estimates were improved by \(73\%\), \(41\%\) and \(30\%\), and the velocity state estimates by \(64\%\), \(23\%\) and \(15\%\), at noise levels I, II and III respectively.
At noise levels I and II, the K-MHE produced the most accurate state estimates, closely followed by the GP-MHE. It was expected that the K-MHE outperforms the GP-MHE with low measurement noise, where the sensor measurements can accurately describe the dynamics of the system. At higher noise levels, the quality of the measurements deteriorates, hence the slight improvement of the GP-MHE over the K-MHE. At noise level III the RMSE of the orientation state estimates of the K-MHE and GP-MHE are \(7.049^{\circ}\) and \(6.791^{\circ}\) respectively. The RMSE of the velocity state estimates of the K-MHE and GP-MHE are of similar level: \(1.000\,ms^{-1}\) and \(0.987\,ms^{-1}\) respectively. This can be explained by the robustness of the GPs to Gaussian distributed noise; thus they can provide more accurate dynamics to the MHE than the direct measurement of the accelerometer.
\begin{table}
\begin{tabular}{c c c c c} \hline \hline & & \multicolumn{3}{c}{Model RMSE} \\ \hline & & K-MHE & D-MHE & GP-MHE \\ \hline \multirow{3}{*}{Noise Level I} & p [\(m\)] & 0.059 & 0.198 & 0.056 \\ & q [\({}^{\circ}\)] & 1.540 & 5.907 & 1.596 \\ & v [\(m/s\)] & 0.168 & 0.625 & 0.223 \\ \hline \multirow{3}{*}{Noise Level II} & p [\(m\)] & 0.200 & 0.276 & 0.240 \\ & q [\({}^{\circ}\)] & 3.157 & 7.163 & 4.232 \\ & v [\(m/s\)] & 0.473 & 0.781 & 0.604 \\ \hline \multirow{3}{*}{Noise Level III} & p [\(m\)] & 0.420 & 0.447 & 0.413 \\ & q [\({}^{\circ}\)] & 7.049 & 9.708 & 6.791 \\ & v [\(m/s\)] & 1.000 & 1.159 & 0.987 \\ \hline \hline \end{tabular}
\end{table} TABLE II: Comparison of state estimation results of MHEs at varying noise levels in simulation.
Fig. 4: Sparse GP models with 50 inducing points mapping the body frame acceleration error observed on the quadrotor in Noise Level III simulation.
Fig. 5: Trade-off between the number of estimation nodes, estimation performance in \(\mathbf{q}\) and \(\mathbf{v}\) states and optimization time at Noise level III simulation.
### _Simulation Results of the GP augmented MHE with online parameter estimation_
We investigate the state estimation performance with a varying parameter \(m_{p}\) while executing the slanted circle trajectory. The mass of the quadrotor was increased by \(300\,\)g at point **A** and reduced back down to its original mass at point **B**, where the original mass of the quadrotor is 1 kg. This was repeated for the entire duration of the trajectory as the quadrotor accelerated to its maximum velocity and decelerated back down to its hovering state. The MHEs with payload mass estimation were formulated as discussed in Section III-D to include the additional unknown parameter.
The performance of the MHEs with and without the additional payload is presented in Table III. The estimation of the payload mass by the MHEs is presented in Fig. 6. It can be noted that, due to the extra estimation parameter and its constraints, the required computation time has increased by \(2.1-2.7\,\)ms; however, it is still less than the sampling time of \(10\,\)ms.
It is evident that the D-MHE and GP-MHE can successfully estimate the varying mass of the quadrotor. This is further supported by the insignificant differences between the state estimation performance with 0 g and 300 g payload mass. Despite accurate parameter estimation, the D-MHE performed poorly due to the weak dynamic model. By introducing the GP models to the MHE, the \(\mathbf{q}\) and \(\mathbf{v}\) state estimates improved by \(40\%\) and \(19\%\) respectively, performing at a similar level to the K-MHE. This shows that the GP-MHE can successfully handle varying parameters while still providing accurate state estimates of the system. This can be advantageous over the K-MHE, as knowledge of an unknown system parameter can improve the trajectory tracking performance of a controller.
## VI Experimental Studies
### _Experimental Setup_
We finally conduct a comparative analysis of the MHEs on the real-world NeuroBEM dataset from the University of Zurich to further validate the performance of the GP-MHE [28]. The dataset contains Vicon and onboard measurements of agile quadrotor flights. For further details on the data collection please refer to [28]. The trajectory of the flight data that was utilized is presented in Fig. 7. The recorded flight reached a maximum velocity of \(15\,m/s\) with a maximum acceleration of \(42.5\,m/s^{2}\). Firstly, we tested the performance of the MHEs with the given dataset to simulate the state estimation performance in an indoor setting with accurate position measurements. The MHE of the _indoor_ experiment was formulated with the position measurements from Vicon and inertial measurements from the onboard IMU. Secondly, to imitate the inaccurate GPS measurements of an outdoor environment we cascade a Gaussian distributed noise of \(\sigma_{p}=1\,m\) onto the Vicon position measurements. The MHE of the _outdoor_ experiment was formulated with the position measurements from Vicon with the added noise and the inertial measurements from the onboard IMU.
The RMSE of the state estimation performance of the three MHEs is tabulated in Table IV. Due to the increased agility and aggressiveness of the trajectory, the errors seen in the experimental results are higher than those from the simulation results. The RMSE results are an average of three experiments, as the results may vary due to the stochastic behavior of the cascaded noise for the outdoor setting. In the indoor setting, the \(\mathbf{q}\) state estimates of the GP-MHE improved by \(26\%\) and \(5.6\%\) compared to the D-MHE and K-MHE respectively, while there was a minute decrease in the accuracy of the \(\mathbf{v}\) state estimates. In the outdoor setting, the \(\mathbf{q}\) state estimates improved by \(14.5\%\) and \(4\%\) compared to the D-MHE and K-MHE respectively. The \(\mathbf{v}\) state estimates improved by \(4.2\%\) compared to the D-MHE; however, the accuracy decreased by \(1.8\%\) compared to the K-MHE.
The enhancements in the state estimation performance of the GP-MHE are evident in the results, with significant improvements in the estimation of the angular position of the quadrotor. The particular improvement in the \(\mathbf{q}\) state estimates can be explained by the fact that the dynamic model assumes there are no accelerations in the \(x\) and \(y\) directions of the body frame from the rotor thrusts. In the D-MHE, these are compensated with the \(\mathbf{q}\) states to capture those movements, with the consequence of increased error in the angular positions. Furthermore, the authors of [7, 28] explain that the main source of disturbance experienced by the given quadrotor is the rotor drag, due to its power and the compactness of its design. Since in the GP-MHE the GP models are trained with acceleration errors that are coupled with these angular position estimates, it is able to capture these disturbances and correct for the model errors.
## VII Conclusion
In this work, we have presented a data-based MHE method for agile quadrotors. The dynamic model of the system is augmented with GPs to provide model corrections for unmodelled aerodynamic effects. The training points were collected from onboard sensors rather than ground truth data, simplifying the data collection process. The GP models were then trained to predict the acceleration error of the nominal model given its accelerometer measurements. As the MHE keeps a record of its past measurements, the computational burden of the GP could be minimized by only requiring a single point prediction of the current measurements. The simulation and experimental studies have demonstrated a significant improvement in the state estimation performance compared to traditional dynamic model-based estimation methods. Furthermore, the GP-MHE achieved a similar level of performance to the K-MHE while also offering the added capability of estimating unknown model parameters online. This valuable feature can complement the controller and enhance the closed-loop tracking performance of the quadrotor.
arXiv ID: 2306.17668
Title: Grothendieck-Verdier duality in categories of bimodules and weak module functors
Authors: Jürgen Fuchs, Gregor Schaumann, Christoph Schweigert, Simon Wood
Published: 2023-06-30T13:56:32Z
Link: http://arxiv.org/abs/2306.17668v2

# Grothendieck-Verdier duality in categories of bimodules and weak module functors
###### Abstract
Various monoidal categories, including suitable representation categories of vertex operator algebras, admit natural Grothendieck-Verdier duality structures. We recall that such a Grothendieck-Verdier category comes with two tensor products which should be related by distributors obeying pentagon identities. We discuss in which circumstances these distributors are isomorphisms. This is achieved by taking the perspective of module categories over monoidal categories, using in particular the natural weak module functor structure of internal Homs and internal coHoms. As an illustration, we exhibit these concepts concretely in the case of categories of bimodules over associative algebras.
## 1 Introduction
Dualities play a pivotal role in various applications of monoidal categories. A familiar example of a duality is a rigid structure. For many applications, for instance in representation theory and in linear logic, it is, however, necessary to consider more general notions of duality. For example, if a tensor product is not exact, then the category does not admit a rigid duality.
A framework that generalizes rigidity and in which structures familiar from rigidity, such as internal Homs, persist, is furnished by a \(*\)-autonomous structure, also known as a _Grothendieck-Verdier duality_. Categories endowed with a Grothendieck-Verdier duality have a wide range of applications, from linear logic to the representation theory of vertex operator algebras. In this paper we present some important aspects of Grothendieck-Verdier dualities, concentrating on such which are related to internal Hom and coHom functors. Our point of view is the one of module categories. In particular, we will regard a monoidal category as a module category over itself. Specifically, a crucial ingredient is the structure of internal Homs and coHoms as weak module functors. This allows us to introduce distributors, which play a role similar to the associator for the tensor product. Before discussing these concepts in appropriate generality, we illustrate some of them in the, arguably, simplest relevant situation - finite-dimensional bimodules over finite-dimensional algebras. After presenting the general theory, we return to this example and present explicit formulas for the distributors in categories of bimodules.
## 2 Bimodules
To start with, let us consider some very basic mathematical objects which are familiar already to undergraduates. We fix an algebraically closed field \(\Bbbk\).
Let \(A\) be a (unital associative) \(\Bbbk\)-algebra. The category of \(A\)-bimodules can be endowed with a monoidal structure given by the tensor product \(\otimes_{A}\) over \(A\), which has \(A\) as its monoidal unit. The tensor product \(\otimes_{A}\) is defined by the coequalizer diagram
\[B_{1}\otimes_{\Bbbk}A\otimes_{\Bbbk}B_{2}\xrightarrow{\varphi}B_{1}\otimes_ {\Bbbk}B_{2}\longrightarrow B_{1}\otimes_{A}B_{2}\,. \tag{2.1}\]
Here the first arrow is the linear map
\[\varphi=\rho^{B_{1}}\otimes_{\Bbbk}\mathrm{id}_{B_{2}}-\mathrm{id}_{B_{1}} \otimes_{\Bbbk}\lambda^{B_{2}}, \tag{2.2}\]
where \(\rho^{B_{1}}\) is the right \(A\)-action on \(B_{1}\) and \(\lambda^{B_{2}}\) the left \(A\)-action on \(B_{2}\). Here, as well as largely below, we suppress the associator \(\alpha\) of a monoidal category, and likewise we will suppress the unitors \(l\) and \(r\).
The tensor product \(\otimes_{A}\) is right exact. Since our primary interest in this note is in duality structures, it is convenient to impose suitable finiteness conditions. Specifically, unless stated otherwise, in the sequel all \(\Bbbk\)-algebras are assumed to be finite-dimensional, and all modules and bimodules to be finite-dimensional as \(\Bbbk\)-vector spaces. Then in particular the tensor product is right exact and equips the category \(A\)-bimod of \(A\)-bimodules with a monoidal structure, and we can make contact to the theory of coalgebras and comodules.
When \(A\) is commutative, then every left \(A\)-module is canonically a right module and even a bimodule. Accordingly we may then also consider the category \(A\mod\) of left \(A\)-modules, which is a monoidal subcategory of the category \(A\)-bimod of \(A\)-bimodules, with \(A\) as a left module as the monoidal unit. One motivation to consider commutative algebras in this elementary study comes from the representation theory of vertex operator algebras. Indeed,
commutative algebras can be seen as particular examples of vertex operator algebras [FB, Sect. 1.3], and finite-dimensional algebras as particular examples of \(C_{2}\)-cofinite vertex operator algebras. Vertex operator algebras that are commutative algebras are known to arise as the result of BRST cohomology or quantum Hamiltonian reduction, see e.g. Section 2.2 of [LZ] for an infinite-dimensional example. BRST cohomology can also result in finite-dimensional commutative algebras. 1
Footnote 1: We thank Sven Möller for pointing out an (unpublished) example of such a commutative algebra.
A fundamental operation in linear algebra is to take the linear dual of a vector space. The category \(\operatorname{vect}_{\Bbbk}\) of finite-dimensional vector spaces is _rigid_, that is, every vector space \(V\) has both a left dual \({}^{\vee}V\) and a right dual \(V^{\vee}\) (both of which are isomorphic to \(\operatorname{Hom}_{\Bbbk}(V,\Bbbk)\) since \(\operatorname{vect}_{\Bbbk}\) is symmetric) endowed with evaluation and coevaluation maps, given by
\[\begin{array}{c}\operatorname{ev}\colon\ V^{\vee}\otimes_{\Bbbk}V \xrightarrow{}\Bbbk\,,\hskip 28.452756pt\operatorname{coev}\colon\ \Bbbk\xrightarrow{}V\otimes_{\Bbbk}V^{\vee},\\ f\otimes_{\Bbbk}v\longmapsto f(v)\,,\hskip 56.905512pt\xi\longmapsto\xi \sum_{i}v_{i}\otimes_{\Bbbk}v^{i}\end{array} \tag{2.3}\]
for the right dual, and analogously for the left dual, where \((v_{i})\) is a basis of \(V\) and \((v^{i})\) the dual basis of \(V^{\vee}\). The notion of rigidity pervades the theory of Hopf algebras and quantum topology. Indeed, a (quasi-)Hopf algebra is essentially an algebra \(H\) endowed with additional structure that is precisely such that the category \(H\)-mod of finite-dimensional \(H\)-modules is rigid monoidal.
Taking the double dual gives a monoidal endofunctor \((-)^{\vee\vee}\) of the monoidal category \(\operatorname{vect}_{\Bbbk}\). The double dual \(V^{\vee\vee}\) of a finite-dimensional vector space is canonically identified with \(V\). More precisely, the category \(\operatorname{vect}_{\Bbbk}\) is pivotal, i.e. there is a monoidal isomorphism \(\operatorname{id}_{\operatorname{vect}_{\Bbbk}}\!\to\!(-)^{\vee\vee}\) between the identity functor and the double dual. For an arbitrary rigid monoidal category \(\mathcal{C}\) such a monoidal isomorphism may or may not exist; and if it exists, then there might be several of them. The choice of a specific one is then called a pivotal structure on \(\mathcal{C}\). For \(\mathcal{C}\!=\!H\)-mod the category of finite-dimensional modules over a Hopf algebra \(H\), a pivotal structure amounts to finding a group-like element \(g\!\in\!H\) such that the square of the antipode \(S\) of \(H\) is given by \(S^{2}(h)\!=\!ghg^{-1}\).
It is tempting to regard rigidity as the generic paradigm for a duality structure on monoidal categories. However, already long ago more general duality structures have been encountered (compare e.g. Remark 4.8 below). Such generalized dualities occur in important situations, in particular for representation categories of vertex operator algebras to which the HLZ-theory of tensor products [HLZ] applies. The purpose of the present section is to illustrate these structures in the simplest possible case: for bimodules over a \(\Bbbk\)-algebra.
Indeed it is easy to see that in general the category \(A\)-bimod cannot be rigid, even if the algebra \(A\) is finite-dimensional and only finite-dimensional bimodules are considered. In an abelian rigid monoidal category, the functors \(X\!\otimes\!-\) and \(-\!\otimes\!X\) of tensoring from the left or from the right with an object \(X\) are exact [EGNO, Prop. 4.2.1]. This property does not hold for bimodules, in general. A simple instructive counterexample is the following: For any field \(\Bbbk\), the algebra of dual numbers, i.e. the two-dimensional quotient
\[A_{2}:=\Bbbk[x]\,/\,\langle x^{2}\rangle \tag{2.4}\]
of the polynomial algebra \(\Bbbk[x]\), is a commutative associative unital algebra. A canonical basis of \(A_{2}\) is given by the classes \([1]\!=\!1\!\operatorname{mod}x^{2}\) and \([x]\!=\!x\!\operatorname{mod}x^{2}\); to unburden notation, we drop the class symbols and just write \(1\) and \(x\) for the elements of this basis. The algebra \(A_{2}\) has, up to
isomorphism, a single simple module \(S\), which is one-dimensional with generator \(s\); the element \(x\) of \(A_{2}\) acts on it by zero. The only other indecomposable finite-dimensional \(A_{2}\)-module, up to isomorphism, is the free module \(P\!=\!A_{2}\) of rank 1, with basis elements 1 and \(x\); the element \(x\!\in\!A_{2}\) acts as \(1\!\mapsto\!x\) and \(x\!\mapsto\!x^{2}\!=\!0\).
The following argument shows that the simple module \(S\) is not flat. The injective morphism \(\iota\!:S\!\xrightarrow{\,\,}\!P\) is given by \(s\!\mapsto\!x\). One immediately verifies that \(S\!\otimes_{\!A_{2}}S\!\cong\!S\) and, since \(P\) is the monoidal unit, \(S\!\otimes_{\!A_{2}}\!P\!\cong\!S\). The structure map of the coequalizer \(S\!\otimes_{\!\Bbbk}P\!\xrightarrow{\,\,}\!S\!\otimes_{\!A_{2}}P\) maps \(s\otimes_{\!\Bbbk}x\!\mapsto\!0\). It follows that tensoring with \(S\) maps the monomorphism \(\iota\) to the zero morphism: thus indeed the module \(S\) is not flat. As a consequence the category \(A_{2}\)-mod is not rigid.
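The coequalizer (2.1) makes this computation easy to mechanize: with the action of \(x\) encoded by matrices, the dimension of \(B_{1}\otimes_{A_{2}}B_{2}\) is \(\dim(B_{1}\otimes_{\Bbbk}B_{2})\) minus the rank of \(\varphi\). The following short NumPy sketch (basis conventions as above; the helper names are ours) reproduces \(S\otimes_{A_{2}}P\cong S\cong S\otimes_{A_{2}}S\) and checks that the image of \(\iota\) is killed.

```python
import numpy as np

def dim_tensor_over_A(right_acts, left_acts, d1, d2):
    """dim(B1 ⊗_A B2) from the coequalizer (2.1).

    right_acts[i]: matrix of the right action on B1 of the i-th basis element of A
    left_acts[i] : matrix of the left action on B2 of the same basis element
    """
    blocks = [np.kron(R, np.eye(d2)) - np.kron(np.eye(d1), L)
              for R, L in zip(right_acts, left_acts)]
    phi = np.hstack(blocks)                      # matrix of the map (2.2)
    return d1 * d2 - np.linalg.matrix_rank(phi), phi

# A2 = k[x]/(x^2), basis (1, x); on P = A2 the element x acts by 1 -> x, x -> 0,
# and on the simple module S it acts by zero.
X_P = np.array([[0., 0.], [1., 0.]])
X_S = np.zeros((1, 1))
I1, I2 = np.eye(1), np.eye(2)

dim_SP, phi = dim_tensor_over_A([I1, X_S], [I2, X_P], 1, 2)   # S ⊗_{A2} P
dim_SS, _   = dim_tensor_over_A([I1, X_S], [I1, X_S], 1, 1)   # S ⊗_{A2} S
print(dim_SP, dim_SS)          # 1 1, i.e. both are isomorphic to S

# The induced map S ⊗ ι sends the generator to the class of s⊗x, which lies in im(phi):
e_sx = np.array([0., 1.])      # coordinates of s⊗x in S ⊗_k P
print(np.linalg.matrix_rank(np.column_stack([phi, e_sx]))
      == np.linalg.matrix_rank(phi))             # True: S ⊗ ι is the zero morphism
```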
Still, even in the non-rigid case we do have dual vector spaces at our disposal. We account for their presence by introducing, for any pair \(A\) and \(A^{\prime}\) of algebras, the contravariant functor
\[G_{A,A^{\prime}}:\quad A\hbox{-}A^{\prime}\hbox{-}\hbox{bimod}\xrightarrow{ \,\,}A^{\prime}\hbox{-}A\hbox{-}\hbox{bimod} \tag{2.5}\]
that sends a bimodule \(B\!\in\!A\hbox{-}A^{\prime}\hbox{-}\hbox{bimod}\) to its linear dual \(B^{*}\!=\!\hbox{Hom}_{\Bbbk}(B,\Bbbk)\) endowed with the canonical action
\[(a^{\prime}.\beta.a)(b):=\beta(a.b.a^{\prime}) \tag{2.6}\]
for \(a\!\in\!A\), \(a^{\prime}\!\in\!A^{\prime}\), \(b\!\in\!B\) and \(\beta\!\in\!B^{*}\). Obviously this functor squares to the identity, \(G_{A^{\prime},A}\circ G_{A,A^{\prime}}\!=\!\hbox{id}\). In particular, for \(A^{\prime}\!=\!A\) we get this way an \(A\)-bimodule \(A^{*}\); this furnishes a distinguished object in the monoidal category \(A\)-bimod.
The distinguished bimodule \(A^{*}\) is, in general, not isomorphic to the monoidal unit \(A\) of \(A\)-bimod, as is already illustrated by the simple example
\[B_{3}:=\Bbbk[x,y]\,/\,\langle x^{2},y^{2},xy\rangle \tag{2.7}\]
of a three-dimensional commutative algebra. A canonical basis of \(B_{3}\) is given by the classes [1], \([x]\) and \([y]\), which again we abbreviate just as 1, \(x\) and \(y\). The element 1 is the unit of \(B_{3}\), while \(x\) and \(y\) satisfy \(xx\!=\!yy\!=\!xy\!=\!0\). The algebra \(B_{3}\) has a unique (up to isomorphism) simple (bi)module \(S\), which is one-dimensional. There is a short non-split exact sequence
\[0\xrightarrow{\,\,}S\oplus S\xrightarrow{\,\,}B_{3}\xrightarrow{\,\,}S\,, \tag{2.8}\]
for \(B_{3}\), from which by dualizing we obtain a short exact sequence
\[0\xrightarrow{\,\,}S\xrightarrow{\,\,}B_{3}^{*}\xrightarrow{\,\,}S\oplus S \xrightarrow{\,\,}0 \tag{2.9}\]
for \(B_{3}^{*}\). In particular, the modules \(B_{3}\) and \(B_{3}^{*}\) have different socle and hence are not isomorphic. (As a consequence, \(B_{3}\) does not admit the structure of a Frobenius algebra; it is in fact the simplest example of an algebra with this property.)
These observations raise the question: What is the appropriate _categorical_ duality structure (which cannot be rigidity) on the monoidal category \(A\)-bimod that accomodates the distinguished bimodule \(B_{3}^{*}\) and the duality (2.5) inherited from the duality on vector spaces?
Before discussing the appropriate duality structure in detail, let us recall a few further properties that the monoidal category \(A\)-bimod of bimodules over a finite-dimensional algebra inherits from \(\hbox{vect}_{\Bbbk}\). These will eventually find their explanation in the framework of the general duality structures.
We first note the somewhat less well-known fact that for a finite-dimensional algebra \(A\) the abelian category \(A\)-bimod admits a second monoidal product. To see this, note that
the vector space \(A^{*}\) has a natural structure of a coalgebra. Further, any right \(A\)-module \((M,\rho\colon M\otimes_{\Bbbk}A\mathop{\rightarrow}M)\) becomes a right \(A^{*}\)-comodule as follows: Select a basis \((a_{i})\) of \(A\), which yields a dual basis \((a^{i})\) of \(A^{*}\). Then the prescription
\[\tilde{\rho}(m):=\sum_{i}m.a_{i}\otimes_{\Bbbk}a^{i} \tag{2.10}\]
defines a right \(A^{*}\)-coaction. A similar construction turns a left \(A\)-module into a left \(A^{*}\)-comodule. One then defines a tensor product \(\otimes^{A}\) of comodules as an equalizer
\[B_{1}\otimes^{A}B_{2}\to B_{1}\otimes_{\Bbbk}B_{2}\xrightarrow{ \varphi}B_{1}\otimes_{\Bbbk}A^{*}\otimes_{\Bbbk}B_{2} \tag{2.11}\]
with \(\varphi\) the linear map
\[\varphi=\tilde{\rho}^{B_{1}}\otimes_{\Bbbk}\mathrm{id}_{B_{2}}-\mathrm{id}_{ B_{1}}\otimes_{\Bbbk}\tilde{\lambda}^{B_{2}}. \tag{2.12}\]
Since this tensor product is defined as a limit, it is, as usual for coalgebras, left exact. Its monoidal unit is the bimodule \(A^{*}\).
We thus have two tensor products on \(A\)-bimod; the tensor product \(\otimes^{A}\) is left exact, while \(\otimes_{A}\) is right exact. This suggests to study their left and right adjoints, respectively. Doing so leads to the notions of internal Homs and internal coHoms; we introduce these for general linear monoidal categories:
**Definition 2.1**.: Let \(\mathcal{C}\) be a \(\Bbbk\)-linear monoidal category with tensor product \(\otimes\), and let \(Y,Z\mathop{\in}\mathcal{C}\). The _internal Hom_\(\operatorname{\underline{Hom}}^{\mathrm{r}}(Y,Z)\) is an object representing the functor \(\mathcal{C}^{\mathrm{opp}}\mathop{\rightarrow}\operatorname{vect}_{\Bbbk}\) that is given by \(X\mathop{\mapsto}\operatorname{Hom}(X\otimes Y,Z)\), i.e. we have functorial isomorphisms
\[\operatorname{Hom}(X\otimes Y,Z)\cong\operatorname{Hom}(X,\operatorname{ \underline{Hom}}^{\mathrm{r}}(Y,Z))\,. \tag{2.13}\]
Similarly a second internal Hom \(\operatorname{\underline{Hom}}^{\mathrm{l}}\) is obtained by keeping the second tensor factor [BoD, Remarks 2.1(3)], i.e.
\[\operatorname{Hom}(X\otimes Y,Z)\cong\operatorname{Hom}(Y,\operatorname{ \underline{Hom}}^{\mathrm{l}}(X,Z))\,. \tag{2.14}\]
The category \(\mathcal{C}\) is called _right closed_ if all internal Homs \(\operatorname{\underline{Hom}}^{\mathrm{r}}(Y,Z)\) exist, and it is called _left closed_ if all internal Homs \(\operatorname{\underline{Hom}}^{\mathrm{l}}(Y,Z)\) exist.
A necessary condition for the existence of the internal Homs is that the tensor product \(\otimes\) is right exact. The analogous notion for the left exact tensor product \(\otimes^{A}\) on \(A\)-bimod is the left adjoint of \(\otimes^{A}\). Again this can be generalized:
**Definition 2.2**.: Let \(\mathcal{C}\) be a \(\Bbbk\)-linear monoidal category with tensor product \(\otimes\), and let \(Y,Z\mathop{\in}\mathcal{C}\). Then the _internal coHom_ is the object characterized by functorial isomorphisms
\[\operatorname{Hom}(X,Y\otimes Z)\cong\operatorname{Hom}(\operatorname{\underline {coHom}}^{\mathrm{r}}(Z,X),Y)\,. \tag{2.15}\]
Similarly there is a second internal coHom \(\operatorname{\underline{coHom}}^{\mathrm{l}}\).
For the case of the monoidal category of \(A\)-bimodules considered here, internal Homs and coHoms do exist. Using the canonical pivotal structure of finite-dimensional vector spaces, they can be expressed in terms of the tensor products \(\otimes^{A}\) and \(\otimes_{A}\), respectively:
\[\operatorname{\underline{Hom}}^{\mathrm{r}}(X,Y)=Y\otimes^{A}X^{*}\quad \text{and}\quad\operatorname{\underline{coHom}}^{\mathrm{r}}(X,Y)=Y\otimes_{A }X^{*}, \tag{2.16}\]
as well as
\[\underline{\mathrm{Hom}}^{\mathrm{l}}(X,Y)=X^{*}\otimes^{A}Y\quad\text{and}\quad \underline{\mathrm{coHom}}^{\mathrm{l}}(X,Y)=X^{*}\otimes_{A}Y. \tag{2.17}\]
To show that the category \(A\)-bimod does admit internal Homs, we provide them explicitly. In fact, for any triple of algebras \(A\), \(B\) and \(C\) and bimodules \({}_{C}M_{B}\), \({}_{B}N_{A}\) and \({}_{C}X_{A}\) there is the adjunction
\[\mathrm{Hom}_{C,A}({}_{C}M_{B}\otimes_{B}{}_{B}N_{A},\,{}_{C}X_{A})\cong\mathrm{Hom}_{C,B}({}_{C}M_{B},\mathrm{Hom}_{A}(N_{A},X_{A}))\,, \tag{2.18}\]
where we indicate only one of the algebra actions on a bimodule in cases when it is more relevant than the other, e.g. \(N_{A}\) is the bimodule \({}_{B}N_{A}\) seen as a right \(A\)-module, and where on the right hand side, \(\mathrm{Hom}_{A}(N_{A},X_{A})\) is regarded as \((C,B)\)-bimodule via the left module structures of \(N_{A}\) and \(X_{A}\). We can thus read off that
\[\underline{\mathrm{Hom}}^{\mathrm{r}}(N,X)=\mathrm{Hom}_{A}(N_{A},X_{A})\,. \tag{2.19}\]
Similarly we find
\[\underline{\mathrm{Hom}}^{\mathrm{l}}(N,X)=\mathrm{Hom}_{A}({}_{A}N,{}_{A}X)\,. \tag{2.20}\]
Note that (2.19) exhibits the internal \(\mathrm{Hom}\) as a sub-bimodule
\[\mathrm{Hom}_{A}(N_{A},X_{A})\subset\mathrm{Hom}_{\Bbbk}(N,X) \tag{2.21}\]
of the bimodule obtained by equating the right \(A\)-actions on \(X\) and \(N\), and analogously for (2.20). In contrast, internal coHoms are defined as quotients. For finite-dimensional bimodules this fits with the description (2.16) and (2.17) of internal Homs and coHoms in terms of the two tensor products \(\otimes^{A}\) and \(\otimes_{\!A}\). Also note that according to (2.19) and (2.20), left and right internal Homs are, in general, non-isomorphic.
Finally we specialize (2.18) to the case that \(X\!=\!A^{*}\). Restricting further to the special case that \(M\) and \(N\) are \(A\)-bimodules we then get
\[\begin{split}\mathrm{Hom}_{A,A}(M\otimes_{A}N,G({}_{A}A_{A}))&\overset{(2.18)}{\cong}\mathrm{Hom}_{A,A}(M,\mathrm{Hom}_{A}(N_{A},(A^{*})_{A}))\\ &=\;\mathrm{Hom}_{A,A}(M,\mathrm{Hom}_{\Bbbk,A}(N,\mathrm{Hom}_{\Bbbk}(A,\Bbbk)))\\ &\cong\;\mathrm{Hom}_{A,A}(M,\mathrm{Hom}_{\Bbbk}(N\otimes_{A}A,\Bbbk))\\ &\cong\;\mathrm{Hom}_{A,A}(M,\mathrm{Hom}_{\Bbbk}(N,\Bbbk))\,=\,\mathrm{Hom}_{A,A}(M,G(N))\,.\end{split} \tag{2.22}\]
## 3 Grothendieck-Verdier duality
### First definitions
The observations in the preceding section motivate the following
**Definition 3.1**.: Let \(\mathcal{C}\!=\!(\mathcal{C},\otimes,1,\alpha,l,r)\) be a monoidal category.
1. A _dualizing object_ of \(\mathcal{C}\) is an object \(K\!\in\!\mathcal{C}\) such that for every \(y\!\in\!\mathcal{C}\) the functor \(x\!\mapsto\!\operatorname{Hom}(x\otimes y\), \(K)\) is representable by some object \(Gy\!\in\!\mathcal{C}\) and the so defined contravariant functor \(G\!:\,\mathcal{C}\!\to\!\mathcal{C}\) is an anti-equivalence. We thus have isomorphisms \[\varpi_{x,y}:\quad\operatorname{Hom}(x\!\otimes\!y,K)\xrightarrow{\cong} \operatorname{Hom}(x,Gy)\] (3.1) for \(x,y\!\in\!\mathcal{C}\). \(G\) is called the _duality functor with respect to \(K\)_.
2. A _Grothendieck-Verdier structure_ on \(\mathcal{C}\) is the choice of a dualizing object \(K\!\in\!\mathcal{C}\). A _Grothendieck-Verdier category_, or _GV-category_ for short, is a monoidal category together with a choice of a Grothendieck-Verdier structure.
While the functor \(G\) depends on the choice of \(K\), we suppress it in the notation. 2 This duality structure has been introduced in [Ba1] under the name \(*\)-autonomous category, compare also [Ba2, Ba3, Ba4]. The term Grothendieck-Verdier category dates back to [BoD]. GV-categories appear for instance in the study of quadratic algebras and operads [Ma] and also play an important role in linear logic, see e.g. [Se].
Footnote 2: It is worth noting that this (standard) definition of a GV-category differs from the one in Definition 2.3 of [HasL], a paper which also discusses the relation to linearly distributive categories.
The result (2.22) in Section 2 tells us that \(A\)-bimod is a GV-category with dualizing object \(A^{*}\), thereby giving a categorical meaning to this distinguished bimodule. This should not come as a surprise: it is known [FSS, Rem. 3.17] that this fact can be formulated in a Morita invariant manner. The dualizing object is in fact _structure_; if \(K\) is a dualizing object, then any object of the form \(K\otimes D\) with \(D\) an invertible object of \(\mathcal{C}\) is dualizing as well [BoD, Prop. 1.3]. It should, however, be appreciated that the dualizing object itself need not be invertible. GV-categories with non-invertible dualizing object indeed arise as representation categories of vertex operator algebras [GRW]. Below, we also give an example involving bimodules.
**Definition 3.2**.: An \(r\)_-category_ is a monoidal category for which the monoidal unit \(1\) is a dualizing object.
Clearly, every rigid monoidal category is an \(r\)-category. But there also exist non-rigid \(r\)-categories; for an example, see e.g. [BoD, Ex. 1.9].
In the case of categories \(A\)-bimod of bimodules, certain invertible bimodules can be obtained by twisting either the left or right action of \(A\) by an algebra automorphism \(\psi\) of \(A\). For the isomorphism class of the bimodule \(A_{\psi}\) obtained in this way, \(\psi\) only matters up to inner automorphisms. For instance, the group of outer automorphisms of the three-dimensional algebra \(B_{3}\!=\!\Bbbk[x,y]/\langle x^{2},y^{2},xy\rangle\) considered in (2.7) is isomorphic to \(\operatorname{GL}(2)\), acting on the two generators in the obvious way. This gives a simple example of a GV-category admitting different GV-structures.
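To spell out the twisting explicitly (a sketch, in one common convention; twisting the left action instead gives the other variant mentioned above): the bimodule \(A_{\psi}\) has underlying vector space \(A\), with actions
\[a\,.\,m\,.\,b\,:=\,a\,m\,\psi(b)\qquad\text{for}\quad a,b\in A\,,\ m\in A_{\psi}\,.\]
Such a bimodule is indeed invertible: one checks that \(A_{\psi}\otimes_{\!A}A_{\psi^{-1}}\cong A\cong A_{\psi^{-1}}\otimes_{\!A}A_{\psi}\) as bimodules.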
We also have already noticed that in this case the vector space dual \(B_{3}^{*}\) is not isomorphic to \(B_{3}\) as a bimodule, so \(B_{3}\) is not a Frobenius algebra. The following consideration shows that \(B_{3}^{*}\) is not even invertible: \(B_{3}\) has, up to isomorphism, a single simple module \(S\), which is one-dimensional and on which the elements \(x\) and \(y\) of \(B_{3}\) act as zero. A direct calculation shows that \(B_{3}^{*}\otimes_{B_{3}}B_{3}^{*}\cong S^{\oplus 4}\). Thus if \(B_{3}^{*}\) were invertible, then so would also be \(S^{\oplus 4}\), i.e. there would be a bimodule \(X\) such that \(S^{\oplus 4}\otimes_{B_{3}}X\cong B_{3}\). But this is impossible, simply because \(B_{3}\) is indecomposable as a module over itself. In fact, one can show that every invertible \(B_{3}\)-bimodule is, up to isomorphism, of the form \((B_{3})_{\psi}\) for an algebra automorphism \(\psi\) of \(B_{3}\).
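To indicate the direct calculation (a sketch, in the dual basis \(\{1^{*},x^{*},y^{*}\}\) of \(B_{3}^{*}\), with the standard actions \((a.f)(m)=f(ma)\) and \((f.a)(m)=f(am)\) on the linear dual, which coincide because \(B_{3}\) is commutative): the only non-vanishing actions of \(x\) and \(y\) on the dual basis are
\[x^{*}.\,x\,=\,y^{*}.\,y\,=\,1^{*}\,.\]
Consequently, in \(B_{3}^{*}\otimes_{B_{3}}B_{3}^{*}\) the defining relations \(f.a\otimes_{\Bbbk}g=f\otimes_{\Bbbk}a.g\) annihilate the classes of \(1^{*}\otimes 1^{*}\), \(1^{*}\otimes x^{*}\), \(1^{*}\otimes y^{*}\), \(x^{*}\otimes 1^{*}\) and \(y^{*}\otimes 1^{*}\), while the four classes \(x^{*}\otimes x^{*}\), \(x^{*}\otimes y^{*}\), \(y^{*}\otimes x^{*}\) and \(y^{*}\otimes y^{*}\) span the quotient and are annihilated by \(x\) and \(y\) from either side, whence \(B_{3}^{*}\otimes_{B_{3}}B_{3}^{*}\cong S^{\oplus 4}\).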
### Internal Homs
The existence of a GV-structure on a monoidal category implies much structure familiar from category theory. In particular, recall from Definition 2.1 that, for \({\cal C}\) a linear monoidal category and \(y,z\in{\cal C}\), the internal Hom \(\underline{\mbox{Hom}}^{\mbox{\scriptsize r}}(y,z)\in{\cal C}\) is an object representing the functor \(x\mapsto\mbox{Hom}(x\otimes y,z)\), as expressed by the functorial isomorphisms (2.13). Since \({\cal C}\) is not assumed to be braided, there is also a separate variant \(\underline{\mbox{Hom}}^{\mbox{\scriptsize l}}\) of the internal Hom in which the second tensor factor is kept, with functorial isomorphisms (2.14).
A GV-category admits internal Homs. In fact, from the defining properties of the functor \(G\) it follows that
\[\underline{\mbox{Hom}}^{\mbox{\scriptsize r}}(x,z)\cong G(x\otimes G^{-1}z) \qquad\mbox{and}\qquad\underline{\mbox{Hom}}^{\mbox{\scriptsize l}}(x,z)\cong G ^{-1}(Gz\otimes x)\,. \tag{3.2}\]
In particular we have
\[\underline{\mbox{Hom}}^{\mbox{\scriptsize r}}(x,K)\cong G(x\otimes G^{-1}(K)) \cong G(x\otimes 1)\cong G(x)\,, \tag{3.3}\]
i.e. there is an isomorphism
\[\underline{\mbox{Hom}}^{\mbox{\scriptsize r}}(-,K)\cong G \tag{3.4}\]
of functors. This explicitly shows how the duality functor \(G\) is determined in terms of the dualizing object \(K\).
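In the bimodule example of Section 2 this is just the hom-tensor adjunction (a routine check, included for illustration): for a finite-dimensional bimodule \(N\) one has
\[\underline{\mathrm{Hom}}^{\mathrm{r}}(N,A^{*})=\mathrm{Hom}_{A}\big{(}N_{A},\mathrm{Hom}_{\Bbbk}(A,\Bbbk)_{A}\big{)}\cong\mathrm{Hom}_{\Bbbk}(N\otimes_{\!A}A,\Bbbk)\cong N^{*}\,,\]
so that the duality functor is realized by the vector space dual, in accordance with (2.19).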
There are canonical evaluation morphisms
\[\underline{\mbox{ev}}^{\mbox{\scriptsize r}}_{x,y}\colon\ \ \underline{\mbox{Hom}}^{\mbox{ \scriptsize r}}(x,y)\otimes x\xrightarrow{}y\quad\mbox{ and }\quad\underline{\mbox{ev}}^{\mbox{ \scriptsize l}}_{x,y}\colon\ \ x\otimes\underline{\mbox{Hom}}^{\mbox{\scriptsize l}}(x,y) \xrightarrow{}y \tag{3.5}\]
for the internal Homs, given by the images of the identity morphism in the spaces \(\mbox{End}(\underline{\mbox{Hom}}^{\mbox{\scriptsize r}}(x,y))\) and \(\mbox{End}(\underline{\mbox{Hom}}^{\mbox{\scriptsize l}}(x,y))\) under the defining adjunctions (2.13) and (2.14), respectively. When combined with (3.4), this immediately gives left and right evaluation morphisms for the tensor product. In contrast, compatible coevaluations do not exist, in general; otherwise the tensor product on any abelian GV-category would be exact. Given the evaluations, associative multiplication morphisms
\[\begin{split}\underline{\mu}^{\mbox{\scriptsize r}}_{x,y,z}& :\ \ \underline{\mbox{Hom}}^{\mbox{\scriptsize r}}(y,z)\otimes\underline{\mbox{Hom}}^ {\mbox{\scriptsize r}}(x,y)\xrightarrow{}\underline{\mbox{Hom}}^{\mbox{ \scriptsize r}}(x,z)\\ \mbox{and}\qquad\underline{\mu}^{\mbox{\scriptsize l}}_{x,y,z}& :\ \ \underline{\mbox{Hom}}^{\mbox{\scriptsize l}}(x,y)\otimes\underline{\mbox{Hom}}^ {\mbox{\scriptsize l}}(y,z)\xrightarrow{}\underline{\mbox{Hom}}^{\mbox{ \scriptsize l}}(x,z)\end{split} \tag{3.6}\]
are obtained by standard arguments [EGNO, Sect. 7.9]. In particular, \(\underline{\mbox{Hom}}^{\mbox{\scriptsize l}}(z,z)\) and \(\underline{\mbox{Hom}}^{\mbox{\scriptsize r}}(z,z)\) have the structure of (unital) associative algebras.
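As a simple illustration in the bimodule example (a routine check, using (2.19)): for the regular bimodule one finds
\[\underline{\mathrm{Hom}}^{\mathrm{r}}(A,A)=\mathrm{Hom}_{A}(A_{A},A_{A})\cong A\,,\]
with the multiplication (3.6) realized by composition of right module endomorphisms, i.e. by the multiplication of \(A\).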
### A second tensor product
It is natural to introduce on a GV-category a second monoidal structure by
\[x\bullet y:=G^{-1}(Gy\otimes Gx)\,. \tag{3.7}\]
The two tensor products are, in general, different, and hence in general \(G(x\otimes y)\) and \(G(y)\otimes G(x)\) are not isomorphic, in contrast to the case of rigid categories. The double dual \(G^{2}\), however, is monoidal [BoD, Prop. 5.2]. It describes how cyclic invariance of invariant tensors of Grothendieck-Verdier categories is violated [BoD, Rem. 2.1(2)]:
\[\begin{split}\operatorname{Hom}(G^{2}(y)\otimes x,K)& \cong\operatorname{Hom}(G^{2}(y),Gx)\\ &\cong\operatorname{Hom}(x,Gy)\cong\operatorname{Hom}(x\otimes y,K)\,.\end{split} \tag{3.8}\]
There is also another variant of the second monoidal structure, given by
\[x\bullet^{\prime}y:=G(G^{-1}y\otimes G^{-1}x)\,. \tag{3.9}\]
The monoidal structure on the functor \(G^{2}\) provides a canonical identification of \(\bullet\) and \(\bullet^{\prime}\); accordingly we will henceforth identify \(\bullet\) and \(\bullet^{\prime}\).
_Remark 3.3_.: If \(\mathcal{C}\) is an \(r\)-category, then the monoidal products \(\otimes\) and \(\bullet\) are connected by canonical functorial morphisms
\[\varphi_{x,y}:\quad x\otimes y\xrightarrow{}x\bullet y\,. \tag{3.10}\]
The morphisms \(\varphi_{x,y}\) are functorial in \(x\) and \(y\) and compatible with the associativity and unit constraints for the two tensor products. In general they are not isomorphisms.
_Remark 3.4_.: As follows from (2.16), in the categories of bimodules considered in Section 2, the two tensor products \(\otimes\) and \(\bullet\) are realized as the \(\otimes_{\!A}\)- and \(\otimes^{\!A}\)-tensor products of bimodules, respectively. These have counterparts in the representation theory of vertex operator algebras. There are two commonly used constructions for the tensor product of modules over a vertex operator algebra. One of them, developed in the physics literature [Na, GK], considers, for a given pair of modules \(M\) and \(N\), two actions of the vertex operator algebra on the vector space \(M\otimes_{\!\mathbb{C}}N\) and performs a coequalizer construction resembling the one in the definition of \(\otimes_{\!A}\). The other one [HLZ], which is e.g. briefly summarized in Section 2.2 of [ALSW], starts instead with two actions on the vector space \(\operatorname{Hom}_{\mathbb{C}}(M\otimes_{\mathbb{C}}N,\mathbb{C})\) and restricts to the subspace on which they coincide, similarly as in the equalizer construction of \(\otimes^{\!A}\), and afterwards takes the dual of this subspace. We refer to [KR] for a more detailed account of the two approaches. We expect that in favorable circumstances the two constructions can be related through a Grothendieck-Verdier structure in a similar way as the tensor products \(\otimes_{\!A}\) and \(\otimes^{\!A}\) of bimodules; we intend to come back to this issue elsewhere.
Recall from Definition 2.2 that, for \((\mathcal{C},\otimes,1)\) a linear monoidal category and \(x,z\!\in\!\mathcal{C}\), the internal coHom \(\underline{\operatorname{coHom}}^{\mathrm{r}}(z,x)\!\in\!\mathcal{C}\) is the object characterized by the functorial isomorphisms (2.15). As seen in (3.2), for any GV-category internal Homs for \(\otimes\) exist, and so do internal coHoms for \(\bullet\). In terms of the two tensor products, they can be written as
\[\underline{\operatorname{coHom}}^{\mathrm{r}}(x,y)=y\otimes G^{-1}x\quad\text { and }\quad\underline{\operatorname{Hom}}^{\mathrm{r}}(x,y)=y\bullet Gx\,, \tag{3.11}\]
and as
\[\underline{\operatorname{coHom}}^{\mathrm{l}}(x,y)=Gx\otimes y\quad\text{ and }\quad \underline{\operatorname{Hom}}^{\mathrm{l}}(x,y)=G^{-1}x\bullet y\,, \tag{3.12}\]
generalizing the expressions (2.16) and (2.17).
It is an instructive exercise to derive the concrete formulas we gave for the monoidal category \(A\)-bimod for the tensor product \(\otimes^{A}\) and the internal (co)Hom from these general statements. Also, in analogy to (3.6) it follows immediately that for any object \(c\!\in\!\mathcal{C}\) the internal coHom \(\underline{\mathrm{coHom}}(c,c)\) is a counital coassociative coalgebra for the tensor product \(\bullet\). By duality, this tensor product comes with coevaluations.
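As an illustration of this exercise, here is a sketch for the tensor product (using finite-dimensionality, the canonical isomorphism \(M\cong M^{**}\) and the hom-tensor adjunction; both \(G\) and \(G^{-1}\) are realized by the vector space dual):
\[M\bullet N=G^{-1}\big{(}G(N)\otimes_{\!A}G(M)\big{)}=(N^{*}\otimes_{\!A}M^{*})^{*}\cong\mathrm{Hom}_{A}\big{(}(N^{*})_{A},M_{A}\big{)}\cong M\otimes^{A}N\,,\]
where the last isomorphism identifies \(\sum_{i}m_{i}\otimes_{\Bbbk}n_{i}\in M\otimes^{A}N\) with the right module map \(\varphi\mapsto\sum_{i}\varphi(n_{i})\,m_{i}\).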
As usual, the trivialization of double duals is interesting:
**Definition 3.5**.: A _pivotal_ GV-category is a GV-category together with natural isomorphisms
\[\psi_{x,y}:\quad\mathrm{Hom}(x\!\otimes\!y,K)\xrightarrow{\cong}\mathrm{Hom}( y\!\otimes\!x,K) \tag{3.13}\]
such that
\[\psi_{x,y}\circ\psi_{y,x}=\mathrm{id}\qquad\mathrm{and}\qquad\psi_{x\otimes y, z}\circ\psi_{y\otimes z,x}\circ\psi_{z\otimes x,y}=\mathrm{id}\,, \tag{3.14}\]
or, more explicitly when not suppressing the associator,
\[\psi_{x\otimes y,z}\circ\alpha_{x,y,z}\circ\psi_{y\otimes z,x}\circ\alpha_{y, z,x}\circ\psi_{z\otimes x,y}\circ\alpha_{z,x,y}=\mathrm{id}\,. \tag{3.15}\]
We have
**Proposition 3.6**.: [BoD, Prop. 6.7] _Pivotal structures on \(\mathcal{C}\) are in bijection to monoidal isomorphisms of functors \(\pi\!:\mathrm{id}\to G^{2}\) whose component \(\pi_{K}\!:K\xrightarrow{\cong}G^{2}(K)\) is given by the canonical isomorphism \(K\xrightarrow{\cong}G1\!=\!G^{2}G^{-1}1\xrightarrow{\cong}G^{2}K\)._
There also exists a notion of a _ribbon_ GV-category [BoD, Sect. 8]. It has been shown that ribbon GV-categories lead to _ansular functors_ [MW], and non-degeneracy conditions on the braiding have been identified which guarantee that they lead to modular functors [BrW]. It should also be appreciated that the representation category of a vertex operator algebra to which the HLZ tensor product theory applies has a natural structure of a ribbon GV-category [ALSW]. (This fits with the facts that a vertex operator algebra and its gradewise dual are not necessarily isomorphic as modules, and that the tensor product of vertex operator algebra modules need not be exact.)
### A symmetric formulation of the Eilenberg-Watts theorem
To connect the abstract considerations of Sections 3.1 - 3.3 with the particular case of \(A\)-bimod that was studied in Section 2, let us focus on the case of a GV-category that is in addition abelian.
**Lemma 3.7**.: _Let \(\mathcal{C}\) be an abelian category that has the structure of a GV-category with biadditive tensor product bifunctor \(\otimes\). Then \(\otimes\) is right exact and the second tensor product \(\bullet\) is left exact._
This statement applies in particular to the abelian category \(A\)-bimod of finite-dimensional bimodules over finite-dimensional algebras which we considered in Section 2. In this case we observe, as a by-product, the following application of dualities: the GV-structure on \(A\)-bimod allows one to rewrite the classical Eilenberg-Watts theorem in a form that treats left exact and right exact functors in a completely symmetric manner.
The classical Eilenberg-Watts theorem describes right exact functors between categories of modules: for finite-dimensional \(\Bbbk\)-algebras \(A\) and \(B\), every right exact \(\Bbbk\)-linear functor between the corresponding categories of finite-dimensional modules is naturally isomorphic to the functor of tensoring over \(A\) with a suitable bimodule. In the presence of the Grothendieck-Verdier structure this statement acquires a mirror image: left exact functors are analogously realized by the left exact tensor product \(\otimes^{A}\) with a bimodule, so that left exact and right exact functors enter on an equal footing.
## 4 Distributors
In a monoidal category with tensor product \(\otimes\), different bracketings of multiple tensor products are related through the associator \(\alpha\!=\!\alpha^{\otimes}\). (A pentagon identity for \(\alpha\) ensures coherence.) In a GV-category there is another associator \(\alpha^{\bullet}\) for multiple \(\bullet\)-tensor products, whose components are directly obtained from those of \(\alpha\) as images under the duality functor \(G\). As a consequence, they are isomorphisms and also \(\alpha^{\bullet}\) satisfies a pentagon equation. In addition there are similar coherent families of morphisms that relate multiple products involving both \(\bullet\) and \(\otimes\). These are best understood when viewed from the perspective of module categories, involving in particular the notion of a weak module functor. Recall that a module category is a categorification of the notion of a module over a ring, see e.g. [EGNO, Def. 7.1.1],
**Definition 4.1**.: Let \(\mathcal{M}\) and \(\mathcal{N}\) be left module categories over a monoidal category \(\mathcal{C}\). A _lax module functor_ from \(\mathcal{M}\) to \(\mathcal{N}\) is a functor \(G\colon\mathcal{M}\!\xrightarrow{}\!\mathcal{N}\) endowed with a natural family of morphisms \(g_{c,m}\colon c\vartriangleright G(m)\!\xrightarrow{}\!G(c\vartriangleright m)\) (called the lax _module constraint_ of \(G\)) such that the appropriate pentagon diagram is fulfilled. Analogously, for an _oplax module functor_\(F\colon\mathcal{M}\!\xrightarrow{}\!\mathcal{N}\) there is a corresponding coherent family of morphisms \(F(c\vartriangleright m)\!\xrightarrow{}\!c\vartriangleright F(m)\).
A lax or oplax module functor is also called a _weak module functor_.
A _strong module functor_ is a lax (or oplax) module functor for which the lax (or oplax) module constraints are isomorphisms.
The notion of internal Homs and coHoms generalizes from monoidal categories to module categories in a straightforward way. For \((\mathcal{M},\vartriangleright)\) a left module category over a monoidal category \((\mathcal{C},\otimes)\), one defines internal Homs, similarly as \(\underline{\mathrm{Hom}}^{\mathrm{r}}\) in Definition 2.1 for \(\mathcal{C}\) itself, as objects representing the functors \(c\!\mapsto\!\mathrm{Hom}_{\mathcal{M}}(c\vartriangleright m,n)\), giving rise to coherent isomorphisms
\[\mathrm{Hom}_{\mathcal{M}}(c\vartriangleright m,n)\cong\mathrm{Hom}_{\mathcal{ C}}(c,\underline{\mathrm{Hom}}^{\mathrm{r}}(m,n)) \tag{4.1}\]
for \(c\!\in\!\mathcal{C}\) and \(m,n\!\in\!\mathcal{M}\). Denote, for any object \(m\) in a left \((\mathcal{C},\otimes)\)-module category \(\mathcal{M}\), by \(L_{m}^{\vartriangleright}\colon{}_{c}\mathcal{C}\!\xrightarrow{}\!\mathcal{M}\), with \({}_{c}\mathcal{C}\) standing for the category \(\mathcal{C}\) regarded as a left module over itself, the strong \((\mathcal{C},\otimes)\)-module functor obtained by acting on \(m\), i.e.
\[L_{m}^{\vartriangleright}(c):=c\vartriangleright m \tag{4.2}\]
for \(c\!\in\!\mathcal{C}\). Then the _internal Hom_ \(\underline{\mathrm{Hom}}^{\mathrm{r}}(m,-)\) is the right adjoint of the module functor \(L_{m}^{\vartriangleright}\). Indeed, all strong module functors \(\mathcal{C}\!\xrightarrow{}\!\mathcal{M}\) are of this type:
**Lemma 4.2**.: _Let \(\mathcal{C}\) be a monoidal category, \(\mathcal{M}\) a left \(\mathcal{C}\)-module, and \(F\colon\mathcal{C}\!\xrightarrow{}\!\mathcal{M}\) a strong module functor. Then there exists an object \(m\!\in\!\mathcal{M}\) with a module natural isomorphism \(F\xrightarrow{\cong}L_{m}^{\vartriangleright}\). The object \(m\) is unique up to unique isomorphism._
Proof.: We set \(m\!:=\!F(1)\). From the module constraint \(f\) of \(F\) we obtain a natural isomorphism \(f_{c,1}\colon F(c)\!=\!F(c\otimes 1)\xrightarrow{\cong}c\vartriangleright \!F(1)\!=\!c\vartriangleright m\) for \(c\!\in\!\mathcal{C}\). From the pentagon axiom of the module constraint it follows that this family of isomorphisms constitutes a module natural isomorphism \(F\xrightarrow{\cong}L_{m}^{\vartriangleright}\). By a Yoneda argument, \(m\) is unique up to unique isomorphism.
If \((\mathcal{C},\otimes)\) is a GV-category, then the regular left module \({}_{c}\mathcal{C}\) does admit internal Homs (which are given by the formulas (3.2)). In contrast, for an arbitrary module category over a GV-category, the existence of such right adjoints is, of course, not guaranteed. Accordingly we give
**Definition 4.3**.: A _left GV-module category_ over a GV-category \(\mathcal{C}\) is a left module category \((\mathcal{M},\rhd)\) over \((\mathcal{C},\otimes)\) such that both the action functor \(L_{m}^{\rhd}\colon{}_{\mathcal{C}}\mathcal{C}\to\mathcal{M}\) defined by (4.2) and the functor \(c\rhd-:\mathcal{M}\to\mathcal{M}\) admit a right adjoint, for every \(m\in\mathcal{M}\) and every \(c\in\mathcal{C}\), respectively.
Note that the GV-structure of \(\mathcal{C}\) is in fact not used in this definition. The separate terminology 'GV-module category' is chosen to indicate that an additional condition is imposed.
Since any left adjoint functor preserves all colimits, for \(\mathcal{M}\) a GV-module category over a GV-category \(\mathcal{C}\) the functor \(L_{m}^{\rhd}\) is right exact for every \(m\in\mathcal{M}\). By definition, for any \(c\in\mathcal{C}\) the endofunctor \(c\rhd-\) has a right adjoint; denote this right adjoint by \(R_{c}\colon\mathcal{M}\to\mathcal{M}\). We have
**Proposition 4.4**.: _Let \((\mathcal{M},\rhd)\) be a left GV-module category over a GV-category \((\mathcal{C},\otimes)\). Then the bifunctor \(\blacktriangleright:\mathcal{C}\times\mathcal{M}\xrightarrow{}\mathcal{M}\) given by_
\[c\blacktriangleright m:=R_{Gc}(m) \tag{4.3}\]
_is left exact in each variable and defines a left module category structure over \((\mathcal{C},\bullet)\)._
Proof.: In the defining adjunction \(\operatorname{Hom}_{\mathcal{M}}(c\rhd n,m)\cong\operatorname{Hom}_{\mathcal{ M}}(n,R_{c}(m))\) of \(R_{c}\), the left hand side is manifestly left exact in \(m\in\mathcal{M}\), and thus so is \(\blacktriangleright\). Left exactness in \(c\in\mathcal{C}\) follows by right exactness of \(\rhd\) from the isomorphism
\[\operatorname{Hom}_{\mathcal{M}}(m,c\blacktriangleright n)=\operatorname{Hom}_{ \mathcal{M}}(m,R_{Gc}(n))\cong\operatorname{Hom}_{\mathcal{M}}(Gc\rhd m,n)\,. \tag{4.4}\]
Further, for \(m,n\in\mathcal{M}\) and \(b,c\in\mathcal{C}\) there are natural isomorphisms
\[\operatorname{Hom}_{\mathcal{M}}(m,b\blacktriangleright(c\blacktriangleright n)) \cong\operatorname{Hom}_{\mathcal{M}}(Gb\rhd m,c\blacktriangleright n) \tag{4.5}\] \[\cong\operatorname{Hom}_{\mathcal{M}}(Gc\rhd(Gb\rhd m),n)\] \[\cong\operatorname{Hom}_{\mathcal{M}}((Gc\otimes Gb)\rhd m,n)\] \[\cong\operatorname{Hom}_{\mathcal{M}}(m,(b\bullet c)\blacktriangleright n )\,.\]
From these isomorphisms the module constraints for \(\blacktriangleright\) follow by the Yoneda lemma. Moreover, the Yoneda embedding transports the pentagon identity for the module constraint of \(\rhd\) to the one for the module constraint of \(\blacktriangleright\).
Setting
\[L_{m}^{\blacktriangleright}(c):=c\blacktriangleright m \tag{4.6}\]
we thus obtain for any left GV-module category \((\mathcal{M},\rhd)\) over a GV-category \((\mathcal{C},\otimes)\) also a \((\mathcal{C},\bullet)\)-module functor \(L_{m}^{\blacktriangleright}\colon{}^{\bullet}_{c}\mathcal{C}\to\mathcal{M}\). Since \(L_{m}^{\blacktriangleright}\) is defined via the right adjoint of the \((\mathcal{M},\rhd)\)-module functor \(L_{m}^{\rhd}\), it has a left adjoint. Indeed we have natural isomorphisms
\[\operatorname{Hom}_{\mathcal{M}}(n,L_{m}^{\blacktriangleright}(c)) =\operatorname{Hom}_{\mathcal{M}}(n,c\blacktriangleright m)\cong \operatorname{Hom}_{\mathcal{M}}(Gc\rhd n,m) \tag{4.7}\] \[\cong\operatorname{Hom}_{\mathcal{C}}(Gc,\underline{\operatorname{ Hom}}^{\mathrm{r}}(n,m))\cong\operatorname{Hom}_{\mathcal{C}}(G^{-1}(\underline{ \operatorname{Hom}}^{\mathrm{r}}(n,m)),c)\,.\]
We call the left adjoint of the \((\mathcal{C},\bullet)\)-module functor \(L_{m}^{\blacktriangleright}\) the _internal coHom_ and denote it by \(\underline{\operatorname{coHom}}^{\mathrm{r}}(m,-)\). By definition, there are thus coherent isomorphisms
\[\operatorname{Hom}_{\mathcal{M}}(n,c\blacktriangleright m)\cong\operatorname{ Hom}_{\mathcal{C}}(\underline{\operatorname{coHom}}^{\mathrm{r}}(m,n),c) \tag{4.8}\]
for \(c\in\mathcal{C}\) and \(m,n\in\mathcal{M}\) and, by (4.7), \(\underline{\operatorname{coHom}}^{\mathrm{r}}(m,n)\cong G^{-1}(\underline{ \operatorname{Hom}}^{\mathrm{r}}(n,m))\) for \(m,n\in\mathcal{M}\).
It should be appreciated that the adjoint of a strong module functor is, in general, only a weak module functor. To understand this, it is convenient to consider linear categories and work with module profunctors. Following [Sh, Def. 2.1] we give
**Definition 4.5**.: Let \(\mathcal{M}\) and \(\mathcal{N}\) be left modules over a \(\Bbbk\)-linear monoidal category \(\mathcal{C}\). A \(\mathcal{C}\)_-module profunctor_ from \(\mathcal{M}\) to \(\mathcal{N}\) is a bilinear functor \(H\colon\mathcal{M}^{\mathrm{opp}}\times\mathcal{N}\to\mathrm{Vect}_{\Bbbk}\) to the category of \(\Bbbk\)-vector spaces, together with a coherent family of morphisms relating the two \(\mathcal{C}\)-actions on its arguments.

Module profunctors allow one to pass module functor structures through adjunctions: for adjoint functors between \(\mathcal{C}\)-module categories, lax module structures on the right adjoint correspond to oplax module structures on the left adjoint (Lemma 4.6). In particular the right adjoint of a strong module functor carries a canonical lax module structure, and module natural isomorphisms between strong module functors induce isomorphisms of the lax module structures on their right adjoints (Corollary 4.7).
The particular case of interest to us is that \({\cal C}\) is a GV-category and that \({\cal M}\!=\!{}_{c}{\cal C}\) is the regular left \({\cal C}\)-module, for which according to (3.11) we have \(\underline{\mbox{Hom}}^{\rm r}(x,y)\!=\!y\bullet G(x)\). In this case we obtain
**Lemma 4.9**.: _Let \({\cal C}\) be a \(\Bbbk\)-linear GV-category. Then there is a family_
\[{}^{\rm r}\delta^{x}_{c,y}:\quad c\otimes\underline{\mbox{Hom}}^{\rm r}(x,y)= c\otimes(y\bullet G(x))\xrightarrow{}(c\otimes y)\bullet G(x)=\underline{\mbox{Hom}}^{ \rm r}(x,c\otimes y) \tag{4.14}\]
_of morphisms, for \(c,x,y\!\in\!{\cal C}\), which endows the internal Hom functor \(\underline{\mbox{Hom}}^{\rm r}(x,-)\!=\!-\bullet G(x)\) with a lax module functor structure for \({\cal C}\) as the regular left \({\cal C}\)-module._
While \(\underline{\mbox{Hom}}^{\rm r}(x,-)\!=\!-\bullet G(x)\) is, in general, only a weak module functor for the right exact tensor product \(\otimes\), it is a strong module functor for the left exact tensor product \(\bullet\). Indeed, the module functor
\[I_{x}:=\underline{\mbox{Hom}}^{\rm r}(x,-):\quad{}^{\bullet}_{c}{\cal C} \xrightarrow{}{}^{\bullet}_{c}{\cal C}\,, \tag{4.15}\]
with \({}^{\bullet}_{c}{\cal C}\) the regular \(({\cal C},\bullet)\)-left module, is strong because the associator \(\alpha^{\bullet}\) provides us with isomorphisms
\[I_{y}(c\bullet x)=(c\bullet x)\bullet G(y)\xrightarrow{\cong}c\bullet(x \bullet G(y))=c\bullet I_{y}(x)\,. \tag{4.16}\]
Invoking Lemma 4.6, this implies that the left adjoint \(L_{y}\!=\!-\!\otimes\!y\!:{}^{\bullet}_{c}{\cal C}\!\to\!{}^{\bullet}_{c}{ \cal C}\) of \(I_{y}\) is an oplax module functor. Accordingly there are coherent morphisms
\[\begin{split}{}^{\mathrm{l}}\delta^{x}_{y,c}:\quad(G^{-1}(x)\bullet y) \otimes c&=L_{c}(G^{-1}(x)\bullet y)\\ &\xrightarrow{}G^{-1}(x)\bullet L_{c}(y)=G^{-1}(x)\bullet(y \otimes c)\,.\end{split} \tag{4.17}\]
Borrowing terminology from the theory of linearly distributive categories (see e.g. [CS, Pa, HasL]), we give
**Definition 4.10**.: The natural transformations \({}^{\mathrm{r}}\delta\) and \({}^{\mathrm{l}}\delta\) introduced in (4.14) and (4.17) are called the left and right _distributors_ of the GV-category \({\cal C}\), respectively.
In fact, as stated in the literature [Pa, HasL], GV-categories are the same as linearly distributive categories with negation. The interpretation of distributors in terms of GV-structures has, however, (to the best of our knowledge) not been established explicitly. In our approach, the definition of distributors via weak module functors implies
**Proposition 4.11**.: _The distributors \({}^{\mathrm{l}}\delta\) and \({}^{\mathrm{r}}\delta\) satisfy all the compatibility conditions with the unitors for \(1^{\otimes}\) and \(1^{\bullet}\) (four mixed triangle identities) and with the associators \(\alpha^{\otimes}\) and \(\alpha^{\bullet}\) (four mixed pentagon identities) that the distributors in a linearly distributive category have to obey._
Proof.: Since by definition a weak module functor satisfies an appropriate pentagon diagram, the two paths in the pentagon diagram that is built from the associator \(\alpha^{\otimes}\) and the weak module structure (4.14), which induces the left distributor, commute:
\[\begin{split}(c\otimes d)\otimes\underline{\mbox{Hom}}^{\rm r}(x, y)&=(c\otimes d)\otimes(y\bullet G(x))\\ &\xrightarrow{}(c\otimes(d\otimes y))\bullet G(x)=\underline{ \mbox{Hom}}^{\rm r}(x,c\otimes(d\otimes y))\,.\end{split} \tag{4.18}\]
Further, for all \(x_{1},x_{2}\!\in\!{\cal C}\) there is a canonical isomorphism
\[L_{x_{2}}\circ L_{x_{1}}\cong L_{x_{1}\otimes x_{2}} \tag{4.19}\]
of strong module functors. This isomorphism induces an isomorphism of the respective right adjoint weak module functors, and thus translates into a commuting diagram
\[c\otimes\left((y\bullet G(x_{2}))\bullet G(x_{1})\right)\,\smash{\mathop{ \longrightarrow}\limits}\,\,(c\otimes y)\bullet\left(G(x_{2})\bullet G(x_{1}) \right). \tag{4.20}\]
Two further pentagon diagrams, which are mirror images of (4.18) and (4.20) and involve the right distributor commute by the analogous arguments for the strong module functors \(L^{\bullet}_{x}\). Inspection shows that these pentagons are the same as those obeyed by the distributors in a linearly distributive category, as given e.g. in [CS, Sect. 2.1.3]. The triangle diagrams involving the unitors (see e.g. [CS, Sect. 2.1.2]) are immediate.
It can also be shown that the two additional pentagon identities that are valid in a linearly distributive category, each of which involves both \({}^{\mathrm{l}}\delta\) and \({}^{\mathrm{r}}\delta\), are fulfilled as well. However, the proof of this statement that we know of is less conceptual and considerably more indirect; we refrain from presenting it here. Also note that, as a consequence of their construction via _weak_ module functors, the distributors \({}^{\mathrm{r}}\delta\) and \({}^{\mathrm{l}}\delta\) are, in general, not isomorphisms. They _are_ isomorphisms if and only if the category is rigid, see Proposition 5.2.
## 5 Subcategories of rigid objects
Next we characterize, for right closed monoidal categories \({\mathcal{C}}\), those objects \(x\,{\in}\,{\mathcal{C}}\) for which \(\underline{\operatorname{Hom}}^{\operatorname{r}}(x,-)\) is a _strong_ module functor, i.e. for which the coherence morphisms \({}^{\mathrm{r}}\delta^{x}_{y,z}\) in (4.14) are _iso_morphisms for all objects \(y,z\,{\in}\,{\mathcal{C}}\). We start with the standard observation (compare e.g. [EGNO, Prop. 2.10.8]) that the monoidal equivalence of \({\mathcal{C}}\) and \(\operatorname{End}_{{\mathcal{C}}}({\mathcal{C}})\) directly implies
**Lemma 5.1**.: _Let \(({\mathcal{C}},\otimes)\) be a monoidal category and let \(x\,{\in}\,{\mathcal{C}}\). An object \(x^{\vee}\,{\in}\,{\mathcal{C}}\) is a right dual of \(x\) if and only if the functor \(-\otimes x^{\vee}\) is right adjoint to the functor \(-\otimes x\) as a module functor._
Note that it does not suffice to merely require that \(-\otimes x^{\vee}\) is right adjoint to \(-\otimes x\) as a linear functor (a counter example is given in [HalZ]). A statement analogous to Lemma 5.1 holds for left duals.
**Proposition 5.2**.: _Let \(({\mathcal{C}},\otimes)\) be a right closed monoidal category. For \(x\,{\in}\,{\mathcal{C}}\) the lax module functor \(\underline{\operatorname{Hom}}^{\operatorname{r}}(x,-)\colon{\mathcal{C}} \xrightarrow{}{\mathcal{C}}\) is a strong module functor if and only if \(x\) has a right dual object \(x^{\vee}\). Moreover, if this is the case, then \(x^{\vee}\,{\cong}\,\underline{\operatorname{Hom}}^{\operatorname{r}}(x,1)\) as objects, and_
\[\underline{\operatorname{Hom}}^{\operatorname{r}}(x,-)\cong-\otimes x^{\vee} \tag{5.1}\]
_as module functors._
Proof.: Assume that the object \(x\) has a right dual \(x^{\vee}\). Then the left \({\mathcal{C}}\)-module functor \(L_{x}\colon{\mathcal{C}}\xrightarrow{}{\mathcal{C}}\) with \(L_{x}(y)\,{=}\,y\otimes x\) has as a right adjoint the left \({\mathcal{C}}\)-module functor \(L_{x^{\vee}}\colon{\mathcal{C}}\xrightarrow{}{\mathcal{C}}\), in such a way that the unit and counit of the adjunction \(\operatorname{Hom}_{{\mathcal{C}}}(L_{x}(y),z)\,{\cong}\,\operatorname{Hom}_{{ \mathcal{C}}}(y,L_{x^{\vee}}(z))\) consist of module natural transformations. Thus it follows from Corollary 4.7 that \(\underline{\operatorname{Hom}}^{\operatorname{r}}(x,-)\,{=}\,{-}\,\otimes x^{\vee}\) as lax module functors. This, in turn, implies that \(\underline{\operatorname{Hom}}^{\operatorname{r}}(x,-)\) is in fact a strong module functor, with the associator of \({\mathcal{C}}\) as module constraint.
Conversely, assume that \(\underline{\operatorname{Hom}}^{\operatorname{r}}(x,-)\colon{\mathcal{C}} \xrightarrow{}{\mathcal{C}}\) is a strong module functor. By Lemma 4.2 it then follows that there is a natural isomorphism
\[\underline{\operatorname{Hom}}^{\operatorname{r}}(x,-)\cong-\otimes\underline{ \operatorname{Hom}}^{\operatorname{r}}(x,1) \tag{5.2}\]
of module functors. Thus \(-\otimes\underline{\operatorname{Hom}}^{\operatorname{r}}(x,1)\) is right adjoint to \(-\otimes x\) as a module functor, and so by Lemma 5.1 the object \(\underline{\operatorname{Hom}}^{\operatorname{r}}(x,1)\,{\in}\,{\mathcal{C}}\) is a right dual of \(x\)
Now the class of objects of \((\mathcal{C},\otimes)\) that admit a right dual is closed under the monoidal product \(\otimes\). Thus we have
**Corollary 5.3**.: _The full subcategory on those objects \(y\!\in\!\mathcal{C}\) for which \(\operatorname{\underline{Hom}}^{\mathrm{r}}(y,-)\) is a strong module functor is a unital monoidal subcategory of \(\mathcal{C}\)._
Note that, as for instance the category \(B_{3}\)-bimod of bimodules over the algebra (2.7) illustrates, for a GV-category the dualizing object \(K\) need not be contained in this subcategory. Also, clearly, there are analogues for left closed monoidal categories and for internal coHoms:
**Lemma 5.4**.: _Let \((\mathcal{C},\otimes)\) be a monoidal category._
1. _Assume that the monoidal category_ \((\mathcal{C},\otimes^{\mathrm{opp}})\) _obtained by reversing the monoidal structure is left closed. Then for every_ \(x\!\in\!\mathcal{C}\) _the left internal Hom_ \(\operatorname{\underline{Hom}}^{\mathrm{l}}(x,-)\) _is a lax right module functor, with module constraint_ \(\operatorname{\underline{Hom}}^{\mathrm{l}}(x,y)\otimes z\xrightarrow{\ }\operatorname{\underline{Hom}}^{\mathrm{l}}(x,y\otimes z)\)_. Moreover,_ \(\operatorname{\underline{Hom}}^{\mathrm{l}}(x,-)\) _is strong if and only if_ \(x\) _has a left dual._
2. _Assume that the opposite category_ \((\mathcal{C}^{\mathrm{opp}},\otimes)\) _is left closed. A right internal coHom of_ \((\mathcal{C},\otimes)\) _is a right internal Hom of_ \((\mathcal{C}^{\mathrm{opp}},\otimes)\)_. As a consequence,_ \(\operatorname{coHom}^{\mathrm{r}}(x,-)\) _is an oplax left module functor with module constraint_ \(\operatorname{coHom}^{\mathrm{r}}(x,y\otimes z)\!\xrightarrow{\ \ }y\otimes \operatorname{coHom}^{\mathrm{r}}(x,z)\)_. Moreover,_ \(\operatorname{coHom}^{\mathrm{r}}(x,-)\) _is strong if and only if_ \(x\) _has a left dual._
3. _Assume that the monoidal category_ \((\mathcal{C}^{\mathrm{opp}},\otimes^{\mathrm{opp}})\) _is left closed. Then the left internal coHom_ \(\operatorname{coHom}^{\mathrm{l}}\) _is an oplax right module functor with module constraint_ \(\operatorname{coHom}^{\mathrm{l}}(x,y\otimes z)\!\xrightarrow{\ \ }\operatorname{\underline{coHom}}^{\mathrm{l}}(x,y)\otimes z\)_._ _Moreover,_ \(\operatorname{coHom}^{\mathrm{l}}(x,-)\) _is strong if and only if_ \(x\) _has a right dual._
As an illustration, for categories of bimodules we get
**Lemma 5.5**.: _Let \(A\) be a finite-dimensional \(\Bbbk\)-algebra, and let \({}_{A}M_{A}\!\in\!A\)-bimod be a finite-dimensional \(A\)-bimodule. The following statements are equivalent:_
1. \(\operatorname{\underline{Hom}}^{\mathrm{r}}(M,-)\) _is a strong module functor._
2. \(M\) _has an_ \(\otimes_{A}\)_-right dual._
3. \(M_{A}\) _is projective as a right_ \(A\)_-module._
4. \({}_{A}(M^{*})\) _is injective as a left_ \(A\)_-module._
5. _For all_ \(X,Y\!\in\!A\)_-bimod _the distributor_ \(X\otimes_{A}(Y\otimes^{A}M^{*})\!\xrightarrow{\ \ }(X\otimes_{A}Y)\otimes^{A}M^{*}\) _is an isomorphism._
_Likewise, the following statements are equivalent:_
1. \(\operatorname{\underline{Hom}}^{\mathrm{l}}(M,-)\) _is a strong module functor._
2. \(M\) _has an_ \(\otimes_{A}\)_-left dual._
3. \({}_{A}M\) _is projective as a left_ \(A\)_-module._
4. \((M^{*})_{A}\) _is injective as a right_ \(A\)_-module._
5. _For all_ \(X,Y\!\in\!A\)-bimod _the distributor_ \((M^{*}\!\otimes^{A}\!X)\otimes_{\!A}\!Y\xrightarrow{}\!M^{*}\!\otimes^{A}\!(X \otimes_{\!A}\!Y)\) _is an isomorphism._
Proof.: (ii) follows from (i) with Proposition 5.2. The equivalence (ii)\(\,\Longleftrightarrow\,\)(iii) is standard (compare e.g. [nLab]). Since \((-)^{*}\) is an antiequivalence, (iv) is equivalent to (iii). Finally, equivalence of (v) and (i) follows again from Proposition 5.2.
We can also determine the duals explicitly. For instance, the \(\otimes_{\!A}\)-right dual of \({}_{A}M_{A}\) is
\[G^{-1}(M\otimes K)=(M\otimes_{\!A}A^{*})^{*}=\operatorname{Hom}_{A}(M_{A},A_{ A})\,, \tag{5.3}\]
in accordance with (2.19).
## 6 Distributors for bimodules
Let us now describe in detail the distributors for the case of categories of finite-dimensional bimodules over finite-dimensional \(\Bbbk\)-algebras considered in Section 2. Recall that we assume all algebras \(A\), \(B\), etc. as well as all bimodules to be finite-dimensional.
Making use of equalizer inclusions and coequalizer surjections, for any triple \(X,Y,Z\) of \((A,A)\)-bimodules we will define four \((A,A)\)-bimodule homomorphisms
\[\begin{split}&\partial^{\mathrm{l}}_{X,Y,Z}\,,\ \widetilde{\partial}^{\mathrm{l}}_{X,Y,Z}:\quad X\otimes_{A}(Y\otimes^{A}Z)\xrightarrow{\ \ }(X\otimes_{A}Y)\otimes^{A}Z\\ \text{and}\qquad&\partial^{\mathrm{r}}_{X,Y,Z}\,,\ \widetilde{\partial}^{\mathrm{r}}_{X,Y,Z}:\quad(X\otimes^{A}Y)\otimes_{A}Z\xrightarrow{\ \ }X\otimes^{A}(Y\otimes_{A}Z)\,.\end{split} \tag{6.1}\]

To construct them, denote by

\[\imath_{Y,Z}:\quad Y\otimes^{A}Z\hookrightarrow Y\otimes_{\Bbbk}Z\qquad\text{and}\qquad\imath^{\mathrm{l}}_{X,Y,Z}:\quad(X\otimes_{\Bbbk}Y)\otimes^{A}Z\hookrightarrow(X\otimes_{\Bbbk}Y)\otimes_{\Bbbk}Z \tag{6.3}\]

the equalizer inclusions through which the \(\otimes^{A}\)-tensor products are defined, and consider the \(\Bbbk\)-linear map

\[X\otimes_{\Bbbk}(Y\otimes^{A}Z)\xrightarrow{\ \ }(X\otimes_{\Bbbk}Y)\otimes_{\Bbbk}Z\,,\qquad x\otimes_{\Bbbk}\Big{(}\sum_{i}y_{i}\otimes_{\Bbbk}z_{i}\Big{)}\,\longmapsto\,\sum_{i}(x\otimes_{\Bbbk}y_{i})\otimes_{\Bbbk}z_{i}\,, \tag{6.4}\]
where \(x\in X\) and we write a generic element of \(Y\otimes^{A}Z\) as a finite sum \(\sum_{i}y_{i}\otimes_{\Bbbk}z_{i}\) with \(y_{i}\in Y\) and \(z_{i}\in Z\), obeying \(\sum_{i}y_{i}.a\otimes_{\Bbbk}z_{i}=\sum_{i}y_{i}\otimes_{\Bbbk}a.z_{i}\) for all \(a\in A\). The map (6.4) is cobalanced, i.e. \(\sum_{i}(x\otimes y_{i})\otimes a.z_{i}=\sum_{i}(x\otimes y_{i}.a)\otimes z_{i}=\sum_{i}(x\otimes y_{i}).a\otimes z_{i}\) for all \(a\in A\); hence by the universal property of equalizers there exists a unique homomorphism \(\gamma^{\mathrm{l}}_{X,Y,Z}\colon X\otimes_{\Bbbk}(Y\otimes^{A}Z)\xrightarrow{\ }(X\otimes_{\Bbbk}Y)\otimes^{A}Z\) such that the diagram
(6.5)
commutes. Since \(\imath^{\mathrm{l}}_{X,Y,Z}\) and \(X\otimes_{\Bbbk}\imath_{Y,Z}\) are injective and \(\alpha_{\rm vect}\) is an isomorphism, and since the images of \(\imath^{\mathrm{l}}_{X,Y,Z}\) and of (6.4) coincide, \(\gamma^{\mathrm{l}}_{X,Y,Z}\) is an isomorphism. Next consider the diagram
(6.6)
where
\[\pi_{X,Y}:\quad X\otimes_{\Bbbk}Y\twoheadrightarrow X\otimes_{A}Y \tag{6.7}\]
\[\text{and}\qquad\pi^{\mathrm{l}}_{X,Y,Z}:\quad X\otimes_{\Bbbk}(Y\otimes^{A}Z)\twoheadrightarrow X\otimes_{A}(Y\otimes^{A}Z)\]
are the coequalizer surjections from \(X\otimes_{\Bbbk}Y\) onto \(X\otimes_{A}Y\) and from \(X\otimes_{\Bbbk}(Y\otimes^{A}Z)\) onto \(X\otimes_{A}(Y\otimes^{A}Z)\), respectively. Note that since the functor \(-\otimes^{A}Z\) need not be right exact, \(\pi_{X,Y}\otimes^{A}Z\) need not be surjective. Explicitly we have
\[(\pi_{X,Y}\otimes^{A}Z)\circ\gamma^{\mathrm{l}}_{X,Y,Z}:\quad X\otimes_{\Bbbk}(Y\otimes^{A}Z)\xrightarrow{\ \ }(X\otimes_{A}Y)\otimes^{A}Z\,, \tag{6.8}\]
\[\sum_{i}x\otimes_{\Bbbk}(y_{i}\otimes_{\Bbbk}z_{i})\longmapsto\sum_{i}[x\otimes_{\Bbbk}y_{i}]\otimes_{\Bbbk}z_{i}\,,\]
where \([x\otimes_{\Bbbk}y]\) is the element of \(X\otimes_{A}Y\) represented by \(x\otimes_{\Bbbk}y\) with \(x\in X\) and \(y\in Y\). The map (6.8) is balanced, i.e. \(\sum_{i}[x.a\otimes_{\Bbbk}y_{i}]\otimes_{\Bbbk}z_{i}=\sum_{i}[x\otimes_{\Bbbk}a.y_{i}]\otimes_{\Bbbk}z_{i}\) for all \(a\in A\). By the universal property of the coequalizer defining the \(\otimes_{A}\)-tensor product, there is thus a unique map \(\partial^{\mathrm{l}}_{X,Y,Z}\colon X\otimes_{A}(Y\otimes^{A}Z)\xrightarrow{\ \ }(X\otimes_{A}Y)\otimes^{A}Z\) such that the diagram
(6.9)
commutes. Explicitly we have
\[\partial^{\mathrm{l}}_{X,Y,Z}:\quad\sum_{i}[x\otimes_{\Bbbk}(y_{i}\otimes_{\Bbbk}z_{i})]\mapsto\sum_{i}[x\otimes_{\Bbbk}y_{i}]\otimes_{\Bbbk}z_{i}\,. \tag{6.10}\]
The other three maps in (6.1) are constructed similarly, leading to
**Definition 6.1**.: Let \(X\), \(Y\) and \(Z\) be \((A,A)\)-bimodules.
The homomorphism \(\partial^{\mathfrak{l}}_{X,Y,Z}\colon X\otimes_{A}(Y\otimes^{A}Z)\mathop{ \rightarrow}\limits(X\otimes_{A}Y)\otimes^{A}Z\) is the one that is determined by the commutativity of the two squares (6.5) and (6.9).
The homomorphisms \(\widetilde{\partial}^{\mathfrak{l}}_{X,Y,Z}\colon X\otimes_{A}(Y\otimes^{A}Z) \mathop{\rightarrow}\limits(X\otimes_{A}Y)\otimes^{A}Z\) and \(\partial^{\mathfrak{r}}_{X,Y,Z},\widetilde{\partial}^{\mathfrak{r}}_{X,Y,Z}\colon( X\otimes^{A}Y)\otimes_{A}Z\mathop{\rightarrow}\limits X\otimes^{A}(Y \otimes_{A}Z)\) are the ones that are determined by the commutativity of the following pairs of squares, respectively (using self-explanatory notation similar to the one in (6.3) and (6.7)):
\(\bullet\)\(\widetilde{\partial}^{\mathfrak{l}}_{X,Y,Z}\):
(6.11)
\(\bullet\)\(\partial^{\mathfrak{r}}_{X,Y,Z}\):
(6.12)
\(\bullet\)\(\widetilde{\partial}^{\mathfrak{r}}_{X,Y,Z}\):
(6.13)
The so defined homomorphisms indeed provide us with the distributors:
**Proposition 6.2**.: _Let \(X\), \(Y\) and \(Z\) be \((A,A)\)-bimodules and let_
\[\begin{split}{}^{\mathrm{r}}\delta^{Z}_{X,Y}:\quad X\otimes_{A} \underline{\mathrm{Hom}}{}^{\mathrm{r}}(Z,Y)&\xrightarrow{ \underline{\mathrm{Hom}}{}^{\mathrm{r}}}(Z,X\otimes_{A}Y)\\ \text{and}\qquad{}^{\mathrm{l}}\delta^{Z}_{X,Y}:\quad\underline{ \mathrm{Hom}}{}^{\mathrm{l}}(Z,X)\otimes_{A}Y&\xrightarrow{ \underline{\mathrm{Hom}}{}^{\mathrm{l}}}(Z,X\otimes_{A}Y)\end{split} \tag{6.14}\]
_be the respective lax module functor structures on the module functors \(\underline{\mathrm{Hom}}{}^{\mathrm{r}}(Z,-)\) and \(\underline{\mathrm{Hom}}{}^{\mathrm{l}}(Z,-)\), as defined in (4.14) and (4.17). Then we have_
\[\partial^{\mathrm{r}}_{X,Y,Z}={}^{\mathrm{l}}\delta^{X^{*}}_{Y,Z}=\widetilde{ \partial}^{\mathrm{r}}_{X,Y,Z}\qquad\text{and}\qquad\partial^{\mathrm{l}}_{X, Y,Z}={}^{\mathrm{r}}\delta^{Z^{*}}_{X,Y}=\widetilde{\partial}^{\mathrm{l}}_{X,Y,Z}\,. \tag{6.15}\]
_Further, \(\partial^{\mathrm{r}}\) and \(\partial^{\mathrm{l}}\) are explicitly given by_
\[\begin{split}\partial^{\mathrm{r}}_{X,Y,Z}\big{(}\sum_{i}[(x_{i}\otimes_{\Bbbk}y_{i})\otimes_{\Bbbk}z]\big{)}&=\sum_{i}x_{i}\otimes_{\Bbbk}[y_{i}\otimes_{\Bbbk}z]\\ \text{and}\qquad\partial^{\mathrm{l}}_{X,Y,Z}\big{(}\sum_{j}[x\otimes_{\Bbbk}(y_{j}\otimes_{\Bbbk}z_{j})]\big{)}&=\sum_{j}[x\otimes_{\Bbbk}y_{j}]\otimes_{\Bbbk}z_{j}\,.\end{split} \tag{6.16}\]
Proof.: We show that \(\partial^{\mathrm{l}}_{X,Y,Z}\), \(\widetilde{\partial}^{\mathrm{l}}_{X,Y,Z}\) and \({}^{\mathrm{r}}\delta^{Z^{*}}_{X,Y}\) are all given by the second formula in (6.16). We have already obtained that formula for \(\partial^{\mathrm{l}}_{X,Y,Z}\) in (6.10). The same result is found for \(\widetilde{\partial}^{\mathrm{l}}_{X,Y,Z}\) when performing a calculation analogous to the one leading to (6.10), now for the diagrams (6.11). To derive the formula also for \({}^{\mathrm{r}}\delta^{Z^{*}}_{X,Y}\) we invoke Lemma 4.6 and Corollary 4.7 to compute the module functor structure from the associator \(\alpha^{\otimes}\) of the \(\otimes_{A}\)-tensor product. Consider the adjoint functors \(L_{Z^{*}}(-)\!=\!-\otimes_{A}Z^{*}\) and \(I_{Z^{*}}(-)\!=\!\underline{\mathrm{Hom}}{}^{\mathrm{r}}(Z^{*},-)\). The associator \(\alpha^{\otimes}\) provides a strong module functor structure on \(L_{Z^{*}}\) and can be used to construct a transformation
\[\begin{split}\theta^{\otimes}_{U,Y,X}:\quad\mathrm{Hom}(L_{Z^{*}}(U),Y)&\xrightarrow{\ \ }\mathrm{Hom}(L_{Z^{*}}(X\otimes_{A}U),X\otimes_{A}Y)\,,\\ \psi&\longmapsto(\mathrm{id}_{X}\otimes_{A}\psi)\circ(\alpha^{\otimes}_{X,U,Z^{*}})^{-1}\end{split} \tag{6.17}\]
that is natural in \(U\) and \(Y\) as well as dinatural in \(X\).
The internal Hom adjunction \(\phi_{V,W}\colon\mathrm{Hom}(L_{Z^{*}}(V),W)\xrightarrow{\cong}\mathrm{Hom} (V,I_{Z^{*}}(W))\) can then be used to construct a (di)natural transformation
\[\theta^{\delta}_{U,Y,X}:\quad\mathrm{Hom}(U,I_{Z^{*}}(Y))\xrightarrow{\ \ }\mathrm{Hom}(X\otimes_{A}U,I_{Z^{*}}(X\otimes_{A}Y)) \tag{6.18}\]
by setting \(\theta^{\delta}_{U,Y,X}\!:=\!\phi_{X\otimes_{A}U,X\otimes_{A}Y}\circ\theta^{ \otimes}_{U,Y,X}\circ\phi^{-1}_{U,Y}\). That is, \(\theta^{\delta}_{U,Y,X}\) is exactly such that the diagram
(6.19)
commutes. The transported lax module structure on \(I_{Z^{*}}\) is then
\[{}^{\mathrm{r}}\delta^{Z^{*}}_{X,Y}=\theta^{\delta}_{I_{Z^{*}}(Y),Y,X}(\mathrm{id}_{I_{Z^{*}}(Y)})\,. \tag{6.20}\]
Evaluating this map explicitly on generic elements in \(X\), \(Y\otimes^{A}Z\) and \(Z^{*}\), respectively, finally leads again to the expression given for \(\partial^{\mathrm{l}}_{X,Y,Z}\) in (6.16).
The formulas for \(\partial^{\mathrm{r}}_{X,Y,Z}\), \(\widetilde{\partial}^{\mathrm{r}}_{X,Y,Z}\) and \({}^{\mathrm{l}}\delta^{X^{*}}_{Y,Z}\) follow by analogous arguments. 
We also note the following properties of the distributors, which follow independently of Lemma 5.5:
**Proposition 6.3**.: _Let \(X\) be an \((A,A)\)-bimodule._
1. _If_ \(X\) _is left_ \(\otimes_{A}\)_-flat, then_ \(\partial^{\mathrm{l}}_{X,Y,Z}\) _is injective for all_ \(Y,Z\in A\)_-_bimod_._
2. _If_ \(X\) _is right_ \(\otimes^{A}\)_-flat, then_ \(\partial^{\mathrm{l}}_{Y,Z,X}\) _is surjective for all_ \(Y,Z\in A\)_-_bimod_._
3. _If_ \(X\) _is right_ \(\otimes_{A}\)_-flat, then_ \(\partial^{\mathrm{r}}_{Y,Z,X}\) _is injective for all_ \(Y,Z\in A\)_-_bimod_._
4. _If_ \(X\) _is left_ \(\otimes^{A}\)_-flat, then_ \(\partial^{\mathrm{r}}_{X,Y,Z}\) _is surjective for all_ \(Y,Z\in A\)_-bimod_._
Proof.: Part (1): Consider the second of the diagrams (6.11). The composite \(\widetilde{\gamma}^{\mathrm{l}}_{X,Y,Z}\circ(X\otimes_{A}\imath_{Y,Z})\) is cobalanced, and \(\partial^{\mathrm{l}}_{X,Y,Z}\) is the unique homomorphism determined by the universal property of the equalizer. Thus the kernel of \(\partial^{\mathrm{l}}_{X,Y,Z}\) equals the kernel of \(X\otimes_{A}\imath_{Y,Z}\). Now note that this map is obtained by applying the functor \(X\otimes_{A}-\) to the inclusion in the equalizer definition of \(\otimes^{A}\). Hence the kernel of \(X\otimes_{A}\imath_{Y,Z}\) is determined by torsion, so that \(\partial^{\mathrm{l}}_{X,Y,Z}\) is injective if \(X\) is left \(\otimes_{A}\)-flat.
Part (2): Recall that in the diagram (6.9) the composite \((\pi_{X,Y}\otimes^{A}Z)\circ\gamma^{\mathrm{l}}_{X,Y,Z}\) is balanced, and that \(\partial^{\mathrm{l}}_{X,Y,Z}\) is the unique homomorphism determined by the universal property of the coequalizer. Thus the image of \(\pi_{X,Y}\otimes^{A}Z\) equals the image of \(\partial^{\mathrm{l}}_{X,Y,Z}\). Note further that \(\pi_{X,Y}\otimes^{A}Z\) is obtained by applying the functor \(-\otimes^{A}Z\) to the surjection in the coequalizer definition of \(\otimes_{A}\). Hence the image of \(\partial^{\mathrm{l}}_{X,Y,Z}\) is determined by cotorsion, and thus \(\partial^{\mathrm{l}}_{X,Y,Z}\) is surjective if \(Z\) is right \(\otimes^{A}\)-flat.
The statements (3) and (4) are shown analogously.
It is worth pointing out that the failure of the distributors to be isomorphisms can concern both their kernel and their image. We mention two concrete examples: First, the three-dimensional algebra \(B_{3}=\Bbbk[x,y]/\langle x^{2},y^{2},xy\rangle\) considered in (2.7) has two non-isomorphic three-dimensional indecomposable modules, namely the projective \(P=B_{3}=\operatorname{span}_{\Bbbk}\{1,x,y\}\) and the injective \(I=P^{*}=\operatorname{span}_{\Bbbk}\{1^{*},x^{*},y^{*}\}\). We find that
\[\begin{split}\ker(\partial^{\mathrm{l}}_{I,P,P})=\operatorname{ span}_{\Bbbk}&\{[x^{*}\otimes_{\Bbbk}(x\otimes_{\Bbbk}x)]-[y^{*} \otimes_{\Bbbk}(y\otimes_{\Bbbk}x)]\,,\\ &[x^{*}\otimes_{\Bbbk}(x\otimes_{\Bbbk}y)]-[y^{*}\otimes_{ \Bbbk}(y\otimes_{\Bbbk}y)]\,,[x^{*}\otimes_{\Bbbk}(y\otimes_{\Bbbk}x)] \,,\\ &[x^{*}\otimes_{\Bbbk}(y\otimes_{\Bbbk}y)]\,,[y^{*}\otimes_{ \Bbbk}(x\otimes_{\Bbbk}x)]\,,[y^{*}\otimes_{\Bbbk}(x\otimes_{\Bbbk}y)] \}\\ \operatorname{im}(\partial^{\mathrm{l}}_{I,P,P})=\operatorname{ span}_{\Bbbk}\{[1^{*}\otimes_{\Bbbk}1]\otimes_{\Bbbk}x,[1^{*} \otimes_{\Bbbk}1]\otimes_{\Bbbk}y\}\,,\end{split} \tag{6.21}\]
i.e. \(\partial^{\mathrm{l}}_{I,P,P}\) is neither injective nor surjective. Second, for the algebra (2.4) of dual numbers, which up to isomorphism has two indecomposable modules \(S=\operatorname{span}_{\Bbbk}\{s\}\) and \(P=A_{2}=\operatorname{span}_{\Bbbk}\{1,x\}\), we find an example for which the distributor vanishes: \(\partial^{\mathrm{l}}_{S,P,S}=0\).
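The vanishing in the second example can be verified in a few lines (a sketch, writing \(s\) for the basis vector of \(S\) and using the explicit formula (6.16)): the subspace \(P\otimes^{A_{2}}S\subset P\otimes_{\Bbbk}S\) is spanned by \(x\otimes_{\Bbbk}s\), so the domain \(S\otimes_{A_{2}}(P\otimes^{A_{2}}S)\) is spanned by the class \([s\otimes_{\Bbbk}(x\otimes_{\Bbbk}s)]\), and
\[\partial^{\mathrm{l}}_{S,P,S}\big{(}[s\otimes_{\Bbbk}(x\otimes_{\Bbbk}s)]\big{)}=[s\otimes_{\Bbbk}x]\otimes_{\Bbbk}s=[s.x\otimes_{\Bbbk}1]\otimes_{\Bbbk}s=0\,,\]
since \(s.x=0\) in \(S\).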
Both the concrete examples and the general results studied in this note reveal that GV-dualities are natural structures and that they deserve further study, in particular in the realm of quantum algebra and quantum topology.
Acknowledgments:
J.F. is supported by VR under project no. 2022-02931. C.S. is supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under SCHW1162/6-1 and under Germany's Excellence Strategy - EXC 2121 "Quantum Universe" - 390833306. S.W. is supported by the Engineering and Physical Sciences Research Council (EPSRC) EP/V053787/1 and by the Alexander von Humboldt Foundation.
|
2309.15822 | Methods of self-assessment of confidence for secondary school maths
students, and the benefits or otherwise of using such methods | We first consider the method of scoring students' self-assessment of
confidence (SAC) used by Foster in [1], and find that with it reporting their
true confidence is not the optimal strategy for students. We then identify all
continuously differentiable scoring functions that both drive the student
towards the optimal strategy of truthful reporting of confidence and satisfy an
additional axiom ensuring motivation also to give correct answers to the
questions asked. We discuss the relative merits of some of them, and favour
splitting marks between a signed mark for correctness or not and a second mark
for SAC based on the apparent Shannon information on whether the answer is
correct, as the latter also imparts a useful life skill, namely avoiding being
overconfident.
We then turn to do further Bayesian analysis of the public dataset associated
with [1], showing that the effects of incorporating SAC into teaching vary both
by school and by quartile of ability in class. Finally we speculate on the
potential reasons for this and discuss how future research could identify and
avoid some of the causes. | Roger Sewell | 2023-09-25T19:23:47Z | http://arxiv.org/abs/2309.15822v1 | Methods of self-assessment of confidence for secondary school maths students, and the benefits or otherwise of using such methods.1
###### Abstract
We first consider the method of scoring students' self-assessment of confidence (SAC) used by Foster in [1], and find that with it reporting their true confidence is not the optimal strategy for students. We then identify all continuously differentiable scoring functions that both drive the student towards the optimal strategy of truthful reporting of confidence and satisfy an additional axiom ensuring motivation also to give correct answers to the questions asked. We discuss the relative merits of some of them, and favour splitting marks between a signed mark for correctness or not and a second mark for SAC based on the apparent Shannon information on whether the answer is correct, as the latter also imparts a useful life skill, namely avoiding being overconfident.
We then turn to do further Bayesian analysis of the public dataset associated with [1], showing that the effects of incorporating SAC into teaching vary both by school and by quartile of ability in class. Finally we speculate on the potential reasons for this and discuss how future research could identify and avoid some of the causes.
###### Contents
* 1 Introduction
* 2 Functions for self-assessment of confidence
* 2.1 Introduction
* 2.2 Solution to question 1
* 2.3 Solution to question 2
* 2.4 Solution to question 3
* 2.5 Discussion of question 4
* 3 Further analysis of the dataset of Foster's SAC experiment [1]
* 3.1 Introduction
* 3.2 The probabilistic model used
* 3.3 Priors used
* 3.4 Exploration of the model
* 3.5 Software testing and induced priors
* 3.6 Results
* 3.6.1 Differences in mean increase over class in log-odds of answer correct
* 3.6.2 Individual students' gains in log-odds of answer correct
* 3.6.3 Pretest to posttest changes in distribution over class of log-odds of answer correct
* 3.7 Discussion
## 1 Introduction
The self-assessment of one's own confidence (SAC) in the correctness of one's answer to a mathematical question provides the opportunity both to reflect on how sure one is and on whether more work is needed, and to receive realistic feedback on one's own accuracy. In [1], Foster investigated the use of a particular method of SAC in maths teaching and its effect on (standard unmodified) assessment results by comparing classes using SAC with those not using it. As in [1] we will assume the setting of a UK secondary school teaching students aged 12-18.
The specific method used in [1] is this: Ask the student, in addition to providing the answer to the question asked, to also provide an estimate \(q\) of how confident they are that their answer is correct, giving 10 for totally confident and 0 for totally unconfident. When the question is marked, a right answer scores \(q\) and a wrong answer scores \(-q\).
However, it is unclear here whether "totally unconfident" was understood by students to mean "I'm certain it's wrong" or "It's equally likely to be right as wrong". Because probability is the mathematically natural way of expressing such confidence, particularly in a Bayesian setting, we here instead assume a scale of 0 to 1, with 1 meaning "I'm certain it's correct" and 0 meaning "I'm certain it's wrong", and 0.5 meaning "I think it's equally likely to be right as wrong". (For practical application to students who don't understand non-integer numbers this can of course be rescaled in practice.)
It is claimed in [1] that in the long term students cannot systematically improve their SAC scores by over- or under-stating their true confidence levels, but this is in fact not true: indeed, a policy that gives maximum expected score \(s\) is to report confidence \(q\) to be 1 (or top of the allowed scale) for any true confidence value \(p\) greater than 0.5 and 0 (or bottom of the allowed scale) for any \(p<0.5\), since
\[E(s)=pq+(1-p)(-q),\]
the expectation of the score on this question, is maximised not by setting \(q=p\) but by setting
\[q=\begin{cases}1&(p\geq\frac{1}{2})\\ 0&(p<\frac{1}{2}).\end{cases}\]
Even so, it is entirely possible that this was not spotted by the students whose results are analysed in [1], and that it may therefore nonetheless have had the desired psychological effect.
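For illustration, here is a minimal numerical check (not part of [1]) that under this linear scoring rule a student maximises expected score by reporting an extreme confidence rather than the true value \(p\):

```python
import numpy as np

# Expected score under the linear rule: +q for a right answer, -q for a wrong one.
def expected_linear_score(p, q):
    return p * q + (1 - p) * (-q)

qs = np.linspace(0, 1, 101)
for p in (0.3, 0.6, 0.9):
    best_q = qs[np.argmax(expected_linear_score(p, qs))]
    print(f"p={p:.1f}: truthful E(s)={expected_linear_score(p, p):+.3f}, "
          f"best q={best_q:.2f} giving E(s)={expected_linear_score(p, best_q):+.3f}")
```

For \(p>0.5\) the printed optimum is \(q=1\), and for \(p<0.5\) it is \(q=0\), never the truthful \(q=p\).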
In the present paper, then, we seek to do two things: first, to investigate functions for scoring SAC responses with a view to finding ones with better properties, and second, to further analyse the data collected in [1] to look for further clues as to which students may be affected in which ways, and to suggest future investigations that might lead to improved learning outcomes for students.
## 2 Functions for self-assessment of confidence
### Introduction
Indeed, in regard to the first aim, we invite the reader to consider the following questions. We will denote by \(q\) a student's reported confidence that his answer to a given question is correct, and by \(p\) the probability that an answer for which the student reports confidence \(q\) is actually correct1. If the student has correctly judged his own ability, \(p\) will also be equal to his subjective probability that his answer is correct.
Footnote 1: If you are a frequentist, you might define \(p\) to be the fraction of answers which the student rates \(q\) that are actually correct.
Then, if the score \(s(q)\) for a right answer is \(f(q)\) and that for a wrong answer is \(f(1-q)\) (or more generally \(g(q)\)), and \(I\) denotes the open interval \((0,1)\):
1. Find all continuously differentiable functions \(f:(0,1)\to\mathbb{R}\) (if any exist) such that \[h(p,q)=\operatorname{E}s(q)=pf(q)+(1-p)f(1-q)\] satisfies, for \(J=I\): (a) \(\forall p\in I\), \(q\mapsto h(p,q)\) has a single strict local maximum on \(I\) at \(q=p\); (b) \(p\mapsto h(p,p)\) is strictly increasing on \(J\).
2. Are there any more such functions if we instead define \(J=\left(\frac{1}{2},1\right)\)?
3. Find all continuously differentiable functions \(f,g:(0,1)\to\mathbb{R}\) (if any such pairs exist) such that \[h(p,q)=\operatorname{E}s(q)=pf(q)+(1-p)g(q)\] satisfies conditions 1a,1b above for \(J=I\).
4. Now suppose that such functions \(f\) (or \(f,g\)) are to be used for SAC; discuss the merits and demerits of the various options.
Intuitively we want the student's reported confidence \(q\) to be trained to match his actual accuracy \(p\), hence the desire to impose condition 1a; however we also want to encourage right answers, and don't want \(h\) to be maximised at \(h(0,0)\) ("I am sure that my answer is wrong", and indeed it is) as one can always easily come up with a definitely wrong answer (e.g. "What is 2+2?" "2+2=Frog"), hence the desire to impose condition 1b.
### Solution to question 1
Question 1 is: Find all continuously differentiable functions \(f:(0,1)\to\mathbb{R}\) (if any exist) such that
\[h(p,q)=\operatorname{E}s(q)=pf(q)+(1-p)f(1-q)\]
satisfies, for \(J=I\):
(a) \(\forall p\in I\), \(q\mapsto h(p,q)\) has a single strict local maximum on \(I\) at \(q=p\);
(b) \(p\mapsto h(p,p)\) is strictly increasing on \(J\).
Now, fixing some \(p\), if there is a single strict local maximum, then
\[\frac{\partial}{\partial q}h(p,q)=0\]
has a single solution for \(q\) in \(I\), namely \(q=p\). Setting the derivative to zero at \(q=p\) thus gives
\[pf^{\prime}(p)=(1-p)f^{\prime}(1-p),\]
which we can achieve for example by setting \(f^{\prime}(p)=1/p\), which not only achieves an extremum of \(q\mapsto h(p,q)\), but one that is a maximum. Indeed, subject to multiplying by a constant and adding a constant, we have
\[h(p,q)=\frac{p\log(q)+(1-p)\log(1-q)+\log 2}{\log(2)},\]
a function shown in figure 1.
But now while still satisfying condition 1a we have freedom to multiply \(f^{\prime}\) by any continuously differentiable function \(m\) such that for all \(q\in I\), \(m(q)=m(1-q)>0\), and indeed any \(f\) satisfying condition 1a must arise in this way.
However, for all \(p\in I\) we also have \(h(p,p)=h(1-p,1-p)\), so \(p\mapsto h(p,p)\) cannot be strictly increasing, and no such functions exist that also meet condition 1b.
### Solution to question 2
For question 2 we relax condition 1b to restrict \(J\) to be \(\left(\frac{1}{2},1\right)\), so the last paragraph of the solution to question 1 no longer applies, and indeed \(f(p)=\log(2p)\) satisfies the required conditions, giving e.g.
\[h(p,q)=\frac{p\log(q)+(1-p)\log(1-q)+\log 2}{\log(2)},\]
already illustrated in figure 1. Alternatively, taking e.g. \(m(q)=2q(1-q)\) we get
\[h(p,q)=-p(1-q)^{2}-(1-p)q^{2}+1,\]
a function shown in figure 2.
Figure 1: Expectation of score using \(s(q)=\log_{2}(2q)\) for a correct answer and \(s(q)=\log_{2}(2(1-q))\) for a wrong answer. Note that the colour scale has been clipped below at -1, and that in the top left and bottom right corners \(h\) approaches \(-\infty\).
In regard to satisfying condition 1b with \(J=\left(\frac{1}{2},1\right)\), we note that for \(p\in J\),
\[\frac{d}{dp}h(p,p) =f(p)-f(1-p)+pf^{\prime}(p)-(1-p)f^{\prime}(1-p)\] \[=f(p)-f(1-p)+m(p)-m(1-p)\] \[=f(p)-f(1-p)\] \[=\int_{\frac{1}{2}}^{p}\frac{m(t)}{t}\,dt-\int_{\frac{1}{2}}^{1-p}\frac{m(t)}{t}\,dt\] \[=\int_{1-p}^{p}\frac{m(t)}{t}\,dt\] \[>0\]
so condition 1b with \(J=\left(\frac{1}{2},1\right)\) is satisfied by any function constructed thus.
In summary, the set of continuously differentiable functions satisfying conditions 1a and 1b for \(J=(\frac{1}{2},1)\) is precisely the set of integrals of pointwise products of the function \(p\mapsto\frac{1}{p}\) with functions \(m:I\to\mathbb{R}\) satisfying \(\forall x\in I,m(x)=m(1-x)>0\).
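As a quick numerical sanity check (a sketch of ours, using the base-2 version plotted in figure 1), one can verify that for \(f(q)=\log_{2}(2q)\) the expected score is maximised at \(q=p\) and that \(h(p,p)\) increases on \(\left(\frac{1}{2},1\right)\):

```python
import numpy as np

# h(p, q) for f(q) = log2(2q): (p*log(q) + (1-p)*log(1-q) + log 2) / log 2
def h(p, q):
    return (p * np.log(q) + (1 - p) * np.log(1 - q) + np.log(2)) / np.log(2)

qs = np.linspace(1e-3, 1 - 1e-3, 2001)
for p in (0.2, 0.5, 0.8):
    print(f"p={p}: argmax_q h(p,q) = {qs[np.argmax(h(p, qs))]:.3f}")  # condition 1a

ps = np.linspace(0.51, 0.99, 49)
print("h(p,p) strictly increasing on (1/2,1):", bool(np.all(np.diff(h(ps, ps)) > 0)))
```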
### Solution to question 3
Question 3 is: Find all continuously differentiable functions \(f,g:(0,1)\to\mathbb{R}\) (if any such pairs exist) such that
\[h(p,q)=\operatorname{E}s(q)=pf(q)+(1-p)g(q)\]
satisfies conditions 1a,1b above for \(J=I\).
Applying condition 1a we find that for all \(p\in I\),
\[g^{\prime}(p)=-\frac{p}{1-p}f^{\prime}(p).\]
Figure 2: Expectation of score using \(s(q)=2q-q^{2}\) for a correct answer and \(s(q)=1-q^{2}\) for a wrong answer.
Let us start then by picking any continuous \(f^{\prime}(p)\) and setting \(g^{\prime}(p)\) as thus constrained. We then have, for each \(p\in I\), an extremum of \(q\mapsto h(p,q)\) at \(q=p\). Inspection of
\[\frac{\partial}{\partial q}h(p,q)=pf^{\prime}(q)-(1-p)\frac{q}{1-q}f^{\prime}(q)\]
shows that the extremum being unique and a maximum is equivalent to \(f^{\prime}(q)\) being positive for all \(q\in I\). Checking
\[\frac{\partial}{\partial p}h(p,p)=f(p)-g(p)\]
we note that we also require \(f(p)>g(p)\) for all \(p\in I\), and as expected these conditions together then are equivalent to conditions 1a and 1b being satisfied. So long as the \(g\) resulting is bounded above we can achieve the last inequality by simply adding a constant to \(f\).
As an example we set
\[f^{\prime}(p)=2(1-p),\]
\[g^{\prime}(p)=-2p,\]
so that
\[f(p)=1-(1-p)^{2},\]
\[g(p)=-p^{2},\]
and thus
\[h(p,q)=p(2q-q^{2})+(1-p)(-q^{2}),\]
a function shown in figure 3, or after multiplying by \(2\) and subtracting \(1\) in figure 4.
In summary the set of continuously differentiable pairs of functions \((f,g)\) such that \(h\) given as in question 3 satisfies conditions 1a and 1b is precisely the set of integrals of functions \(f^{\prime}>0\) and \(g^{\prime}(p)=-\frac{p}{1-p}f^{\prime}(p)\) such that for all \(p\in I\), \(f(p)>g(p)\).
Figure 3: Expectation of score using \(s(q)=2q-q^{2}\) for a correct answer and \(s(q)=-q^{2}\) for a wrong answer.
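The same kind of check applies to the asymmetric example above with \(f(q)=1-(1-q)^{2}\) and \(g(q)=-q^{2}\); here \(h(p,p)=p^{2}\), so condition 1b holds on all of \(I\). A minimal sketch:

```python
import numpy as np

# h(p, q) = p*(2q - q^2) + (1 - p)*(-q^2) for the pair f(q) = 1-(1-q)^2, g(q) = -q^2
def h(p, q):
    return p * (2 * q - q**2) - (1 - p) * q**2

qs = np.linspace(0, 1, 1001)
for p in (0.1, 0.5, 0.9):
    print(f"p={p}: argmax_q h(p,q) = {qs[np.argmax(h(p, qs))]:.2f}, h(p,p) = {h(p, p):.2f}")
# h(p,p) = p^2, which is strictly increasing on the whole of I.
```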
### Discussion of question 4
We now discuss the merits and demerits of these various options for SAC. All achieve the primary goal of training students to correctly assess their own accuracy by satisfying condition 1a. However, using the version of figure 1 or 2 alone completely disregards whether the student has correctly answered the question, giving him full marks for a wrongly answered question that he reports as being definitely wrong.
One option is to separate out marks for getting the question right and marks for SAC. If for correctness one scores +1 for a correct answer and -1 for a wrong answer, while also scoring according to figure 1 for SAC, the effective combined score resulting is as in figure 5.
This has the disadvantage that if the student thinks the probability that his answer is correct is less than 0.2, it is slightly in his interest to change his answer to ensure it is wrong and then say that he's sure it's wrong.
However, a definite advantage of separating scores for correctness and SAC in this particular way is that with the method of figure 1 the student is driven to maximising the apparent Shannon information content about his own accuracy in his reported confidence. Moreover, maximising the SAC scores as in figure 1 also maximises the expected log probability that a student who marks his own paper randomly, marking correct with the relevant probability \(q\) and wrong with probability \(1-q\) on each question, will mark all the questions correctly. It also provides good training against overconfidence for the rest of life: if one is overconfident, one rapidly clocks up some very negative SAC scores.
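A minimal sketch of this favoured split mark is given below: a signed mark for correctness plus the Shannon-information SAC mark of figure 1, as combined in figure 5. The confidence \(q\) is assumed to lie strictly inside \((0,1)\) so that the logarithm is finite.

```python
import math

# Signed correctness mark (+1 / -1) plus the SAC mark s(q) = log2(2q) for a
# right answer and log2(2(1-q)) for a wrong one, as combined in figure 5.
def combined_mark(correct: bool, q: float) -> float:
    correctness = 1.0 if correct else -1.0
    sac = math.log2(2 * q) if correct else math.log2(2 * (1 - q))
    return correctness + sac

print(combined_mark(True, 0.9))    # confident and right:  +1.848
print(combined_mark(False, 0.9))   # confident and wrong:  -3.322
print(combined_mark(False, 0.5))   # unsure and wrong:     -1.000
```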
There are, of course, various psychological factors that could also be considered when making a choice between the various possible SAC scoring methods that do maximise score when confidence matches reality. For example, it can be somewhat discouraging when using the method of figure 1 to score \(-\infty\) for a question - but it also teaches an important lesson.
Finally we should point out the obvious SAC scoring method for multiple choice questions with only finitely many choices. In this instance requiring the student to assign probabilities \(q_{1},q_{2},...,q_{K}\) summing to 1 for the various choices, and then giving him \(\log(q_{k})\) as his score, where \(k\) is the correct answer, uniquely maximises his expected score when the \(q_{j}\) values match his subjective probabilities of answer
\(j\) being correct - the various deliberations above are only needed where it is infeasible to assign a probability distribution over all the possible answers, and instead concentrate on just right or wrong.
Figure 4: Expectation of score using \(s(q)=2(2q-q^{2})-1\) for a correct answer and \(s(q)=-2q^{2}-1\) for a wrong answer. Note that the colour scale has been clipped below at -1, and that \(h\) approaches -3 in the top left corner.
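Returning to the multiple-choice variant just described, a one-line sketch (with an illustrative function name) of that scoring rule is:

```python
import math

def mc_sac_score(probs, correct_index):
    # probs: the student's probabilities over the K options, summing to 1
    assert abs(sum(probs) - 1.0) < 1e-9
    return math.log(probs[correct_index])

print(mc_sac_score([0.7, 0.2, 0.1], correct_index=0))   # about -0.357
```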
## 3 Further analysis of the dataset of Foster's SAC experiment [1]
### Introduction
In [1] data was collected from 4 schools. In each school, students were divided into a control group who were taught by the traditional method, and an intervention group who were taught additionally using SAC. Before use of SAC was started, each school ran a formal test (not using SAC) on all the pupils; these pretests were the same for each student in a school, but differed between schools. During and after the period in which SAC was used on one subset, both subsets underwent further such tests (also not using SAC); the number of further such tests ranged from 1 to 3 between the different schools.
The various tests were marked out of various numbers \(N_{s,t}\) where \(s\) is the index number of the school and \(t\) is the test number in that school; the values of \(N_{s,t}\) ranged from 50 to 131.
Foster ran both a frequentist analysis, which found no significant difference overall between the performance gain in the control group and that in the intervention group, and a Bayesian analysis, investigating whether one could conclusively say that there was no difference in the performance gains. He used priors which attempted to be objective, being based on Jeffreys priors.
However, we would differ from Foster on a few philosophical points, as follows.
First, as explained in detail in [2], we believe frequentist analyses to be frequently misleading, and try to avoid them and use Bayesian analysis instead.
Second, we think it is usually a mistake to think that a prior can be "objective"; in particular a Jeffreys prior encapsulates the thought "I've chosen this experiment because its accuracy profile matches my prior on the variable being measured". For example, if I try to measure current by passing the current
Figure 5: Expected total score from scoring \(+1\) for a correct answer, -1 for a wrong answer, plus the SAC score from figure 1. Note that the colour axis runs from -2 to +2, and that the colour scale has been clipped below at -2, with \(h\) approaching \(-\infty\) at the top left and bottom right corners.
in question through a resistive metal wire and measuring the radiated heat power when it reaches steady state, and the error in that power measurement is Gaussian with a standard deviation independent of the radiated power, then the steady state radiated power is proportional to the square of the current (assuming resistance stays constant) until such point as the wire ruptures. Hence the measurement is more sensitive to small changes at higher current flows than low ones (\(\frac{dW}{dI}=2IR\) if \(W\) is the radiated power, \(I\) is the current, and \(R\) the resistance), and the Jeffreys prior will have probability density proportional to the absolute value of the current up to the rupture current, after which the Jeffreys prior will be zero. But that might not be my prior on \(I\) at all - it might just be that the only ammeter I could find was one that worked like this.
A prior can, however, be (at least relatively) uninformative if it avoids placing near-zero probability density on values of the unknown that are actually possible.
Third, given two teaching methods, we believe the probability that the difference in outcome between them will be _exactly_ zero will be zero. For this reason we usually avoid using model comparison that compares a model in which the difference is always exactly zero with a model that allows the two to be different; doing so amounts to setting a prior probability density on some parameter which has an infinite density spike at the point(s) representing no difference. Instead we prefer to instead ask what the posterior probability is that the result of method A is better than that of method B (and assume that one minus this value represents the probability that method A is worse than method B). Where we cannot clearly tell whether method A or method B is better, we note that fact, rather than either assuming or trying to prove that the two methods give exactly the same results.
Fourth, we believe that Bayes works best when given _all_ the data. Consequently we avoid selecting a subset of the data in one class that matches that in another class in some way, and instead use all the data, and look for relationships between the distributions over the classes and the differences in those distributions when teaching method is changed.
We therefore set up a rather different probabilistic model from that in [1], and carry out additional analyses to see whether we can get any further insights on what happened in Foster's very interesting experiment.
### The probabilistic model used
We use a hierarchical Bayesian generative model to describe the system.
By way of notation we define the following:
* \(m\) will denote the teaching method, 1 for traditional and 2 for incorporating SAC;
* \(s\) will denote the index number of the school, ranging from 1 to 4;
* \(t\) will denote the test number within that particular school; test 1 is the pretest, and tests 2 to 4 those done after various durations of applying the relevant teaching method;
* \(T_{s}\) will denote the total number of tests applied in school \(s\);
* \(N_{s,t}\) will denote the number of marks allocated in test \(t\) in school \(s\);
* \(u\) will denote the index number of the student in his method subset of his school;
* \(n_{m,s,t,u}\) will denote the number of marks gained in test \(t\) by student \(u\) of method group \(m\) in school \(s\);
* \(U_{m,s}\) will denote the number of students being taught by method \(m\) in school \(s\);
* \(p_{m,s,t,u}\) will denote the probability that student \(u\) of the method \(m\) subset in school \(s\) answers a question in test \(t\) correctly; we assume that this is the same for all questions in that test;
* \(K_{m,s,t}\) will denote the number of mixture components in the prior on the various \((p_{m,s,t,u})_{u=1,...,U_{m,s}}\);
* \(\lambda\) will denote the parameter of the integer-valued exponential prior on each of the \(K_{m,s,t}\).
* \(\alpha_{m,s,t,k},\beta_{m,s,t,k}\) will denote the parameters of the \(k\)th mixture component Beta distribution of the prior on the various \((p_{m,s,t,u})_{u=1,...,U_{m,s}}\);
* \(k_{m,s,t,u}\) will be the mixture component number applicable in a given sample from the model to student \(u\) of the method \(m\) subset in school \(s\) for test \(t\);
* \(q_{m,s,t,k}\) will be the mixing probability for component \(k\);
* \(\gamma_{m,s,t,k}\) will be the corresponding parameter for the Dirichlet prior on \((q_{m,s,t,k})_{k=1,...,K_{m,s,t}}\).
* \(\mu\) is a parameter of the prior on the vector \((\gamma_{m,s,t,k})_{k=1,...,K}\) given \(K_{m,s,t}\).
* \(\kappa,a,b\) are the parameters of the proBeta prior on the various \(\alpha_{m,s,t,k},\beta_{m,s,t,k}\).
We assume that a student's marks in a test are binomially distributed with parameters \(N_{s,t},p_{m,s,t,u}\). We put a prior on each element of the set \(\{p_{m,s,t,u}:u\in\{1,...,U_{m,s}\}\}\) that is a mixture of \(K_{m,s,t}\) Beta distributions with mixing probabilities \(q_{m,s,t,k}\geq 0\) summing to 1 over \(k=1,...,K_{m,s,t}\). The parameters of these Beta distributions being \(\alpha_{m,s,t,k},\beta_{m,s,t,k}\), we put a proBeta prior2 on those given for all values of \((m,s,t,k)\) independently by
Footnote 2: The conjugate distribution to the joint parameters \((\alpha,\beta)\) of the Beta distribution.
\[P(\alpha,\beta|\kappa,a,b)\propto\left(\frac{\Gamma(\alpha+\beta)}{\Gamma( \alpha)\Gamma(\beta)}\right)^{\kappa}a^{\kappa(\alpha-1)}b^{\kappa(\beta-1)},\]
for \(\alpha,\beta>0\) where \(\kappa>0,a>0,b>0,a+b<1\). We put a Dirichlet prior on the vector \((q_{m,s,t,k})_{k=1,...,K_{m,s,t}}\) with parameter vector \((\gamma_{m,s,t,k})_{k=1,...,K_{m,s,t}}\), given for each value of \((m,s,t)\) independently by
\[P(q|\gamma)=\frac{1}{\sqrt{K}}\frac{\Gamma(\sum_{k=1}^{K}\gamma_{k})}{\prod _{k=1}^{K}\Gamma(\gamma_{k})}\prod_{k=1}^{K}q_{k}^{\gamma_{k}-1}\]
for \(\sum_{k=1}^{K}q_{k}=1\) and all \(q_{k}>0\). We put a prior on the vector \((\gamma_{m,s,t,k})_{k=1,...,K}\) given \(K_{m,s,t}\) putting all the probability on the single value \(\mu/K_{m,s,t}\). Finally we put an integer-valued exponential prior on the number of mixture components, i.e. on each \(K_{m,s,t}\) independently, given by
\[P(K|\lambda)=(1-\lambda)\,\lambda^{K-1}\]
for \(\lambda\in(0,1)\) and \(K=1,2,...\).
Thus our probabilistic model may be defined by the following hierarchical equations:
\[\kappa,a,b,\lambda,\mu\text{ are constants defining the prior}\]
\[\forall(m,s,t),\ K=K_{m,s,t}:P(K|\lambda)=(1-\lambda)\,\lambda^{K-1}\ \ (K=1,2,3,...)\]
\[\forall(m,s,t,k),\ (\alpha,\beta)=(\alpha_{m,s,t,k},\beta_{m,s,t,k}):P(\alpha, \beta|\kappa,a,b)\propto\left(\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha) \Gamma(\beta)}\right)^{\kappa}a^{\kappa(\alpha-1)}b^{\kappa(\beta-1)}\ \ (\alpha,\beta>0)\]
\[\forall(m,s,t,k),\gamma_{m,s,t,k}=\mu/K_{m,s,t}\]
\[\forall(m,s,t),q=(q_{m,s,t,k})_{k=1,...,K_{m,s,t}},\gamma=(\gamma_{m,s,t,k})_{ k=1,...,K_{m,s,t}}:\ P(q|\gamma)=\frac{1}{\sqrt{K}}\frac{\Gamma(\sum_{k=1}^{K} \gamma_{k})}{\prod_{k=1}^{K}\Gamma(\gamma_{k})}\prod_{k=1}^{K}q_{k}^{\gamma_{k }-1}\ (q_{k}>0,\sum_{k=1}^{K}q_{k}=1)\]
\[\forall(m,s,t,u),k=k_{m,s,t,u},q=(q_{m,s,t,k})_{k=1,...,K_{m,s,t}}:\ P(k|q)=q_{k}\ \ (k=1,...,K_{m,s,t})\]
\[\forall(m,s,t,u),p=p_{m,s,t,u},\alpha=\alpha_{m,s,t,k_{m,s,t,u}},\beta=\beta_{m,s,t,k_{m,s,t,u}}:\ P(p|\alpha,\beta)=\frac{\Gamma(\alpha+\beta)}{\Gamma( \alpha)\Gamma(\beta)}p^{\alpha-1}(1-p)^{\beta-1}\ \ (0<p<1)\]
\[\forall(m,s,t,u),n=n_{m,s,t,u},N=N_{s,t},p=p_{m,s,t,u}:\ P(n|N,p)=\frac{N!}{n!(N-n)!}p^{n}(1-p)^{N-n}\ \ (n=0,1,...,N).\]
### Priors used
We set the following values of the top level parameters:
\[\kappa=0.01;a=b=0.4525;\lambda=0.5;\mu=1.\]
Thus we put most of the density of the Dirichlet prior on the mixture coefficients near the edges of the \(K\)-simplex (any individual mixture component is likely to have small weight), and the proBeta prior on the \((\alpha,\beta)\) pairs has the density shown in figure 6.
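For illustration only, the following sketch draws one forward sample of the marks for a single (method, school, test) cell under the model of section 3.2 with the prior settings above. The grid-based draw from the proBeta density and all function names are our own simplifications, not the software actually used for the analysis.

```python
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)
kappa, a, b, lam, mu = 0.01, 0.4525, 0.4525, 0.5, 1.0   # prior settings above

def sample_proBeta(grid=np.linspace(0.05, 20.0, 300)):
    # Crude draw from the proBeta density by discretising (alpha, beta) on a grid.
    A, B = np.meshgrid(grid, grid, indexing="ij")
    logp = (kappa * (gammaln(A + B) - gammaln(A) - gammaln(B))
            + kappa * (A - 1) * np.log(a) + kappa * (B - 1) * np.log(b))
    w = np.exp(logp - logp.max())
    idx = rng.choice(w.size, p=(w / w.sum()).ravel())
    return A.ravel()[idx], B.ravel()[idx]

def sample_class_marks(U, N):
    K = rng.geometric(1 - lam)                      # geometric prior on no. of components
    q = rng.dirichlet(np.full(K, mu / K))           # mixing probabilities
    ab = [sample_proBeta() for _ in range(K)]       # Beta parameters per component
    k = rng.choice(K, size=U, p=q)                  # component for each student
    p = np.array([rng.beta(*ab[ki]) for ki in k])   # P(question correct) per student
    return rng.binomial(N, p)                       # marks out of N for each student

print(sample_class_marks(U=30, N=100))
```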
### Exploration of the model
We explore the posterior distribution of the model given the observed data \(U,T,N,n\) and the priors \(\kappa,a,b,\lambda,\mu\) (i.e. we take joint samples from the values of all the other parameters) using Markov chain Monte-Carlo sampling (see [3]). Specific details are as follows:
* We use Gibbs sampling, visiting the variables palindromically moving from bottom to top of the model and back again, repeatedly.
* We integrate out all the variables \(p_{m,s,t,u}\) to increase mobility, then at the end resample them all given their parents and children in the model to complete our sample set.
* The proBeta distribution is log-concave, so we use adaptive rejection sampling[4] along orthogonal directions in \((\alpha,\beta)\) space, one direction being on a straight line through the origin and our current point.
Figure 6: proBeta prior used for each \((\alpha_{m,s,t,k},\beta_{m,s,t,k})\) pair.
* For the methods needed to sample the remaining conditional distributions that occur, see [5].
Having drawn a suitably large number (e.g. 10,000) samples of the vector of all variables in the system, we can now ask detailed questions of the model. For example, we can easily produce, for each sample and each student, the increase in log-odds of getting a question right from pretest to one of the posttests, or for each sample the average such gain in each (school, posttest, method group) subset. These samples then represent the posterior distribution of each such quantity, and allow us to calculate, for each school and posttest, the probability that that gain is bigger for one method than for the other, and the average value of that gain.
Alternatively, we can repeat the above analysis but restricting attention to the upper half or top quartile of each class, or to the lower half or bottom quartile, to see whether the change in method works better with strong or weak students.
Alternatively, we can restrict attention e.g. to a particular school and posttest combination, and plot for each student the probability that their posttest log-odds are higher than their pretest log-odds against their position in class at pretest.
Numerous other inferences can similarly be drawn, and we illustrate some of them in section 3.6 below.
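As an illustration of the kind of post-processing involved (with invented array names and dummy data in place of the real posterior samples), the quantities reported in tables 1-3 can be obtained from per-sample, per-student probabilities as follows:

```python
import numpy as np

def logodds(p):
    return np.log(p) - np.log1p(-p)

def gain_summary(p_nosac, p_sac):
    # p_*: samples of shape (n_samples, n_students, 2), holding each student's
    # probability of answering correctly in the pretest (0) and posttest (1).
    gain1 = (logodds(p_nosac[..., 1]) - logodds(p_nosac[..., 0])).mean(axis=1)
    gain2 = (logodds(p_sac[..., 1]) - logodds(p_sac[..., 0])).mean(axis=1)
    diff = gain2 - gain1
    return {"P(SAC gain > no-SAC gain)": float((diff > 0).mean()),
            "E(SAC gain - no-SAC gain) in nats": float(diff.mean())}

# Dummy posterior samples, just to show the intended shapes.
rng = np.random.default_rng(1)
print(gain_summary(rng.uniform(0.2, 0.9, (10_000, 35, 2)),
                   rng.uniform(0.2, 0.9, (10_000, 35, 2))))
```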
### Software testing and induced priors
To confirm correct operation of the software we ran it on synthetic data, getting the inference shown in the top left plot of figure 7 (of the same type as those shown in figures 10 and 11); in this case the truth is known and is also shown, lying comfortably within the inferred distribution. To visualise the induced prior on this inference, we ran the software using two classes of 70 students each doing a pretest and a posttest each consisting of zero questions; this gave the top right plot of figure 7 for pretest on the "non-SAC" class, showing that the induced prior on the distribution over the class is wide; similar plots (not shown) were obtained for the other class and on both posttests. Under the prior the probability of an increase in gain of log-odds of getting a question correct due to SAC was 0.495 (using only 10,000 samples the difference from 0.5 is not surprising), and the expected increase in gain was 0.004 nats (for comparison with tables 1, 2, and 3). The bottom left plot of figure 7 shows that under the prior the probability of log-odds of getting a question correct increasing from pretest to posttest is almost exactly 0.5 for every student (for comparison with figure 8), and typical gains for individual students are very small as shown in the bottom right plot (for comparison with figure 9).
Thus the software appears to be correctly working and the prior is inducing sensible distributions on dependent variables.
### Results
For simplicity we only report results comparing the pretest with the final posttest. We will use the word "class" to mean one of the subsets of pupils in a single school who were taught by the same method (SAC or no SAC), even though some of these consisted of several classes as commonly understood.
#### 3.6.1 Differences in mean increase over class in log-odds of answer correct
We first report on the class as a whole, i.e. taking all those at a school given SAC compared with all those not given SAC. Table 1 shows that none of these results reaches the 0.05 or 0.95 probability level, and that the increases in gain from pretest to posttest in log-odds of getting a correct answer to a question are all less than 0.1 nats in magnitude. These results are not surprising given the findings in [1].
However, just saying this doesn't tell us all that this data contains. If we instead split each class into
Figure 7: Plots confirming correct operation of the software and sensible settings for the priors after running the Markov chain Monte-Carlo system for 10,000 samples. The top left plot uses synthetic data, for which the truth is known; the true cumulative distribution function (cdf) of the probability of a student in this hypothetical class getting a question correct is shown as the black line, while the posterior distribution of this cdf is shown as the background colour plot. The top right plot shows similar inference made using a test consisting of zero questions, so that it reflects the induced prior on this cdf. The bottom left plot shows that when two such tests consisting of zero questions are applied to a class, the probability of the inter-test gain in log-odds of individual students getting a question correct being positive is almost exactly 0.5 for every student, and the bottom right plot shows that the posterior expectation of gain in log-odds is very close to zero. In both the bottom plots the “score on first test” is just set to the index-number of the student, as obviously all score zero on a test consisting of zero questions.
two, with those scoring most in pretest in the "a" part and those scoring least in the "b" part, we get table 2. Now we see that there are quite large effects emerging, but that these differ in the various schools: in school 2 introduction of SAC favours the top half of the class, while in school 4 it favours the bottom half of the class. If we divide into quartiles we get table 3, where we see similar trends.
#### 3.6.2 Individual students' gains in log-odds of answer correct
We can also look at what happens with individual students in these classes, and at how the way they fared varied with their rank in the pretest.
These plots (figures 8 and 9) show a degree of consistency with the results in section 3.6.1, in that in school 2 introduction of SAC appears to have been bad for those at the bottom of the class, while in school 4 introduction of SAC appears to have been bad for those at the top of the class and good for those at the bottom. We note that this is not _just_ an observation made after averaging out across students, but appears to be consistent within classes for those at the top and bottom.
\begin{table}
\begin{tabular}{c|cc|cc} School & \multicolumn{2}{c|}{\(P\)(method 2 gain \(>\) method 1 gain)} & \multicolumn{2}{c}{E(method 2 gain \(-\) method 1 gain) (nats)} \\ & a & b & a & b \\ \hline
1 & 0.796 & 0.270 & +0.080 & -0.058 \\
2 & 0.979 & 0.081 & +0.226 & -0.252 \\
3 & 0.392 & 0.091 & -0.031 & -0.138 \\
4 & 0.005 & 0.946 & -0.400 & +0.202 \\ \hline \end{tabular}
\end{table}
Table 2: For class divided into two by pretest score (a high, b low), posterior probabilities that the gain in log-odds of answer correct from pretest to posttest is better with SAC than without, and the expected gain increase by using SAC.
\begin{table}
\begin{tabular}{c|cccc|cccc} School & \multicolumn{4}{c|}{\(P\)(method 2 gain \(>\) method 1 gain)} & \multicolumn{4}{c}{E(method 2 gain \(-\) method 1 gain) (nats)} \\ & a & b & c & d & a & b & c & d \\ \hline
1 & 0.699 & 0.586 & 0.234 & 0.260 & +0.077 & -0.005 & -0.087 & -0.082 \\
2 & 0.854 & 0.999 & 0.775 & 0.132 & +0.172 & +0.370 & +0.075 & -0.122 \\
3 & 0.567 & 0.200 & 0.406 & 0.011 & +0.027 & -0.145 & -0.033 & -0.329 \\
4 & 0.000 & 0.295 & 0.941 & 0.962 & -0.661 & -0.099 & +0.246 & +0.285 \\ \hline \end{tabular}
\end{table}
Table 3: For class divided into four by pretest score (a high,..., d low), posterior probabilities that the gain in log-odds of answer correct from pretest to posttest is better with SAC than without, and the expected gain increase by using SAC.
Figure 8: Posterior probabilities for individual students that their probability of answering correctly increased from pretest to posttest plotted against their mark in the pretest. The top two plots are for school 2, in which SAC appeared to reduce gain in the bottom quartile, and the bottom two plots are for school 4, in which SAC appeared to increase gain in the bottom quartile. The left hand plots are without SAC and the right hand plots are with SAC. Note that the set of students is different in each plot, and that the tests done differ in the top row and bottom row.
Figure 9: Posterior expected gain in log-odds of answering correctly from pretest to posttest for individual students plotted against their mark in the pretest. The top two plots are for school 2, in which SAC appeared to reduce gain in the bottom quartile, and the bottom two plots are for school 4, in which SAC appeared to increase gain in the bottom quartile. The left hand plots are without SAC and the right hand plots are with SAC. Note that the set of students is different in each plot, and that the tests done differ in the top row and bottom row.
#### 3.6.3 Pretest to posttest changes in distribution over class of log-odds of answer correct
We can also plot the cumulative distribution over students in each class of the probability of getting a question correct together with its posterior uncertainty. This is shown for school 2 in figure 10 and for school 4 in figure 11. These plots are unsurprising given the plots of individual gains under the two methods, but they do show various other differences between the schools, such as the difference between the pretest ability of the classes getting teaching with and without SAC (correct observation) and the apparently greater gain in ability over the timecourse in school 4 than school 2 (not necessarily a correct observation, as the tests were different in each school, and school 4's posttest might have been easier).
### Discussion
The differences in the apparent effects of incorporating SAC into teaching in the different schools are intriguing.
Figure 10: Posterior distribution of the cumulative distribution function over students in each class of the probability of getting a question correct: school 2. The left two plots are for the pretest and the right two plots for the posttest. The top two plots are without SAC and the bottom two with SAC.
Figure 11: Posterior distribution of the cumulative distribution function over students in each class of the probability of getting a question correct: school 4. The left two plots are for the pretest and the right two plots for the posttest. The top two plots are without SAC and the bottom two with SAC.
We should first note that we are not claiming frequentist significance for any of these results - indeed we are not interested in frequentist properties over many datasets as we have only one dataset to hand. Similarly the usual frequentist corrections for multiple analyses do not figure in this Bayesian situation.
As to the reasons for the differences, we can only speculate, while noting that different schools and different classes have both different students and potentially different teachers.
For example, it may be the case that the effect of using SAC is very dependent on the teacher's understanding of it, or indeed on whether the teacher is a willing or unwilling user of it. Alternatively, it might be that some teachers unconsciously spend more time with those at the bottom of the class, or on the other hand with those at the top. A further possibility, impossible to assess as neither sex of teachers nor sex of students was available in the public dataset, is that it makes a difference whether teacher or student is male or female3.
Footnote 3: RFS’s wife, herself an academic with far more publications involving statistics than her husband, seems to be totally unable to express her own confidence levels as other than 0 or 1, thinking that she either knows something or that she doesn’t; her husband has no such difficulty. While this example does not generalise to all men and all women, as James Damore so aptly observed in [6], men and women are different.
In future research it is highly desirable that such issues do not arise. One obvious approach is to take enough students to fit into two classes, randomise the students to one class or the other, and to have the same teacher teach both classes, but one with and one without SAC; then repeat this setup across multiple schools. Ideally public data should include both sex of teacher and sex of each student. Something that schools, parents, and students seem, however, to be slow to grasp is that when it is not known whether a particular teaching method works better than another or not, it is entirely ethical to randomise students between the two methods - indeed it could be argued that it would be unethical _not_ to do such research, as we would then never find out. The fact that of 55 schools approached by Foster in [1] only 4 agreed to take part is worrying for the future of educational research in the UK. |
2309.10972 | SEMPART: Self-supervised Multi-resolution Partitioning of Image
Semantics | Accurately determining salient regions of an image is challenging when
labeled data is scarce. DINO-based self-supervised approaches have recently
leveraged meaningful image semantics captured by patch-wise features for
locating foreground objects. Recent methods have also incorporated intuitive
priors and demonstrated value in unsupervised methods for object partitioning.
In this paper, we propose SEMPART, which jointly infers coarse and fine
bi-partitions over an image's DINO-based semantic graph. Furthermore, SEMPART
preserves fine boundary details using graph-driven regularization and
successfully distills the coarse mask semantics into the fine mask. Our salient
object detection and single object localization findings suggest that SEMPART
produces high-quality masks rapidly without additional post-processing and
benefits from co-optimizing the coarse and fine branches. | Sriram Ravindran, Debraj Basu | 2023-09-20T00:07:30Z | http://arxiv.org/abs/2309.10972v1 | # Sempart: Self-supervised Multi-resolution Partitioning of Image Semantics
###### Abstract
Accurately determining salient regions of an image is challenging when labeled data is scarce. DINO-based self-supervised approaches have recently leveraged meaningful image semantics captured by patch-wise features for locating foreground objects. Recent methods have also incorporated intuitive priors and demonstrated value in unsupervised methods for object partitioning. In this paper, we propose _Sempart_, which jointly infers coarse and fine bi-partitions over an image's DINO-based semantic graph. Furthermore, _Sempart_ preserves fine boundary details using graph-driven regularization and successfully distills the coarse mask semantics into the fine mask. Our salient object detection and single object localization findings suggest that _Sempart_ produces high-quality masks rapidly without additional post-processing and benefits from co-optimizing the coarse and fine branches.
## 1 Introduction
Identifying salient regions of an image prone to holding visual attention remains a long-standing fuzzy problem [59] relying significantly on carefully annotated data [51, 5, 54]. Recently self-supervised (SSL) mechanisms based on large-scale pre-trained backbones [9, 6, 22], such as DINO [7], have demonstrated increased capability in segmenting images [21, 30] and extracting objects in the foreground [41, 39, 54, 4, 42].
The unavailability of labels limits the inference of high-quality object masks. However, many recent methods have demonstrated that incorporating well-informed priors into the partitioning process is significantly beneficial to finding saliency regions and foreground objects in an unsupervised setting [36, 41, 46, 47, 31, 54, 4, 39].
Different forms of statistical independence of the foreground have driven recent approaches, with the most recent state-of-the-art focusing on movability [4] of the salient object. Distinguishability and predictability of the foreground from the background have also been successful indicators. For example, statistical variations such as in color and texture of the foreground minimally alter the overall distribution of the population [8]. Furthermore, in-painting models such as MAE [22] have been particularly effective at measuring predictability [36] and defining movability [4].
Inferring graph signals [32] for partitioning a semantic graph over an image has gained popularity [39, 54, 41, 30, 1], with recent methods establishing surprisingly strong baselines using traditional techniques. In particular, the solution to the relaxation of the NP-complete discrete normalized cut problem [37] first demonstrated promise in unsupervised image segmentation, which has further translated to recent findings in [39, 54, 40].
[20, 19] discuss the benefit of learning to predict spectral decomposition for a graph and employ graph neural networks in a reinforcement learning setup for predictively performing the normalized cut. More recently, [43] leveraged normalized cut for regularizing a convolutional network driven by partial cross entropy loss in a weakly supervised setting and demonstrated significant performance improvement. More broadly, spectral partitioning of semantic graphs [39, 54, 30, 1] has become an emerging underlying theme for detecting salient regions.
**Contributions.** In this paper, we propose Sempart, which builds on ideas from [54, 12, 11] for producing high-quality foreground masks in an SSL setting. Sempart learns a transformer-based encoder that refines the patchwise DINO features for inferring a relaxation of graph cut that minimizes the expected normalized cut loss [20] over a semantic graph informed by DINO feature correspondences.
As seen in [54, 41, 7], the foreground masks obtained lose fine boundary details because the features are processed at a low resolution. Unlike [39, 54, 41], which perform successive refinement of the _coarse masks_ post-inference, Sempart implements a convolutional _fine branch_ that processes and supplements the transformed DINO features with RGB features at progressively increasing resolutions for producing original resolution _fine masks_. Motivated by [12, 11], Sempart treats the _coarse mask_ as the source and the image as a guide for inferring high-quality _fine masks_ (see Figure 1, Table 1) regularized by weighted neighborhood-based graph total variation [48].
In summary, our contributions are as follows:
* We propose a novel strategy for co-optimizing _coarse_ and _fine masks_, that decouples image partitioning into semantic separation of rich self-supervised features and high-frequency detailing, respectively.
* Sempart outperforms recent state-of-the-art methods in saliency detection by 3.7% in \(\max F_{\beta}\) and 2.7% in IoU on average and emits high-quality bounding boxes for locating objects.
* Sempart produces high-quality _fine masks_ rapidly by eliminating time-consuming post-inference iterative refinement and saving 200ms on average.
## 2 Related work
Vision systems have historically benefited from segmenting a scene into objects constituting salient regions [51]. Supervised mechanisms [33, 58] have dominated the landscape despite the prohibitive costs of obtaining labeled data. Traditional unsupervised approaches [5, 29] have encoded beliefs about the foreground region, such
as differences in color and contrast and objectness and depth perception, into partitioning techniques.
Figure 2: Overview of Sempart: We refine the SSL features into co-optimized low resolution _coarse_ and high resolution _fine masks_, based on graph cut and guided super-resolution respectively.
**Spectral methods.** Graph-based techniques have received interest wherein spectral partitioning is undertaken over a graphical representation of an image deduced from the priors. [37] proposed _normalized cut_ as an improvement over the _min cut_ criterion [56], for producing clusters that are well balanced. The relaxation of the discrete problem involved a spectral analysis of the symmetrically normalized graph Laplacian
\[L=I-D^{-\nicefrac{{1}}{{2}}}WD^{-\nicefrac{{1}}{{2}}}. \tag{1}\]
The unavailability of effective semantic similarity measures between regions of an image for populating the adjacency matrix \(W\) inhibited the quality of resulting partitions.
**Self-supervised representations.** With the emergence of deep techniques for learning contextually aware representations [7, 22, 9, 6], many of these traditional prior-based techniques have demonstrated increased effectiveness and therefore received renewed interest. The semantically aware DINO [7] features were used for implementing seed expansion into salient regions, initialized with patches that are least similar to other patches as seed in LOST [41]. On the contrary, FOUND [42] locates a background seed first and then expands it. In [53], a SOLO [52] model is trained on coarse masks extracted using SSL features, for instance segmentation.
The memory bottleneck of the attention mechanism [14] prevents low-resolution deep SSL features from capturing the high-frequency details of an image, so such features are often only helpful for predicting coarse masks [41, 39, 54]. Therefore, despite significant performance gains, these methods require computationally heavy post-processing [3, 25, 26] to generate high-quality fine masks.
**Inpainting** as a helpful object detection tool was first proposed in [36], which hypothesized that it is difficult to predict the foreground given a background and vice versa. SSL features from masked autoencoder (MAE [22]) were also leveraged by recent state-of-the-art MOVE [4] for adversarially training a convolutional mask generator for distinguishing between real- and fake-inpainted images based on movability of salient objects. MOVE established superiority in detecting both salient regions as well as single objects. The movability criterion allows MOVE to directly predict saliency masks at a high resolution which is also why it outperformed its counterparts without post-processing.
SelfMask[39] uses multi-model SSL features [7, 9, 6] for populating \(W\) and constructs pseudo ground truth saliency masks for a subsequent MaskFormer [10] training by clustering eigenvectors of the unnormalized graph Laplacian. Along similar lines, [30] employs clustering based on normalized Laplacian for semantic segmentation and object localization.
Our work is most closely related to [54, 20, 12, 11]. TokenCut [54] makes the bi-partitioning mathematically precise by using the eigenvector with the second smallest eigenvalue, which corresponds to a relaxation of the normalized cut [37] problem and demonstrates value in pursuing graph-based techniques for detecting salient regions.
Iterative computations during inference with expensive post-processing [54, 30], or otherwise training in two stages leveraging multiple SSL models [39] for improving performance, can be limiting. To alleviate this, we follow MOVE's approach of training a single bi-partitioning model as a transformation of the DINO backbone (see Figure 2) and encode our novel strategies into the loss functions (see Section 3.2, Section 3.3). The Sempart architecture involves a _fine branch_ inspired by graph-driven iterative techniques for super-resolution [12, 11] for predicting accurate high-resolution masks.
Minimizing expected graph cut losses over a population was previously evaluated in [20, 19, 1], which proposed to optimize expected normalized cut using graph neural networks. We show that Sempart exhibits similar benefits (see Table 1, Table 3) from jointly inferred graph-driven bi-partitioning and graph regularized guided super-resolution for generating high-fidelity saliency masks rapidly without any post-processing or multi-stage training.
## 3 Approach
In this work, we detect salient regions and localize single objects within an image by learning to partition the image into two regions that are semantically less related [54, 21, 39, 1]. We leverage DINO [7], which provides effective pre-trained SSL feature correspondences [54, 21, 30] for learning a _coarse_ binary mask that partitions a semantic graph constructed between image patches as nodes. Motivated by image-guided super-resolution [12] and graph regularization [11, 48], we co-optimize and infer masks at the original resolution in parallel, thereby correcting a _coarse mask's_ inaccuracies, preserving fine boundary details.
### Background
**Normalized Cut.** The normalized cut [37] of a weighted undirected complete graph \(G=(V,E,w)\) where \(w_{ij}>0\) denotes the weight of \((i,j)\in E\), is given by a binary graph signal \(s:v\in V\to s(v)\in\{0,1\}\) that minimizes
\[\text{Ncut}(A,B)=\frac{w(A,B)}{w(A,V)}+\frac{w(B,A)}{w(B,V)} \tag{2}\]
where \(A\coloneqq\{v|v\in V,s(v)=0\}\), \(B\coloneqq\{v|v\in V,s(v)=1\}\) and \(w(A,B)\coloneqq\sum_{s(i)=0,s(j)=1}w_{i,j}\).
Since the problem is NP-complete, Shi et al. [37] first proposed to solve a relaxation, which amounts to solving a generalized eigensystem followed by discretization. More recently, the relaxation of (4) has been effective at semantically segmenting images in a self-supervised manner [54]. Motivated by [20, 19], non-linear parameterizations of the graph signal have enabled deep partitioning [1] and regularization [43] based on normalized cut.
**Deep self-supervised feature correspondences.** Large-scale pre-trained self-supervised image embedders such as DINO [7], MAE [22], MoCo [9], SwAV [6] possess beneficial emergent properties for downstream tasks [41, 54, 4, 39, 21]. These models are based on vision transformers [15], which generate an embedding for each patch. Specifically, given an image of dimensions \(C\times H\times W\), and an SSL embedder operating with patch size \(p\), we obtain a tensor of size \(D\times(H/p\times W/p+1)\), including the embedding for the [CLS] token that represents the entire image. In this paper, we leverage DINO as it emits semantically relevant embeddings [7, 54, 41, 42, 21].
In particular, [54] computed an affinity matrix using the feature correspondences from DINO. A graph view of the output is considered where the graph \(G=(V,E)\) contains patches \(V\), and connections between any two patches are encoded in the edge list \(E\). Each patch \(v\in V\) has an associated normalized DINO embedding \(F_{v}\). The affinity matrix is given by the feature correspondences,
\[W_{ij}=\begin{cases}1&\text{if }\langle F_{v_{i}},F_{v_{j}}\rangle>\tau\\ \epsilon&\text{otherwise.}\end{cases} \tag{3}\]
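A minimal sketch of building this affinity matrix is given below, assuming `feats` already holds the \(\ell_2\)-normalised DINO patch embeddings; the concrete values of \(\tau\) and \(\epsilon\) are placeholders, not the ones used in the cited works.

```python
import torch

def affinity_matrix(feats: torch.Tensor, tau: float = 0.2, eps: float = 1e-5) -> torch.Tensor:
    # feats: (num_patches, dim), rows l2-normalised, so feats @ feats.T gives <F_i, F_j>
    sim = feats @ feats.T
    return torch.where(sim > tau, torch.ones_like(sim), torch.full_like(sim, eps))
```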
### Self-supervised multi-resolution partitioning (Sempart)
We propose Sempart, which converts an image into a semantic graph \(G\) over non-overlapping patches, which form the set of nodes \(V\). Sempart's architecture (see Figure 2) has two main branches that infer a _coarse_ and _fine mask_ jointly, which are informed by normalized cut and image-guided super-resolution, respectively. We posit that guided super-resolution not only refines the _coarse mask_ into a _fine mask_ by preserving high-resolution details, but also helps regularize the overall learning, which justifies our co-optimization strategy.
**Normalized cut for _coarse mask_.** A frozen DINO backbone transforms the input image \(X\in\mathbb{R}^{3\times 320\times 320}\) into low-resolution SSL features \(F\in\mathbb{R}^{64\times 40\times 40}\). We apply a single layer transformer encoder with two attention heads, followed by a _coarse branch_ (see Figure 2) comprised of a linear classification head, for transforming the low resolution features into a _coarse_ saliency mask in the form of a soft partitioning indicator vector \(S_{\text{coarse}}\in[0,1]^{|V|}\) where \(|V|=40\times 40\). For partitions A and B with their indicator vectors \(S_{A}=S_{\text{coarse}}\) and \(S_{B}=1-S_{A}\), (2) is rewritten as
\[\mathcal{L}_{\text{Ncut}}(X)\coloneqq\text{Ncut}(A,B)=\sum_{i\in\{A,B\}} \frac{S_{i}^{T}W(1-S_{i})}{{S_{i}}^{T}W\mathbf{1}}. \tag{4}\]
This results in a _coarse mask_ at \(40\times 40\), which amplifies the semantic distinguishability between the two partitions where the affinity between image patches \(i\) and \(j\) is computed using the DINO embeddings in (3) and denoted by \(W_{ij}\). Upon minimizing this heuristic over the entire population, we see a significant improvement in performance over solving the generalized eigensystem in [54] (see Table 1).
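A direct sketch of (4) as a differentiable loss (with illustrative function and variable names) takes the flattened soft mask and the affinity matrix \(W\); a small constant is added to the denominators purely for numerical stability.

```python
import torch

def ncut_loss(s_coarse: torch.Tensor, W: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # s_coarse: soft indicator in [0,1] of shape (|V|,); W: (|V|, |V|) patch affinities
    ones = torch.ones_like(s_coarse)
    loss = s_coarse.new_zeros(())
    for s in (s_coarse, 1.0 - s_coarse):       # the two partitions A and B
        loss = loss + (s @ W @ (ones - s)) / (s @ W @ ones + eps)
    return loss
```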
**Guided super-resolution for _fine mask_.** The generated _coarse mask_ often fails to capture finer high-frequency details [54, 12] at the original image resolution, which is detrimental to the performance in detecting salient regions. Previously, such methods have employed expensive iterative post-processing such as Bilateral Filtering [3, 39, 41, 54] or CRF [25, 21] for every inferenced image. These methods utilize pixels' color and positional information to readjust the generated _coarse masks_. The possibility of erosion of the mask has been discussed as a limitation in [4].
By delegating the generation of linearly separable semantic features to the _coarse branch_, our architecture enables a refinement network to exclusively focus on detailing and denoising at higher frequencies and around the edges. We jointly optimize a _fine branch_ (see Figure 2) comprised of a convolutional mask refinement network inspired by a recent guided super-resolution technique [12] which trains a multi-layer perceptron for enhancing the mask with guidance from the image. While [12] performs iterative refinement per image, we co-optimize our refinement network for predicting a _fine mask_ which aligns with the _coarse mask_ (see Figure 1).
The output from the transformer encoder layer is gradually scaled up from \(40\times 40\) to \(320\times 320\) in 3 steps. In each step, the image is first scaled up \(2\times\) using bilinear interpolation and processed through a convolutional block described in Suppl. Note that we also concatenate the appropriately resized input image to the input of each convolutional block. This information is pertinent for conditioning the _fine branch_ to satisfy the regularization in Section 3.3.
The features \(\widehat{F}\in\mathbb{R}^{131\times 320\times 320}\) from the last convolutional block are linearly classified into \(S_{\text{fine}}\in[0,1]^{320\times 320}\) which is subsequently average pooled to \(\widehat{S}_{\text{fine}}\in[0,1]^{40\times 40}\) for aligning with the \(S_{\text{coarse}}\in[0,1]^{40\times 40}\). The corresponding loss function is given as
\[\mathcal{L}_{\text{SR}}(X)\coloneqq\|\widehat{S}_{\text{fine}}-S_{\text{ coarse}}\|_{2}^{2}. \tag{5}\]
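A minimal sketch of this alignment loss, assuming an \(8\times\) average pooling from \(320\times 320\) down to \(40\times 40\), could read as follows; function and variable names are our own.

```python
import torch.nn.functional as F

def sr_alignment_loss(s_fine, s_coarse):
    """Eq. (5): pool the 320x320 fine mask to 40x40 and match the coarse mask.

    s_fine   : (B, 1, 320, 320) soft fine mask.
    s_coarse : (B, 1, 40, 40) soft coarse mask.
    """
    s_fine_pooled = F.avg_pool2d(s_fine, kernel_size=8)      # 320 / 8 = 40
    # squared L2 distance per image, averaged over the batch
    return ((s_fine_pooled - s_coarse) ** 2).sum(dim=(1, 2, 3)).mean()
```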
### Graph total variation regularization (GTV)
Graph-based regularization has yielded benefits in capturing high-frequency details of an image in [11, 12]. A
similarity metric between pixels of an image \(X\) is used to populate the affinity matrix \(A>0\), which is then used to compute the degree matrix \(D\). The graph Laplacian \(L=D-A\) is used to compute the graph regularizer as the quadratic form for a graph signal [32]\(s\), given by
\[\mathcal{L}_{reg}=\frac{1}{2}\sum_{(i,j)\in E}A_{ij}(s(i)-s(j))^{2}. \tag{6}\]
Considering significant computational complexity from the total number of pairs of pixels, we enforce \(A_{ij}=0\) when pixels \(X_{i}\) and \(X_{j}\) are not vertically or horizontally adjacent, also known as the pixel neighborhood \(\mathcal{N}\). This is equivalent to a weighted version of the total variation (TV) loss [28, 16], which has been previously used for denoising images and other signals [2, 23, 16, 34]. A natural extension to graphs is discussed in [48].
**GTV fine.** The guided super-resolution can produce more than one _fine mask_ for a given _coarse mask_; this is where our graph total variation (GTV) loss not only acts as a denoiser but, more importantly, serves as a regularizer. More specifically, \(A_{ij}=\exp\left(-\|X_{i}-X_{j}\|_{2}^{2}/\sigma\right)\) is given by the Euclidean similarity between pairs of pixels. As a result, the \(\mathcal{L}_{\text{GTV-fine}}\) loss encourages the upsampler along the _fine_ branch in Figure 2 to leverage the color information.
**GTV coarse.** We also implement a similar graph TV regularizer denoted by \(\mathcal{L}_{\text{GTV-coarse}}\) for the _coarse mask_ based on \(A_{ij}=W_{ij}\mathbf{1}\{i\in\mathcal{N}(j)\}\) where \(W_{ij}\) is as defined in (3). This is responsible for denoising and predicting a smooth _coarse mask_.
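The sketch below illustrates one way the edge-restricted, weighted TV term of (6) could be computed for the _fine mask_; the Gaussian colour affinity follows the GTV-fine definition, whereas GTV-coarse instead uses the DINO-based affinity \(W_{ij}\) restricted to adjacent patches and is not sketched here. Function and variable names are our own.

```python
import torch

def gtv_fine_loss(mask: torch.Tensor, image: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Edge-restricted weighted TV (Eq. (6)) with Gaussian colour affinities.

    mask  : (B, 1, H, W) soft fine mask (the graph signal s).
    image : (B, 3, H, W) guidance image used to build A_ij.
    """
    def pair_term(dim):
        n = image.size(dim) - 1
        x0, x1 = image.narrow(dim, 0, n), image.narrow(dim, 1, n)
        a = torch.exp(-((x0 - x1) ** 2).sum(dim=1, keepdim=True) / sigma)   # A_ij
        s0, s1 = mask.narrow(dim, 0, n), mask.narrow(dim, 1, n)
        return (a * (s0 - s1) ** 2).sum()
    # vertical (dim=2) and horizontal (dim=3) neighbours; each edge counted once
    return pair_term(2) + pair_term(3)
```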
### Loss formulation
The Sempart losses in Section 3.2 together with the GTV losses in Section 3.3 drive the joint learning of _coarse_ and _fine masks_. While the Sempart losses are driven by DINO feature correspondences for inferring accurate image partitions, the GTV losses are significantly involved in denoising the predicted masks and regularizing the overall learning process. The loss functions for the _coarse_ and _fine branches_, respectively, are,
\[\mathcal{L}_{\text{coarse}}(x) =\mathcal{L}_{\text{Ncut}}(x)+\lambda_{\text{GTV-coarse}} \mathcal{L}_{\text{GTV-coarse}}(x)\] \[\mathcal{L}_{\text{fine}}(x) =\lambda_{\text{GTV-fine}}\mathcal{L}_{\text{GTV-fine}}(x)\] \[\mathcal{L}_{\text{joint}}(x) =\lambda_{\text{SR}}\mathcal{L}_{\text{SR}}(x). \tag{7}\]
This gives us our final expected self-supervised loss function \(\mathcal{L}_{\text{Sempart}}=\underset{x\sim\mathbb{P}(X)}{\mathbb{E}}[ \mathcal{L}_{\text{coarse}}(x)+\mathcal{L}_{\text{fine}}(x)+\mathcal{L}_{ \text{joint}}(x)]\).
## 4 Experiments
As done in [54, 4], we evaluate Sempart on unsupervised saliency segmentation and single object detection.
### Implementation
In our work, we use the self-supervised [7] ViT-s/8 transformer from the official implementation of DINO [7].
\begin{table}
\begin{tabular}{c|l|c|c|c|c|c|c|c|c|c} \hline \multirow{2}{*}{} & \multirow{2}{*}{**Model**} & \multicolumn{3}{c|}{**DUT-OMRON [57]**} & \multicolumn{3}{c|}{**DUTS-TE [49]**} & \multicolumn{3}{c}{**ECSSD [38]**} \\ \cline{3-10} & & **Acc** & **IoU** & **maxF\({}_{\beta}\)** & **Acc** & **IoU** & **maxF\({}_{\beta}\)** & **Acc** & **IoU** & **maxF\({}_{\beta}\)** \\ \hline \multirow{6}{*}{**Dataset**} & LOST [41] &.797 &.410 &.473 &.871 &.518 &.611 &.895 &.654 &.758 \\ & TokenCut [54] &.880 &.533 &.600 &.903 &.576 &.672 &.918 &.712 &.803 \\ & FreeSOLO [53] &.909 &.560 &.684 &.924 &.613 &.750 &.917 &.703 &.858 \\ & MOVE [4] &.923 &.615 &.712 &.950 &.713 &.815 &.954 &.830 &.916 \\ & **Sempart-Coarse** & **.932** &.640 &.755 &.956 &.727 &.864 &.961 &.837 &.943 \\ & **Sempart-Fine** & **.932** & **.668** & **.764** & **.959** & **.749** & **.867** & **.964** & **.855** & **.947** \\ \hline \multirow{6}{*}{**Dataset**} & LOST+BF &.818 &.489 &.578 &.887 &.572 &.697 &.916 &.723 &.837 \\ & TokenCut+BF &.897 &.618 &.697 &.914 &.624 &.755 &.934 &.772 &.874 \\ & MOVE+BF &.931 &.636 &.734 &.951 &.687 &.821 &.953 &.801 &.916 \\ & **Sempart-Coarse+BF** & **.934** & **.661** & **.764** & **.957** & **.697** & **.858** & **.960** & **.820** & **.932** \\ & **Sempart-Fine+BF** &.933 &.653 &.760 &.955 &.685 &.853 &.959 &.816 &.931 \\ \hline \multirow{6}{*}{**Dataset**} & SelfMask on pseudo + BF [39] &.919 &.655 & (.774)\({}^{*}\) &.933 &.660 & (.819)\({}^{*}\) &.955 &.818 & (.911)\({}^{*}\) \\ & SelfMask on MOVE &.933 &.666 &.756 &.954 &.728 &.829 &.956 &.835 &.921 \\ \cline{1-1} & SelfMask on MOVE + BF &.937 &.665 &.766 &.952 &.687 &.827 &.952 &.800 &.917 \\ \cline{1-1} & **SelfMask on Sempart-Coarse** &.936 &.675 &.773 & **.958** &.743 &.872 &.962 &.843 &.938 \\ \cline{1-1} & **SelfMask on Sempart-Fine** & **.942** & **.698** & **.799** & **.958** & **.749** & **.879** & **.963** & **.850** & **.944** \\ \hline \multirow{6}{*}{**Dataset**} & U\({}^{2}\)-Net (supervised) &.928 &.693 &.771 &.943 &.733 &.822 & **.967** & **.878** & **.947** \\ \cline{1-1} \cline{2-10} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \end{tabular}
\end{table}
Table 1: Quantitative comparison of Sempart with state-of-the-art MOVE and other related works for saliency detection. Sempart-Coarse and -Fine outperform MOVE significantly in all three evaluation categories (Method, +BF, +SelfMask) across all datasets. The best-performing method in a category and across categories is in **bold** and **underlined**, respectively.
DINO uses non-overlapping \(8\times 8\) patches on a \(3\times 320\times 320\) input and emits a \(384\times 40\times 40\) output, which is provided to our simple transformer encoder layer and then routed through both the _coarse_ and _fine branches_ in Figure 2. We employ the Adam optimizer [24] with a learning rate of \(0.0001\) and \(\beta=(0.9,0.999)\). We implemented Sempart in PyTorch and trained our models for 20 epochs with a batch size of 8 on a single NVIDIA Tesla P40 GPU. After careful tuning, the hyperparameters \(\lambda_{\text{GTV-coarse}}=0.0006\), \(\lambda_{\text{SR}}=20\), \(\lambda_{\text{GTV-fine}}=0.0002\) are used for all Sempart results.
**Graph affinity.** **(a) Normalized cut.** Our implementation follows [54] in computing the affinity matrix \(W\) based on (3), with a minor deviation: we set \(W_{ii}=0\) to discard self-loops, which do not belong to any graph cut, and we show empirically that this improves model performance. Additionally, we set \(\tau=0.2\) and \(\epsilon=\)1e-6 in (3) for the \(\mathcal{L}_{\text{Ncut}}\) loss. **(b) GTV Coarse.** In addition to the details provided in Section 3.3, we set \(\tau=0\) and \(\epsilon=\)1e-6 for numerical stability. **(c) GTV Fine.** \(\mathcal{L}_{\text{GTV-fine}}\) regularizes the _fine mask_ by limiting the possible solutions. The convolutional blocks learn to generate features that leverage both the contextual features from the transformer encoder and the RGB image features for predicting fine masks that mimic the _coarse mask_ while preserving the high-frequency image details.
In addition to details provided in Section 3.3, we also set \(\sigma=1\).
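For illustration, the patch affinity used by the \(\mathcal{L}_{\text{Ncut}}\) loss of item (a) could be assembled as follows, assuming the TokenCut-style thresholded cosine similarity of DINO features (our reading of (3)); the function name and signature are hypothetical.

```python
import torch
import torch.nn.functional as F

def patch_affinity(feats: torch.Tensor, tau: float = 0.2, eps: float = 1e-6) -> torch.Tensor:
    """Patch affinity W for the Ncut loss (TokenCut-style thresholded cosine similarity).

    feats : (|V|, C) per-patch DINO features, |V| = 40 * 40.
    """
    f = F.normalize(feats, dim=1)
    cos = f @ f.t()
    W = torch.where(cos >= tau, torch.ones_like(cos), torch.full_like(cos, eps))
    W.fill_diagonal_(0.0)          # W_ii = 0: discard self-loops
    return W
```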
**Foreground selection.** We first binarize the indicator vector with a threshold of \(0.5\). To pick the foreground, we consider four strategies. **(a)** Select the partition with the lower average distance to the image center as the foreground. **(b)** Discard partitions spanning the full spatial width or height as background, selecting the smaller partition to break a tie. **(c)** Select the partition receiving the greatest attention from the last layer of DINO. **(d)** Select the partition occupying the fewest corners; if there is a tie, select the smaller partition. A sketch of strategy (d) is given below.
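A possible implementation of the least-corners heuristic (d), which we use by default, could look as follows; the helper name is ours and the exact tie-breaking details are an assumption.

```python
import numpy as np

def least_corners_foreground(mask: np.ndarray) -> np.ndarray:
    """Strategy (d): keep the partition occupying fewer image corners as foreground;
    ties are broken by keeping the smaller partition. `mask` is a binary (H, W) array."""
    corners = [(0, 0), (0, -1), (-1, 0), (-1, -1)]
    fg_corners = sum(int(mask[i, j]) for i, j in corners)
    if fg_corners != 4 - fg_corners:
        keep_mask = fg_corners < 4 - fg_corners
    else:
        keep_mask = mask.sum() <= (1 - mask).sum()   # tie: smaller partition wins
    return mask if keep_mask else 1 - mask
```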
### Unsupervised saliency segmentation
**Datasets.** As done in [4, 1, 39], we trained Sempart on the train split of DUTS [49], known as DUTS-TR, and evaluate the performance of our model on the corresponding test split DUTS-TE [49], as well as on DUT-OMRON [57] and ECSSD [38]. DUTS-TR contains 10,553 images, DUTS-TE contains 5,019 images, DUT-OMRON contains 5,168 images, and ECSSD contains 1,000 images.

Figure 3: Qualitative comparison of Sempart-coarse and -fine with TokenCut [54] and MOVE [4] for samples from DUT-OMRON [57].
**Evaluation.** As done in [4, 54], we compute the per-pixel mask accuracy (Acc), intersection over union (IoU), and \(\max F_{\beta}\) [54] to evaluate the performance of Sempart. Accuracy is the fraction of pixels correctly predicted as foreground or background. The overlap between the binary saliency mask and the ground truth gives the IoU. Following [4, 54], we set \(\beta=0.3\), and \(\max F_{\beta}\) is computed at the binarization threshold that maximizes \(F_{\beta}\).
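The following sketch shows how these per-image metrics could be computed; the threshold-sweep granularity is an arbitrary choice of ours, and `beta_sq = 0.3` assumes the \(\beta^{2}=0.3\) convention common to the cited works.

```python
import numpy as np

def saliency_metrics(pred: np.ndarray, gt: np.ndarray, beta_sq: float = 0.3):
    """Per-image Acc, IoU and max F_beta. `pred` is a soft map in [0, 1], `gt` is binary."""
    gt = gt.astype(bool)
    binary = pred >= 0.5
    acc = (binary == gt).mean()
    iou = np.logical_and(binary, gt).sum() / max(np.logical_or(binary, gt).sum(), 1)

    max_f = 0.0
    for t in np.linspace(0.0, 1.0, 51):          # sweep binarization thresholds
        b = pred >= t
        tp = np.logical_and(b, gt).sum()
        prec = tp / max(b.sum(), 1)
        rec = tp / max(gt.sum(), 1)
        f = (1 + beta_sq) * prec * rec / max(beta_sq * prec + rec, 1e-8)
        max_f = max(max_f, f)
    return acc, iou, max_f
```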
**Results.** We compare the performance of Sempart with the recent state-of-the-art MOVE [4] and several other standard baselines referenced therein. Table 1 contains three horizontal sections: the baseline methods, the same methods followed by a bilateral filtering [3] step, and a final section in which pseudo ground truth generated by the baseline method is used to train MaskFormer [10] in a class-agnostic manner, as in [39].
We observe that applying the bilateral filter to Sempart outputs on a per-image basis after inference is detrimental to the overall performance, as is also seen in [4], with the performance of Sempart-Fine deteriorating significantly.
Sempart significantly outperforms all other baselines in all three sections across all datasets. Although Sempart is primarily motivated by the normalized cut minimization in [54], the expected normalized cut loss of Section 3.4, co-optimized with the image-guided graph-based super-resolution loss, results in a significant improvement in performance. As seen in Figure 3, the per-image optimization in TokenCut selects regions that are either not salient or not part of the foreground.
Sempart significantly outperforms the movability [4] heuristic in all three sections for all datasets. From Figure 3, we find that MOVE may include multiple semantically unrelated patches into the movable object mask. Additionally, we note that MOVE greatly relies on retraining according to SelfMask[39] for outperforming previous state-of-the-art. While Sempart-Coarse predicts noisy masks (see Figure 3-A,C,D, and E) with slight errors as seen in the last example, Sempart-Fine results in refinement with improved ground truth alignment.
### Single object detection
**Datasets.** We evaluate our model on three datasets - the train split of COCO20K [27] and the training and validation splits of VOC07 [17] and VOC12 [18]. Each image in these datasets has one or more bounding boxes corresponding to each object. The objective is to localize any single object.
**Evaluation.** We detect connected components to separate multiple objects in an image's Sempart mask1. The component with the largest bounding box is used as the object prediction. If the highest IoU between our predicted bounding box and all ground truth bounding boxes exceeds \(0.5\), we treat it as a successful prediction, and use this to compute the _Correct Localization_ (CorLoc) metric, which is simply the accuracy of prediction (see the sketch below).
Footnote 1: If multiple objects lie in a component this evaluation is less reliable.
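A sketch of this bounding-box extraction and IoU test might look as follows, using SciPy's connected-component labelling; names and box conventions are illustrative.

```python
import numpy as np
from scipy import ndimage

def largest_component_box(mask: np.ndarray):
    """Return (x0, y0, x1, y1) of the connected component with the largest bounding box."""
    labels, n = ndimage.label(mask)
    best, best_area = None, -1
    for k in range(1, n + 1):
        ys, xs = np.where(labels == k)
        box = (xs.min(), ys.min(), xs.max(), ys.max())
        area = (box[2] - box[0] + 1) * (box[3] - box[1] + 1)
        if area > best_area:
            best, best_area = box, area
    return best

def box_iou(a, b):
    """IoU of two (x0, y0, x1, y1) boxes; a prediction is a CorLoc hit if IoU > 0.5."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0 + 1) * max(0, iy1 - iy0 + 1)
    area = lambda r: (r[2] - r[0] + 1) * (r[3] - r[1] + 1)
    return inter / (area(a) + area(b) - inter)
```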
**Results.** Sempart produces bounding boxes that perform comparably with the state-of-the-art MOVE, outperforming it on the COCO20K dataset (see Table 3). Our findings suggest that increasing \(\tau\) to \(0.25\) helps prevent co-located but disparate objects from falling into the same connected component and results in a slight improvement.
### Ablations
We ablated Sempart for saliency segmentation as follows,
**Foreground selection.** Unlike [4], where the foreground is given by the _movable_ object, Sempart selects the partition occupying the _least corners_ as the foreground (reported as Sempart-Fine in Table 4).
\begin{table}
\begin{tabular}{l|c|c|c|c|c} \hline
**Method** & **Avg. Time** & **Model** & **RES** & **GPU** & **CPU** \\ \hline TokenCut & 130ms & No & Low & Yes & Yes \\ TokenCut+BF & 337ms & No & High & Yes & Yes \\ MOVE & 13ms & Yes & High & Yes & No \\
**Sempart** & 14ms & Yes & High & Yes & No \\ \hline \end{tabular}
\end{table}
Table 2: Both Sempart and MOVE train a model, generate high-resolution masks, and have comparable average inference times per image.
Figure 4: Sempart for single object detection. Green boxes are ground truth bounding boxes, and the red box is our predicted bounding box. Intersection area is highlighted.
Motivated by [39, 41], we compare with selection based on closeness to the image center (_centrality_), as well as with the _framing prior_ [39], which labels the segment occupying the full spatial width or height as background while breaking ties by selecting the smaller partition as foreground. Another heuristic that is a close contender to _least corners_ is _total attention_, in which the partition having the highest total overlap with the DINO [CLS] token attention map is selected as foreground.
**Self-loops.** We populate \(W_{ii}\) with (3) instead of 0 for \(\mathcal{L}_{\text{Ncut}}\) and demonstrate that the performance deteriorates.
**Graph TV regularization.** Removing either \(\mathcal{L}_{\text{GTV-coarse}}\) or \(\mathcal{L}_{\text{GTV-fine}}\) from Sempart is detrimental to performance; the absence of the GTV-fine loss has the greater negative impact.
**Training fine mask directly.** We evaluate a setting with only a _fine branch_ (see Figure 2) and the \(\mathcal{L}_{\text{Ncut}}\) and \(\mathcal{L}_{\text{GTV-fine}}\) losses. Table 4 demonstrates that this is inferior to Sempart despite having an almost equivalent number of parameters. We attribute this to the absence of the _coarse branch_ with its \(\mathcal{L}_{\text{Ncut}}\) loss, which would otherwise regularize the transformer encoder for subsequent consumption by the convolutional blocks.
**Joint training.** We evaluate a variant of Sempart where the _coarse_ and _fine branch_ are trained independently. While \(\mathcal{L}_{\text{coarse}}\) optimizes only the _coarse branch_ and the transformer encoder (see deviations from Figure 2 in Suppl.), the gradients from \(\mathcal{L}_{\text{fine}}\) and \(\mathcal{L}_{\text{joint}}\) are prohibited from optimizing these modules. As seen in Table 4, this is detrimental to performance on all datasets, verifying our hypothesis that co-optimizing the _coarse_ and _fine masks_ is mutually beneficial.
### Limitations
Visual saliency is not free of human biases: annotations favor humans and animals, objects that are likely to move in a subsequent frame, and objects with high contrast against the background. Figure 5 shows examples where the ground truth favors a human, a train crossing a bridge, and a rooster over all other objects. Sempart over-selects in these cases, as it does not explicitly incorporate such priors or control the object size. Furthermore, our graph TV loss can sometimes merge narrow co-located regions into the mask, as seen in Figure 5-B and C, which can also be detrimental to localizing objects.
## 5 Conclusion
Sempart demonstrates the efficacy of graph-driven objectives towards self-supervised image partitioning and
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline
**Method** & **OMRON*** & **D-TE*** & **ECSSD** \\ \hline _fs_: framing prior & 0.663 & 0.730 & 0.825 \\ _fs_: centrality & 0.652 & 0.736 & 0.854 \\ _fs_: total attention & **0.668** & 0.745 & 0.853 \\ \hline w/ self-loops in \(W\) & 0.667 & 0.743 & 0.846 \\ \hline w/o GTV coarse & 0.646 & **0.749** & 0.848 \\ w/o GTV fine & 0.637 & 0.717 & 0.818 \\ \hline train fine mask directly & 0.645 & 0.738 & 0.845 \\ \hline w/o joint training & 0.662 & 0.743 & 0.849 \\ \hline
**Sempart-Fine** & **0.668** & **0.749** & **0.855** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Ablations of Sempart for saliency, using mIoU. *Shorthand has been used due to space constraints; OMRON refers to DUT-OMRON [57] and D-TE refers to DUTS-TE [49]; _fs_ denotes foreground selection.
Figure 5: Limitations of Sempart. Human bias towards humans and moving objects are shown in A and B. Sempart cannot capture the intricate details and smooths over narrow regions in B and C. An immovable background object is included, which is not as visually salient as the rooster in D. The rib is the same color as the wall in E; therefore, the toys are prominent. However, DINO highlights the semantic differences for partitioning the entire rib from the background.
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline
**Method** & **VOC07** & **VOC12** & **COCO20K** \\ \hline DDT+ [55] & 50.2 & 53.1 & 38.2 \\ rOSD [44] & 54.5 & 55.3 & 48.5 \\ LOD [45] & 53.6 & 55.1 & 48.5 \\ FreeSOLO [53] & 56.1 & 56.7 & 52.8 \\ LOST [41] & 61.9 & 64.0 & 50.7 \\ Deep Spectral [30] & 62.7 & 66.4 & 52.2 \\ TokenCut [54] & 68.8 & 72.1 & 58.8 \\ MOVE [4] & **76.0** & **78.8** & 66.6 \\
**Sempart-Coarse** & 74.7 & 77.4 & **66.9** \\
**Sempart-Fine** & 75.1 & 76.8 & 66.4 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Sempart bounding boxes exhibit a high CorLoc, comparable to the state-of-the-art MOVE [4], for single object discovery on VOC2007 [17] and VOC2012 [18], and outperform it on the COCO20K [27] dataset.
establishes state-of-the-art performance for detecting salient regions and a competitive advantage in localizing objects. We address the limitations of expensive post-processing, limited resolution, and noise artifacts in saliency masks. We demonstrate the value of a joint learning paradigm for inferring high-quality masks at multiple resolutions using Sempart, which will hopefully be a vital enabler of subsequent investigations into class-aware object detection for diverse vision systems.
**Acknowledgements** The authors gratefully thank Ambareesh Revanur and Deepak Pai for their valuable feedback, and the anonymous reviewers for their comments.
|
2306.17640 | Tunable non-additivity in Casimir-Lifshitz force between graphene
gratings | We investigate the Casimir-Lifshitz force (CLF) between two identical
graphene strip gratings, laid on finite dielectric substrates, by using the
scattering matrix (S-matrix) approach derived from the Fourier Modal Method
with Local Basis Functions (FMM-LBF). We fully take into account the high-order
electromagnetic diffractions, the multiple scattering and the exact 2D feature
of the graphene strips. We show that the non-additivity, which is one of the
most interesting features of the CLF in general, is significantly high and can
be modulated in situ, without any change in the actual material geometry and
this by varying the graphene chemical potential. We discuss the nature of the
geometrical effects and show the relevance of the geometric parameter d/D (i.e.
the ratio between separation and grating period), which allows to explore the
regions of parameters where the additive result is fully acceptable or where
the full calculation is needed. This study can open to deeper experimental
exploration of the non-additive features of the CLF with micro- or
nano-electromechanical graphene-based systems. | Youssef Jeyar, Minggang Luo, Kevin Austry, Brahim Guizal, Yi Zheng, H. B. Chan, Mauro Antezza | 2023-06-30T13:28:28Z | http://arxiv.org/abs/2306.17640v2 | # Tunable Non-Additivity in Casimir-Lifshitz Force Between Graphene Gratings
###### Abstract
We investigate the Casimir-Lifshitz force (CLF) between two identical graphene strip gratings laid on finite dielectric substrates. By using the scattering matrix (S-matrix) approach derived from the Fourier Modal Method with local basis functions (FMM-LBF), we fully take into account the high-order electromagnetic diffractions, the multiple scattering and the exact 2D feature of the graphene strips. We show that the non-additivity, which is one of the most interesting features of the CLF in general, is significantly high and can be modulated _in situ_, without any change in the actual material geometry, by varying the graphene chemical potential. This study can open the way to deeper experimental exploration of the non-additive features of the CLF with micro- or nano-electromechanical graphene-based systems.
The Casimir-Lifshitz force (CLF) exists between any couple of electrically neutral bodies and is due to both vacuum and thermal electromagnetic field fluctuations. It has been widely investigated from both theoretical and experimental sides and for different geometrical configurations, e.g., plane-plane [1], sphere/particle-plane [2; 3; 4; 5; 6; 7], sphere-sphere [8; 9; 10], grating-grating [11; 12; 13; 14; 15], and sphere-grating [16; 17; 18; 19], to name a few. In particular, gratings lead to the excitation of high-order diffraction modes that play a relevant role in the CLF. Additionally, the dielectric polarizability of the interacting bodies plays a crucial role in determining the magnitude and characteristics of this force.
We also stress that for a system with a complex structure (e.g., gratings), different parts of the structure interact with each other, which results in a complicated calculation of its Casimir interaction [13]. One of the features that makes the CLF interesting and hard to compute at the same time is that it is inherently a _non-additive_ phenomenon. More specifically, the force acting on an object cannot be computed as a simple sum over the forces that would act individually on the different components of the object itself. Due to the complex light-matter interaction, the fluctuations of the constituent electric dipoles are affected by the presence of the other fluctuating dipoles in the structure, and a full collective analysis needs to be done to tackle these non-additive CLF effects [13; 16; 20; 21; 22].
Recently it has been shown that planar graphene structures exhibit novel behaviors in the CLF [23; 24; 25; 26; 27], as well as in radiative heat transfer [28; 29; 30; 31] modulation, due to graphene's peculiar optical properties. Combining the richness coming from the grating geometry and the special dielectric features of graphene could result in novel behavior that cannot be obtained with ordinary materials. While substantial non-additive effects in the CLF have been calculated and measured in gratings made of metals and semiconductors, these effects are not tunable _in situ_: so far, modifying the non-additive effects requires changing the geometric configuration. The ability to tune the non-additive effects in situ could open new opportunities in the exploitation of the CLF in nanomechanical systems.
Here we explore the CLF between two parallel graphene-based nanostructures (body 1 and body 2) separated by a distance \(d\). Each structure comprises a finite dielectric substrate with a thickness \(h\), covered with a graphene strip grating, as depicted in Fig. 1. The complex nature of this grating-based system presents significant challenges, requiring considerable calculation time and extensive computational resources when using conventional methods, such as the classical Fourier Modal Method (FMM). This leads to a practical impossibility to check the effective convergence and stability of the numerical outcomes with respect to the grating
Figure 1: Schematic of two parallel graphene-gratings coated slabs.
diffraction orders and the frequency/momentum integration grid steps, with consequently inaccurate qualitative and quantitative predictions.
To overcome such difficulties we use an improved approach, the FMM-LBF, allowing for an efficient and accurate resolution of the scattering problem, fully taking into account the high-order diffractions. This allows us to study in detail how the CLF of the global system changes with the chemical potential and also how it is different from the sum of the interactions between the elements constituting the global system, i.e. its non-additivity. The main prediction of this paper is that in this system the non-additivity can be significantly modulated _in-situ_ by adjusting the graphene chemical potential, without altering the system geometry.
_The system and the model._ The graphene gratings have period \(D\) of 1 \(\mu\)m, filling fraction \(f=a/D\), chemical potential \(\mu\). They lie on a finite-size fused silica (SiO\({}_{2}\)) slab of thickness \(h\). The separation between the two bodies is \(d\) and the whole system is at temperature \(T=300K\). The CLF expression for this system can be explicitly expressed in terms of the reflection matrices \(\mathcal{R}^{(1)}\) and \(\mathcal{R}^{(2)}\) of the two nanostructured bodies, and the pressure acting on body 1 along the positive z direction is [11; 32]
\[P(d,T,\mu)=\frac{k_{\rm B}T}{4\pi^{2}}\sum_{m=0}^{+\infty}{}^{\prime}\int_{-\frac{\pi}{D}}^{\frac{\pi}{D}}\mathrm{d}k_{x}\int_{-\infty}^{+\infty}\mathrm{d}k_{y}\,\mathrm{Tr}\left(k_{z}^{\prime}\,\mathcal{M}\right), \tag{1}\]
with
\[\mathcal{M}=(U^{(12)}\mathcal{R}^{(1)+}\mathcal{R}^{(2)-}+U^{(21)}\mathcal{R }^{(2)-}\mathcal{R}^{(1)+}). \tag{2}\]
The integration in Eq. (1) is on the imaginary frequency axis and the sum is made over the Matsubara frequencies \(\xi_{m}=2\pi mk_{\rm B}T/\hbar\) (the prime on the sum means that the \(m=0\) term is to be divided by 2). \(k_{\rm B}\) is the Boltzmann constant and \(\hbar\) is the reduced Planck constant. Here \(k_{z}^{\prime}=\mathrm{diag}\big(\mathrm{diag}(k_{zn}^{\prime}),\mathrm{diag}(k_{zn}^{\prime})\big)\), \(k_{zn}^{\prime}=\sqrt{\xi_{m}^{2}/c^{2}+\mathbf{k}_{n}^{2}}\), \(\mathbf{k}_{n}=(k_{x,n},k_{y})\), \(k_{x,n}=k_{x}+n\frac{2\pi}{D}\), \(\mathbf{k}=(k_{x},k_{y})\), \(k_{x}\) is in the first Brillouin zone \((-\frac{\pi}{D},\frac{\pi}{D})\), \(k_{y}\) is in \((-\infty,\infty)\), the multiple scattering matrices are \(U^{(12)}=(1-\mathcal{R}^{(1)+}\mathcal{R}^{(2)-})^{-1}\), \(U^{(21)}=(1-\mathcal{R}^{(2)-}\mathcal{R}^{(1)+})^{-1}\), and \(\mathcal{R}^{(1)+}\) and \(\mathcal{R}^{(2)-}\) are the reflection operators of body 1 and body 2 in the (TE, TM) basis (see [33]). The dielectric function on the imaginary frequency axis \(\varepsilon(i\xi_{m})=1+2\pi^{-1}\int_{0}^{\infty}\omega \varepsilon^{\prime\prime}(\omega)/(\omega^{2}+\xi_{m}^{2})d\omega\) is obtained from the fused silica data of the dielectric function \(\varepsilon(\omega)\) on the real frequency axis [34]. The graphene enters through its conductivity, which explicitly depends on the temperature \(T\) and the chemical potential \(\mu\). It is the sum of interband and intraband contributions, \(\sigma=\sigma_{\rm intra}+\sigma_{\rm inter}\), and on the imaginary frequency axis it takes the form [35; 36; 37; 27]:
\[\sigma_{\rm intra}(i\xi_{m}) = \frac{8\sigma_{0}k_{B}T}{\pi(\hbar\xi_{m}+\hbar/\tau)}\ln\left[2 \cosh\left(\frac{\mu}{2k_{B}T}\right)\right], \tag{3}\] \[\sigma_{\rm inter}(i\xi_{m}) = \frac{\sigma_{0}4\hbar\xi_{m}}{\pi}\int_{0}^{+\infty}\frac{G(x)}{ (\hbar\xi_{m})^{2}+4x^{2}}dx, \tag{4}\]
where \(\sigma_{0}=e^{2}/4\hbar\), \(e\) is the electron charge, \(G(x)=\sinh(x/k_{B}T)/[\cosh(\mu/k_{B}T)+\cosh(x/k_{B}T)]\), and \(\tau\) is the relaxation time (we use \(\tau=10^{-13}\) s).
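For reference, a minimal numerical sketch of Eqs. (3)-(4) could read as follows; the numerically stable rewriting of \(G(x)\) and the handling of the \(\xi=0\) term are implementation choices of ours.

```python
import numpy as np
from scipy.constants import hbar, k as kB, e
from scipy.integrate import quad

sigma0 = e ** 2 / (4 * hbar)                      # universal conductivity e^2 / (4 hbar)

def graphene_sigma(xi, mu_eV, T=300.0, tau=1e-13):
    """Graphene conductivity at imaginary frequency i*xi (Eqs. (3)-(4)), SI units."""
    mu = mu_eV * e                                # chemical potential in joules
    intra = (8 * sigma0 * kB * T / (np.pi * (hbar * xi + hbar / tau))
             * np.log(2 * np.cosh(mu / (2 * kB * T))))

    def G(x):
        a, b = x / (kB * T), mu / (kB * T)
        # numerically stable rewriting of sinh(a) / (cosh(b) + cosh(a))
        return (1 - np.exp(-2 * a)) / (2 * np.cosh(b) * np.exp(-a) + 1 + np.exp(-2 * a))

    if xi == 0.0:
        return intra                              # the interband term vanishes at xi = 0
    integrand = lambda x: G(x) / ((hbar * xi) ** 2 + 4 * x ** 2)
    val, _ = quad(integrand, 0.0, np.inf)
    return intra + sigma0 * 4 * hbar * xi / np.pi * val
```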
To calculate the reflection matrices of bodies 1 and 2, which contain periodic gratings, one can use the simplified version of Fourier Modal Method suited for surface gratings where the fields are expanded in generalized Fourier series (the so called Rayleigh development) in the different homogeneous media while the periodic conductivity is expanded into its Fourier series. Incorporating all this into the boundary conditions yields an algebraic system linking the amplitudes of the fields in the different media. The latter can be recast into a form giving directly the S-matrix of the structure from which one can readily extract the reflection coefficients. However, it is important to note that this method encounters convergence issues when dealing with TM polarization due to the singular nature of the electric field at the edges of the graphene sheet. To address this limitation, the FMM-LBF can be employed (more details in [33]). It incorporates locally defined basis functions that are specifically designed to satisfy the boundary conditions [38; 39].
Another method commonly used in CLF and heat transfer calculations is the FMM with Adaptive Spatial Resolution (FMM-ASR). This method has been specifically developed to address the challenging case of metallic gratings [11; 17; 40]. It involves a change of coordinates along the periodicity direction of the grating (\(x\)-axis), which improves the convergence, leading to much faster computations compared to the FMM and drastically reducing the computational time. The counterpart of this method is that the 2D graphene grating has to be modelled with a finite thickness.
Based on our numerical analysis, we have found that the FMM-ASR method is primarily advantageous at very low frequencies compared to FMM-LBF. As a result, we employ the FMM-ASR method specifically for computing only the first term of the Matsubara sum in Eq. (1), while the subsequent terms are much more efficiently calculated using the FMM-LBF.
_Modulation of the CLF by the chemical potential._ We first study how the CLF can be tuned by changing the chemical potential \(\mu\) of the graphene grating in our system. We considered graphene gratings with two different filling fractions, \(f=0.5\) and \(0.9\), and chemical potentials \(\mu\) = 0, 0.2, 0.4, 0.6, 0.8, and 1.0 eV, coated on a \(h=20\) nm fused silica substrate. The separation distance between the two graphene gratings varies from 60 nm to 10 \(\mu\)m. The dependences of the modulation ratio \(P(\mu)/P(\mu=0)\) on the separation distance \(d\) are shown in Fig. 2 (a) and (b) for \(f=0.5\) and \(0.9\), respectively. The impact of the chemical potential on the CLF of graphene gratings is found to be significant at separations less than 1 \(\mu\)m. The modulation ratio attains peak values of 1.23 and 1.30 for \(f=0.9\) and \(f=0.5\), respectively, and then diminishes as the separation increases beyond 1 \(\mu\)m. This trend is analogous to that observed for graphene multilayers, where the chemical potential effect was negligible at large separations [27]. The modulation ratio for
\(f=0.9\) exceeds that for \(f=0.5\) over the entire range of distances in our calculations.
_Non-additive effects._ Next, we investigate non-additive effects in the CLF. Specifically, in Fig. 3 we compare the full calculation of Eq. (1), which uses the complete scattering of the structured system, with a purely additive and much simpler approximation \(P_{\text{add}}=P(f=1)\times f+P(f=0)\times(1-f)\). The latter treats the pressure as a weighted sum of the CLF occurring between planar, non-nanostructured systems, namely fully graphene-coated substrates \(P(f=1)\) and graphene-free substrates \(P(f=0)\). This study has been done for two separation distances of experimental relevance (\(d=60\) nm and \(d=200\) nm) and for two extreme values of the chemical potential (\(\mu=0\) eV and \(\mu=1\) eV). In Fig. 3, we show the ratio \(P/P_{\text{add}}\) for different filling fractions \(f\). We see that the non-additivity is quite weak (of the order of 5%) at \(d=60\) nm, with little dependence on the chemical potential. This means that, at such short separations, the non-additive complexity and the particular surface mode structure of the given graphene grating have a weak effect, and that one can safely use the approximate additive expression \(P_{\text{add}}\) for experiments with an accuracy of a few percent. It is worth stressing that the calculation of the full exact CLF is \(10^{3}-10^{4}\) times slower and much less straightforward to code than the simple additive calculation \(P_{\text{add}}\), even when the faster numerical FMM-LBF method is used. Exact calculations of the CLF for graphene gratings require considerable computational resources and time.
On the contrary, at larger but still experimentally relevant separations, the situation changes. We see in Fig. 3 that for \(d=200\) nm the CLF is strongly non-additive. The scattering details of the gratings are crucial, and the additive expression \(P_{\text{add}}\) is violated by up to 30%. In this case a comparison with experiments needs a full theory with a complete consideration of the complexity of the nanostructure. Another crucial point emerging from this study is that, remarkably, in this structure the non-additive effect is not only high, but can also be modulated _in situ_ by simply changing the chemical potential. The possibility of tuning non-additive effects without any geometric modification is highly relevant to experimental studies. Specifically, for \(d=200\) nm there are clear changes in the non-additivity \(P/P_{\text{add}}\) as the chemical potential is increased from 0 to 1 eV. Substantial changes in the non-additivity occur over a wide region of filling fraction values. Collective non-additive contributions affect the system as soon as a grating structuration is introduced, even if the strips cover only a relatively minor or major part of the substrate, and the way these non-additive effects contribute can also be easily tuned. Similar to the case of the 60 nm separation, the ratio \(P/P_{\text{add}}\) attains a peak and then gradually decreases as the filling fraction \(f\) increases. The peak position of the \(P/P_{\text{add}}\) curve shifts towards lower values of the
Figure 3: Dependence of the \(P/P_{\text{add}}\) (main figure) and of \(P\) (inset) on the filling fraction \(f\) of the graphene grating.
Figure 2: Normalized CLF \(P(\mu)/P(\mu=0)\) at \(T=300\)K, for different chemical potential \(\mu\) and for a filling fraction (a) \(f=0.5\) and (b) \(f=0.9\).
filling fraction as the separation is increased from 60 nm to 200 nm.
In summary, we have studied the Casimir interactions in graphene nanostructures made of graphene gratings coated on dielectric slabs. To fully take into consideration the high-order diffractions in the CLF acting on the gratings of the two-dimensional materials, we applied an exact method using the Fourier Modal Method incorporated with local basis functions (FMM-LBF). We first find a significant variation of the CLF with the chemical potential. We then study the non-additivity and find that at small separation (\(d=60\) nm) the non-additivity is weak. As a result, the much more direct and faster approximate additive method can safely be used to calculate the force with few-percent precision. On the contrary, a significant non-additive effect is present at larger separations (\(d=200\) nm), with a deviation from the additive prediction going up to 30%. Remarkably, this non-additivity can be modulated _in situ_ by changing the graphene chemical potential, without any need for geometrical or mechanical variations in the system. The possibility of modulating the non-additivity can motivate experimental investigations of the non-additivity of the CLF and open new opportunities for utilizing non-additive effects of the CLF in graphene nanomechanical systems.
The work described in this paper was supported by a grant "CAT" from the ANR/RGC Joint Research Scheme sponsored by the French National Research Agency (ANR) and the Research Grants Council (RGC) of the Hong Kong Special Administrative Region, China (Project No. A-HKUST604/20)01. We acknowledge P. Rodriguez-Lopez for useful comments.
Y.J. and M.G.L. contributed equally to this work.
## II Supplemental Material
In this supplemental material, we provide the calculation of the reflection coefficient for a finite slab with a thickness \(h\) and covered with a graphene strip grating characterized by a period \(D\), a width \(a\) and a surface conductivity \(\sigma_{g}\), as shown in Figure 4. The calculation employs the S-matrix algorithm, where we first compute the interface scattering matrix, denoted \(S_{\text{LBF}}\), between input medium I and medium II, using the Fourier Modal Method with Local Basis Functions (FMM-LBF). Subsequently, we determine the slab scattering matrix, denoted \(S_{\text{slab}}\), between medium II and medium III. Notably, the calculation of \(S_{\text{LBF}}\) and \(S_{\text{slab}}\) is applicable in general cases; however, for the overall S-matrix to be valid, we specifically consider vacuum as both the entry and exit medium for \(S_{\text{LBF}}\), and vacuum as the output medium for \(S_{\text{slab}}\). By performing the star product (\(\star\)) operation between \(S_{\text{LBF}}\) and \(S_{\text{slab}}\), we obtain the overall scattering matrix, denoted \(S\), as shown in Eq. (5). It is worth mentioning that this calculation also accommodates imaginary Matsubara frequencies by setting \(\omega=i\xi_{n}\).
\[S=S_{\text{LBF}}\star S_{\text{slab}}. \tag{5}\]
Let us begin with the first interface scattering matrix \(S_{\text{LBF}}\):
Figure 4: Schematic representation of the system under study. The object consists of a finite fused silica slab with a thickness of \(h\) covered by a graphene grating with a period of \(D\), width of \(a\), filling fraction is defined as \(f=a/D\) and surface conductivity of \(\sigma_{g}\).
## Electromagnetic Fields
In the Cartesian system with basis \((\hat{\mathbf{e}}_{x},\hat{\mathbf{e}}_{y},\hat{\mathbf{e}}_{z})^{\mathrm{T}}\), the wave vector of the incident electromagnetic (EM) wave is
\[\mathbf{k}=(k_{x},k_{y},k_{z}), \tag{6}\]
Due to the periodicity along the \(x\) direction, many new diffraction channels can be excited, which can be characterized by the wave vector component along \(x\). The \(x\)-component of the \(n^{\mathrm{th}}\) diffraction order wave vector is \(k_{xn}=k_{x}+n\frac{2\pi}{D}\), where \(k_{x}\) is defined in the first Brillouin zone \([-\pi/D,\pi/D]\) and \(n\in[-N,N]\). The \(y\)-component of the \(n^{\mathrm{th}}\) diffraction order wave vector is still \(k_{y}\). The \(z\)-component of the \(n^{\mathrm{th}}\) diffraction order wave vector depends on the medium: \(k_{z,n}^{\mathrm{I}}\) and \(k_{z,n}^{\mathrm{II}}\) are the \(z\) wave vectors of the \(n^{\mathrm{th}}\) order diffraction for medium I (\(\varepsilon_{\mathrm{I}}\), incident side) and medium II (\(\varepsilon_{\mathrm{II}}\), output side), respectively. \(\mathbf{k}_{in}\), \(\mathbf{k}_{rn}\), \(\mathbf{k}_{tn}\), and \(\mathbf{k}_{i^{\prime}n}\) are the wave vectors of the \(n^{\mathrm{th}}\) diffraction order for the four kinds of fields (incident and reflected in medium I, forward and backward in medium II), respectively.
\[k_{z,n}^{\mathrm{I}}=\sqrt{k_{0}^{2}\varepsilon_{\mathrm{I}}-k_{xn}^{2}-k_{y} ^{2}}\ \ \ \text{and}\ \ \ k_{z,n}^{\mathrm{II}}=\sqrt{k_{0}^{2}\varepsilon_{\mathrm{II}}-k_{xn}^{2}- k_{y}^{2}}\ \ \ \text{with}\ \ \ k_{0}=\omega/c. \tag{7}\]
The electric field in medium I yields as follows.
\[\mathbf{E}_{\mathrm{I}}=\sum_{n}^{N}(\mathbf{I}_{n}e^{i\mathbf{k}_{in}\cdot\mathbf{r}}+\mathbf{R}_{n}e^{i\mathbf{k}_{rn}\cdot\mathbf{r}}), \tag{8}\]
where \(\mathbf{I}_{n}=(I_{xn},I_{yn},I_{zn})\), \(\mathbf{R}_{n}=(R_{xn},R_{yn},R_{zn})\), \(\mathbf{k}_{in}=(k_{xn},k_{y},k_{z,n}^{\mathrm{I}})\) and \(\mathbf{k}_{rn}=(k_{xn},k_{y},-k_{z,n}^{\mathrm{I}})\). Then the magnetic field in medium I yields
\[\mathbf{H}_{\mathrm{I}}=\frac{1}{k_{0}Z_{0}}\sum_{n}^{N}(\mathbf{k}_{in}\times\mathbf{I}_{n}e^{i\mathbf{k}_{in}\cdot\mathbf{r}}+\mathbf{k}_{rn}\times\mathbf{R}_{n}e^{i\mathbf{k}_{rn}\cdot\mathbf{r}})\ \ \ \text{with}\ \ \ Z_{0}=\sqrt{\frac{\mu_{0}}{\varepsilon_{0}}}, \tag{9}\]
where \(\mathbf{k}_{in}\times\mathbf{I}_{n}=(k_{y}I_{zn}-k_{z,n}^{\mathrm{I}}I_{yn},k_ {z,n}^{\mathrm{I}}I_{xn}-k_{xn}I_{zn},k_{xn}I_{yn}-k_{y}I_{xn})\) and \(\mathbf{k}_{rn}\times\mathbf{R}_{n}=(k_{y}R_{zn}+k_{z,n}^{\mathrm{I}}R_{yn},-k _{z,n}^{\mathrm{I}}R_{xn}-k_{xn}R_{zn},k_{xn}R_{yn}-k_{y}R_{xn})\).
The electric field in medium II yields:
\[\mathbf{E}_{\mathrm{II}}=\sum_{n}^{N}(\mathbf{T}_{n}e^{i\mathbf{k}_{tn}\cdot\mathbf{r}}+\mathbf{I}_{n}^{\prime}e^{i\mathbf{k}_{i^{\prime}n}\cdot\mathbf{r}}), \tag{10}\]

where \(\mathbf{T}_{n}=(T_{xn},T_{yn},T_{zn})\), \(\mathbf{I}_{n}^{\prime}=(I_{xn}^{\prime},I_{yn}^{\prime},I_{zn}^{\prime})\), \(\mathbf{k}_{tn}=(k_{xn},k_{y},k_{z,n}^{\mathrm{II}})\) and \(\mathbf{k}_{i^{\prime}n}=(k_{xn},k_{y},-k_{z,n}^{\mathrm{II}})\), and \(N\) is the truncation order at which convergence is achieved. Then the magnetic field in medium II yields

\[\mathbf{H}_{\mathrm{II}}=\frac{1}{k_{0}Z_{0}}\sum_{n}^{N}(\mathbf{k}_{tn}\times\mathbf{T}_{n}e^{i\mathbf{k}_{tn}\cdot\mathbf{r}}+\mathbf{k}_{i^{\prime}n}\times\mathbf{I}_{n}^{\prime}e^{i\mathbf{k}_{i^{\prime}n}\cdot\mathbf{r}}), \tag{11}\]

where \(\mathbf{k}_{tn}\times\mathbf{T}_{n}=(k_{y}T_{zn}-k_{z,n}^{\mathrm{II}}T_{yn},\,k_{z,n}^{\mathrm{II}}T_{xn}-k_{xn}T_{zn},\,k_{xn}T_{yn}-k_{y}T_{xn})\) and \(\mathbf{k}_{i^{\prime}n}\times\mathbf{I}_{n}^{\prime}=(k_{y}I_{zn}^{\prime}+k_{z,n}^{\mathrm{II}}I_{yn}^{\prime},\,-k_{z,n}^{\mathrm{II}}I_{xn}^{\prime}-k_{xn}I_{zn}^{\prime},\,k_{xn}I_{yn}^{\prime}-k_{y}I_{xn}^{\prime})\).
In addition, due to \(0=\text{div}\mathbf{E}=\mathbf{k}\cdot\mathbf{E}\), we have the following relations
\[\begin{split}& I_{zn}=-\frac{1}{k_{z,n}^{\mathrm{I}}}(k_{xn}I_{xn}+k_{ y}I_{yn}),R_{zn}=\frac{1}{k_{z,n}^{\mathrm{I}}}(k_{xn}R_{xn}+k_{y}R_{yn}),\\ & T_{zn}=-\frac{1}{k_{z,n}^{\mathrm{II}}}(k_{xn}T_{xn}+k_{y}T_{yn}), I_{zn}^{\prime}=\frac{1}{k_{z,n}^{\mathrm{II}}}(k_{xn}I_{xn}^{\prime}+k_{y}I_{yn}^{ \prime}),\end{split} \tag{12}\]
## III Boundary conditions
The boundary conditions for the electric field can be expressed as :
\[\begin{cases}E_{\mathrm{I}x}(x,y,0)=E_{\mathrm{II}x}(x,y,0),\\ E_{\mathrm{I}y}(x,y,0)=E_{\mathrm{II}y}(x,y,0).\end{cases} \tag{13}\]
Substitute Eqs. (8) and (10) into Eq. (13), for arbitrary \(n\), we have
\[\begin{cases}I_{xn}+R_{xn}=I^{\prime}_{xn}+T_{xn},\\ I_{yn}+R_{yn}=I^{\prime}_{yn}+T_{yn},\end{cases} \tag{14}\]
which can be written in compact form as follows:
\[I+R=I^{\prime}+T, \tag{15}\]
where
\[I=\begin{pmatrix}I_{x}\\ I_{y}\end{pmatrix},R=\begin{pmatrix}R_{x}\\ R_{y}\end{pmatrix},I^{\prime}=\begin{pmatrix}I^{\prime}_{x}\\ I^{\prime}_{y}\end{pmatrix},T=\begin{pmatrix}T_{x}\\ T_{y}\end{pmatrix}. \tag{16}\]
Due to the zero-thickness approximation of the graphene grating, the boundary condition on the magnetic fields at the interface between media I and II reads:
\[H_{\rm IIx}(x,y,0)-H_{\rm IX}(x,y,0)=\sigma(x)E_{\rm IIy}(x,y,0), \tag{17}\]
where the function \(\sigma(x)\) is periodic and can be expressed using a Fourier series as follows:
\[\sigma(x)=\begin{cases}\sigma_{g}&\text{if}\ \ 0<x<a\\ 0&\text{if}\ \ a<x<D\end{cases}=\sum_{n^{\prime}}\sigma_{n^{\prime}}e^{i\frac{2\pi}{D}n^{\prime}x}. \tag{18}\]
It is worth noting that both \(E_{\rm Iy}(x,y,0)\) and \(E_{\rm IIy}(x,y,0)\) can be used to obtain the scattering matrices (because \(E_{\rm(I/II)y}(x,y,0)\) is continuous); here, we use \(E_{\rm IIy}(x,y,0)\). Then, by substituting Eqs. (9), (11) and (10) into Eq. (17), and according to Laurent's rule (Fourier factorization), the following relation is obtained for each \(n\):
\[(k_{y}T_{zn}-k_{z,n}^{\rm II}T_{yn})+(k_{y}I^{\prime}_{zn}+k_{z,n}^{\rm II}I^{\prime}_{yn})-(k_{y}I_{zn}-k_{z,n}^{\rm I}I_{yn})-(k_{y}R_{zn}+k_{z,n}^{\rm I}R_{yn})=k_{0}Z_{0}\sum_{n^{\prime}}\sigma_{n-n^{\prime}}(T_{yn^{\prime}}+I^{\prime}_{yn^{\prime}}), \tag{19}\]
It can be expressed in a compact matrix form as:
\[(k_{y}T_{z}-\gamma_{\rm II}T_{y})+(k_{y}I^{\prime}_{z}+\gamma_{\rm II}I^{\prime}_{y})-(k_{y}I_{z}-\gamma_{\rm I}I_{y})-(k_{y}R_{z}+\gamma_{\rm I}R_{y})=k_{0}Z_{0}\left[\left[\sigma\right]\right](T_{y}+I^{\prime}_{y}), \tag{20}\]

where \(\gamma_{\rm II}=\mathrm{diag}(k_{z,n}^{\rm II})\), \(\gamma_{\rm I}=\mathrm{diag}(k_{z,n}^{\rm I})\) and \(\left[\left[\sigma\right]\right]\) denotes the Toeplitz matrix with \((n,n^{\prime})\) entry \(\sigma_{n-n^{\prime}}\), given as follows:
\[\left[\left[\sigma\right]\right]=\begin{pmatrix}\sigma_{0}&\sigma_{-1}&\cdots&\sigma_{-2N}\\ \sigma_{1}&\sigma_{0}&\ddots&\vdots\\ \vdots&\ddots&\ddots&\sigma_{-1}\\ \sigma_{2N}&\cdots&\sigma_{1}&\sigma_{0}\end{pmatrix}. \tag{21}\]
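As an illustration, the Fourier coefficients of the strip conductivity in (18) follow from an elementary integral: the \(n=0\) coefficient is \(\sigma_{g}f\) and, for \(n\neq 0\), \(\sigma_{n}=\sigma_{g}\,e^{-i\pi nf}\sin(\pi nf)/(\pi n)\) with \(f=a/D\). The sketch below (with hypothetical function names) shows one way to assemble them and the Toeplitz matrix of Eq. (21).

```python
import numpy as np
from scipy.linalg import toeplitz

def sigma_fourier(orders, sigma_g, f):
    """Fourier coefficients sigma_n of the strip conductivity in Eq. (18), f = a / D."""
    n = np.asarray(orders, dtype=float)
    out = np.full(n.shape, sigma_g * f, dtype=complex)
    nz = n != 0
    out[nz] = sigma_g * np.exp(-1j * np.pi * n[nz] * f) * np.sin(np.pi * n[nz] * f) / (np.pi * n[nz])
    return out

def sigma_toeplitz(N, sigma_g, f):
    """(2N+1) x (2N+1) Toeplitz matrix [[sigma]] of Eq. (21), entry (n, n') = sigma_{n-n'}."""
    col = sigma_fourier(np.arange(0, 2 * N + 1), sigma_g, f)      # sigma_0 ... sigma_{2N}
    row = sigma_fourier(-np.arange(0, 2 * N + 1), sigma_g, f)     # sigma_0 ... sigma_{-2N}
    return toeplitz(col, row)
```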
Eq. (20) can be reformulated into a more concise form using Eq. (12) as follows:
\[-\left(\frac{\alpha k_{y}}{\gamma_{\rm II}}T_{x}+\Big(\gamma_{\rm II}+\frac{k_{y}^{2}}{\gamma_{\rm II}}\Big)T_{y}\right)+\left(\frac{\alpha k_{y}}{\gamma_{\rm II}}I^{\prime}_{x}+\Big(\gamma_{\rm II}+\frac{k_{y}^{2}}{\gamma_{\rm II}}\Big)I^{\prime}_{y}\right)+\left(\frac{\alpha k_{y}}{\gamma_{\rm I}}I_{x}+\Big(\gamma_{\rm I}+\frac{k_{y}^{2}}{\gamma_{\rm I}}\Big)I_{y}\right)-\left(\frac{\alpha k_{y}}{\gamma_{\rm I}}R_{x}+\Big(\gamma_{\rm I}+\frac{k_{y}^{2}}{\gamma_{\rm I}}\Big)R_{y}\right)=k_{0}Z_{0}\left[\left[\sigma\right]\right](T_{y}+I^{\prime}_{y}), \tag{22}\]
where \(\alpha=\)diag\((k_{xn})\).
On the other hand, according to [38], the electric field \(E_{x}\) on the graphene grating surface \((z=0)\) can be expressed in terms of the local basis functions (LBF) \(g_{m}(x)\) and \(s_{m}(x)\) given in Eq. (24), as:
\[E_{x}(x,y)=e^{ik_{y}y}\begin{cases}\sum_{m=1}^{N_{g}}p_{m}g_{m}(x)&\text{for} \ \ x\in\text{graphene}\\ \sum_{m=0}^{N_{g}-1}q_{m}s_{m}(x)&\text{for}\ \ x\in\text{slit}\end{cases}, \tag{23}\]
where
\[\left\{\begin{aligned} & g_{m}(x)=\sin(m\pi x/a)\\ &\\ & s_{m}(x)=\frac{\cos(m\pi(x-a)/c^{\prime})}{\sqrt{(c^{\prime}/2)^ {2}-(x-x_{c})^{2}}}\end{aligned}\right., \tag{24}\]
with \(c^{\prime}=D-a\) and \(x_{c}=(a+D)/2\).
Now, the boundary conditions for the \(y\)-component of the magnetic field can be expressed as:
\[H_{\rm IIy}(x,y,0)-H_{\rm Iy}(x,y,0)=-\sigma(x)E_{x}(x,y). \tag{25}\]
\[\sum_{n}^{N}\left[(k_{z,n}^{\rm II}T_{xn}-k_{xn}T_{zn}-k_{z,n}^{\rm II}I_{xn}^ {\prime}-k_{xn}I_{zn}^{\prime})\right.\left.-(k_{z,n}^{1}I_{xn}-k_{xn}I_{zn}- k_{z,n}^{1}R_{xn}-k_{xn}R_{zn})\right]e^{ik_{xn}x}=-\sigma(x)k_{0}Z_{0} \left\{\begin{aligned} &\sum_{m=1}^{N_{g}}p_{m}g_{m}(x)\\ &\sum_{m=0}^{N_{g}-1}q_{m}s_{m}(x)\end{aligned}\right. \tag{26}\]
Projecting Eq. (26) onto \(e^{-ik_{xn}x}\), for arbitrary \(n\) we get:
\[(k_{z,n}^{\rm II}T_{xn}-k_{xn}T_{zn}-k_{z,n}^{\rm II}I_{xn}^{\prime}-k_{xn}I_{zn}^{\prime})-(k_{z,n}^{\rm I}I_{xn}-k_{xn}I_{zn}-k_{z,n}^{\rm I}R_{xn}-k_{xn}R_{zn})=-\sigma_{g}k_{0}Z_{0}\sum_{m=1}^{N_{g}}<e^{-ik_{xn}x},p_{m}g_{m}(x)>, \tag{27}\]
It should be noted that the functions \(s_{m}(x)\) do not contribute to Eq. (27) because \(\sigma(x)=0\) on the slit. Then the scalar product \(<f,g>=\frac{1}{D}\int_{0}^{D}f(x)g(x)\mathrm{d}x\) is applied and we obtain:
\[(k_{z,n}^{\rm II}T_{xn}-k_{xn}T_{zn}-k_{z,n}^{\rm II}I_{xn}^{\prime}-k_{xn}I_{zn}^{\prime})-(k_{z,n}^{\rm I}I_{xn}-k_{xn}I_{zn}-k_{z,n}^{\rm I}R_{xn}-k_{xn}R_{zn})=-\frac{1}{D}\int_{0}^{a}\sigma_{g}k_{0}Z_{0}\sum_{m=1}^{N_{g}}p_{m}g_{m}(x)e^{-ik_{xn}x}\mathrm{d}x. \tag{28}\]
Exchanging the order of summation and integration, the RHS of Eq. (28) becomes
\[-\frac{1}{D}\int_{0}^{a}\sigma_{g}k_{0}Z_{0}\sum_{m=1}^{N_{g}}p_{m}g_{m}(x)e^ {-ik_{xn}x}\mathrm{d}x=-\sigma_{g}k_{0}Z_{0}\sum_{m=1}^{N_{g}}p_{m}\left[\frac {1}{D}\int_{0}^{a}g_{m}(x)e^{-ik_{xn}x}\mathrm{d}x\right]=-\sigma_{g}k_{0}Z_{ 0}\sum_{m=1}^{N_{g}}p_{m}G_{nm}. \tag{29}\]
where \(G_{nm}=\frac{1}{D}\int_{0}^{a}g_{m}(x)e^{-ik_{xn}x}\mathrm{d}x=\frac{-ia}{2D}e^{-ik_{xn}a/2}\left[e^{im\pi/2}\,\mathrm{sinc}(\alpha_{nm}^{-}a/2)-e^{-im\pi/2}\,\mathrm{sinc}(\alpha_{nm}^{+}a/2)\right]\), with \(\mathrm{sinc}(u)=\sin(u)/u\) and \(\alpha_{nm}^{\pm}=m\pi/a\pm k_{xn}\).
The Eq. (28) can now be recast in a more compact form as follows:
\[\gamma_{\rm II}T_{x}-\alpha T_{z}-\gamma_{\rm II}I_{x}^{\prime}-\alpha I_{z}^{\prime}-\gamma_{\rm I}I_{x}+\alpha I_{z}+\gamma_{\rm I}R_{x}+\alpha R_{z}=-\sigma_{g}k_{0}Z_{0}\,\mathbb{G}\,p. \tag{30}\]
where \(\mathbb{G}=\{G_{nm}\}\) is a matrix with size (\((2N+1)\times N_{g}\)) and \(p\) is the column vector formed by the \(N_{g}\) coefficients \(p_{m}\) with \(p=(p_{1},p_{2},p_{3}\cdots p_{N_{g}})^{\rm T}\).
We can write the above equation as follows:
\[\gamma_{\rm II}T_{x}-\alpha T_{z}-\gamma_{\rm II}I_{x}^{\prime}-\alpha I_{z}^ {\prime}-\gamma_{\rm I}I_{x}+\alpha I_{z}+\gamma_{\rm I}R_{x}+\alpha R_{z}=- \sigma_{g}k_{0}Z_{0}\left[\mathbb{G}\,\mathbf{0}\right]\left(\begin{matrix}p \\ q\end{matrix}\right). \tag{31}\]
where \(\left[\mathbb{G}\,\mathbf{0}\right]\) is the horizontal concatenation of the matrix \(\mathbb{G}\) and the zero matrix \(\mathbf{0}\) of size \((2N+1)\times N_{s}\).
To obtain \(p\) and \(q\), we need to take advantage of the \(x\)-component electric field boundary condition in the following way:
\[E_{\rm IIx}(x,y,0)=E_{x}(x,y). \tag{32}\]
Substitute Eqs. (10) and (23) into Eq. (32), we get the following condition:
\[\sum_{n}^{N}(T_{xn}+I_{xn}^{\prime})e^{ik_{xn}x}=\left\{\begin{array}{l}\sum_{ m=1}^{N_{g}}p_{m}g_{m}(x)\\ \sum_{m=0}^{N_{g}-1}q_{m}s_{m}(x)\end{array}\right.. \tag{33}\]
By following the same procedure as before, we obtain:
\[T_{xn}+I_{xn}^{\prime}=\sum_{m=1}^{N_{g}}p_{m}\left[\frac{1}{D}\int_{0}^{a}g_{ m}(x)e^{-ik_{xn}x}\mathrm{d}x\right]+\sum_{m=0}^{N_{g}-1}q_{m}\left[\frac{1}{D} \int_{a}^{D}s_{m}(x)e^{-ik_{xn}x}\mathrm{d}x\right]=\sum_{m=1}^{N_{g}}p_{m}G_{ nm}+\sum_{m=0}^{N_{g}-1}q_{m}S_{nm}, \tag{34}\]
where \(S_{nm}=\frac{\pi}{2D}e^{-ik_{xn}x_{c}}\left[e^{im\pi/2}J_{0}(\beta_{nm}^{-}c^{\prime}/2)+e^{-im\pi/2}J_{0}(\beta_{nm}^{+}c^{\prime}/2)\right]\), \(\beta_{nm}^{\pm}=m\pi/c^{\prime}\pm k_{xn}\).
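A direct transcription of these closed forms for \(G_{nm}\) and \(S_{nm}\) into code could read as follows; the array shapes and the function name are our own.

```python
import numpy as np
from scipy.special import j0

def build_G_S(kxn, D, a, Ng, Ns):
    """Projection matrices G_{nm} (graphene basis) and S_{nm} (slit basis).

    kxn : 1D array of the retained k_{x,n}; returns G of shape (len(kxn), Ng)
    and S of shape (len(kxn), Ns)."""
    sinc = lambda u: np.sinc(u / np.pi)            # unnormalized sinc(u) = sin(u)/u
    cp, xc = D - a, (a + D) / 2.0                  # slit width c' and slit centre x_c
    k = np.asarray(kxn)[:, None]

    m = np.arange(1, Ng + 1)[None, :]
    am, ap = m * np.pi / a - k, m * np.pi / a + k  # alpha^-_{nm}, alpha^+_{nm}
    G = (-1j * a / (2 * D)) * np.exp(-1j * k * a / 2) * (
        np.exp(1j * m * np.pi / 2) * sinc(am * a / 2)
        - np.exp(-1j * m * np.pi / 2) * sinc(ap * a / 2))

    m = np.arange(0, Ns)[None, :]
    bm, bp = m * np.pi / cp - k, m * np.pi / cp + k  # beta^-_{nm}, beta^+_{nm}
    S = (np.pi / (2 * D)) * np.exp(-1j * k * xc) * (
        np.exp(1j * m * np.pi / 2) * j0(bm * cp / 2)
        + np.exp(-1j * m * np.pi / 2) * j0(bp * cp / 2))
    return G, S
```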
In compact matrix form equation (34) becomes:
\[T_{x}+I_{xn}^{\prime}=\mathbb{G}p+\mathbb{S}q, \tag{35}\]
where \(\mathbb{S}=\{S_{nm}\}\) is a matrix with size \(((2N+1)\times N_{s})\), \(q\) is the column vector formed by the \(N_{s}\) coefficients \(q=(q_{0},q_{1},q_{2}\cdots q_{N_{s}-1})^{\mathrm{T}}\).
We can write the above equation as follows:
\[\begin{pmatrix}p\\ q\end{pmatrix}=\left[\mathbb{G}\mathbb{S}\right]^{-1}(T_{x}+I_{x}^{\prime}), \tag{36}\]
where \(\left[\mathbb{G}\,\mathbb{S}\right]\) is the horizontal concatenation of the matrices \(\mathbb{G}\) and \(\mathbb{S}\).
Substitute Eq. (36) into Eq. (31), we have
\[\gamma_{\mathrm{II}}T_{x}-\alpha\,T_{z}-\gamma_{\mathrm{II}}I_{x}^{\prime}-\alpha\,I_{z}^{\prime}-\gamma_{\mathrm{I}}I_{x}+\alpha I_{z}+\gamma_{\mathrm{I}}R_{x}+\alpha R_{z}=-\sigma_{g}k_{0}Z_{0}\left[\mathbb{G}\,\mathbf{0}\right]\left[\mathbb{G}\,\mathbb{S}\right]^{-1}(T_{x}+I_{x}^{\prime}). \tag{37}\]
Substitute Eq. (12) into Eqs. (22) and (37) we get:
\[\begin{split}&\begin{pmatrix}\gamma_{\mathrm{II}}+\frac{\alpha^{2}}{\gamma_{\mathrm{II}}}+\sigma_{g}k_{0}Z_{0}\left[\mathbb{G}\,\mathbf{0}\right]\left[\mathbb{G}\,\mathbb{S}\right]^{-1}&\frac{\alpha k_{y}}{\gamma_{\mathrm{II}}}\\ \frac{\alpha k_{y}}{\gamma_{\mathrm{II}}}&\gamma_{\mathrm{II}}+\frac{k_{y}^{2}}{\gamma_{\mathrm{II}}}+\left[\left[\sigma\right]\right]k_{0}Z_{0}\end{pmatrix}\begin{pmatrix}T_{x}\\ T_{y}\end{pmatrix}\\ &\quad-\begin{pmatrix}\gamma_{\mathrm{II}}+\frac{\alpha^{2}}{\gamma_{\mathrm{II}}}-\sigma_{g}k_{0}Z_{0}\left[\mathbb{G}\,\mathbf{0}\right]\left[\mathbb{G}\,\mathbb{S}\right]^{-1}&\frac{\alpha k_{y}}{\gamma_{\mathrm{II}}}\\ \frac{\alpha k_{y}}{\gamma_{\mathrm{II}}}&\gamma_{\mathrm{II}}+\frac{k_{y}^{2}}{\gamma_{\mathrm{II}}}-\left[\left[\sigma\right]\right]k_{0}Z_{0}\end{pmatrix}\begin{pmatrix}I_{x}^{\prime}\\ I_{y}^{\prime}\end{pmatrix}\\ &\quad=\begin{pmatrix}\gamma_{\mathrm{I}}+\frac{\alpha^{2}}{\gamma_{\mathrm{I}}}&\frac{\alpha k_{y}}{\gamma_{\mathrm{I}}}\\ \frac{\alpha k_{y}}{\gamma_{\mathrm{I}}}&\gamma_{\mathrm{I}}+\frac{k_{y}^{2}}{\gamma_{\mathrm{I}}}\end{pmatrix}\begin{pmatrix}I_{x}\\ I_{y}\end{pmatrix}-\begin{pmatrix}\gamma_{\mathrm{I}}+\frac{\alpha^{2}}{\gamma_{\mathrm{I}}}&\frac{\alpha k_{y}}{\gamma_{\mathrm{I}}}\\ \frac{\alpha k_{y}}{\gamma_{\mathrm{I}}}&\gamma_{\mathrm{I}}+\frac{k_{y}^{2}}{\gamma_{\mathrm{I}}}\end{pmatrix}\begin{pmatrix}R_{x}\\ R_{y}\end{pmatrix}.\end{split} \tag{38}\]
The above Eq. (38) can be written in a more compact form further as follows:
\[(A+\Lambda)\,T+(\Lambda-A)\,I^{\prime}=B(I-R), \tag{39}\]
where \(\Lambda=\text{diag}\big(\sigma_{\mathrm{g}}k_{0}Z_{0}\left[\mathbb{G}\,\mathbf{0}\right]\left[\mathbb{G}\,\mathbb{S}\right]^{-1},\,\left[\left[\sigma\right]\right]k_{0}Z_{0}\big)\) is block diagonal, and \(A\) and \(B\) are defined as follows.
\[A=\begin{pmatrix}\gamma_{\mathrm{II}}+\frac{\alpha^{2}}{\gamma_{\mathrm{II}}}&\frac{\alpha k_{y}}{\gamma_{\mathrm{II}}}\\ \frac{\alpha k_{y}}{\gamma_{\mathrm{II}}}&\gamma_{\mathrm{II}}+\frac{k_{y}^{2}}{\gamma_{\mathrm{II}}}\end{pmatrix},\quad B=\begin{pmatrix}\gamma_{\mathrm{I}}+\frac{\alpha^{2}}{\gamma_{\mathrm{I}}}&\frac{\alpha k_{y}}{\gamma_{\mathrm{I}}}\\ \frac{\alpha k_{y}}{\gamma_{\mathrm{I}}}&\gamma_{\mathrm{I}}+\frac{k_{y}^{2}}{\gamma_{\mathrm{I}}}\end{pmatrix}. \tag{40}\]
Combining Eqs. (15) and (39), the field amplitudes for conical incidence satisfy:
\[\begin{pmatrix}\mathbb{1}&-\mathbb{1}\\ B&A+\Lambda\end{pmatrix}\begin{pmatrix}R\\ T\end{pmatrix}=\begin{pmatrix}-\mathbb{1}&\mathbb{1}\\ B&A-\Lambda\end{pmatrix}\begin{pmatrix}I\\ I^{\prime}\end{pmatrix}, \tag{41}\]

where \(\mathbb{1}\) is the identity matrix of size \((2(2N+1)\times 2(2N+1))\).
Finally the interface scattering matrix \(S_{\text{LBF}}\) can be obtain as following:
\[\begin{pmatrix}R\\ T\end{pmatrix}=S_{\text{LBF}}\begin{pmatrix}I\\ I^{\prime}\end{pmatrix}, \tag{42}\]
where
\[S_{\text{LBF}}=\begin{pmatrix}\mathbb{1}&-\mathbb{1}\\ B&A+\Lambda\end{pmatrix}^{-1}\begin{pmatrix}-\mathbb{1}&\mathbb{1}\\ B&A-\Lambda\end{pmatrix}. \tag{43}\]
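As an illustration, once the blocks \(A\), \(B\) (Eq. (40)) and \(\Lambda\) have been assembled, Eq. (43) amounts to a single block linear solve; the minimal sketch below assumes dense NumPy matrices and a hypothetical function name.

```python
import numpy as np

def assemble_S_LBF(A, B, Lam):
    """Interface S-matrix of Eq. (43) from the blocks A, B (Eq. (40)) and Lambda.

    All blocks are square of size 2(2N+1); returns S_LBF such that (R, T) = S_LBF (I, I')."""
    one = np.eye(A.shape[0], dtype=complex)
    lhs = np.block([[one, -one], [B, A + Lam]])
    rhs = np.block([[-one, one], [B, A - Lam]])
    return np.linalg.solve(lhs, rhs)
```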
## IV Overall scattering matrix of the finite-thickness planar structure covered with graphene grating
Now let us move on to the second scattering matrix, \(S_{\text{slab}}\), which is given by:
\[\begin{pmatrix}T\\ I^{\prime\prime}\end{pmatrix}=S_{\text{slab}}\begin{pmatrix}I^{\prime}\\ T^{\prime}\end{pmatrix}, \tag{44}\]
where
\[S_{\text{slab}}=\begin{pmatrix}\Phi&0\\ 0&1\end{pmatrix}\begin{pmatrix}\mathbb{1}&-\mathbb{1}\\ \mathbb{M}_{1}&\mathbb{M}_{2}\end{pmatrix}^{-1}\begin{pmatrix}-\mathbb{1}& \mathbb{1}\\ \mathbb{M}_{1}&\mathbb{M}_{2}\end{pmatrix}\begin{pmatrix}\Phi&0\\ 0&1\end{pmatrix}, \tag{45}\]
with \(\Phi=\text{diag}\big{(}\text{diag}(e^{i\gamma_{\text{I}}h}),\text{diag}(e^{ i\gamma_{\text{I}}h})\big{)}\), \(0\) is the null matrix of size \((2(2N+1)\times 2(2N+1))\), and
\[\mathbb{M}_{1}=\begin{pmatrix}\frac{ak_{y}}{\gamma_{\text{I}}}&\gamma_{\text{ I}}+\frac{k_{y}^{2}}{\gamma_{\text{I}}}\\ -(\gamma_{\text{I}}+\frac{a^{2}}{\gamma_{\text{I}}})&-\frac{ak_{y}}{\gamma_{ \text{I}}}\end{pmatrix},\mathbb{M}_{2}=\begin{pmatrix}\frac{ak_{y}}{\gamma_{ \text{I}}}&\gamma_{\text{I}}+\frac{k_{y}^{2}}{\gamma_{\text{I}}}\\ -(\gamma_{\text{I}}+\frac{a^{2}}{\gamma_{\text{I}}})&-\frac{ak_{y}}{\gamma_{ \text{I}}}\end{pmatrix}, \tag{46}\]
where \(\gamma_{\text{III}}=\sqrt{k_{0}^{2}\varepsilon_{\text{III}}-\alpha^{2}-k_{y}^{2}}\).
Finally, we obtain the total S-matrix (cf. equation(5)) that connects the input of medium I to the output of medium III as follows:
\[\begin{pmatrix}I\\ I^{\prime\prime}\end{pmatrix}=(S_{\text{LBF}}\star S_{\text{slab}})\begin{pmatrix} R\\ T^{\prime}\end{pmatrix}=S\begin{pmatrix}R\\ T^{\prime}\end{pmatrix}, \tag{47}\]
The operation \(\mathbb{A}=\mathbb{B}\star\mathbb{C}\) is defined as [40]
\[\begin{split}\mathbb{A}_{11}&=\mathbb{B}_{11}+\mathbb{B}_{12}( \mathbb{1}-\mathbb{C}_{11}\mathbb{B}_{22})^{-1}\mathbb{C}_{11}\mathbb{B}_{21}, \\ \mathbb{A}_{12}&=\mathbb{B}_{12}(\mathbb{1}-\mathbb{C}_{11} \mathbb{B}_{22})^{-1}\mathbb{C}_{12},\\ \mathbb{A}_{21}&=\mathbb{C}_{21}(\mathbb{1}-\mathbb{B}_{22} \mathbb{C}_{11})^{-1}\mathbb{B}_{21},\\ \mathbb{A}_{22}&=\mathbb{C}_{22}+\mathbb{C}_{21}(\mathbb{1}- \mathbb{B}_{22}\mathbb{C}_{11})^{-1}\mathbb{B}_{22}\mathbb{C}_{12}.\end{split} \tag{48}\]
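A compact sketch of this star product, with each scattering matrix stored as a 2x2 tuple of equal-size square blocks, is given below; the block names follow Eq. (48) and the function name is ours.

```python
import numpy as np

def star_product(B, C):
    """Redheffer star product of Eq. (48); B and C are ((X11, X12), (X21, X22)) block tuples."""
    (B11, B12), (B21, B22) = B
    (C11, C12), (C21, C22) = C
    I = np.eye(B11.shape[0], dtype=complex)
    X = np.linalg.inv(I - C11 @ B22)               # (1 - C11 B22)^{-1}
    Y = np.linalg.inv(I - B22 @ C11)               # (1 - B22 C11)^{-1}
    A11 = B11 + B12 @ X @ C11 @ B21
    A12 = B12 @ X @ C12
    A21 = C21 @ Y @ B21
    A22 = C22 + C21 @ Y @ B22 @ C12
    return (A11, A12), (A21, A22)
```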
The scattering matrix in Eq. (47) with a size of \((4(2N+1)\times 4(2N+1))\) define the reflection and transmission matrices as follows :
\[S=\begin{pmatrix}\mathcal{R}_{xyz}^{-}&\mathcal{T}_{xyz}^{-}\\ \mathcal{T}_{xyz}^{+}&\mathcal{R}_{xyz}^{+}\end{pmatrix}. \tag{49}\]
For completeness, the \(\mathcal{T}_{xyz}^{-}\) and \(\mathcal{T}_{xyz}^{+}\) transmission coefficients must be multiplied by a phase factor \(e^{-i\gamma_{\text{III}}h}\).
## Transformation matrices
In the expression of the CLF of Eq. (1), the scattering matrices are expressed in the standard surface optics TE and TM basis. In this section we provide the details on how to change basis and how to explicitly derive the \(\mathcal{R}^{(1)+}\) and \(\mathcal{R}^{(2)-}\) entering in equation (1).
To transform the reflection operators from the \((x,y,z)\) Cartesian basis to the (TE, TM) basis, we first need to define the unit vectors of the (TE, TM) basis as follows:
\[\begin{split}&\hat{\mathbf{e}}_{\text{TE}}^{\phi}(\mathbf{k}_{n}, \omega)=\frac{1}{k_{n}}(-k_{y}\hat{\mathbf{e}}_{x}+k_{x,n}\hat{\mathbf{e}}_{y }),\\ &\hat{\mathbf{e}}_{\text{TM}}^{\phi}(\mathbf{k}_{n},\omega)=\frac{ c}{\omega}(-k_{n}\hat{\mathbf{e}}_{z}+\phi k_{z,n}\hat{\mathbf{k}}_{n}),\end{split} \tag{50}\]
where \(\hat{\mathbf{e}}_{x}\), \(\hat{\mathbf{e}}_{y}\) and \(\hat{\mathbf{e}}_{z}\) are the unit vectors of the \((x,y,z)\) Cartesian basis, \(\mathbf{k}_{n}=(k_{x,n},k_{y})\), \(\hat{\mathbf{k}}_{n}=\mathbf{k}_{n}/k_{n}\), and \(k_{z,n}=\sqrt{\omega^{2}/c^{2}-\mathbf{k}_{n}^{2}}\). \(\phi\) denotes the direction of propagation of the waves \((+,-)\) along the \(z\)-axis for the incident and the reflected fields, respectively.
In the \((x,y,z)\) Cartesian basis, the field of order \(n\) is expressed as \(\mathbf{E}_{n}=E_{x,n}\hat{\mathbf{e}}_{x}+E_{y,n}\hat{\mathbf{e}}_{y}+E_{z,n}\hat{\mathbf{e}}_{z}\). In the (TE, TM) basis we can write \(\mathbf{E}_{n}=E_{\text{TE},n}\hat{\mathbf{e}}_{\text{TE}}+E_{\text{TM},n}\hat{\mathbf{e}}_{\text{TM}}\), which can be rearranged by applying Eq. (50) as follows:
\[\mathbf{E}_{n}=(-\frac{k_{y}}{k_{n}}E_{\text{TE},n}+\phi\frac{c}{\omega}\frac{ k_{z,n}k_{x}}{k_{n}}E_{\text{TM},n})\hat{\mathbf{e}}_{x}+(\frac{k_{x}}{k_{n}}E_{ \text{TE},n}+\phi\frac{c}{\omega}\frac{k_{z,n}k_{y}}{k_{n}}E_{\text{TM},n}) \hat{\mathbf{e}}_{y}-k_{n}\frac{c}{\omega}E_{\text{TM},n}\hat{\mathbf{e}}_{z}. \tag{51}\]
By comparing the above Eq. (51) and \(\mathbf{E}_{n}=E_{x,n}\hat{\mathbf{e}}_{x}+E_{y,n}\hat{\mathbf{e}}_{y}+E_{z,n} \hat{\mathbf{e}}_{z}\), we can obtain the following relation:
\[\begin{pmatrix}E_{x,n}\\ E_{y,n}\end{pmatrix}=\begin{pmatrix}-\frac{k_{y}}{k_{n}}&\phi\frac{c}{\omega}\frac{k_{z,n}k_{x,n}}{k_{n}}\\ \frac{k_{x,n}}{k_{n}}&\phi\frac{c}{\omega}\frac{k_{z,n}k_{y}}{k_{n}}\end{pmatrix}\begin{pmatrix}E_{\text{TE},n}\\ E_{\text{TM},n}\end{pmatrix}. \tag{52}\]
The above relationship can be expressed in a more concise form:
\[\begin{pmatrix}E_{x}\\ E_{y}\end{pmatrix}=\mathbb{B}^{\phi}\begin{pmatrix}E_{\text{TE}}\\ E_{\text{TM}}\end{pmatrix}, \tag{53}\]
where the transformation matrix is
\[\mathbb{B}^{\phi}=\begin{pmatrix}-\text{diag}(\frac{k_{y}}{k_{n}})&\text{diag}(\frac{\phi\,c\,k_{z,n}k_{x,n}}{\omega k_{n}})\\ \text{diag}(\frac{k_{x,n}}{k_{n}})&\text{diag}(\frac{\phi\,c\,k_{z,n}k_{y}}{\omega k_{n}})\end{pmatrix}. \tag{54}\]
By applying this transformation matrix to the reflection operator \(\mathcal{R}^{-}_{xyz}\), the reflection operator in the (TE, TM) basis reads
\[\mathcal{R}^{-}=(\mathbb{B}^{-})^{-1}\ \mathcal{R}^{-}_{xyz}\ \mathbb{B}^{+}. \tag{55}\]
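As an illustration, the change of basis of Eqs. (54)-(55) amounts to assembling diagonal blocks over the diffraction orders and applying one two-sided transformation. The sketch below is ours and assumes the in-plane wavevector components \(k_{x,n}\) and \(k_{z,n}\) of all orders are given as arrays (with \(k_{y}\) fixed); it is not part of the original formalism.

```python
import numpy as np

def basis_matrix(phi, kx, ky, kz, omega, c=1.0):
    """B^phi of Eq. (54): maps (TE, TM) amplitudes to (E_x, E_y) components, order by order."""
    kn = np.sqrt(kx**2 + ky**2)
    d = len(kx)
    B = np.zeros((2 * d, 2 * d), dtype=complex)
    B[:d, :d] = np.diag(-ky / kn)
    B[:d, d:] = np.diag(phi * c * kz * kx / (omega * kn))
    B[d:, :d] = np.diag(kx / kn)
    B[d:, d:] = np.diag(phi * c * kz * ky / (omega * kn))
    return B

def reflection_te_tm(R_xyz, B_minus, B_plus):
    """Eq. (55): R^- = (B^-)^{-1} R^-_xyz B^+."""
    return np.linalg.solve(B_minus, R_xyz @ B_plus)
```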
We stress here that, due to the change of basis, the matrix \(\mathcal{R}^{-}\) as defined in Eq. (55) is now ordered in this specific basis as follows:
\[\mathcal{R}^{-}=\begin{pmatrix}\mathcal{R}^{-}_{TE/TE}&\mathcal{R}^{-}_{TE/TM} \\ \mathcal{R}^{-}_{TM/TE}&\mathcal{R}^{-}_{TM/TM}\end{pmatrix} \tag{56}\]
The size of matrix \(\mathcal{R}^{-}\) is \(2(2N+1)\times 2(2N+1)\) while the size of the sub matrices is \((2N+1)\times(2N+1)\).
The reflection matrices \(\mathcal{R}^{(1)+}\) and \(\mathcal{R}^{(2)-}\) of the two bodies, made of the same material, appear in equations (1) and (2) of the main paper; they can be directly related to the matrix \(\mathcal{R}^{-}\) we just derived in Eq. (56). In particular, due to the different \(z\)-axis orientation, \(\mathcal{R}^{(1)+}\) is identical to \(\mathcal{R}^{-}\) on the two diagonal blocks (TE/TE) and (TM/TM), while it has a sign difference for the two off-diagonal blocks (TE/TM) and (TM/TE), as described in the paper [40], whereas \(\mathcal{R}^{(2)-}\) is identical to \(\mathcal{R}^{-}\).
\[\mathcal{R}^{(1)+}=\begin{cases}\mathcal{R}^{-}_{p,p}&p=p^{\prime}\\ -\mathcal{R}^{-}_{p,p^{\prime}}&p\neq p^{\prime}\end{cases} \tag{57}\]
When body 2 is positioned at a distance \(d\) from the origin, this results in a phase shift in the reflection operator \(\mathcal{R}^{(2)-}\), similar to what is described in the paper [32].
\[\left\langle p,\mathbf{k},n|\mathcal{R}^{(2)-}(\omega)|p^{\prime},\mathbf{k}^{ \prime},n^{\prime}\right\rangle=e^{i\left(k_{zn}+k^{\prime}_{z,n^{\prime}} \right)d}\left\langle p,\mathbf{k},n|\mathcal{R}^{-}(\omega)|p^{\prime}, \mathbf{k}^{\prime},n^{\prime}\right\rangle. \tag{58}\]
We are finally able to compute the CLF in Eq. (1) of the main text with these two reflection operators associated with each body.
|
2309.04907 | Effective Real Image Editing with Accelerated Iterative Diffusion
Inversion | Despite all recent progress, it is still challenging to edit and manipulate
natural images with modern generative models. When using Generative Adversarial
Network (GAN), one major hurdle is in the inversion process mapping a real
image to its corresponding noise vector in the latent space, since it is
necessary to be able to reconstruct an image to edit its contents. Likewise for
Denoising Diffusion Implicit Models (DDIM), the linearization assumption in
each inversion step makes the whole deterministic inversion process unreliable.
Existing approaches that have tackled the problem of inversion stability often
incur in significant trade-offs in computational efficiency. In this work we
propose an Accelerated Iterative Diffusion Inversion method, dubbed AIDI, that
significantly improves reconstruction accuracy with minimal additional overhead
in space and time complexity. By using a novel blended guidance technique, we
show that effective results can be obtained on a large range of image editing
tasks without large classifier-free guidance in inversion. Furthermore, when
compared with other diffusion inversion based works, our proposed process is
shown to be more robust for fast image editing in the 10 and 20 diffusion
steps' regimes. | Zhihong Pan, Riccardo Gherardi, Xiufeng Xie, Stephen Huang | 2023-09-10T01:23:05Z | http://arxiv.org/abs/2309.04907v1 | # Effective Real Image Editing with Accelerated Iterative Diffusion Inversion
###### Abstract
Despite all recent progress, it is still challenging to edit and manipulate natural images with modern generative models. When using Generative Adversarial Network (GAN), one major hurdle is in the inversion process mapping a real image to its corresponding noise vector in the latent space, since it is necessary to be able to reconstruct an image to edit its contents. Likewise for Denoising Diffusion Implicit Models (DDIM), the linearization assumption in each inversion step makes the whole deterministic inversion process unreliable. Existing approaches that have tackled the problem of inversion stability often incur in significant trade-offs in computational efficiency. In this work we propose an Accelerated Iterative Diffusion Inversion method, dubbed AIDI, that significantly improves reconstruction accuracy with minimal additional overhead in space and time complexity. By using a novel blended guidance technique, we show that effective results can be obtained on a large range of image editing tasks without large classifier-free guidance in inversion. Furthermore, when compared with other diffusion inversion based works, our proposed process is shown to be more robust for fast image editing in the 10 and 20 diffusion steps' regimes.
## 1 Introduction
Diffusion models are a class of generative models that learn to generate high-quality images by iteratively applying a denoising process from a random noisy starting point. They have been capable of achieving state-of-the-art (SOTA) image quality since the very early denoising diffusion probabilistic models (DDPM) [1] and score-based generative modeling [2]. While the number of sampling steps required for high quality image generation was initially very large, several follow-up studies [3, 4, 5, 6] have significantly reduced the number of steps without degrading the image quality, making the widespread use of diffusion models possible. In particular, denoising diffusion implicit models (DDIM) [7] are widely used for their speed and flexibility in deterministic and stochastic generations. Further reducing the number of sampling steps for both image generation and editing is nevertheless still an open research problem.
Diffusion models were initially designed for image generation; for this reason, their usefulness for real image editing is limited without a proper inversion process, similar to the inversion challenge [8] faced by real image editing using a Generative Adversarial Network (GAN) [9]. GAN inversion is limited by the reduced dimensionality in latent space; diffusion inversion is comparably less restricted as the latent space has the same dimensionality as the original image. A naive one-step inversion, _i.e._ simply perturbing an image with random noise, was initially used for early image editing works such as SDEdit [10] and Blended Diffusion [11]. Later, an Euler method based inversion process that applies deterministic step-by-step noise injection was used for image-to-image translation in DDIB [12] and DiffusionCLIP [13]. However, as shown in the text-guided image editing tests in Prompt-to-Prompt (PTP) [14], such inversion is not reliable for real image editing because the inversion often leads to failed reconstruction even when no editing is performed. Follow-up works like null-text inversion (NTI) [15] and exact diffusion inversion (EDICT) [16] have focused on improving the reconstruction accuracy by introducing auxiliary variables like the learned null-text embedding in NTI, or processes like the coupled diffusion process in EDICT. The reconstruction accuracy improvements come however with regressions in computational complexity of the inversion and/or the editing processes.
In this paper, we are the first to look beyond the simple inversion process that is based on the Euler method and investigate a better numerical solver for improved inversion stability. Modeling the inversion process as an ordinary differential equation (ODE) problem, the implicit backward Euler method is well suited, as its solution results in an exact reconstruction using Euler's method, assuming the same time steps are used. Given that, we propose an Accelerated Iterative Diffusion Inversion (AIDI) method that adopts a fixed-point iteration process for each inversion step. Combined with the Anderson acceleration [17] method, which helps the convergence of this iteration, it is demonstrated through experiments on a large test set that it results in significantly improved reconstruction accuracy. Alternatively, an empirical acceleration method is introduced for equivalent performance with less computational overhead.
While a large classifier-free guidance [18] scale is often needed for effective image editing, inversion with the same large guidance scale is not reliable even for our proposed AIDI. We have demonstrated nevertheless that it is possible to apply different guidance scales for inversion and editing respectively and still achieve effective editing using our proposed blended guidance strategy. Inspired by the cross-attention map replacement method proposed in PTP [14], we utilize the cross-attention map from the image reconstruction using the same guidance setting of inversion to blend different guidance scales in image editing. Higher guidance scales, up to 7, are applied for pixels that require more editing, and the low scale used in inversion, defaulting to one, is used for irrelevant pixels.
As shown in Fig. 1, using a challenging image editing task that swaps dogs in high-resolution AFHQ [19] images to cats, our method is the best overall in terms of both editing quality, evaluated with the FID [20] score as related to the target cat domain in AFHQ, and perceptual similarity, evaluated using the LPIPS [21] metric in reference to the input image. Fig. 1 also shows the average latency time as proportional circular areas. Here the number of function evaluations (NFE) is not used because it is not an accurate metric of computational complexity for all methods. For example, in NTI there is substantial overhead in back propagation caused by learning the null-text embeddings in the inversion process. Here we fix the number of diffusion steps to assess average latency time instead, as it is correlated with image editing quality for all methods. Note that a small number of 20 inversion and editing steps is used for all methods here. As the inversion process is only needed once per image while editing could be applied multiple times, the latency time is split to editing and inversion, represented as inner circle and outer ring respectively. For latency times, ours is only slower in inversion than the original PTP while as fast as PTP and NTI in editing.
In summary, based on pretrained text-to-image diffusion models, we propose a framework for text-based real image editing with the following key advantages:
* We are the first to our knowledge to apply fixed-point iteration and acceleration in diffusion inversion, showing significant improvements in reconstruction accuracy based on a large 5000 image COCO test set. The LPIPS value for a 20-step reconstruction is reduced to 0.063 compared to 0.148 for the baseline.
* For inversion without classifier-free guidance or with a low guidance scale, we propose a blended guidance method to apply larger guidance scales for effective editing in relevant areas, while maintaining fidelity elsewhere with low scales.
* Our proposed image editing method is still effective for inversion steps as low as 10, where competing approaches exhibit significant artifacts.
## 2 Related Works
### Text-to-Image Diffusion Models
The rapid progress of Diffusion-based generative models has advanced the state-of-the-art for many generative tasks, including text-to-image generations. A highly capable unconditional diffusion model was shown in [22], using sampling guidance to match the CLIP scores of the text input and generated image. More recently techniques such as GLIDE[23], DALL-E 2 [24] and Imagen[25], have used text embeddings from large language models to train conditional diffusion models, all of them capable of generating diverse and high quality images that match with the arbitrarily complex prompt texts. Both GLIDE and DALL-E 2 are conditional to CLIP textual embeddings, while DALL-E 2 trains instead a diffusion prior to first generate image embeddings from the input text CLIP embedding, before the image embedding is then fed into another diffusion model for image generation. To handle high-resolution image generation, both GLIDE and Imagen generate the text conditional image at low-resolution using cascaded diffusion models, which are conditional to both text and image to progressively increase resolution. In alternative to that, LDM [26] proposed to conduct the conditional text-to-image diffusion in a latent space of reduced dimensionality for faster training and sampling. Based on the LDM architecture, a large text-to-image model Stable Diffusion [27] was trained with a huge dataset and released for open research.
Figure 1: Quantitative assessment for various diffusion inversion based methods, using a challenging image editing task of swapping dog to cat in AFHQ test set. Image editing quality are assessed jointly by LPIPS and FID. Lower LPIPS score is preferred for perception similarity with original image and lower FID in reference to the AFHQ cat train set translates to better image editing quality. Circle areas and their outer ring represent average latency time for editing and inversion respectively.
### Real Image Editing in Generative Models
Since the successful implementation of disentangled representation by StyleGAN [28, 29], image editing in GAN has become very apt at separating different modes of variation such as pose, expression, color, and texture. Various methods [30, 31] have been published for text-guided image editing using the contrastive language-image model CLIP. While powerful for manipulating generated images, applying it to real image editing is not straightforward since the inversion of a real image to latent variables is non-trivial. Earlier GAN inversion methods focused on inversion without changing the GAN model weights, either via optimization [32, 33, 34, 35] or learning [36, 37, 38, 39]. Two recent works, HyperStyle [40] and HyperInverter [41], introduced hypernetworks [42] to modify the GAN weights for each image and help recovering lost details during reconstruction. It was shown that the modified weights have no adversarial effects on generated image quality when the inverted latent variables are modified for image editing. However, even with modified weights, GAN inversion is far from perfect due to the significantly reduced dimensionality of the latent space as compared to the original image pixel space.
Diffusion models are not subject to this limitation since there are no dimensionality changes between input and output for each inversion step. Earlier image editing methods using diffusion models, including SDEdit [10] and blended diffusion [11], didn't utilize the potential of accurate diffusion inversion as they inject random noise into the input image to synthesize a noisy start. Another recent work, Imagic [43], achieved mask-less real image editing without involving diffusion inversion, utilizing optimized textual embedding and diffusion model fine tuning instead.
DiffusionCLIP [13] was the first to adopt the more accurate step-by-step diffusion inversion process but it relied on diffusion model refinement to achieve text-guided image editing, without addressing the inversion accuracy challenge directly. DiffEdit [44] also avoided the inaccuracy concern by controlling an encoding ratio and applying a generated mask. Prompt-to-Prompt (PTP) [14] was the first to achieve comprehensive text-guided image editing without diffusion model refinement, including local editing without a known mask. However, it focused on generated image editing, citing that the simple step-by-step inversion is not reliable for real images, especially with larger classifier-free guidance scales. Null-text inversion (NTI) [15] proposed to change the constant null-text embedding to image-specific optimal ones in order to achieve accurate reconstruction, and it then applies PTP techniques for real image editing. Later, EDICT [16] proposed to use an auxiliary diffusion branch to achieve exact inversion and reconstruction of the input image, resulting in improved image editing quality. Most recently, pix2pix-zero [45] learns editing directions in the textual embedding space for corresponding image translation tasks, but it adopts the original diffusion inversion process used in PTP without making efforts to avoid the known inversion divergence. To our knowledge, our paper is the first to address the inversion accuracy challenge without changes in model configuration or system architecture, so the proposed iterative inversion can be applied to and benefit other methods, such as pix2pix-zero, that do not yet address the inversion accuracy issue.
## 3 Proposed Method
### Diffusion Inversion Preliminaries
Our proposed method is built on pre-trained, unmodified text-to-image diffusion models. Without loss of generality, the publicly available Stable Diffusion model [27], which uses the latent diffusion model (LDM) [26] architecture, is adopted for all experiments. Given the LDM architecture, the diffusion process is conducted in the latent space \(z\), which can be decoded to the image space \(x\). Nevertheless, our method is equally applicable to other diffusion models that operate in the native image space instead of a latent space.
For a text-to-image diffusion model learned from a large set of paired image latent variable \(z\) and image caption \(p\), the process is optimized using a simple standard mean-squared error (MSE) loss:
\[L^{\text{simple}}=E_{t,z,p,\epsilon}||\epsilon-\mathbf{\epsilon}_{\theta}(z_{t},p,t )||^{2} \tag{1}\]
where \(\epsilon\) is a random Gaussian noise added to \(z\) to synthesize \(z_{t}\), \(t\) is a randomly-set time-step chosen from \(1,2,\dots,T\), and \(\mathbf{\epsilon}_{\theta}\) denotes the diffusion model trained to estimate the injected noise using optimized parameters \(\theta\). As the goal of our proposed iterative inversion is accurate reconstruction, here we use a DDIM sampler where the sampling noise is \(\sigma_{t}=0\) for all \(t\) to achieve a deterministic sampling process. The iterative sampling step to generate an image latent \(z_{0}\) from a random sample \(z_{T}\) is:
\[\begin{split} z_{0}^{t}&=(z_{t}-\sqrt{1-\bar{ \alpha}_{t}}\epsilon_{t})/\sqrt{\bar{\alpha}_{t}}\\ z_{t-1}&=\sqrt{\bar{\alpha}_{t-1}}z_{0}^{t}+\sqrt{1- \bar{\alpha}_{t-1}}\epsilon_{t}\end{split} \tag{2}\]
\(\epsilon_{t}\) is typically calculated as follows:
\[\epsilon_{t}=\omega\mathbf{\epsilon}_{\theta}(z_{t},p,t)+(1-\omega)\mathbf{\epsilon}_ {\theta}(z_{t},\emptyset,t) \tag{3}\]
where \(\omega\) is the classifier-free guidance scale and \(\emptyset\) is the null-text reference.
Early image editing works like SDEdit use the same noise injection step used in model training for inversion at any \(t\). That is:
\[z_{t}=\sqrt{\bar{\alpha}_{t}}z_{0}+\sqrt{1-\bar{\alpha}_{t}}\epsilon. \tag{4}\]
Since \(\epsilon\) is a random noise independent of \(z_{0}\), this is a stochastic inversion process that cannot reliably reconstruct
\(z_{0}\) from \(z_{t}\) when \(t\) is close to \(T\). In later works, a simple DDIM inversion process is adopted based on a linear ODE solver:
\[\begin{split} z_{0}^{t}&=(z_{t}-\sqrt{1-\bar{\alpha}_ {t}}\epsilon_{\tilde{t}})/\sqrt{\bar{\alpha}_{t}}\\ z_{t+1}&=\sqrt{\bar{\alpha}_{t+1}}z_{0}^{t}+\sqrt{1- \bar{\alpha}_{t+1}}\epsilon_{\tilde{t}}.\end{split} \tag{5}\]
To allow for better reconstruction accuracy, here \(\epsilon_{\tilde{t}}\) is instead approximated using \(t+1\) as follows:
\[\epsilon_{\tilde{t}}=\omega\mathbf{\epsilon}_{\theta}(z_{t},p,t+1)+(1-\omega)\mathbf{ \epsilon}_{\theta}(z_{t},\emptyset,t+1). \tag{6}\]
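To make the baseline concrete, one inversion step of Eqs. (5)-(6) can be written in a few lines. The following is a minimal PyTorch-style sketch under our own naming: `eps_theta` stands for the noise-prediction network \(\mathbf{\epsilon}_{\theta}\), `alpha_bar` for the cumulative schedule \(\bar{\alpha}\) indexed by time step, and `prompt_emb`/`null_emb` for the conditioning embeddings; these identifiers are placeholders, not the API of any particular library.

```python
import torch

@torch.no_grad()
def ddim_inversion_step(z_t, t, t_next, alpha_bar, eps_theta, prompt_emb, null_emb, omega=1.0):
    # Classifier-free guidance evaluated at the next time step (Eq. 6).
    eps = omega * eps_theta(z_t, prompt_emb, t_next) \
        + (1.0 - omega) * eps_theta(z_t, null_emb, t_next)
    a_t, a_next = alpha_bar[t], alpha_bar[t_next]
    z0 = (z_t - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()        # predicted clean latent
    return a_next.sqrt() * z0 + (1.0 - a_next).sqrt() * eps   # z_{t+1} of Eq. (5)
```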
### Accelerated Iterative Diffusion Inversion
For an inversion step where \(z_{t-1}\) is known, we aim to find the optimal \(z_{t}\) so that we can recover \(z_{t-1}\). We can rewrite Equation 2 as:
\[z_{t}\!=\!\sqrt{\frac{\bar{\alpha}_{t}}{\bar{\alpha}_{t-1}}}z_{t-1}\!+\!\left[ \sqrt{1-\bar{\alpha}_{t}}-\sqrt{\frac{(1-\bar{\alpha}_{t-1})\bar{\alpha}_{t}} {\bar{\alpha}_{t-1}}}\right]\epsilon_{t}. \tag{7}\]
Since \(\epsilon_{t}\) depends on \(z_{t}\) (_cf_. Equation 3), it can be denoted as an implicit function \(z_{t}=f(z_{t})\). The ideal inversion step, finding \(z_{t}\) that results in \(z_{t-1}\) exactly, becomes a step to find a fixed-point solution for \(f\). In numerical analysis, this is often solved via the iterative process:
\[z_{t}^{n+1}=f(z_{t}^{n}),n=0,1,\ldots \tag{8}\]
The convergence of this iterative process can often be accelerated using established techniques such as Anderson acceleration. We also propose and employ an alternative empirical acceleration method which is simpler and faster than Anderson's. With appropriate acceleration, the iterative inversion process can become more efficient and more stable than a simple forward Euler method. We summarize our full proposed process in Alg. 1, and refer to it as _AIDI_, short for accelerated iterative diffusion inversion. Note that the residual function \(g(z)\) used in the AIDI_A variant with Anderson acceleration is defined as \(g(z)=f(z)-z\). The AIDI_E variant is a simplified version of Anderson acceleration with fixed settings for \(m\) and \(\gamma\) as 1 and \((0.5,0.5)\), saving the additional optimization process to find \(\gamma\).
```
Input: A latent image \(z_{0}\) and prompt \(p\), acceleration method \(C\), iteration parameters \(I,m\).
Function: \(f(z)\) is the implicit function defined in Equation 7; \(g(z)\) is the residual function \(f(z)-z\).
Output: An inverted noise vector \(z_{T}\).
for \(t=1,2,\ldots,T\) do
    \(z_{t}^{0},z_{t}^{1}\gets z_{t-1},f(z_{t}^{0})\)
    for \(i=1,\ldots,I\) do
        if \(C\) is AIDI_A then
            \(m_{i}\gets\min(m,i)\)
            \(G_{i}\leftarrow[g(z_{t}^{i-m_{i}}),\ldots,g(z_{t}^{i})]\)
            \(\gamma_{i}\leftarrow\operatorname*{argmin}_{\gamma\in\Gamma_{i}}\lVert G_{i}\cdot\gamma\rVert_{2}\), where \(\Gamma_{i}=\{(\gamma^{0},\ldots,\gamma^{m_{i}}):\sum_{j=0}^{m_{i}}\gamma^{j}=1\}\)
        else if \(C\) is AIDI_E then
            \(m_{i},\gamma_{i}\gets 1,(0.5,0.5)\)
        end if
        \(z_{t}^{i+1}\leftarrow\sum_{j=0}^{m_{i}}\gamma_{i}^{j}f(z_{t}^{i-m_{i}+j})\)
    end for
    \(z_{t}\gets z_{t}^{I}\)
end for
Return \(z_{T}\)
```
**Algorithm 1** Accelerated Iterative Diffusion Inversion
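As a concrete illustration of the AIDI_E branch of Alg. 1 (fixed \(m_{i}=1\) and \(\gamma_{i}=(0.5,0.5)\)), a single inversion step can be sketched as follows. Here `f` is assumed to be a closure implementing Eq. (7) for the current \(t\) (it captures \(z_{t-1}\), the schedule and the guided noise prediction); the function name and structure are ours.

```python
import torch

@torch.no_grad()
def aidi_e_step(z_prev, f, num_iters=6):
    # Fixed-point iteration on z_t = f(z_t) with two-term averaging acceleration.
    z_hist = [z_prev, f(z_prev)]  # z_t^0, z_t^1
    for _ in range(num_iters):
        # z_t^{i+1} = 0.5 * f(z_t^{i-1}) + 0.5 * f(z_t^{i}); f(z_hist[-2]) could be
        # cached from the previous iteration, it is recomputed here for clarity.
        z_next = 0.5 * f(z_hist[-2]) + 0.5 * f(z_hist[-1])
        z_hist.append(z_next)
    return z_hist[-1]
```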
### Blended Guidance
While the proposed AIDI can significantly improve the inversion stability at different guidance scales, it is not sufficient on its own for reliably reconstructing and then eventually editing an image when the guidance scale is large. Adopting the PTP pipeline for real image editing, we first fix a small guidance scale like \(\omega=1\) for the inversion using source prompt \(p\). For the following editing, we establish two parallel sampling processes, one conditional to prompt \(p\) with the same small \(\omega\) and the other conditional to the target prompt \(p^{*}\). For the \(p^{*}\) process, we introduce a blended \(\omega^{*}\) to apply larger guidance scales for pixels relevant to editing and lower ones for the rest to keep them unedited. To support this blended guidance, a soft editing mask \(\tilde{M}_{t}\) is generated concurrently with the image editing process. First, the cross-attention map \(M_{t}\) with respect to an anchor token is determined.
Figure 2: Flowchart of the proposed effective real image editing. From top to bottom, (i) the image is transformed to an inverted noise vector using AIDI; the inverted noise vector is used to either (ii) reconstruct the original image using the same prompt \(p\) or (iii) generate an edited image using prompts \(p^{*},\emptyset\) with classifier-free guidance, where the reconstruction process is also used to supply the mask for blended guidance and attention injection. Note that the visible noise is not Gaussian, as all images are decoded from latent space \(z\) to image space \(x\) for display.
Here the anchor token refers to the word most relevant to the intended editing area. This token could be positive, _i.e._ associating with areas to edit, or negative, which associates it with areas to keep unedited. In the case of _photo of a dog \(\rightarrow\) photo of a cat_, _dog_ is a positive token. In contrast, for _a dog \(\rightarrow\) a dog on the beach_, _dog_ is a negative one instead. The mask \(M_{t}\) is first normalized to \(\overline{M}_{t}\), where all pixels smaller than a threshold \(\delta\) are normalized to the range of \((-M,0)\) and the others normalized to the range of \((0,M)\). Then a soft mask \(\tilde{M}_{t}\) is defined as \(Sigmoid(\overline{M}_{t})\) for a positive token and \(Sigmoid(-\overline{M}_{t})\) for a negative one. Given that, the blended guidance scale \(\omega^{*}\) is defined as
\[\omega^{*}_{t}(k)=(\omega_{E}-\omega)\tilde{M}_{t}(k)+\omega \tag{9}\]
where \(\omega_{E}\) is a large guidance scale intended for editing, and \(k\) refers to pixels of the mask image. Note that the soft mask approaches a binary one when \(M\) becomes very large. Combining our proposed AIDI and blended guidance, the overall process for our effective real image editing is illustrated in Fig. 2, assuming \(\omega=1\) in inversion for simplicity.
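The per-pixel guidance scale of Eq. (9) can be sketched as below. The exact normalization of \(M_{t}\) into \(\overline{M}_{t}\) is only described qualitatively above, so the piecewise-linear mapping used here is our assumption, and `delta`, `M` and the tensor layout are placeholders.

```python
import torch

def blended_guidance_scale(M_t, delta, omega, omega_E, M=10.0, positive_token=True):
    # Normalize the cross-attention map: values below delta go to (-M, 0), the rest to (0, M).
    below = M_t < delta
    M_bar = torch.empty_like(M_t)
    M_bar[below] = -M * (delta - M_t[below]) / delta            # assumed linear mapping
    M_bar[~below] = M * (M_t[~below] - delta) / (1.0 - delta)   # assumed linear mapping
    soft_mask = torch.sigmoid(M_bar if positive_token else -M_bar)
    return (omega_E - omega) * soft_mask + omega                # Eq. (9)
```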
### Stochastic Editing
For image editing methods based on deterministic DDIM inversion methods, the DDIM sampling process is commonly set as deterministic to better control editing effects. Similar to the proposed blended guidance, the same soft mask is adopted to control the affected area of stochastic sampling. The deterministic sampling process described in Equation 2 is made stochastic as:
\[q(z_{t-1})\sim\mathcal{N}\big(\sqrt{\bar{\alpha}_{t-1}}z_{0}^{t}+\sqrt{1-\bar{\alpha}_{t-1}-\eta\sigma_{t}^{2}\tilde{M}_{t}}\,\epsilon_{t},\;\eta\sigma_{t}^{2}\tilde{\mathbf{M}}_{t}\big) \tag{10}\]
where \(\sigma_{t}^{2}\) represents the noise injected for stochastic sampling, and \(\tilde{\mathbf{M}}_{t}\) is transformed from \(\tilde{M}_{t}\) as a diagonal matrix. Here \(\eta\) and \(\tilde{M}_{t}\) control the scale and range of stochastic editing respectively. While large \(\eta\) increases sampling diversity, it reduces the perceptual similarity with the original image which is often desired in cases such as replacing
Figure 3: Visual examples of the effective editing abilities of our proposed real image editing based on AIDI, using only 20 editing steps.
an object with another in a different domain. Based on our experiments, a small \(\eta\) can produce satisfactory results without losing perceptual similarity when deterministic editing fails.
## 4 Experiments
All experiments conducted here are based on the released version v1.4 of Stable Diffusion using an NVIDIA A100 GPU. For the image reconstruction test, 5000 test images from the COCO dataset [46] are used. For image editing, the test set of AFHQ [19] is used primarily for its high image quality and because it enables comparisons with GAN inversion-based editing (we denote it as HS-SCLIP, combining HyperStyle [40] and StyleCLIP [47]). Other than HS-SCLIP, we perform comprehensive comparisons in both reconstruction and editing quality with all related text-based real image editing techniques which use DDIM inversion, including PTP [14], NTI [15] and EDICT [16]. NTI's main innovation is the null-text inversion, so in image editing tests it uses PTP editing techniques after the inversion. For all visual examples, the default number of inversion and editing steps is 20 unless specified otherwise.
### Image Editing
Visual examples from Fig. 3 showcase the diverse capabilities of our proposed real image editing process. Using a pair of text prompts as before- and after-editing image captions, it can either perform local editing, like replacing an object or background, or global editing, like style transfer. As an example, for the _zebra_ image, we are able to change the background to a _forest_ while keeping the unedited area perceptually unchanged. For partial editing within one object, it can change the posture of the _zebra_ from _standing_ to _walking_ in addition to editing the background, or change the expression of the _dog_ to _happy_. It is also able to do a text-based style transfer, converting a photo to a _water-color painting_. While a similar range of editing capabilities has been demonstrated before, we are the first to demonstrate effective editing using as few as 10 editing steps, a result enabled by the improved accuracy of AIDI and by the proposed blended guidance, which focuses editing effects on specific relevant areas. Note that without an exact binary mask, the local edits from these prompt-pair based processes would not be able to keep certain areas intact. Consider for example the editing task of changing the background of _a castle in the mountains_ to _forest_: the castle itself remains largely the same but some greenery is blended within its structure, increasing the overall coherence of the resulting image. This is arguably preferable to a hard blending with the background. The lack of a binary mask can sometimes give rise to hallucinated artifacts, such as the lines around the mouth area in the _dog wearing sunglasses_ instance.
### Reconstruction Accuracy
We conducted a reconstruction test using the COCO test set with diverse contents, using their default captions as text prompts; the quantitative results are illustrated as a graphic table in Fig. 6. Compared to the simple inversion used in PTP, results from both our proposed AIDI variants are shown to significantly improve the accuracy across different classifier-free guidance scales and different diffusion steps. Both AIDI_E and AIDI_A are able to achieve near-exact inversion when there is no classifier-free guidance. While the accuracy drops when the guidance scale increases, this does not impact our proposed real image editing process since a low guidance scale like 1 is used in inversion. We confirm that EDICT is able to maintain exact inversion with as few as 10 iteration steps, although it was only tested in the 50 steps' regime at publication. As for NTI, it is only applicable to a guidance scale \(>1\). Despite its accuracy increasing along with the guidance scale, it is worse, even at an empirically optimal guidance scale of 3, than our proposed inversion and reconstruction with a guidance scale of 1. For the visual examples in Fig. 4, both PTP and
Figure 4: Visual examples of reconstruction accuracy test for various diffusion inversion methods. The AE images on the left are decoded from Stable Diffusion without inversion and used as the reference for perfect reconstruction. Selected local artifacts are highlighted with \(\square\) for quick reference.
NTI's results contain noticeable errors. Our AIDI_A result is as flawless as the EDICT one. Despite the AIDI_E result missing some minor detail, we find its differences from AIDI_A negligible from both a quantitative and qualitative perspective. Unless otherwise specified, all image editing tests are conducted using AIDI_E.
### Quantitative Assessment
EDICT is the first among diffusion inversion based image editing methods to report a quantitative analysis for image editing. We have conducted similar quantitative assessments using the AFHQ dataset. Our rationale for choosing AFHQ is that the high-quality close-up images make visual artifacts more noticeable. Another advantage is that for a dog-to-cat image test, there is a real cat image set to serve as a photorealistic reference to evaluate editing results, an arguably better approach than using a CLIP score that relies on the indirect text reference. This also enables comparison with GAN inversion-based methods, since a pre-trained StyleGAN model is available for this dataset. Using the prompt pair _high quality photo of a dog \(\ \to\ \) high quality photo of a cat_, the task is to replace the dog in the input images with a cat which not only looks realistic, but is also perceptually similar to the original dog in characteristics like color and pose, while maintaining the background unedited. To evaluate the consistency in color, pose and backgrounds, the LPIPS metric is evaluated between the edited and original dog image. To evaluate the editing effects, the FID score in reference to the AFHQ training set is used instead of the CLIP score. Results for other metrics, including CLIP, are included as supplementary materials.
As shown in Fig. 7, our results are the best overall for any number of editing steps. Our result at 20 editing steps is better than the next best, EDICT, at 50 steps. In the extreme case of 10 steps, our method is still relatively effective while all other methods have significant regressions in quality. For PTP, in addition to the original setting where the same guidance scale is used for both inversion and editing, a PTP\({}^{*}\) version is also tested where the inversion is conducted without guidance, and it has significantly improved performance compared to PTP. A similar improvement has also been reported in EDICT, denoted as the difference between UC PTP and C PTP. This dual-scale setting is similar to our blended guidance, so the performance gap between PTP\({}^{*}\) and ours is mainly caused by the increased inversion
Figure 5: Visual examples for dog-to-cat editing test using AFHQ test set. Results from of different model are organized horizontally and results from different settings, 10/20/50 editing steps for all diffusion-based models or 3 hyperparameter settings for HS-SCIP, are organized vertically.
Figure 6: Reconstruction accuracy test for different diffusion inversion methods using COCO test set. Results from combinations of different inversion steps and various classifier-free guidance scales are included. For the two assessed perceptual metrics, higher SSIM and lower LPIPS are preferred, both color-coded towards green color.
accuracy from our AIDI. For HS-SCLIP, which is based on HyperStyle inversion and StyleCLIP editing, three sets of hyperparameters are included to illustrate a similar trade-off between high perceptual similarity compared to the source image and high editing quality in reference to the target domain. To our knowledge, this is the first direct quantitative comparison between editing methods using diffusion inversion and GAN inversion respectively.
The results from NTI, PTP editing with null-text inversion, are worse than the PTP baseline even with increased inversion stability. From the visual examples in Fig. 5, NTI seems able to generate a high quality cat image. However, these images are often too smooth, and the lack of photorealistic details causes degraded performance in both FID and LPIPS. One possible cause is that the learned null-text embedding deviates too much from the original one, so it is out of distribution for the pre-trained diffusion model. Another consideration is that quantitative assessments like this require a fixed setting of hyperparameters, so any method relying on fine-tuning of hyperparameters would not perform well on average.
The robustness of our method to a reduced number of inversion steps compared to others is also noticeable in Fig. 5. For our method, although the 50-step result has better sharpness and more realistic ears, the 10-step version is of acceptable quality without any glaring artifacts, whereas all competing approaches are subject to significant degradations at 10 steps. In the case of EDICT, there are obvious artifacts, such as a dog-like mouth area, for both the 10-step and 20-step results. Note that for this experiment, we used a similar grid-search strategy to find optimal settings for our method, PTP, PTP\({}^{*}\) and NTI, as we adopted attention map injection techniques from PTP. For EDICT, its own default optimal settings are used. For fair comparison, we didn't change hyperparameters for different editing steps, except for the fixed-point iteration steps in AIDI, where 11, 6 and 5 are used for 10/20/50 editing steps respectively.
### Stochastic Editing
When using deterministic DDIM inversion, image editing through the DDIM sampling process must also be set deterministic in order to keep the fidelity of unedited areas. As a result, there is no direct remedy for failed editing results. As shown in the middle image in Fig. 8, the original over-exposed background window was incorrectly edited as part of the cat face. Using our proposed stochastic editing, it is possible to improve editing results when such a failure case happens. Different from other stochastic sampling practices, which use a larger \(\eta\) for increased sampling diversity, we used a small \(\eta=0.1\) for stochastic generation of the same editing task and obtained a properly edited result as shown on the right. We conducted a quantitative analysis using the dog-to-cat test to measure this benefit. As shown in the chart in Fig. 8, \(20\times n\) represents average performance when \(n\) stochastic editings were repeated for each image to select the best one based on a combined rank of low FID and LPIPS. It is shown that stochastic editing, when tested only once, is expected to have an FID score equivalent to that of the deterministic one but a worse LPIPS. With increased \(n\), the selected result is expected to further improve in both FID and LPIPS.
## 5 Conclusions
In this work, we have proposed _AIDI_, an accelerated iterative diffusion inversion method capable of improving image reconstruction and editing accuracy while significantly improving the trade-offs between computational cost and quality. While our proposed real image editing is effective without auxiliary masks or sketches to specify editable area and content, it relies on the cross-attention maps between the image and prompt texts for general guidance, similar to
Figure 8: Top: visual examples of stochastic editing recovers from failure case of deterministic editing; Bottom: quantitative comparison between deterministic and stochastic editing.
Figure 7: Quantitative assessments for dog-to-cat swap test using AFHQ test. The data labels represent editing steps for all methods except for HS-SCLIP where they are hyperparameters.
other text-guided image editing work using diffusion models. In addition to requiring logically set prompt pairs, the spatial resolution of these attention maps is very coarse. Detailed control of the editable area remains a subject of future work.
It is also noted that while we don't use large classifier-free guidance scales for inversion thanks to the introduction of blended guidance, reducing the difference in guidance scales between inversion and editing is still useful to reduce potential artifacts caused by blended guidance. While our AIDI can improve inversion stability for large guidance scales as well, it is not yet sufficient for reliable inversion, and this is also of future research interest.
**Ethical considerations:** we acknowledge that any image editing technique such as the one presented in this paper is unavoidably encumbered by ethical concerns, and will inherit any bias present in the training data of the underlying backbone. Deployments of these systems should employ appropriate safeguards with regard to allowed prompts and guidances to prevent malicious and illegal uses.
|
2309.12803 | Performance Analysis of Uplink Rate-Splitting Multiple Access with
Hybrid ARQ | Rate-splitting multiple access (RSMA) has attracted a lot of attention as a
general and powerful multiple access scheme. In the uplink, instead of encoding
the whole message into one stream, a user can split its message into two parts
and encode them into two streams before transmitting a superposition of these
two streams. The base station (BS) uses successive interference cancellation
(SIC) to decode the streams and reconstruct the original messages. Focusing on
the packet transmission reliability, we investigate the features of RSMA in the
context of hybrid automatic repeat request (HARQ), a well-established mechanism
for enhancing reliability. This work proposes a HARQ scheme for uplink RSMA
with different retransmission times for a two-user scenario and introduces a
power allocation strategy for the two split streams. The results show that
compared with non-orthogonal multiple access (NOMA) and frequency division
multiple access (FDMA), RSMA outperforms them in terms of error probability and
power consumption. The results show that RSMA with HARQ has the potential to
improve the reliability and efficiency of wireless communication systems. | Yuanwen Liu, Bruno Clerckx, Petar Popovski | 2023-09-22T11:34:29Z | http://arxiv.org/abs/2309.12803v2 | # Performance Analysis of Uplink Rate-Splitting Multiple Access with Hybrid ARQ
###### Abstract
Rate-splitting multiple access (RSMA) has attracted a lot of attention as a general and powerful multiple access scheme. In the uplink, instead of encoding the whole message into one stream, a user can split its message into two parts and encode them into two streams before transmitting a superposition of these two streams. The base station (BS) uses successive interference cancellation (SIC) to decode the streams and reconstruct the original messages. Focusing on the packet transmission reliability, we investigate the features of RSMA in the context of hybrid automatic repeat request (HARQ), a well-established mechanism for enhancing reliability. This work proposes a HARQ scheme for uplink RSMA with different retransmission times for a two-user scenario and introduces a power allocation strategy for the two split streams. The results show that compared with non-orthogonal multiple access (NOMA) and frequency division multiple access (FDMA), RSMA outperforms them in terms of error probability and power consumption. The results show that RSMA with HARQ has the potential to improve the reliability and efficiency of wireless communication systems.
## I Introduction
6G has drawn the attention of academia and industry due to its ability to offer new services requiring higher throughput, ultra reliability, low latency, and massive connectivity for everything. In current communication networks, non-contention access methods, such as orthogonal time-frequency division multiple access, are popular in communication systems. However, these methods are not suitable for applications involving massive low-power devices connected to a common BS, because they cannot accommodate numerous devices orthogonally with limited resources. This calls for rethinking multiple access (MA) techniques [1].
RSMA is a flexible, universal, efficient, robust, and reliable MA technique that generalizes and outperforms numerous seemingly unrelated MA schemes, including orthogonal multiple access (OMA), NOMA, multicasting, and space division multiple access (SDMA). It has been shown that RSMA finds numerous new applications in 6G [2, 3]. In the downlink, each message is split into two parts, a common part and a private part, and all the common parts are encoded into one common stream while the private parts are independently encoded into private streams. The BS precodes all common and private streams and transmits these superposed streams to the users. At the receiver, each user uses SIC to decode the common stream first and then decodes its own private stream. The message split enables RSMA to partially decode interference and partially treat the remaining interference as noise by adjusting the power allocation between common streams and private streams properly. Consequently, RSMA softly bridges SDMA and NOMA as a more flexible and general technique and outperforms them [4, 5]. RSMA has not only higher spectrum efficiency, but also higher energy efficiency for various applications [6, 7, 8, 9]. It can also support low latency services [10, 11, 12, 13], and it has the potential to interplay with other techniques, such as reconfigurable intelligent surfaces [14], joint sensing and communications [15], integrated terrestrial and non-terrestrial networks [3], etc. RSMA is also a powerful and robust strategy for multi-user multiple-input multiple-output [16]. These advantages show that RSMA can be a competitive multiple access strategy for future networks [2, 17, 18].
Uplink RSMA is studied in [19, 20, 21, 22, 23, 24, 25, 26, 27, 28] and was first proposed in [19]. The users split their messages into two parts and encode them into two streams. This can be seen as adding a virtual user, and users transmit a superposition of these two streams. The BS uses SIC to decode the streams and then reconstructs the messages. The increased number of streams provides a more flexible decoding order and enables RSMA to achieve all the boundary points of the capacity region without time-sharing [21]. RSMA can also improve user fairness and outage performance [22, 23], simplify the implementation by avoiding complex user pairing [24], increase the connectivity in semi-grant-free transmission [25], and also perform well in the finite blocklength regime [26]. Although NOMA with time sharing can achieve the same capacity region as RSMA, it needs multiple time slots [2] and induces communication overhead and latency, which can be more complex than RSMA [21]. Therefore, RSMA is a promising technique for any service which needs grant-free access and is required intermittently, i.e., ultra-reliable low-latency communication (URLLC) and massive machine-type communication (mMTC) [29]. [27, 28] showed that RSMA can outperform orthogonal multiple access (OMA) and NOMA in some heterogeneous services coexistence scenarios. Consequently, uplink RSMA shows the potential to be applied in future communication systems.
Given the numerous promising properties, it is worth exploring integrating RSMA with other techniques to improve reliability. One critical mechanism to enhance reliability is HARQ. It also increases diversity and improves the efficiency of packet-based transmission [2]. Thus, it is a crucial technique to meet the expectations in future networks [30, 31]. This
mechanism is based on forward error correction (FEC) and automatic repeat request (ARQ) [32]. Generally, there are three types of HARQ: Type I HARQ, HARQ with chase combining (CC), and HARQ with incremental redundancy (IR). Type I HARQ uses the packet in the current round to decode, while HARQ with CC and HARQ with IR can decode packets in different rounds jointly with maximal ratio combining (MRC). In HARQ with CC, the packets are identical in each retransmission turn, and the receiver uses MRC to combine messages in every turn; while in HARQ with IR, additional parity bits are transmitted in each retransmission turn [33]. HARQ has been extensively studied with both OMA [33, 34] and NOMA [35, 36, 37, 38, 39, 40]. For OMA, [33] optimized the transmission power for a given outage probability with limited retransmission times for the CC scheme, and [34] maximized the throughput in the IR scheme. For NOMA, [35, 36, 37, 38] focused on the downlink scenario and showed that NOMA with HARQ can improve the outage probability and energy efficiency. [39, 40, 31] studied NOMA with HARQ in the uplink, and showed that it can reduce the latency and save power consumption. Specifically, [39, 31, 40] showed NOMA with HARQ has promising performance for short packets, which means it is suitable for services requiring low latency. Performing HARQ orthogonally cannot accommodate numerous users with limited resources. It also increases the latency, because unaccommodated users need to wait in the queue. Clearly, allocating resources and performing HARQ non-orthogonally can improve the efficiency of systems and reduce the waiting time. Besides, the BS can keep the previous unsuccessful copies of messages, so using a full payload to retransmit messages again would be wasteful. Meanwhile, fixed-size resources are preferred, so it is not practical to adjust the size of retransmission [31]. However, HARQ with NOMA is doomed to face the problem that users sharing the same resources interfere with each other. When users require the service at a high rate, even the user with a better condition may not be decoded due to interference from the other users, and it is almost impossible to decode the user with the worse channel. Then in the retransmission turn, if the channel is still not good enough, an additional retransmission turn would be needed. Therefore, this could increase the latency [41].
RSMA can manage interference better by bringing more streams, and it provides higher flexibility for transmission. [42] studied HARQ scheme for downlink RSMA, and showed it increased the success probability of transmission, while HARQ for uplink RSMA has not been investigated yet. This work focuses on HARQ for uplink RSMA. Fig. 1 shows a toy example of RSMA retransmission scheme and compares it with FDMA and NOMA. Let \(s_{1}\) and \(s_{2}\) denote the signals transmitted by user 1 and user 2, respectively. \(t\) denotes time and \(f\) denotes frequency. Fig. 0(a) shows FDMA, and user 1 and user 2 occupy non-overlapping frequency bands \(w_{1}\) and \(w_{2}\), respectively. If one message cannot be decoded, this message will be retransmitted and will not interfere with the other user. In Fig. 0(a), at first \(s_{2}\) cannot be decoded and user 2 receives a negative acknowledgment (NACK). Then the retransmitted \(s_{2}\) and new \(s_{1}\) are sent in the next time slot, and both users receive acknowledgment (ACK). In Fig. 0(b), a NOMA retransmission scheme is shown. Two users can share the same time-frequency resource, and BS uses SIC to decode the two messages. We assume that user 1 has a better channel condition and \(s_{1}\) will be decoded first. If \(s_{1}\) fails to be decoded, neither \(s_{2}\) can be decoded. But we do not need to always retransmit all the failed messages, because sometimes \(s_{2}\) can be decoded directly once \(s_{1}\) is decoded. Therefore, sometimes only \(s_{1}\) is retransmitted, and after BS decodes \(s_{1}\) successfully, \(s_{2}\) is decoded. Fig. 0(c) shows the retransmission scheme of RSMA. \(s_{1,1}\) and \(s_{1,2}\) denote the two split streams of user 1.
Fig. 1: The retransmission schemes of FDMA, NOMA and RSMA
The BS needs to decode both \(s_{1,1}\) and \(s_{1,2}\) to obtain the message of user 1. We assume that user 1 has a better channel condition and the decoding order is \(s_{1,1}\), \(s_{2}\), and \(s_{1,2}\). Similarly, if \(s_{1,1}\) fails to be decoded, neither \(s_{2}\) nor \(s_{1,2}\) can be decoded. But sometimes only \(s_{1,1}\) needs to be retransmitted, and \(s_{2}\) and \(s_{1,2}\) will be decoded once the BS decodes \(s_{1,1}\) successfully. Thus, compared to FDMA, RSMA has a higher spectrum efficiency, while compared to NOMA, RSMA can retransmit one of the split streams and has the potential to consume less energy.
We note that we make two assumptions here. The two users require the same service, so they have the same reliability requirements. The strong user splits its message into two parts and encodes them into two streams, and the weak user encodes the whole message and transmits only one stream. The contributions of this work are summarized below:
* This work proposes a retransmission scheme for uplink RSMA. This scheme does not need to retransmit all the failed streams, so it will not bring additional complexity. Only one stream of the strong user and the stream of the weak user are involved in retransmission. The two streams of the strong user do not need to be decoded successively, so the BS can decode these two streams and the stream of the weak user alternately. This decoding order is optimal and it can fully exploit the decoding flexibility. For the strong user, the sum rate of the two streams is the achievable rate. Thus, one of the streams can always be decoded successfully with a low rate, and the other stream needs to be retransmitted if the sum rate does not satisfy the desired rate. Therefore, only two of the three streams are involved in retransmissions. To our best knowledge, HARQ with RSMA in the uplink has not been studied before.
* The power allocation strategy between the two streams of the strong user is introduced in this work. When SIC is used, we assume that if one stream cannot be decoded, the following streams will fail. However, it is not always necessary to retransmit a sequence of failed streams. Sometimes it is possible to only retransmit one of the failed streams. Once this stream is decoded and cancelled after retransmission, the BS can continue the SIC process to decode other failed streams. The power allocation between the streams of the strong user actually decides which stream(s) need to be retransmitted. This work analyzes how the power allocation determines which stream(s) will be retransmitted. Naturally, different power allocation strategies bring different error probabilities for the users, and the error probabilities are given analytically in this paper. Then, the power allocation that brings the lowest error probabilities of the two users is chosen.
* The error probabilities and average transmission power per packet for a given rate of RSMA with HARQ are simulated by Monte Carlo and a detailed analysis is presented. The performances are simulated with both CC and IR and different retransmission times are allowed. The results show that although RSMA cannot always decrease the error probability of the strong user, it can decrease the error probability of the weak user dramatically. Since the two users require the same service, it is more critical to enable the weak user to meet the service requirements. In this way, RSMA with HARQ can support a higher rate with the same reliability requirement and retransmission times. It can achieve promising performances regardless of the latency requirement of the service. The results also show that RSMA consumes the least power compared to NOMA and FDMA, because it can mitigate the effect of interference between users. Besides, the strong user can retransmit one stream instead of two to save energy. Hence, RSMA has the potential to be applied to services requiring low power consumption.
The organization of the rest of the paper is summarized below. Section II introduces the system model of RSMA. Section III presents HARQ design for RSMA. We use FDMA and NOMA as baselines, and Section IV introduces HARQ for FDMA and NOMA. Section V demonstrates the numerical results, and Section VI is the conclusion.
_Notations_: \(\mathbb{C}\) denotes the complex numbers set. \(\mathcal{CN}(\delta,\sigma^{2})\) represents a complex Gaussian distribution with mean \(\delta\) and variance \(\sigma^{2}\).
## II System Model
In this section, we will give a brief introduction to uplink RSMA. We consider the scenario where two single-antenna users transmit to a common BS with a single antenna. The two users share one time-frequency block to operate the same application with RSMA. \(L\) retransmissions are allowed, which means a user can transmit its message for up to \(L+1\) rounds, so if the message cannot be successfully decoded after \(L+1\) rounds, it will be dropped. The unsuccessful copies will be buffered at the BS.
In uplink RSMA, instead of transmitting the whole message, a user can split its message into two parts and encode them into two independent streams. Users send the superposition of the two streams to BS, and BS uses SIC to decode these streams [19]. This procedure relies on channel state information (CSI), so we make some assumptions about channel knowledge. We assume that users do not have CSI while BS has perfect CSI. When the users send the request for connection, BS can obtain the CSI of the users and decide who will split the message and the power allocation, and then send back this information to the users during this connection setup process. Users do not adjust transmission power because they do not have CSI, and their transmission power is normalized to 1. Let \(h_{k}\in\mathbb{C}\) denote the channel between user \(k\) and BS, and the channel is considered as Rayleigh fading channel and fades independently, \(h_{k}\sim\mathcal{CN}(0,\Gamma_{k})\), where \(\Gamma_{k}\) is the average channel gain of user \(k\). For user \(k\), the transmitted signal can be represented as
\[s_{k}=\sum_{i=1}^{2}\sqrt{P_{k,i}}s_{k,i}, \tag{1}\]
where \(s_{k,i}\) is the split stream of user \(k\) and \(P_{k,i}\) is the power allocated to the stream \(s_{k,i}\), which should satisfy the power constraint. The received signal at BS is
\[y=\sum_{k=1}^{2}h_{k}s_{k}+n, \tag{2}\]
where \(n\sim\mathcal{CN}(0,\sigma_{n}^{2})\) is the additive Gaussian noise. Without loss of generality, the noise power is normalized to \(1\).
Actually, for the two-user case of RSMA, boundary points of the capacity region can be reached by one user splitting the message [21], and which user splits its message and how power is allocated can be decided by BS. We assume user 1 has a better channel condition, and it splits its message into \(s_{1,1}\) and \(s_{1,2}\), and \(s_{1,1}\) is allocated a fraction \(\alpha\) of the transmit power, where \(\alpha\in[0,1]\). Thus, the received signal at BS is
\[y=\sqrt{\alpha}h_{1}s_{1,1}+\sqrt{1-\alpha}h_{1}s_{1,2}+h_{2}s_{2}+n. \tag{3}\]
We assume without loss of generality that \(s_{1,1}\) is decoded first, and the decoding order is \(s_{1,1}\), \(s_{2}\) and \(s_{1,2}\), which is the optimal order to achieve all the boundary points of the capacity region [21]. If one stream cannot be decoded, the streams after it are unlikely to be decoded. Therefore, we assume that if one stream fails, the decoding process terminates, i.e., if \(s_{1,1}\) cannot be decoded, \(s_{2}\) and \(s_{1,2}\) will not be decoded. Thus, to obtain the whole message from user 1, the BS needs to decode \(s_{1,1}\), \(s_{2}\) and \(s_{1,2}\); while for user 2 only \(s_{1,1}\) and \(s_{2}\) need to be decoded.
## III HARQ design for RSMA
This section introduces the HARQ design for RSMA. Subsection III-A introduces how \(\alpha\) affects which stream(s) will be retransmitted. \(\alpha\) also determines the error probabilities of the two users, and the \(\alpha\) which brings the lowest sum of error probabilities is chosen. Subsections III-B and III-C present how to obtain the error probabilities with CC and IR, respectively. Section IV presents the HARQ schemes for FDMA and NOMA.
### _How \(\alpha\) Determines Which Stream Needs Retransmissions_
We assume that at the BS if one stream fails, the decoding process terminates, i.e., if \(s_{1,1}\) cannot be decoded, \(s_{2}\) and \(s_{1,2}\) will not be decoded. In the usual HARQ scheme, failed streams are transmitted again until they are decoded or all retransmission turns are consumed. Since the number of streams increases in RSMA, the complexity of HARQ would increase as well. However, is it necessary to retransmit all the failed streams? For example, if \(s_{1,1}\) cannot be decoded, is it necessary to retransmit all three streams, \(s_{1,1}\), \(s_{2}\) and \(s_{1,2}\), in the retransmission rounds? In fact, this is not always necessary. In some situations, a stream can be decoded once the previous one is decoded and cancelled. In the previous example, \(s_{2}\) can sometimes be decoded directly after decoding \(s_{1,1}\), so we do not need to retransmit \(s_{2}\). Not only do the failed streams not always need to be retransmitted together, but \(s_{1,1}\) and \(s_{1,2}\) also do not need to be retransmitted at the same time. Let \(r_{1,1}\) and \(r_{1,2}\) denote the rates of \(s_{1,1}\) and \(s_{1,2}\), respectively. For a given rate requirement \(r_{1}\), if \(r_{1,1}+r_{1,2}\geq r_{1}\) holds, the message from user 1 can be decoded. In other words, if \(r_{1,1}\geq r_{1}-r_{1,2}\) holds, this message can be decoded. Therefore, we can retransmit only \(s_{1,1}\), because \(r_{1,1}\) can always be increased by CC or IR during the retransmission turns until \(r_{1,1}\geq r_{1}-r_{1,2}\) is fulfilled, while the maximum achievable \(r_{1,2}\) depends on \(\alpha\). Hence, in this RSMA retransmission scheme, only \(s_{1,1}\) and \(s_{2}\) are involved in retransmission. In the retransmission turn, the power allocation \(\alpha\) is not changed, i.e., the retransmission powers of \(s_{1,1}\) and \(s_{1,2}\) are still \(\alpha\) and \(1-\alpha\), respectively. This decreases the complexity at the receiver, which would otherwise have to be informed of the new value of \(\alpha\). Thus, \(\alpha\) does not need to be decided at every turn; it can be decided when both user 1 and user 2 send new packets.
In this scheme, only \(s_{1,1}\) and \(s_{2}\) are involved in retransmission, and even when both fail they do not always need to be retransmitted together. The channel conditions and the power allocation decide which streams are retransmitted. Let us first consider in which situations no stream needs to be retransmitted; the opposite situations are then those in which retransmissions are needed. Let \(g_{1}\) and \(g_{2}\) denote the instantaneous channel gains of user 1 and user 2 in the current round, respectively. The SINR of \(s_{1,1}\) is
\[\sigma_{1,1}=\frac{\alpha g_{1}}{1+(1-\alpha)g_{1}+g_{2}}, \tag{4}\]
the SINR of \(s_{2}\) is
\[\sigma_{2}=\frac{g_{2}}{1+(1-\alpha)g_{1}}, \tag{5}\]
and the SINR of \(s_{1,2}\) is
\[\sigma_{1,2}=(1-\alpha)g_{1}. \tag{6}\]
Obviously, if there is any \(\alpha\) that can let
\[\log_{2}(1+\sigma_{1,1})+\log_{2}(1+\sigma_{1,2})\geq r_{1}, \tag{7}\]
and
\[\log_{2}(1+\sigma_{2})\geq r_{2}, \tag{8}\]
both hold, the three streams do not require any retransmission. From (7), we can obtain
\[\alpha\leq\frac{(2^{r_{1}}-1)(1+g_{1}+g_{2})-g_{1}-g_{1}g_{2}-g_{1}^{2}}{g_{1} (2^{r_{1}}-1-g_{1}-g_{2})}=\alpha_{h}, \tag{9}\]
and from (8) we can obtain
\[\alpha\geq 1+\frac{1}{g_{1}}-\frac{g_{2}}{g_{1}(2^{r_{2}}-1)}=\alpha_{l}. \tag{10}\]
\(\alpha_{h}\) and \(\alpha_{l}\) may not be values between 0 and 1, and they are mathematically meaningful only when they are smaller than \(1+\frac{1}{g_{1}}\), because \(1+\sigma_{1,1}\), \(1+\sigma_{1,2}\) and \(1+\sigma_{2}\) should be positive. Thus, an \(\alpha\) which lets all the streams be decoded must satisfy
\[\alpha\in S,\qquad S=\left\{y:0\leq y\leq 1,\ \alpha_{l}\leq y\leq\alpha_{h}\right\}, \tag{11}\]

which is nonempty only when \(\alpha_{h}\geq\alpha_{l}\).
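For concreteness, the feasibility check in (7)-(11) can be evaluated numerically. The following is a minimal sketch, assuming unit noise power as in Section II; the function names and the numerical values are ours and purely illustrative.

```python
import numpy as np

def alpha_bounds(g1, g2, r1, r2):
    """Evaluate alpha_h from (9) and alpha_l from (10) for instantaneous gains g1, g2."""
    alpha_h = ((2**r1 - 1) * (1 + g1 + g2) - g1 - g1 * g2 - g1**2) / (
        g1 * (2**r1 - 1 - g1 - g2))
    alpha_l = 1 + 1 / g1 - g2 / (g1 * (2**r2 - 1))
    return alpha_h, alpha_l

def feasible_alpha_set(g1, g2, r1, r2):
    """Interval of alpha in [0, 1] satisfying both (7) and (8), i.e. the set S in (11)."""
    alpha_h, alpha_l = alpha_bounds(g1, g2, r1, r2)
    lo, hi = max(0.0, alpha_l), min(1.0, alpha_h)
    return (lo, hi) if lo <= hi else None

def rates(alpha, g1, g2):
    """Left-hand sides of (7) and (8) built from the SINRs (4)-(6)."""
    s11 = alpha * g1 / (1 + (1 - alpha) * g1 + g2)
    s2 = g2 / (1 + (1 - alpha) * g1)
    s12 = (1 - alpha) * g1
    return np.log2(1 + s11) + np.log2(1 + s12), np.log2(1 + s2)

if __name__ == "__main__":
    g1, g2, r1, r2 = 30.0, 8.0, 3.0, 2.0       # hypothetical gains and target rates
    S = feasible_alpha_set(g1, g2, r1, r2)
    print("feasible alpha interval:", S)
    if S is not None:
        a = 0.5 * (S[0] + S[1])
        print("rates at midpoint:", rates(a, g1, g2), "targets:", (r1, r2))
```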
Although \(\alpha_{h}\) and \(\alpha_{l}\) may not be values between 0 and 1, they can still indicate whether the desired rate pairs \((r_{1},r_{2})\) are in the capacity region. Fig. 2 is an illustration representing the relation between the values of \(\alpha_{h}\) and \(\alpha_{l}\) and the location of \((r_{1},r_{2})\). The top left corner point represents \(\alpha=1\) because this point is achieved by allocating all the power to \(s_{1,1}\) and decoding \(s_{1,1}\) before \(s_{2}\). The bottom right corner point represents \(\alpha=0\) since this point is achieved by allocating all the power to \(s_{1,2}\) and decoding \(s_{2}\) before \(s_{1,2}\). \(\alpha\) between 0 and 1 indicates the points on the diagonal line. When \(\alpha_{h}\leq 0\)
the \(\alpha\) satisfying (7) does not exist, so it represents the points on the right side of the yellow line, which means this \(r_{1}\) cannot be achieved regardless of \(\alpha\). \(\alpha_{h}\geq 1\) means \(\alpha\) always satisfies (7), and this represents \(r_{1}\) is always achievable and the corresponding points are located on the left side of the orange line. \(0<\alpha_{h}<1\) represents the points between the orange line and the yellow line since \(r_{1}\) is not always achievable. Similarly, \(\alpha_{l}\leq 0\) represents the points below the red line, since any \(\alpha\) can satisfy (8). While \(\alpha_{l}\geq 1\) represents the points above the brown line because no \(\alpha\) can satisfy (8), and \(0<\alpha_{l}<1\) represents the points between the red line and the brown line.
The above analysis presents the relation between \(\alpha_{h}\), \(\alpha_{l}\) and the desired rate pair \((r_{1},r_{2})\). If an \(\alpha\) satisfying (11) exists, the desired rate pair is inside the capacity region, as shown in Fig. 2, and no retransmission is needed. However, for some values of \(\alpha_{h}\) and \(\alpha_{l}\), the set \(S\) in (11) is empty, which means that an \(\alpha\) satisfying (11) does not exist and the rate pair is outside the capacity region. In the following situations, \(S\) is empty and retransmissions are needed.
1. \(\alpha_{h}\geq 1\) and \(\alpha_{l}\geq 1\): any \(\alpha\) can satisfy (7); while no \(\alpha\) can satisfy (8). In other words, \(s_{2}\) needs to be retransmitted irrespective of \(\alpha\), while \(s_{1,1}\) and \(s_{1,2}\) do not need retransmission, so only \(s_{2}\) needs to be retransmitted.
2. \(0<\alpha_{h}<1\) and \(\alpha_{l}\geq 1\): if \(\alpha\leq\alpha_{h}\), it can satisfy (7); while no \(\alpha\) can satisfy (8). Therefore, if we choose \(\alpha\in[0,\alpha_{h}]\), only \(s_{2}\) needs to be retransmitted. While if we choose \(\alpha\in(\alpha_{h},1]\), both \(s_{1,1}\) and \(s_{2}\) should be retransmitted, because this \(\alpha\) does not satisfy (7) and (8).
3. \(\alpha_{h}\leq 0\) and \(\alpha_{l}\geq 1\): no \(\alpha\) can satisfy either (7) or (8), so both \(s_{1,1}\) and \(s_{2}\) need retransmission.
4. \(0<\alpha_{h}<1\) and \(0<\alpha_{l}<1\) (\(\alpha_{h}<\alpha_{l}\)): (7) and (8) cannot be satisfied at the same time. Thus, if we choose \(\alpha\in[0,\alpha_{h}]\), only \(s_{2}\) is retransmitted. While if \(\alpha\in[\alpha_{l},1]\), only \(s_{1,1}\) needs to be retransmitted because \(s_{2}\) will be decoded after decoding \(s_{1,1}\). Otherwise, \(\alpha\in(\alpha_{h},\alpha_{l})\), both \(s_{1,1}\) and \(s_{2}\) are retransmitted.
5. \(\alpha_{h}\leq 0\) and \(0<\alpha_{l}<1\): There is an \(\alpha\) satisfying (8) if \(\alpha\geq\alpha_{l}\), but no \(\alpha\) satisfies (7). So if \(\alpha\in[\alpha_{l},1]\), only \(s_{1,1}\) needs to be retransmitted; while if \(\alpha\in[0,\alpha_{l})\), both \(s_{1,1}\) and \(s_{2}\) will be retransmitted.
6. \(\alpha_{h}\leq 0\) and \(\alpha_{l}\leq 0\): Any \(\alpha\) can satisfy (8) but no \(\alpha\) satisfies (7). Thus, only \(s_{1,1}\) will be retransmitted because \(s_{2}\) will be decoded once \(s_{1,1}\) is decoded.
These situations are also summarized in TABLE I.
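The six situations above (and TABLE I) reduce to two comparisons: \(\alpha\leq\alpha_{h}\) for (7) and \(\alpha\geq\alpha_{l}\) for (8). A hedged sketch of the resulting decision logic is shown below; the helper name and the numbers in the example are ours, with \(\alpha_{h}\) and \(\alpha_{l}\) assumed to come from (9) and (10).

```python
def retransmission_set(alpha, alpha_h, alpha_l):
    """Streams that enter retransmission for a candidate alpha."""
    ok7 = alpha <= alpha_h          # user 1's sum-rate condition (7)
    ok8 = alpha >= alpha_l          # user 2's rate condition (8)
    if ok7 and ok8:
        return set()                # alpha lies in S of (11): no retransmission needed
    if ok7:
        return {"s2"}               # only s_2 fails
    if ok8:
        return {"s11"}              # only s_{1,1} is resent; s_2 follows once s_{1,1} is cancelled
    return {"s11", "s2"}            # both streams must be retransmitted

# example: situation 4 of TABLE I with hypothetical bounds alpha_h < alpha_l
for a in (0.2, 0.5, 0.9):
    print(a, retransmission_set(a, alpha_h=0.3, alpha_l=0.7))
```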
Apparently, for the situations that need retransmissions, the choice of \(\alpha\) decides which stream(s) will be retransmitted and the error probabilities after retransmission. Since the two users operate the same service, their reliability requirements are the same. Let \(p_{1,1}\) and \(p_{2}\) denote the error probabilities of \(s_{1,1}\) and \(s_{2}\) after the next retransmission turn, respectively. The \(\alpha\) which gives the lowest \(p_{1,1}+p_{2}\) will be chosen. Although it is sometimes possible to retransmit only one stream, all the possibilities are considered for reliability reasons. For example, when \(\alpha_{h}\leq 0\) and \(0<\alpha_{l}<1\), if only \(s_{1,1}\) is retransmitted and it fails again after retransmission, \(s_{2}\) still cannot be decoded; while if both streams are retransmitted, even if \(s_{1,1}\) fails, \(s_{2}\) still has some chance to be decoded. The sum of error probabilities in the former situation could therefore be higher than in the latter. \(p_{1,1}\) and \(p_{2}\) differ somewhat between CC and IR, so the error probabilities of CC and IR will be analyzed separately.
**Remark 1**: _Although this work focuses on a two-user case, it can provide some insights for a general K-user case. In a K-user case, a user may split its message into more than two parts, which gives more flexibility to handle collisions. A simple 3-user example can give some intuition. For a 3-user case, the boundary points of the capacity region can be reached by splitting the messages of any two users. We first consider that user 1 and user 2 split their messages into two parts, so the BS receives the streams \(s_{1,1}\), \(s_{1,2}\), \(s_{2,1}\), \(s_{2,2}\) and \(s_{3}\). We assume that \(s_{1,1}\) and \(s_{3}\) can be decoded, and \(s_{1,2}\), \(s_{2,1}\) and \(s_{2,2}\) remain, which is shown in Fig. 3(a). In this case, \(s_{2,1}\) needs to be retransmitted. If user 1 instead splits its message into three parts and the power allocation remains the same, the BS will receive \(s_{1,1}\), \(s_{1,2}\), \(s_{1,3}\), \(s_{2,1}\), \(s_{2,2}\) and \(s_{3}\). Then \(s_{1,1}\) and \(s_{3}\) are decoded, and \(s_{1,2}\), \(s_{1,3}\), \(s_{2,1}\) and \(s_{2,2}\) remain. However, \(s_{1,2}\) could be decoded at a low rate, and the BS may continue decoding the remaining streams, which is shown in Fig. 3(b). If the BS still cannot continue decoding, \(s_{2,1}\) will be retransmitted. This toy example shows that splitting a message into multiple parts can be helpful to handle collisions. However, as the number of streams increases, the power allocation and decoding order should be considered carefully. These are interesting topics and are left for future work._

Fig. 2: An illustration of the capacity region for two users.
### _Chase Combining_
When CC is used, BS can use both previous unsuccessful copies and the new packet to decode the message, and the decoding error probability after \(l\) retransmissions is given by [32],
\[p_{error}^{l}=\Pr\left\{\log_{2}\left(1+\sum_{i=0}^{l}{\rm SINR}_{i}\right)<R \right\}, \tag{12}\]
where \({\rm SINR}_{i}\) is the SINR of \(i\)th retransmission round, and \(R\) is the desired rate. For RSMA, finding the error probability with consideration of \(l\) times retransmissions can be extremely complicated, so the choice of \(\alpha\) only depends on the current round and the very next retransmission round.
First, let us consider \(p_{1,1}\) when only \(s_{1,1}\) is retransmitted, which is shown in Fig. 1c. If \(s_{1,1}\) fails, \(s_{1,1}\), \(s_{2}\) and \(s_{1,2}\) all receive a NACK, but only \(s_{1,1}\) is retransmitted. After \(s_{1,1}\) is successfully decoded, \(s_{2}\) and \(s_{1,2}\) will be decoded, so \(p_{2}\) is exactly \(p_{1,1}\). Since the instantaneous channel gains \(g_{1}\) and \(g_{2}\) of the current round are known, in the following \(h_{1}\) and \(h_{2}\) denote the channels in the retransmission turn, which are random variables. The BS uses the previous interfered copy of \(s_{1,1}\) and the retransmitted \(s_{1,1}\) to jointly decode \(s_{1,1}\). \(p_{1,1}\) can be represented as
\[p_{1,1}=\Pr\left\{\log_{2}(1+\sigma_{1,1}+\alpha|h_{1}|^{2})<r_{1}-\log_{2}(1 +\sigma_{1,2})\right\}, \tag{13}\]
and it can be rewritten as
\[p_{1,1}=\Pr\left\{|h_{1}|^{2}<\frac{2^{r_{1}}}{\alpha\left(1+\sigma_{1,2}\right)}-\frac{1+\sigma_{1,1}}{\alpha}=\gamma_{cc_{1,1}}\right\}, \tag{14}\]
where \(\gamma_{cc_{1,1}}\) is a term denoting the 'residual SNR' for \(s_{1,1}\). The introduction of this term is inspired by [31], and it represents the signal power needed to decode the stream. Since CC and IR are discussed in separate sections, the subscript indicating the HARQ type is omitted for simplicity, so \(\gamma_{cc_{1,1}}\) and \(\gamma_{ir_{1,1}}\) will be written as \(\gamma_{1,1}\) in this subsection and the next one, respectively. The subscripts of the HARQ type are omitted in the same way in the other variables. The distribution of the channel gain is exponential, and the probability density function (pdf) is
\[f\left(x,\Gamma_{k}\right)=\frac{1}{\Gamma_{k}}e^{-\frac{x}{\Gamma_{k}}}, \tag{15}\]
so that
\[p_{1,1}=1-e^{-\frac{\gamma_{1,1}}{\Gamma_{1}}}, \tag{16}\]
and \(p_{1,1}\) decreases as \(\gamma_{1,1}\) decreases. The \(\alpha\) which gives the lowest \(p_{1,1}\) can be found by sequential quadratic programming (SQP).
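As an illustration of this step, the sketch below evaluates \(p_{1,1}\) in (16) through the residual SNR of (14) and minimizes it over \(\alpha\); a bounded scalar search from SciPy is used here as a simple stand-in for the SQP solver mentioned above, and all numerical values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def p11_cc_only_s11(alpha, g1, g2, r1, Gamma1):
    """Error probability (16) of s_{1,1} after one CC retransmission of s_{1,1} alone."""
    s11 = alpha * g1 / (1 + (1 - alpha) * g1 + g2)              # (4)
    s12 = (1 - alpha) * g1                                      # (6)
    gamma11 = 2**r1 / (alpha * (1 + s12)) - (1 + s11) / alpha   # residual SNR, cf. (14)
    gamma11 = max(gamma11, 0.0)   # non-positive residual SNR means the stream is already decodable
    return 1.0 - np.exp(-gamma11 / Gamma1)

if __name__ == "__main__":
    g1, g2, r1, Gamma1 = 12.0, 6.0, 4.0, 100.0   # hypothetical gains, rate, and Gamma_1 = 20 dB
    res = minimize_scalar(lambda a: p11_cc_only_s11(a, g1, g2, r1, Gamma1),
                          bounds=(1e-3, 1.0), method="bounded")
    print("best alpha:", res.x, " p_{1,1}:", res.fun)
```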
Then, \(p_{2}\) when only \(s_{2}\) is retransmitted is analyzed and an illustration is shown in Fig. 4. The three streams are sent to BS. BS decodes and cancels \(s_{1,1}\) first, but it fails to decode \(s_{2}\), so \(s_{1,2}\) also cannot be decoded. BS only asks user 2 to retransmit \(s_{2}\). Then, BS can use the unsuccessful copies of \(s_{2}\) with interference and the retransmitted \(s_{2}\) to decode \(s_{2}\) and cancel it, and then continue decoding \(s_{1,2}\). Thus,
\[p_{2}=\Pr\left\{|h_{2}|^{2}<2^{r_{2}}-1-\sigma_{2}=\gamma_{2}\right\}=1-e^{-\frac{\gamma_{2}}{\Gamma_{2}}}, \tag{17}\]
where \(\gamma_{2}\) is the residual SNR for \(s_{2}\). Similar to \(p_{1,1}\), \(p_{2}\) decreases as \(\gamma_{2}\) decreases. Obviously, the highest applicable \(\alpha\) brings the lowest \(\gamma_{2}\), so for this situation the optimal \(\alpha=\alpha_{h}\).
Finally, \(p_{1,1}\) and \(p_{2}\) when both \(s_{1,1}\) and \(s_{2}\) need to be retransmitted are analyzed, and this is shown in Fig. 5. Although we assume that it is always possible to find a low rate at which \(s_{1,2}\) can be decoded, the prerequisite is that \(s_{2}\) has been decoded. In the previous two situations, once \(s_{1,1}\) or \(s_{2}\) is decoded, the decoding process can continue, but here \(s_{1,1}\) and \(s_{2}\) do not depend on each other completely, so \(p_{1,1}\) is actually considered as the error probability of the whole message from user 1. Thus,
\[p_{1,1} =\Pr\left\{\frac{\alpha|h_{1}|^{2}}{1+|h_{2}|^{2}}<\gamma_{1,1}^{ (1)},\frac{|h_{2}|^{2}}{1+\alpha|h_{1}|^{2}}<\gamma_{2}^{(1)}\right\}\] \[+\Pr\left\{\frac{|h_{2}|^{2}}{1+\alpha|h_{1}|^{2}}\geq\gamma_{2}^ {(1)},\alpha|h_{1}|^{2}<\gamma_{1,1}^{(2)}\right\} \tag{18}\] \[+\Pr\left\{\frac{\alpha|h_{1}|^{2}}{1+|h_{2}|^{2}}\geq\gamma_{1,1 }^{(1)},|h_{2}|^{2}<\gamma_{2}^{(2)}\right\},\]
where
\[\gamma_{1,1}^{(1)}=\frac{2^{r_{1}}}{1+\sigma_{1,2}}-1-\sigma_{1,1}, \tag{19}\]
Fig. 3: A 3-user case
\[\gamma_{2}^{(1)}=2^{r_{2}}-1-\frac{g_{2}}{1+g_{1}}, \tag{20}\]
\[\gamma_{1,1}^{(2)}=\frac{2^{r_{1}}}{1+\sigma_{1,2}}-1-\frac{\alpha g_{1}}{1+(1- \alpha)g_{1}}, \tag{21}\]
and
\[\gamma_{2}^{(2)}=2^{r_{2}}-1-\sigma_{2}. \tag{22}\]
The error probability of \(s_{2}\) can be represented as
\[\begin{split} p_{2}=&\mathrm{Pr}\left\{\frac{ \alpha|h_{1}|^{2}}{1+|h_{2}|^{2}}<\gamma_{1,1}^{(1)},\frac{|h_{2}|^{2}}{1+ \alpha|h_{1}|^{2}}<\gamma_{2}^{(1)}\right\}\\ &+\mathrm{Pr}\left\{\frac{\alpha|h_{1}|^{2}}{1+|h_{2}|^{2}}\geq \gamma_{1,1}^{(1)},|h_{2}|^{2}<\gamma_{2}^{(2)}\right\}.\end{split} \tag{23}\]
The derivation of \(p_{1,1}\) and \(p_{2}\) is in Appendix A, and the results are (24) and (25), respectively, where \(c=-\frac{\left(1+\gamma_{2}^{(1)}\right)\gamma_{1,1}^{(1)}}{\alpha\left(\gamma_{2}^{(1)}\gamma_{1,1}^{(1)}-1\right)}\). The \(\alpha\) which brings the lowest \(p_{1,1}+p_{2}\) will be chosen.
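Since the closed forms (24)-(25) are lengthy, a Monte Carlo cross-check of (18) and (23) is a convenient sanity test when selecting \(\alpha\). The sketch below is ours, uses the thresholds (19)-(22), and relies on hypothetical parameter values; the three events in (18) are pairwise disjoint, so taking the union of the indicator masks gives the same estimate as summing them.

```python
import numpy as np

def mc_error_probs_both_retx(alpha, g1, g2, r1, r2, Gamma1, Gamma2, n=200_000, seed=0):
    """Monte Carlo estimate of (18) and (23) when both s_{1,1} and s_2 are retransmitted.
    g1, g2 are the gains of the failed round; |h1|^2, |h2|^2 are fresh exponential gains."""
    rng = np.random.default_rng(seed)
    x = rng.exponential(Gamma1, n)   # |h1|^2
    y = rng.exponential(Gamma2, n)   # |h2|^2

    s11 = alpha * g1 / (1 + (1 - alpha) * g1 + g2)
    s12 = (1 - alpha) * g1
    s2 = g2 / (1 + (1 - alpha) * g1)

    g11_1 = 2**r1 / (1 + s12) - 1 - s11                                   # (19)
    g2_1 = 2**r2 - 1 - g2 / (1 + g1)                                      # (20)
    g11_2 = 2**r1 / (1 + s12) - 1 - alpha * g1 / (1 + (1 - alpha) * g1)   # (21)
    g2_2 = 2**r2 - 1 - s2                                                 # (22)

    A = alpha * x / (1 + y) < g11_1        # s_{1,1} fails again with s_2 as interference
    B = y / (1 + alpha * x) < g2_1         # s_2 fails again with s_{1,1} as interference
    p11 = np.mean((A & B) | (~B & (alpha * x < g11_2)) | (~A & (y < g2_2)))
    p2 = np.mean((A & B) | (~A & (y < g2_2)))
    return p11, p2

if __name__ == "__main__":
    print(mc_error_probs_both_retx(alpha=0.6, g1=5.0, g2=2.0, r1=3.0, r2=2.0,
                                   Gamma1=100.0, Gamma2=31.6))
```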
Although the above discussion is general for RSMA, \(\alpha=0\) and \(\alpha=1\) are two special cases when considering the error probability. In the previous analysis, user 1 and user 2 stop generating new packets in the retransmission turns, because the failure of either \(s_{1,1}\) or \(s_{2}\) means that the BS has not decoded the messages of user 1 and user 2 successfully. Similarly, if \(\alpha=1\) and the message of user 1, denoted by \(s_{1}\), fails, both user 1 and user 2 pause transmitting new packets; if \(\alpha=0\) and \(s_{2}\) fails, the situation is the same. These two situations are covered by the above discussion. However, if \(\alpha=1\) and \(s_{1}\) can be decoded, user 1 will transmit a new packet in the next turn, so user 2 will retransmit \(s_{2}\) along with a new \(s_{1}\). Similarly, when \(\alpha=0\) and \(s_{2}\) can be decoded, \(s_{1}\) will be retransmitted with a new \(s_{2}\). In these situations, the error probability of the new message also needs to be considered, and this could affect the choice of \(\alpha\).
The formulations of error probabilities of these two situations are similar. We consider \(\alpha=1\) first, and \(s_{2}\) will be retransmitted with a new \(s_{1}\). For user 1,
\[\begin{split} p_{1}=&\mathrm{Pr}\left\{\frac{|h_{1 }|^{2}}{1+|h_{2}|^{2}}<\gamma_{1},\ \frac{|h_{2}|^{2}}{1+|h_{1}|^{2}}<\gamma_{2}\right\}\\ &+\mathrm{Pr}\left\{\frac{|h_{2}|^{2}}{1+|h_{1}|^{2}}\geq\gamma_ {2},\ |h_{1}|^{2}<\gamma_{1}\right\},\end{split} \tag{26}\]
and for user 2,
\[\begin{split} p_{2}=&\mathrm{Pr}\left\{\frac{|h_{1}|^{2}}{1+|h_{2}|^{2}}<\gamma_{1},\ \frac{|h_{2}|^{2}}{1+|h_{1}|^{2}}<\gamma_{2}\right\}\\ &+\mathrm{Pr}\left\{\frac{|h_{1}|^{2}}{1+|h_{2}|^{2}}\geq\gamma_{1},\ |h_{2}|^{2}<\gamma_{2}\right\},\end{split} \tag{27}\]
where
\[\gamma_{1}=2^{r_{1}}-1, \tag{28}\]
and
\[\gamma_{2}=2^{r_{2}}-1-g_{2}. \tag{29}\]
The results are (30) and (31), and the detailed derivation is in Appendix B.
When \(\alpha=0\), \(s_{1}\) will be retransmitted with a new \(s_{2}\). The expressions are as same as (26) and (27), and the difference is the values of \(\gamma_{1}\) and \(\gamma_{2}\), which are
\[\gamma_{1}=2^{r_{1}}-1-g_{1}, \tag{32}\]
and
\[\gamma_{2}=2^{r_{2}}-1, \tag{33}\]
so by substituting (32) and (33) to (30) and (31), \(p_{1}\) and \(p_{2}\) can be obtained.
All the error probabilities of the situations in TABLE I can be obtained, and the \(\alpha\) that leads to the lowest error probabilities will be chosen. If the streams still cannot be decoded after one retransmission, the users will retransmit them again until the streams are decoded or no more retransmission turns remain.
### _Incremental Redundancy_
When IR is applied, the decoding error probability after \(l\) retransmissions can be represented as
\[p_{error}^{l}=\mathrm{Pr}\left\{\sum_{i=0}^{l}\log_{2}\left(1+\mathrm{SINR}_ {i}\right)<R\right\}, \tag{34}\]
Fig. 4: The situation that only \(s_{2}\) needs retransmission. BS fails to decode \(s_{2}\) and \(s_{1,2}\), but only asks user 2 to retransmit \(s_{2}\). Then, BS uses two interfered copy \(s_{2}\) to decode it, and then continue decoding \(s_{1,2}\).
Fig. 5: The situation that both streams need retransmission. BS fails to decode \(s_{1,1}\) and asks both user 1 and user 2 to retransmit. Then, BS uses two interfered copies of \(s_{1,1}\) and \(s_{2}\) to decode them and then continues decoding \(s_{1,2}\).
and \(\mathrm{SINR}_{i}\) is the SINR of the \(i\)th turn. \(\alpha\) can be found by the similar method in Section III-B.
If only \(s_{1,1}\) is retransmitted, which is presented in Fig. 1c, \(p_{1,1}\) has the same form as (16), but the 'residual SINR' is different. In IR,
\[\gamma_{1,1}=\frac{2^{r_{1}}}{\alpha\left(1+\sigma_{1,1}\right)\left(1+\sigma_ {1,2}\right)}-\frac{1}{\alpha}, \tag{35}\]
and then substitute (35) into (16); the \(\alpha\) which gives the lowest \(\gamma_{1,1}\) will be chosen.
For the situation where only \(s_{2}\) is retransmitted shown in Fig. 4,
\[\begin{split} p_{2}=&\mathrm{Pr}\left\{|h_{2}|^{2}< \frac{2^{r_{2}}}{1+\sigma_{2}}-1=\gamma_{2}\right\}\\ &=1-e^{-\frac{\gamma_{2}}{\tau_{2}}}.\end{split} \tag{36}\]
\(p_{2}\) decreases as \(\alpha\) increases, so \(\alpha_{h}\) gives the lowest \(p_{2}\).
Then, when both \(s_{1,1}\) and \(s_{2}\) are retransmitted, as in Fig. 5, \(p_{1,1}\) and \(p_{2}\) can be represented by the same expressions as in (24) and (25), respectively. But
\[\gamma_{1,1}^{(1)}=\frac{2^{r_{1}}}{\left(1+\sigma_{1,2}\right)\left(1+\sigma _{1,1}\right)}-1, \tag{37}\]
\[\gamma_{2}^{(1)}=\frac{2^{r_{2}}}{1+\frac{g_{2}}{1+g_{1}}}-1, \tag{38}\]
\[\gamma_{1,1}^{(2)}=\frac{2^{r_{1}}}{\left(1+\sigma_{1,2}\right)\left(1+\frac {\alpha g_{1}}{1+\left(1-\alpha\right)g_{1}}\right)}-1, \tag{39}\]
and
\[\gamma_{2}^{(2)}=\frac{2^{r_{2}}}{1+\sigma_{2}}-1. \tag{40}\]
Substituting (37)-(40) into (24) and (25), the \(\alpha\) can then be found.
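A small helper collecting the IR thresholds (37)-(40) makes it easy to reuse the same numerical search over \(\alpha\) as in the CC case; this is our own sketch and makes no claim about the authors' implementation.

```python
def ir_thresholds(alpha, g1, g2, r1, r2):
    """Residual-SNR thresholds (37)-(40) for HARQ-IR when both streams are retransmitted."""
    s11 = alpha * g1 / (1 + (1 - alpha) * g1 + g2)
    s12 = (1 - alpha) * g1
    s2 = g2 / (1 + (1 - alpha) * g1)
    g11_1 = 2**r1 / ((1 + s12) * (1 + s11)) - 1                                    # (37)
    g2_1 = 2**r2 / (1 + g2 / (1 + g1)) - 1                                         # (38)
    g11_2 = 2**r1 / ((1 + s12) * (1 + alpha * g1 / (1 + (1 - alpha) * g1))) - 1    # (39)
    g2_2 = 2**r2 / (1 + s2) - 1                                                    # (40)
    return g11_1, g2_1, g11_2, g2_2
```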
For the special cases when \(\alpha=0\) or \(\alpha=1\), the expressions are also the same as in (26) and (27). When \(\alpha=1\), \(\gamma_{1}\) does not change, and
\[\gamma_{2}=\frac{2^{r_{2}}}{1+g_{2}}-1. \tag{41}\]
When \(\alpha=0\), \(\gamma_{2}\) does not change, and
\[\gamma_{1}=\frac{2^{r_{1}}}{1+g_{1}}-1. \tag{42}\]
## IV HARQ Scheme for FDMA and NOMA
This section will introduce the HARQ scheme for FDMA and NOMA, and we use them as baselines and compare their performances with RSMA in Section V.
In FDMA, users are allocated to non-interfering resources, so each user will be allocated to an isolated bandwidth. We assume that the bandwidth is normalized to 1, and each user is allocated a fraction of the bandwidth. If CC is used, the error probability of user \(k\) after \(L\) retransmissions is
\[p_{k}=\mathrm{Pr}\left\{w_{k}\log_{2}\left(1+\frac{\sum_{l=0}^{L}|h_{k}^{(l)}| ^{2}}{w_{k}}\right)<r_{k}\right\}, \tag{43}\]
where \(w_{k}\) is the fraction of bandwidth allocated to user \(k\), \(\sum_{k=1}^{2}w_{k}=1\), \(h_{k}^{(l)}\) is the channel of user k in \(l\)th round, and
\(r_{k}\) is the desired rate. The error probability when IR is used is
\[p_{k}=\Pr\left\{\sum_{l=0}^{L}\left[w_{k}\log_{2}\left(1+\frac{|h_{k}^{(l)}|^{2}}{ w_{k}}\right)\right]<r_{k}\right\}, \tag{44}\]
The \(w_{k}\) which gives the lowest \(\sum_{k=1}^{2}p_{k}\) will be chosen, and it can be computed by Monte Carlo simulation. Compared to RSMA, the retransmissions are independent in FDMA, i.e., whether a user will retransmit does not depend on other users, and the users retransmit whole streams.
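The FDMA baseline (43)-(44) lends itself to a direct Monte Carlo evaluation. The sketch below, with hypothetical rates and a coarse grid over the bandwidth split \(w_{1}\), is one way such a search could look; it is not the simulation code used for the figures.

```python
import numpy as np

def fdma_error_probs(w1, r, Gamma, L, ir=True, n=100_000, seed=0):
    """Monte Carlo estimate of (43) (CC) or (44) (IR) for two FDMA users."""
    rng = np.random.default_rng(seed)
    w = np.array([w1, 1.0 - w1])
    p = []
    for k in range(2):
        h2 = rng.exponential(Gamma[k], size=(n, L + 1))     # |h_k^(l)|^2 for rounds 0..L
        if ir:
            rate = np.sum(w[k] * np.log2(1.0 + h2 / w[k]), axis=1)     # (44)
        else:
            rate = w[k] * np.log2(1.0 + np.sum(h2, axis=1) / w[k])     # (43)
        p.append(np.mean(rate < r[k]))
    return p

if __name__ == "__main__":
    Gamma = [100.0, 31.6]    # average gains of 20 dB and 15 dB
    grid = np.linspace(0.05, 0.95, 19)
    best = min(grid, key=lambda w1: sum(fdma_error_probs(w1, r=[4.0, 4.0], Gamma=Gamma, L=2)))
    print("bandwidth fraction for user 1:", best)
```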
NOMA can be seen as a subset of RSMA since it equals that \(\alpha=0\) or \(\alpha=1\). The decoding order is decided by the channel gain, which means if user 1 has better channel conditions, it will be decoded first (\(\alpha=1\)), otherwise, user 2 will be decoded first (\(\alpha=0\)). For CC, the error probability of user 1 is
\[p_{1}=\Pr\left\{\log_{2}\left(1+\sum_{l=1}^{L}\mathrm{SINR}_{1}^{(l)}\right)< r_{1}\right\}, \tag{45}\]
where
\[\mathrm{SINR}_{1}^{(l)}=\begin{cases}\frac{|h_{1}|^{2}}{1+|h_{2}|^{2}},& \text{if }\alpha=1\\ |h_{1}|^{2},&\text{if }\alpha=0.\end{cases} \tag{46}\]
The error probability of user 2 is
\[p_{2}=\Pr\left\{\log_{2}\left(1+\sum_{l=1}^{L}\frac{|h_{2}^{(l)}|^{2}}{1+(1-\alpha)|h_{1}^{(l)}|^{2}}\right)<r_{2}\right\}. \tag{47}\]
For IR, similarly, the error probability of user 1 is
\[p_{1}=\Pr\left\{\sum_{l=1}^{L}\left[\log_{2}\left(1+\mathrm{SINR}_{1}^{(l)} \right)\right]<r_{1}\right\}, \tag{48}\]
where \(\mathrm{SINR}_{1}^{(l)}\) is (46), and error probability of user 2 is
\[p_{2}=\Pr\left\{\sum_{l=1}^{L}\left[\log_{2}\left(1+\frac{|h_{2}^{(l)}|^{2}}{1+(1-\alpha)|h_{1}^{(l)}|^{2}}\right)\right]<r_{2}\right\}. \tag{49}\]
NOMA can be seen as a special case of RSMA, but it only retransmits whole streams, so it is not as flexible as RSMA.
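Similarly, the NOMA error probabilities (45)-(49) can be estimated by Monte Carlo. In the sketch below we accumulate over rounds \(0,\dots,L\) (i.e., including the initial transmission, as in (43)-(44)); this indexing choice and all numbers are our assumptions.

```python
import numpy as np

def noma_error_probs(r, Gamma, L, alpha=1, ir=True, n=100_000, seed=1):
    """Monte Carlo estimate of (45)-(49); alpha=1 decodes user 1 first, alpha=0 user 2 first."""
    rng = np.random.default_rng(seed)
    h1 = rng.exponential(Gamma[0], size=(n, L + 1))
    h2 = rng.exponential(Gamma[1], size=(n, L + 1))
    sinr1 = h1 / (1 + h2) if alpha == 1 else h1        # (46)
    sinr2 = h2 / (1 + (1 - alpha) * h1)                # SINR of user 2 in (47)/(49)
    if ir:
        rate1 = np.sum(np.log2(1 + sinr1), axis=1)     # (48)
        rate2 = np.sum(np.log2(1 + sinr2), axis=1)     # (49)
    else:
        rate1 = np.log2(1 + np.sum(sinr1, axis=1))     # (45)
        rate2 = np.log2(1 + np.sum(sinr2, axis=1))     # (47)
    return np.mean(rate1 < r[0]), np.mean(rate2 < r[1])

if __name__ == "__main__":
    print(noma_error_probs(r=[4.0, 4.0], Gamma=[100.0, 31.6], L=2))
```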
## V Numerical Results
In this section, the error probabilities and transmission power per packet of RSMA with HARQ will be presented, and they are compared with FDMA and NOMA. For all the simulation results, the average channel gains, \(\Gamma_{1}\) and \(\Gamma_{2}\), are set to \(20\) dB and \(15\) dB, respectively. Since we assume that user 1 and user 2 require the same service, they have the same reliability requirement and the desired rate, and this rate is the x-axis in the figures.
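For reference, the average gains of \(20\) dB and \(15\) dB translate into exponentially distributed (Rayleigh-fading) channel gains as in the short sketch below; the seed and sample size are arbitrary.

```python
import numpy as np

Gamma1, Gamma2 = 10**(20 / 10), 10**(15 / 10)   # 20 dB and 15 dB average channel gains
rng = np.random.default_rng(42)
g1 = rng.exponential(Gamma1, 1_000_000)         # |h_1|^2 samples
g2 = rng.exponential(Gamma2, 1_000_000)         # |h_2|^2 samples
print(10 * np.log10(g1.mean()), 10 * np.log10(g2.mean()))   # approximately 20 dB and 15 dB
```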
6, but FDMA outperforms NOMA. FDMA benefits more from IR since the achievable rate is the sum of the rates in each turn, which mitigates the effect of the limited bandwidth. For NOMA, a retransmission with interference may not contribute to the achievable rate as much as a non-interfering retransmission, so the error probability increases. However, since the decoding order in RSMA interleaves the streams, the negative impact of interference is mitigated to some extent. For example, in RSMA, when \(\alpha\) is between 0 and 1 and either \(s_{1,1}\) or \(s_{2}\) needs to be retransmitted, the stream will be retransmitted alone without interference. But the decoding of \(s_{1,2}\) depends on \(s_{2}\), and this is the reason the error probability of user 1 approaches that of user 2 as the rate increases. In NOMA, when \(s_{1}\) is decoded before \(s_{2}\), if \(s_{1}\) can be decoded and \(s_{2}\) fails, \(s_{2}\) will be retransmitted together with another new \(s_{1}\), and \(s_{2}\) could fail again due to the interference from this new stream. Fig. 9 shows the average power consumption of each packet. Similar to the trends in Fig. 7, RSMA consumes the least power. NOMA consumes more power than FDMA when the rate is lower than around \(2.7\ (\mathrm{bit/s/Hz})\), because NOMA could need more retransmissions than FDMA. However, NOMA consumes less power when the rate is higher than \(2.7\ (\mathrm{bit/s/Hz})\) since FDMA is limited by its isolated resources.
Fig. 10 and Fig. 11 present the error probabilities and the average power consumption per packet with CC, respectively, when 4 retransmissions are allowed. This setting can be seen as representing services that are not very sensitive to latency. In Fig. 10, for both users, RSMA has the best performance, NOMA the second best, and the error probabilities of FDMA are the highest. The FDMA curves are not smooth because the objective is to minimize the error probabilities, so when the rate is higher than \(4.5\ (\mathrm{bit/s/Hz})\) more resources are allocated to user 2. FDMA with CC is not very efficient when the rate is high due to the limited bandwidth, while allocating resources in a non-orthogonal manner exploits the resources better. Although NOMA performs much better than FDMA, there is still room for improvement for user 2. A simple example gives the intuition. Assume that \(s_{1}\) is decoded before \(s_{2}\). Decoding \(s_{1}\) at a relatively high rate is not always possible due to the interference from \(s_{2}\), and decoding \(s_{2}\) depends on whether \(s_{1}\) is decoded successfully, so it may happen that neither user can be decoded. In RSMA, \(s_{1,1}\) can be decoded and cancelled at a relatively low rate, which increases the probability of decoding \(s_{2}\) successfully. Besides, the two users stop generating new packets if either \(s_{1,1}\) or \(s_{2}\) fails. Thus, if only one stream needs to be retransmitted, it is retransmitted without interference. When 4 retransmissions are allowed, these non-interfering copies contribute more towards achieving a higher rate. The average power consumption is shown in Fig. 11. FDMA consumes the most power per packet among the three schemes: each user occupies a limited bandwidth, so the frequency of retransmission increases. RSMA and NOMA consume less power since they do not always retransmit all failed streams. RSMA consumes the least power when the rate is lower than \(8\ (\mathrm{bit/s/Hz})\). When the rate is higher than \(8\ (\mathrm{bit/s/Hz})\), the power consumption of RSMA and NOMA is almost the same, because RSMA tends to allocate more power to \(s_{1,1}\) so that \(s_{2}\) experiences less interference, and the frequency of retransmitting \(s_{1,1}\) and \(s_{2}\) together increases. Although RSMA may boil down to NOMA in this regime, it still has an advantage in power consumption, because at \(5\ (\mathrm{bit/s/Hz})\) the error probabilities of NOMA and RSMA are already around \(10^{-2}\), and services are unlikely to tolerate an error probability higher than \(10^{-2}\).
The performance when IR is applied with 4 retransmissions is shown in Fig. 12 and Fig. 13. Compared to HARQ-CC, the achievable rate of HARQ-IR is much higher when 4 retransmissions are allowed since the logarithmic function is concave. The error probabilities of the two users are shown in Fig. 12. According to Fig. 12, NOMA has the highest error probabilities for both users. While FDMA has a lower error probability than RSMA for user 1, RSMA has a lower error probability than FDMA for user 2. When a high rate is required and the latency requirement is not strict, it is better to dedicate isolated resources to the users, because the user with good channel conditions cannot tolerate that much interference. The copies with interference would
Fig. 8: Error probabilities of two users in IR. The average channel gains for user 1 and user 2 are \(20\) dB and \(15\) dB, respectively. Retransmission times \(L=2\).
Fig. 9: Average power consumption of each packet in IR with 2 retransmissions.
also not help much, as is also shown in Fig. 8. Hence, FDMA has lower error probabilities than NOMA even though each user can only occupy part of the resources. Although the users can interfere with each other in RSMA, they do not always do so in the retransmission rounds. Similar to RSMA with CC, the failure of either \(s_{1,1}\) or \(s_{2}\) causes both users to pause generating new packets, so \(s_{1,1}\) or \(s_{2}\) will not be retransmitted with interference from a new message. In this way, the two users share the resources but do not always interfere with each other. Fig. 13 shows the average transmission power for each packet. FDMA consumes the most power, NOMA follows, and RSMA consumes the least. For NOMA and RSMA, it is not necessary to retransmit all the failed streams, so they consume less power than FDMA. RSMA uses the least power because it mitigates the effect of interference to some extent, so fewer retransmissions are needed compared to NOMA.
According to the simulation results, RSMA with HARQ has the potential to improve the achievable rate for users that require the same service. The two users can achieve relatively low error probabilities with both CC and IR and with different numbers of retransmissions. The simulation results also show that RSMA with HARQ has the lowest average power consumption per packet, which means it is suitable for serving low-power devices.
## VI Conclusion
In this work, a retransmission scheme for uplink RSMA is proposed. This scheme does not need to retransmit all the failed streams, so it avoids additional complexity. We focus on the scenario in which two users requiring the same service share a common BS. The decoding order at the BS is set to \(s_{1,1}\), \(s_{2}\), and \(s_{1,2}\), which is the optimal order to exploit the flexibility of RSMA. It is always possible for \(s_{1,2}\) to be decoded at a low rate, so only \(s_{1,1}\) and \(s_{2}\) are involved in retransmission. The power allocation between \(s_{1,1}\) and \(s_{1,2}\) is important, since it decides which stream(s) will be retransmitted. Error probabilities for different power allocation strategies are given, and the \(\alpha\) which gives the lowest sum of the error probabilities is chosen.
The simulation results present the error probabilities and the average power consumption per packet of RSMA with CC and IR, and they are compared with NOMA and FDMA. The results show that RSMA can let both users have a low
Fig. 11: Average power consumption of each packet in CC with 4 retransmissions.
Fig. 12: Error probabilities of two users in IR. The average channel gains for user 1 and user 2 are \(20\) dB and \(15\) dB, respectively. Retransmission times \(L=4\).
Fig. 10: Error probabilities of two users in CC. The average channel gains for user 1 and user 2 are \(20\) dB and \(15\) dB, respectively. Retransmission times \(L=4\).
Fig. 13: Average power consumption of each packet in IR with 4 retransmissions.
error probability while consuming less energy with different retransmission times. This shows that RSMA with HARQ could be applied to services that need low latency and low power consumption. RSMA with retransmission can enhance diversity and improve efficiency, so it could be a promising scheme for future communication networks.
## Appendix A
Here the derivation of \(p_{1,1}\) and \(p_{2}\) when both \(s_{1,1}\) and \(s_{2}\) are retransmitted is given. We let \(x\) and \(y\) denote the channel gains \(|h_{1}|^{2}\) and \(|h_{2}|^{2}\), respectively, and their pdfs follow the exponential distribution in (15). First, let us compute (18), which can be rewritten as
\[\begin{split} p_{1,1}&=\int_{0}^{\infty}\left(\int_{\frac{\alpha x}{\gamma_{1,1}^{(1)}}-1}^{\gamma_{2}^{(1)}(1+\alpha x)}\frac{1}{\Gamma_{2}}e^{-\frac{y}{\Gamma_{2}}}dy\right)\frac{1}{\Gamma_{1}}e^{-\frac{x}{\Gamma_{1}}}dx\\ &+\int_{0}^{\frac{\gamma_{1,1}^{(2)}}{\alpha}}\left(\int_{\gamma_{2}^{(1)}(1+\alpha x)}^{\infty}\frac{1}{\Gamma_{2}}e^{-\frac{y}{\Gamma_{2}}}dy\right)\frac{1}{\Gamma_{1}}e^{-\frac{x}{\Gamma_{1}}}dx\\ &+\int_{0}^{\gamma_{2}^{(2)}}\left(\int_{\frac{\gamma_{1,1}^{(1)}(1+y)}{\alpha}}^{\infty}\frac{1}{\Gamma_{1}}e^{-\frac{x}{\Gamma_{1}}}dx\right)\frac{1}{\Gamma_{2}}e^{-\frac{y}{\Gamma_{2}}}dy.\end{split} \tag{50}\]
The second term and third term can be computed directly, but if \(\gamma_{1,1}^{(2)}\) or \(\gamma_{2}^{(2)}\) is negative, which means the constraint is not applicable for the variable, it should be seen as \(\infty\), since the exponential distribution is only applicable for the non-negative region. Thus, for the first term, the lower limit of the inner integral should be non-negative, so the first term can be rewritten as
\[\begin{split}&\int_{\frac{\gamma_{1,1}^{(1)}}{\alpha}}^{\infty}\left(\int_{\frac{\alpha x}{\gamma_{1,1}^{(1)}}-1}^{\gamma_{2}^{(1)}(1+\alpha x)}\frac{1}{\Gamma_{2}}e^{-\frac{y}{\Gamma_{2}}}dy\right)\frac{1}{\Gamma_{1}}e^{-\frac{x}{\Gamma_{1}}}dx\\ &+\int_{0}^{\frac{\gamma_{1,1}^{(1)}}{\alpha}}\left(\int_{0}^{\gamma_{2}^{(1)}(1+\alpha x)}\frac{1}{\Gamma_{2}}e^{-\frac{y}{\Gamma_{2}}}dy\right)\frac{1}{\Gamma_{1}}e^{-\frac{x}{\Gamma_{1}}}dx.\end{split} \tag{51}\]
For the first term of (51), the upper limit of the inner integral should be larger than the lower one, which can be rearranged as
\[\left(\gamma_{2}^{(1)}-\frac{1}{\gamma_{1,1}^{(1)}}\right)\alpha x\geq-\gamma _{2}^{(1)}-1. \tag{52}\]
Thus, if \(\gamma_{1,1}^{(1)}\gamma_{2}^{(1)}\geq 1\), the integral does not change, but if \(\gamma_{1,1}^{(1)}\gamma_{2}^{(1)}<1\), then \(x<-\frac{\left(\gamma_{2}^{(1)}+1\right)\gamma_{1,1}^{(1)}}{\alpha\left(\gamma_{2}^{(1)}\gamma_{1,1}^{(1)}-1\right)}=c\), and this is the new upper limit of the outer integral of the first term of (51). \(p_{2}\) equals \(p_{1,1}\) minus the second term and can be computed in the same way. Then, (24) and (25) can be obtained.
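The splitting of the integration region described above can be checked numerically. The sketch below compares the two pieces of (51), evaluated with SciPy's dblquad, against a Monte Carlo estimate of the first term of (18); the thresholds are hypothetical positive values chosen so that \(\gamma_{1,1}^{(1)}\gamma_{2}^{(1)}\geq 1\).

```python
import numpy as np
from scipy.integrate import dblquad

alpha, G1, G2 = 0.6, 100.0, 31.6   # hypothetical power split and average gains
g11, g2c = 1.5, 0.8                # gamma_{1,1}^{(1)} and gamma_2^{(1)}

f1 = lambda x: np.exp(-x / G1) / G1          # pdf of |h1|^2
f2 = lambda y: np.exp(-y / G2) / G2          # pdf of |h2|^2
joint = lambda y, x: f1(x) * f2(y)

# upper x-limit of the first piece of (51): infinity if g11*g2c >= 1, otherwise the constant c
c = np.inf if g11 * g2c >= 1 else -(1 + g2c) * g11 / (alpha * (g2c * g11 - 1))
lo = lambda x: alpha * x / g11 - 1
hi = lambda x: g2c * (1 + alpha * x)

piece1, _ = dblquad(joint, g11 / alpha, c, lo, hi)
piece2, _ = dblquad(joint, 0.0, g11 / alpha, lambda x: 0.0, hi)

# Monte Carlo estimate of the same probability (first term of (18))
rng = np.random.default_rng(0)
x = rng.exponential(G1, 400_000)
y = rng.exponential(G2, 400_000)
mc = np.mean((alpha * x / (1 + y) < g11) & (y / (1 + alpha * x) < g2c))
print(piece1 + piece2, mc)
```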
## Appendix B
The derivations of (30) and (31) are presented. Similarly, we let \(x\) and \(y\) denote \(|h_{1}|^{2}\) and \(|h_{2}|^{2}\), respectively. The first term of (26) can be represented as
\[\int_{0}^{\infty}\left(\int_{\frac{x}{\gamma_{1}}-1}^{\gamma_{2}(1+x)}\frac{1}{\Gamma_{2}}e^{-\frac{y}{\Gamma_{2}}}dy\right)\frac{1}{\Gamma_{1}}e^{-\frac{x}{\Gamma_{1}}}dx. \tag{53}\]
Here the lower limit of the inner integral should not be negative, and if it is negative, the lower limit should be \(0\). Thus, it can be rewritten as
\[\begin{split}&\int_{\gamma_{1}}^{\infty}\left(\int_{\frac{x}{\gamma_{1}}-1}^{\gamma_{2}(1+x)}\frac{1}{\Gamma_{2}}e^{-\frac{y}{\Gamma_{2}}}dy\right)\frac{1}{\Gamma_{1}}e^{-\frac{x}{\Gamma_{1}}}dx\\ &+\int_{0}^{\gamma_{1}}\left(\int_{0}^{\gamma_{2}(1+x)}\frac{1}{\Gamma_{2}}e^{-\frac{y}{\Gamma_{2}}}dy\right)\frac{1}{\Gamma_{1}}e^{-\frac{x}{\Gamma_{1}}}dx.\end{split} \tag{54}\]
Then, the second term of (26) can be represented as
\[\begin{split}\int_{0}^{\gamma_{1}}\left(\int_{\gamma_{2}(1+x)}^{ \infty}\frac{1}{\Gamma_{2}}e^{-\frac{y}{\Gamma_{2}}}dy\right)\frac{1}{\Gamma_{ 1}}e^{-\frac{x}{\Gamma_{1}}}dx.\end{split} \tag{55}\]
The sum of second term of (54) and (55) is
\[\begin{split}\int_{0}^{\gamma_{1}}\frac{1}{\Gamma_{1}}e^{-\frac {x}{\Gamma_{1}}}dx=1-e^{-\frac{\gamma_{1}}{\Gamma_{1}}}.\end{split} \tag{56}\]
For the first term of (54), in the inner integral, the higher limit should not be less than the lower limit, so
\[\left(\gamma_{2}-\frac{1}{\gamma_{1}}\right)x\geq-\gamma_{2}-1. \tag{57}\]
Obviously, if \(\left(\gamma_{2}-\frac{1}{\gamma_{1}}\right)\geq 0\), there is no additional requirement on \(x\); while if \(\left(\gamma_{2}-\frac{1}{\gamma_{1}}\right)<0\), then \(x<-\frac{(1+\gamma_{2})\gamma_{1}}{\gamma_{1}\gamma_{2}-1}\), and the first term becomes
\[\int_{\gamma_{1}}^{-\frac{(1+\gamma_{2})\gamma_{1}}{\gamma_{1}\gamma_{2}-1}}\left(\int_{\frac{x}{\gamma_{1}}-1}^{\gamma_{2}(1+x)}\frac{1}{\Gamma_{2}}e^{-\frac{y}{\Gamma_{2}}}dy\right)\frac{1}{\Gamma_{1}}e^{-\frac{x}{\Gamma_{1}}}dx. \tag{58}\]
(27) can be represented as
\[\begin{split} p_{2}&=\int_{\gamma_{1}}^{\infty}\left(\int_{\frac{x}{\gamma_{1}}-1}^{\gamma_{2}(1+x)}\frac{1}{\Gamma_{2}}e^{-\frac{y}{\Gamma_{2}}}dy\right)\frac{1}{\Gamma_{1}}e^{-\frac{x}{\Gamma_{1}}}dx\\ &+\int_{0}^{\gamma_{1}}\left(\int_{0}^{\gamma_{2}(1+x)}\frac{1}{\Gamma_{2}}e^{-\frac{y}{\Gamma_{2}}}dy\right)\frac{1}{\Gamma_{1}}e^{-\frac{x}{\Gamma_{1}}}dx\\ &+\int_{0}^{\gamma_{2}}\left(\int_{\gamma_{1}(1+y)}^{\infty}\frac{1}{\Gamma_{1}}e^{-\frac{x}{\Gamma_{1}}}dx\right)\frac{1}{\Gamma_{2}}e^{-\frac{y}{\Gamma_{2}}}dy,\end{split} \tag{59}\]
and by the same method, (31) can be obtained.
|
2303.17920 | Hamel equations and quasivelocities for nonholonomic systems with
inequality constraints | In this paper we derive Hamel equations for the motion of nonholonomic
systems subject to inequality constraints in quasivelocities. As examples, the
vertical rolling disk hitting a wall and the Chaplygin sleigh with a knife edge
constraint hitting a circular table are shown to illustrate the theoretical
results. | Alexandre Anahory Simoes, Leonardo Colombo | 2023-03-31T09:28:15Z | http://arxiv.org/abs/2303.17920v1 | # Hamel equations and quasivelocities for nonholonomic systems with inequality constraints
###### Abstract
In this paper we derive Hamel equations for the motion of nonholonomic systems subject to inequality constraints in quasivelocities. As examples, the vertical rolling disk hitting a wall and the Chaplygin sleigh with a knife edge constraint hitting a circular table are shown to illustrate the theoretical results.
## I Introduction
Quasivelocities are the components of the velocity of a mechanical system relative to a set of vector fields (in principle, local) that span, at each point, the fiber of the tangent bundle of the configuration space. The main point is that these vector fields need not be associated with (local) configuration coordinates on the configuration space. One of the reasons for using quasivelocities is that the Euler-Lagrange equations written in generalized coordinates are not always effective for analyzing the dynamics of a mechanical system of interest, as shown in [4].
Some mechanical systems have restrictions either on the configurations that the system may assume or on the velocities it may attain. Systems with such restrictions are generally called constrained systems. Nonholonomic systems [5, 7, 9, 14] are, roughly speaking, mechanical systems with constraints on their velocity that are not derivable from position constraints. They arise, for instance, in mechanical systems that have rolling contact (e.g., the rolling of wheels without slipping) or certain kinds of sliding contact (such as the sliding of skates).
We will restrict ourselves to the case of linear constraints on the velocities, where the velocity lies in a subspace of the tangent space. The collection of this subspaces forms a distribution, denoted by \(\mathcal{D}\), and is locally given by an expression of the type \(\mu_{i}^{a}\dot{q}^{i}=0\). The nonholonomic equations of motion are obtained from the Lagrange-d'Alembert principle [5] and its local expression is
\[\frac{d}{dt}\frac{\partial L}{\partial\dot{q}^{i}}-\frac{\partial L }{\partial q^{i}}=\lambda_{a}\mu_{i}^{a},\] \[\dot{q}\in\mathcal{D}_{q(t)}\]
where \(\lambda_{a}\) is a Lagrange multiplier that might be computed using the constraints.
Mechanical systems subject to inequality constraints are confined within a region of space with boundary. Collision with the boundary activates constraint forces forbiding the system to cross the boundary into a non-admissible region of space (see, e.g., [10, 12]). Inequality constraints appear, for instance, in the problem of rigid-body collisions, mechanical grasping models and biomechanical locomotion [2, 13]. Mechanical systems with impulsive effects on quasivelocities has been studied in [3]. The dynamics for systems modeled by using quasivelocities is governed by Hamel equations [4, 5]. In this paper, we introduce Hamel equations for nonholonomic systems subject to inequality constraints building on the work [4]. A first approach to the dynamics of nonholonomic systems with inequality constraints was given in [6], using Weierstrass-Erdemann conditions to obtain the state of the system immediately after the collision, and later in [1], where the authors use a variational principle to obtain both the equations of motion and the collision equations. In this paper, we go a step further and consider the quasivelocities description of nonholonomic systems and the corresponding Hamel equations generalizing the results of [1] to an added basis of vector fields to the nonholonomic distribution defining the nonholonomic constraints.
The remainder of the paper is structured as follows. In Section II we introduce mechanical systems with inequality constraints and the main notation used throughout the paper. Section III is devoted to studying Hamel equations for systems with inequality constraints. We extend the analysis to nonholonomic systems with inequality constraints in Section IV. As examples, in Sections V and VI the vertical rolling disk hitting a wall and the Chaplygin sleigh with a knife edge constraint hitting a circular table, respectively, are shown to illustrate the theoretical results.
## II Mechanical systems with inequality constraints
Suppose \(Q\) is a differentiable manifold of dimension \(n\). Throughout the text, \(q^{i}\) will denote a particular choice of local coordinates on this manifold and \(TQ\) denotes its tangent bundle, with \(T_{q}Q\) denoting the tangent space at a specific point \(q\in Q\) generated by the coordinate vectors \(\frac{\partial}{\partial q^{i}}\). Usually \(v_{q}\) denotes a vector at \(T_{q}Q\) and, in addition, the coordinate chart \(q^{i}\) induces a natural coordinate chart on \(TQ\) denoted by \((q^{i},\dot{q}^{i})\). There is a canonical projection \(\tau_{Q}:TQ\to Q\), sending each vector \(v_{q}\) to the corresponding base point \(q\). Note that in coordinates \(\tau_{Q}(q^{i},\dot{q}^{i})=q^{i}\).
The vertical lift of a vector field \(X\in\mathfrak{X}(Q)\) to \(TQ\) is defined by
\[X_{v_{q}}^{V}=\left.\frac{d}{dt}\right|_{t=0}(v_{q}+tX(q)).\]
\(T_{q}Q\) has a vector space structure, so we may consider its dual space, \(T_{q}^{*}Q\) and define the cotangent bundle as \(T^{*}Q:=\bigcup\limits_{q\in Q}T_{q}^{*}Q\), with local coordinates \((q^{i},p_{i})\).
In this paper, we will analyse the dynamics of nonholonomic systems evolving on the configuration manifold \(Q\) which are subjected to inequality constraints, i.e., constraints determined by a submanifold with boundary \(C\) of the manifold \(Q\). The boundary \(\partial C\) is a smooth submanifold of \(Q\) with codimension \(1\). Locally, the boundary is of the type \(\partial C=\{q\in Q\mid g(q)=0\}\) and the manifold \(C\) is \(C=\{q\in Q\mid g(q)\leqslant 0\}\) for some smooth function \(g:Q\to\mathbb{R}\).
In convex geometry, given a closed convex set \(K\) of \(\mathbb{R}^{n}\), the _polar cone_ of \(K\) is the set \(K^{p}=\{z\in\mathbb{R}^{n}\mid\langle z,y\rangle\leqslant 0,\forall y\in K\}\) (see [8] for instance). The _normal cone_ to \(K\) at a point \(x\in K\) is given by \(N_{K}(x)=K^{p}\cap\{x\}^{T}\), where \(\{x\}^{T}\) is the orthogonal subspace to \(x\) with respect to the Euclidean inner product. Based on this construction, we will only use a minimal definition of normal cone suiting the kind of inequality constraints we will be dealing with. Given a submanifold with boundary \(C\) as before, the normal cone to a point \(q\in\partial C\) is the set \(N_{C}(q)=\{\lambda dg(q)|\lambda\geqslant 0\}\). The two definitions match, if \(C\) is a closed convex set of \(\mathbb{R}^{n}\) with boundary being a hypersurface of dimension \(n-1\).
Given a Lagrangian function \(L:TQ\to\mathbb{R}\) describing the dynamics of a mechanical system, with local coordinates \((q^{i},\dot{q}^{i})\), \(i=1,\ldots,n=\dim Q\), the equations of motion under the presence of inequality constraints are given by Euler-Lagrange equations
\[\frac{d}{dt}\frac{\partial L}{\partial\dot{q}^{i}}-\frac{\partial L}{\partial q ^{i}}=0\]
whenever the trajectory is in the interior of the constraint submanifold \(C\setminus\partial C\). At impact times \(t_{i}\in\mathbb{R}\) of the trajectory with the boundary \(q(t_{i})\in\partial C\), there is a discontinuity in the state variables of the system, often called a jump. This jump is determined by the equations:
\[\frac{\partial L}{\partial\dot{q}}|_{t=t_{i}^{+}}-\frac{\partial L}{\partial \dot{q}}|_{t=t_{i}^{-}}\in-N_{C},\,\,\left.E_{L}\right|_{t=t_{i}^{+}}=E_{L} \right|_{t=t_{i}^{-}}. \tag{1}\]
**Remark 1**: _We note that a negative sign in the previous equation appears as a consequence of the non-interpenetrability constraint, i.e., the mechanical system may not cross the boundary of the admissible region. We will see exactly how the negative sign appears in the following section._
_Throughout the paper, \(L\) will be a regular mechanical Lagrangian, i.e., it has the form kinetic minus potential energy [5], and the Legendre transform \(\mathbb{F}L:TQ\to T^{*}Q\) is a local diffeomorphism._
## III Hamel's equations for systems with inequality constraints
### _Hamel's equations_
In this section we briefly discuss the Hamel equations. The exposition follows paper [4].
In many cases the Lagrangian and the equations of motion of a mechanical system have a simpler structure when written using velocity components measured against a frame that is unrelated to the system's local configuration coordinates. Let \(q=(q^{1},\ldots,q^{n})\) be local coordinates on the configuration space \(Q\) and \(u_{i}\in TQ\), \(i=1,\ldots,n\), be smooth independent _local_ vector fields defined in the same coordinate neighborhood (in certain cases, some or all of the \(u_{i}\) can be chosen to be _global_ vector fields on \(Q\)). The components of \(u_{i}\) relative to the basis \(\partial/\partial q^{j}\) will be denoted \(\psi_{i}^{j}\); that is,
\[u_{i}(q)=\psi_{i}^{j}(q)\frac{\partial}{\partial q^{j}},\]
where \(i,j=1,\ldots,n\) and where summation on \(j\) is understood by employing Einstein summation notation.
Let \(v=(v^{1},\ldots,v^{n})\in\mathbb{R}^{n}\) be the components of the velocity vector \(\dot{q}\in TQ\) relative to the basis \(u_{1},\ldots,u_{n}\), _i.e._,
\[\dot{q}=v^{i}u_{i}(q); \tag{2}\]
then
\[\ell(q,v):=L(q,v^{i}u_{i}(q)) \tag{3}\]
is the Lagrangian of the system written in the adapted coordinates \((q,v)\) on the tangent bundle \(TQ\). The coordinates \((q,v)\) are Lagrangian analogues of non-canonical variables in Hamiltonian dynamics.
Define the quantities \(c_{ij}^{m}(q)\) by the equations
\[[u_{i}(q),u_{j}(q)]=c_{ij}^{m}(q)u_{m}(q), \tag{4}\]
where \(i,j,m=1,\ldots,n\). These quantities vanish if and only if the vector fields \(u_{i}(q)\), \(i=1,\ldots,n\), commute. Here and elsewhere, \([\cdot,\cdot]:\mathbb{R}^{m}\times\mathbb{R}^{m}\to\mathbb{R}^{m}\) is the Jacobi-Lie bracket of vector fields on \(Q\). Also one can find that
\[c_{ij}^{m}=(\psi^{-1})_{k}^{m}\left(\frac{\partial\psi_{j}^{k}}{\partial q^{l }}\psi_{i}^{l}-\frac{\partial\psi_{i}^{k}}{\partial q^{l}}\psi_{j}^{l}\right).\]
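For readers who want to experiment with quasivelocities, the structure coefficients \(c^{m}_{ij}\) can be computed symbolically from a chosen frame. The sketch below uses SymPy on a hypothetical frame of the kind used for planar rolling systems (it is not taken from this paper) and recovers \([u_{1},u_{2}]=-u_{3}\).

```python
import sympy as sp

x, y, th = sp.symbols('x y theta')
q = [x, y, th]

# hypothetical frame on R^2 x S^1: u_1 and u_3 rotate with theta, u_2 = d/dtheta
u = [
    [sp.cos(th), sp.sin(th), 0],     # u_1
    [0, 0, 1],                       # u_2
    [-sp.sin(th), sp.cos(th), 0],    # u_3
]
Psi = sp.Matrix(u).T                 # Psi[j, i] = psi_i^j, column i holds u_i
Psi_inv = Psi.inv()

def bracket(a, b):
    """Coordinate components of the Jacobi-Lie bracket [a, b]."""
    return [sum(a[l] * sp.diff(b[k], q[l]) - b[l] * sp.diff(a[k], q[l]) for l in range(3))
            for k in range(3)]

c = {}   # c[(i, j, m)] = c^m_{ij}
for i in range(3):
    for j in range(3):
        comps = (Psi_inv * sp.Matrix(bracket(u[i], u[j]))).applyfunc(sp.simplify)
        for m in range(3):
            c[(i, j, m)] = comps[m]

print([c[(0, 1, m)] for m in range(3)])   # expect [0, 0, -1], i.e. [u_1, u_2] = -u_3
```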
The dual of \([\cdot,\cdot]_{q}\) is defined by the operation \([\cdot,\cdot]_{q}^{*}:V_{q}\times V_{q}^{*}\to V_{q}^{*}\) given by
\[\langle[v,\alpha]_{q}^{*},w\rangle\equiv\langle ad_{v}^{*}\alpha,w\rangle:= \langle\alpha,[v,w]_{q}\rangle\]
where \(V_{q}\) is the Lie algebra given by \(V_{q}=(\mathbb{R}^{m},[\cdot,\cdot]_{q})\). Here \(ad^{*}\) is the dual of the usual _ad_ operator in a Lie algebra.
Viewing \(u_{i}\) as vector fields on \(TQ\) whose fiber components equal \(0\) (that is, taking the vertical lift of these vector fields), one defines the directional derivatives \(u_{i}[\ell]\) for a function \(\ell:TQ\to\mathbb{R}\) by the formula
\[u_{i}[\ell]=\psi_{i}^{j}\frac{\partial\ell}{\partial q^{j}}.\]
The evolution of the variables \((q,v)\) is governed by the _Hamel equations_
\[\frac{d}{dt}\frac{\partial\ell}{\partial v^{j}}=c_{ij}^{m}v^{i}\frac{\partial \ell}{\partial v^{m}}+u_{j}[\ell], \tag{5}\]
coupled with equations (2). If \(u_{i}=\partial/\partial q^{i}\), equations (5) become the Euler-Lagrange equations. Equations (5) were
introduced in [11] (see also [14] for details and some history). Hamel equations can be written as
\[\frac{d}{dt}\frac{\partial\ell}{\partial v}=\left[v,\frac{\partial\ell}{\partial v }\right]_{q}^{*}+u[\ell]\equiv ad_{v}^{*}\frac{\partial\ell}{\partial v}+u[\ell]\]
coupled with the equation \(\dot{q}=v^{i}u_{i}(q)\).
### _The jump equations in quasivelocities_
To obtain the jump equations in terms of quasivelocities, we generalize an extended variational principle derived in [1] for nonholonomic systems, which in turn is the nonholonomic version of the variational principle introduced in [10] to obtain the equations satisfied by a system without constraints after a collision with a smooth submanifold in the configuration space.
**Theorem 1**: _Let \(q:[0,h]\to Q\) and \(v:[0,h]\to TQ\) be trajectories of the Hamel equations for the Lagrangian function \(\ell:TQ\to\mathbb{R}\) subjected to the inequality constraint \(q(t)\in C.\) Suppose that this system has an impact against the boundary \(\partial C\) at the time \(t_{k}\in[0,h].\) Then the trajectory satisfies Hamel's equations (5) in the intervals \([0,t_{k}^{-}[\) and \(]t_{k}^{+},h]\) and at the impact time \(t_{k}\), the following conditions hold:_
\[\frac{\partial\ell}{\partial v}|_{t=t_{k}^{+}}-\frac{\partial\ell}{\partial v}|_{t=t_{k}^{-}} \in-N_{C}, \tag{6}\] \[E_{\ell}|_{t=t_{k}^{-}}=E_{\ell}|_{t=t_{k}^{+}},\]
_where \(E_{\ell}:TQ\to\mathbb{R}\) is the energy of the system given in local coordinates by \(E_{\ell}(q,v)=\frac{\partial\ell}{\partial v^{i}}v^{i}-\ell(q,v).\)_
The curve \((q(t),v(t))\) is a critical point of the functional
\[\int_{a}^{b}\ell(q,v)\,dt \tag{7}\]
with respect to variations \(\delta v\), induced by the variations \(\delta q=w^{i}u_{i}(q)\), and given by (see [4])
\[\delta v^{k}=\dot{w}^{k}+c^{k}_{ij}(q)v^{i}w^{j}. \tag{8}\]
So,
\[\delta\int_{a}^{t_{k}^{-}}\ell(q,v)\,dt+\delta\int_{t_{k}^{+}}^{b }\ell(q,v)\,dt\] \[=\int_{a}^{t_{k}^{-}}\left(c^{k}_{ij}v^{i}\frac{\partial\ell}{ \partial v^{k}}+\psi^{i}_{j}\frac{\partial\ell}{\partial q^{i}}-\frac{d}{dt} \frac{\partial\ell}{\partial v^{j}}\right)w^{j}\,\,dt\] \[+\int_{t_{k}^{+}}^{b}\left(c^{k}_{ij}v^{i}\frac{\partial\ell}{ \partial v^{k}}+\psi^{i}_{j}\frac{\partial\ell}{\partial q^{i}}-\frac{d}{dt} \frac{\partial\ell}{\partial v^{j}}\right)w^{j}\,\,dt\] \[-\left[\frac{\partial\ell}{\partial v^{j}}w^{j}+\ell\delta t_{k} \right]_{t_{k}^{-}}^{t_{k}^{+}}\]
The jump condition follows from the fact that \(q(t_{k})\in\partial C\) from where
\[\delta(q(t_{k}))\in T(\partial C)\implies\delta q(t_{k})+\dot{q}(t_{k}) \delta t_{k}\in T(\partial C).\]
In quasievelocities this condition becomes
\[w^{i}(t_{k})u_{i}(q(t_{k}))+v^{i}(t_{k})u_{i}(q(t_{k}))\delta t_{k}\in T( \partial C)\]
The variations satisfying the previous equation are spanned by variations \(w^{i}(t_{k})u_{i}(q(t_{k}))\in T(\partial C)\) and \(\delta t_{k}=0\) or \(\delta t_{k}=1\) and \(w^{i}(t_{k})=-v^{i}(t_{k}).\) From the latter we immediately deduce that
\[\left[\frac{\partial\ell}{\partial v^{i}}v^{i}-\ell\right]_{t_{k}^{-}}^{t_{k} ^{+}}=0,\]
which is the energy conservation condition in the jump equations. From \(\delta t_{k}=0\), we get that
\[\frac{\partial\ell}{\partial v}|_{t=t_{k}^{+}}-\frac{\partial\ell}{\partial v }|_{t=t_{k}^{-}},\]
annihilates \(\delta q=w^{i}u_{i}(q)\in T(\partial C).\)
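To make the jump conditions (6) concrete, the following sketch solves them for a purely kinetic Lagrangian \(\ell=\frac{1}{2}v^{\top}Mv\) written in the frame \(u_{i}\): the momentum jump is a nonnegative multiple of the covector \(a\) with components \(a_{i}=dg(u_{i})\) at the impact point, and the energy condition selects the nontrivial root, giving an elastic reflection. The mass matrix, covector and pre-impact velocity below are hypothetical.

```python
import numpy as np

def elastic_jump(v_minus, M, a):
    """Post-impact quasivelocity from the jump conditions (6) for l = 0.5 * v^T M v.
    The momentum jump is -lam * a; lam is the nonzero root of the energy-conservation equation."""
    Minv_a = np.linalg.solve(M, a)
    lam = 2.0 * (a @ v_minus) / (a @ Minv_a)
    return v_minus - lam * Minv_a

if __name__ == "__main__":
    M = np.diag([2.0, 1.0])        # hypothetical mass matrix in the quasivelocity frame
    a = np.array([1.0, 0.0])       # dg paired with the frame: the wall normal is along u_1
    v_minus = np.array([0.7, 0.3])
    v_plus = elastic_jump(v_minus, M, a)
    print(v_plus)                                                    # u_1-component reverses
    print(0.5 * v_minus @ M @ v_minus, 0.5 * v_plus @ M @ v_plus)    # energy is conserved
```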
## IV Nonholonomic systems with inequality constraints
Assume that there are velocity constraints imposed on the system. We will restrict to constraints that are linear in the velocities. Consider a distribution \(\mathcal{D}\) on the configuration space \(Q\) describing these constraints, that is, \(\mathcal{D}\) is a collection of linear subspaces of \(TQ\) (\(\mathcal{D}_{q}\subset T_{q}Q\) for each \(q\in Q\)). A curve \(q(t)\in Q\) will be said to satisfy the constraints if \(\dot{q}(t)\in\mathcal{D}_{q(t)}\) for all \(t\). Locally, the constraint distribution can be written as
\[\mathcal{D}=\{\dot{q}\in TQ\,|\,\mu^{a}_{i}(q)\dot{q}^{i}=0,\ a=1,\ldots,m\}.\]
The Lagrange-d'Alembert equations of motion for the system are those determined by \(\delta\int_{a}^{b}L(q,\dot{q})dt=0\), where we choose variations \(\delta q(t)\) of the curve \(q(t)\) that satisfy \(\delta q(a)=\delta q(b)=0\) and \(\delta q(t)\in\mathcal{D}_{q(t)}\) for each \(t\in[a,b]\). Note that here the curve \(q(t)\) itself satisfies the constraints. Variations are taken before imposing the constraints and hence, the constraints are not imposed on the family of curves defining the variations.
The nonholonomic equations of motion are obtained from Lagrange-d'Alembert principle and its local expression is
\[\frac{d}{dt}\frac{\partial L}{\partial\dot{q}^{i}}-\frac{\partial L}{\partial q ^{i}}=\lambda_{a}\mu^{a}_{i},\ \ \ \ \mu^{a}_{i}(q)\dot{q}^{i}=0 \tag{9}\]
where \(\lambda_{a}\) is a Lagrange multiplier that might be computed using the constraints.
### _Constrained Hamel's equations_
Consider a nonholonomic system determined by a Lagrangian function \(L:TQ\to\mathbb{R}\) and a constraint distribution \(\mathcal{D}\). Let \(\{u_{1},\ldots,u_{n}\}\) be a local basis of vector fields in \(Q\) such that \(\mathcal{D}_{q}=\text{span}\{u_{1}(q),\ldots,u_{k}(q)\}\) with \(k=n-m\). Each tangent vector \(\dot{q}\in TQ\) can be decomposed as
\[\dot{q}=\sum_{i=1}^{k}v^{i}u_{i}+\sum_{i=k+1}^{n}v^{i}u_{i}\]
where \(\sum_{i=1}^{k}v^{i}u_{i}\) is the component of \(\dot{q}\) along \(\mathcal{D}_{q}\). We will conveniently denote the first term by \(\langle u(q),\dot{q}^{\mathcal{D}}\rangle\) and the second by \(\langle u(q),\dot{q}^{\mathcal{I}}\rangle\).
Similarly, each \(\alpha\in T^{*}Q\) can be uniquely decomposed as
\[\alpha=\langle\alpha_{\mathcal{D}},u^{*}(q)\rangle+\langle\alpha_{\mathcal{U}},u^{ *}(q)\rangle,\]
where \(\langle\alpha_{\mathcal{D}},u^{*}(q)\rangle\) is the component of \(\alpha\) along the dual of \(\mathcal{D}_{q}\) and \(u^{*}(q)\) denotes the dual frame of \(u(q)\). In particular, the annihilator of \(\mathcal{D}\), denoted by \(\mathcal{D}^{o}\), is generated by \(\{u_{k+1}^{*},\ldots,u_{n}^{*}\}\).
Hence, any vector \(v\in\mathcal{D}_{q}\) can be written as
\[v=\langle u(q),v^{\mathcal{D}}\rangle\text{ or }0=\langle u(q),v^{\mathcal{U}}\rangle.\]
Now, the nonholonomic system can also be obtained from the constrained Hamel's equations. Letting \(\ell(q,v)=L(q,v^{i}u_{i})\) be the local expression of the Lagrangian function with respect to coordinates adapted to the local basis \(\{u_{i}\}\), these equations are (locally) given by
\[\frac{d}{dt}\frac{\partial\ell}{\partial v^{i}}=c_{ji}^{m}\frac{ \partial\ell}{\partial v^{m}}v^{j}+u_{i}[\ell],\quad\dot{q}=v^{i}u_{i}(q), \quad i,j=1,\ldots,k,\] \[v^{a}=0,\ a=k+1,...,n.\]
### _Nonholonomic systems with inequality constraints_
If \(C\) is an inequality constraint on the nonholonomic system, then Lagrange-d'Alembert equations are still valid in the interior of \(C\). However, the jump conditions must now be changed to accommodate the constraints our system has on velocities.
**Theorem 2**: _Let \(q:[0,h]\to Q\) be a trajectory of the nonholonomic system \((\ell,\mathcal{D})\) subject to the inequality constraint \(q(t)\in C\). Suppose that this system has an impact against the boundary \(\partial C\) at the time \(t_{i}\in[0,h]\). Then the trajectory satisfies the Lagrange-d'Alembert equations (9) in the intervals \([0,t_{i}^{-}[\) and \(]t_{i}^{+},h]\), and at the impact time \(t_{i}\) the following conditions hold:_
\[\begin{array}{l}\frac{\partial\ell}{\partial v}|_{t=t_{i}^{+}}-\frac{\partial\ell}{\partial v}|_{t=t_{i}^{-}}=\lambda^{a}u_{a}^{*}(q)+\lambda^{0}dg(q)\\ E_{\ell}|_{t=t_{i}^{+}}=E_{\ell}|_{t=t_{i}^{-}}\\ \dot{q}(t_{i}^{+})\in\mathcal{D}_{q(t_{i}^{+})},\end{array} \tag{10}\]
_with \(a=k+1,\ldots,n\) and \(\lambda^{0}\), \(\lambda^{a}\) are Lagrange multipliers to be determined when solving the jump equations._
The Lagrange-d'Alembert principle for systems with impacts is defined on the path space
\[\Omega=\{(c,t_{i})\mid c:[0,h]\to Q\text{ is a smooth curve and }t_{i}\in\mathbb{R}\}.\]
If the mapping \(\mathcal{A}:\Omega\rightarrow\mathbb{R}\) is the action, then the Lagrange-d'Alembert principle states that the derivative of the action should annihilate all variations \((\delta q,\delta t_{i})\) with \(\delta q\in\mathcal{D}\), i.e., \(\delta q=\sum_{i=1}^{k}w^{i}u_{i}\). Since,
\[\delta\mathcal{A} =\int_{0}^{t_{i}^{-}}\left[c_{ij}^{k}v^{i}\frac{\partial\ell}{ \partial v^{k}}+\psi_{j}^{i}\frac{\partial\ell}{\partial q^{i}}-\frac{d}{dt} \frac{\partial\ell}{\partial v^{j}}\right]w^{j}\ dt\] \[\quad+\int_{t_{i}^{+}}^{h}\left[c_{ij}^{k}v^{i}\frac{\partial \ell}{\partial v^{k}}+\psi_{j}^{i}\frac{\partial\ell}{\partial q^{i}}-\frac{d }{dt}\frac{\partial\ell}{\partial v^{j}}\right]w^{j}\ dt\] \[\quad-\left[\frac{\partial\ell}{\partial v^{j}}w^{j}+\ell\delta t _{k}\right]_{t_{i}^{-}}^{t_{i}^{+}}\]
the fact that constrained Hamel's equations hold on the intervals \([0,t_{i}^{-}[\) and \(]t_{i}^{+},h]\) follows from the application of the fundamental theorem of calculus of variations together with the fact that \(\delta q\in\mathcal{D}\).
The jump condition follows from the fact that \(q(t_{i})\in\partial C\), from which

\[w^{i}(t_{i})u_{i}(q(t_{i}))+v^{i}(t_{i})u_{i}(q(t_{i}))\delta t_{i}\in T(\partial C).\]
The variations satisfying the previous equation are spanned by variations \(\delta q(t_{i})\in T(\partial C)\) and \(\delta t_{i}=0\) or \(\delta t_{i}=1\) and \(\delta q(t_{i})=-\dot{q}(t_{i})\). From the latter we immediately deduce that
\[\left[\frac{\partial\ell}{\partial v^{i}}v^{i}-\ell\right]_{t_{i}^{-}}^{t_{i}^{+}}=0,\]
which is the energy conservation condition in the jump equations. From \(\delta t_{i}=0\), we get that
\[\frac{\partial\ell}{\partial v}|_{t=t_{i}^{+}}-\frac{\partial\ell}{\partial v}|_{t=t_{i}^{-}},\]
annihilates \(\delta q\) if either it is on the normal cone \(N_{C}\) or it belongs to the annihilator of the distribution \(\mathcal{D}\), since \(\delta q\) is in \(T(\partial C)\cap\mathcal{D}\). Hence,
\[\frac{\partial\ell}{\partial v}|_{t=t_{i}^{+}}-\frac{\partial\ell}{\partial v}|_{t=t_{i}^{-}}=\lambda^{a}u_{a}^{*}(q)+\lambda^{0}dg(q)\]
where \(a=k+1,\ldots,n\), \(\lambda^{0}\) and \(\lambda^{a}\) are Lagrange multipliers to be determined when solving the jump equations. This is precisely the first jump equation. The third one follows from the nonholonomic constraints.
## V The Chaplygin sleigh knife edge hitting a boundary
The Chaplygin sleigh is a celebrated example of a nonholonomic system. Here we consider a Chaplygin sleigh whose knife edge has planar coordinates \((x,y)\) and orientation \(\theta\), and whose center of mass coincides with the blade. Under these circumstances, the dynamics is given by the Lagrangian function
\[L=\frac{m}{2}\left(\dot{x}^{2}+\dot{y}^{2}\right)+\frac{I}{2}\dot{\theta}^{2}\]
together with the constraint \(\sin\theta\dot{x}=\cos\theta\dot{y}\) generating the distribution
\[\mathcal{D}=\left\langle\left\{\cos\theta\frac{\partial}{\partial x}+\sin \theta\frac{\partial}{\partial y},\frac{\partial}{\partial\theta}\right\} \right\rangle.\]
Consider the local basis of vector field determined by
\[u_{1}=\cos\theta\frac{\partial}{\partial x}+\sin\theta\frac{\partial}{\partial y },u_{2}=\frac{\partial}{\partial\theta}\]
\[u_{3}=-\sin\theta\frac{\partial}{\partial x}+\cos\theta\frac{\partial}{\partial y}.\]
The relevant structure functions appearing in Hamel's equations are given by \([u_{1},u_{2}]=-u_{3},\) implying \(c_{12}^{1}=c_{12}^{2}=0\) and \(c_{12}^{3}=-1\).
The Lagrangian function with respect to coordinates adapted to this local frame for \(TQ\) takes the expression
\[\ell(q,v)=\frac{m}{2}((v^{1})^{2}+(v^{3})^{2})+\frac{I}{2}(v^{2})^{2}.\]
Therefore, constrained Hamel's equations give
\[m\dot{v}^{1} =0\] \[I\dot{v}^{2} =0\] \[v^{3} =0\] \[\dot{q} =v^{1}u_{1}+v^{2}u_{2}\]
We will examine Hamel's constrained equations when the knife edge impacts the boundary of the inequality constraint
\[C=\{(x,y,\theta)\ |\ x^{2}+y^{2}\leqslant 1\}\]
The jump equations (10) at a boundary point \((x,y,\theta)\in\partial C\) are now
\[m(v^{1,+}-v^{1,-})= \lambda_{0}(2x\cos\theta+2y\sin\theta)\] \[I(v^{2,+}-v^{2,-})= 0\] \[m(v^{3,+}-v^{3,-})= \lambda_{3}+\lambda_{0}(-2x\sin\theta+2y\cos\theta)\] \[\frac{m}{2}(v^{1,+})^{2}+\frac{I}{2}(v^{2,+})^{2}= \frac{m}{2}(v^{1,-})^{2}+\frac{I}{2}(v^{2,-})^{2}\] \[v^{3,+}= 0\]
We may eliminate the third and fifth equations so that we end up with the system
\[m(v^{1,+}-v^{1,-})= \lambda_{0}(2x\cos\theta+2y\sin\theta)\] \[I(v^{2,+}-v^{2,-})= 0\] \[\frac{m}{2}(v^{1,+})^{2}+\frac{I}{2}(v^{2,+})^{2}= \frac{m}{2}(v^{1,-})^{2}+\frac{I}{2}(v^{2,-})^{2},\]
whose admissible solution is \(v^{1,+}=-v^{1,-}\) and \(v^{2,+}=v^{2,-}\).
Below, we simulate the Chaplygin system under this inequality constraint for 400 seconds, using \(N=4000\) steps and a time-step of \(h=0.1\) (see Figure 1). The exact solution of Hamel's equations is known in this case and was used to draw the motion. We used the physical constants \(m=I=1\) and the initial conditions \(q_{0}=(0,0,\pi/2)\) and \(v_{0}=(0.1,0.05)\). We can observe how the velocity \(v^{1}(t)\) jumps at each impact with the boundary (Figure 2) while the energy is preserved (Figure 3).
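The qualitative behaviour just described is easy to reproduce numerically. The following Python sketch (our own explicit integrator, not the one used to produce the figures) propagates the reduced dynamics, in which \(v^{1}\) and \(v^{2}\) are constant between impacts and \(\dot{q}=v^{1}u_{1}+v^{2}u_{2}\), and applies the admissible jump \(v^{1}\to-v^{1}\), \(v^{2}\to v^{2}\) whenever the knife edge reaches the boundary \(x^{2}+y^{2}=1\).

```python
import numpy as np

# Minimal sketch (assumed integrator): Chaplygin sleigh inside the disc
# x^2 + y^2 <= 1 with the jump rule v1 -> -v1, v2 -> v2 derived above.
m, I = 1.0, 1.0                     # physical constants
h, N = 0.1, 4000                    # time-step and number of steps
x, y, th = 0.0, 0.0, np.pi / 2      # initial configuration q0
v1, v2 = 0.1, 0.05                  # initial quasivelocities v0

traj, energy = [], []
for _ in range(N):
    # Between impacts v1 and v2 are constant and
    # qdot = v1*u1 + v2*u2 with u1 = cos(th) d/dx + sin(th) d/dy, u2 = d/dth.
    x += h * v1 * np.cos(th)
    y += h * v1 * np.sin(th)
    th += h * v2
    if x * x + y * y >= 1.0:        # impact with the boundary of C
        v1 = -v1                    # admissible solution of the jump equations
    traj.append((x, y, th))
    energy.append(0.5 * m * v1 ** 2 + 0.5 * I * v2 ** 2)   # conserved

print("energy spread:", max(energy) - min(energy))          # 0.0 up to round-off
```

Because the jump only flips the sign of \(v^{1}\), the energy \(\frac{m}{2}(v^{1})^{2}+\frac{I}{2}(v^{2})^{2}\) computed in the last line is exactly constant, mirroring the energy conservation condition in the jump equations.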
## VI The vertical rolling disk hitting a wall

We now consider a second example: a vertical disk of radius \(R\) rolling on the plane, with configuration \((x,y,\theta,\varphi)\), where \((x,y)\) denotes the contact point, \(\theta\) the rolling angle, and \(\varphi\) the heading angle. The Lagrangian function with respect to coordinates adapted to the local frame for \(TQ\) takes the expression
\[\ell(q,v)= \frac{m}{2}[(R\cos\varphi v^{1}+v^{3})^{2}+(R\sin\varphi v^{1}+v^{4 })^{2}]\] \[+\frac{I}{2}(v^{1}-R\cos\varphi v^{3}-R\sin\varphi v^{4})^{2}\] \[+\frac{J}{2}(v^{2})^{2}\]
Taking into account that the partial derivatives evaluated at vectors in \(\mathcal{D}\), i.e., at \(v^{3}=v^{4}=0\), give

\[\frac{\partial\ell}{\partial\varphi}=0,\]

as well as

\[\frac{\partial\ell}{\partial v^{3}}=R(m-I)v^{1}\cos\varphi\]

and

\[\frac{\partial\ell}{\partial v^{4}}=R(m-I)v^{1}\sin\varphi,\]

the constrained Hamel's equations
\[(mR^{2}+I)\dot{v}^{1}=-R\sin\varphi v^{2}\frac{\partial l}{ \partial v^{3}}+R\cos\varphi v^{2}\frac{\partial l}{\partial v^{4}}\] \[J\dot{v}^{2}=R\sin\varphi v^{2}\frac{\partial l}{\partial v^{3 }}-R\cos\varphi v^{2}\frac{\partial l}{\partial v^{4}}+\frac{\partial l}{ \partial\varphi}\] \[v^{3}=v^{4}=0\] \[\dot{q}=v^{1}u_{1}+v^{2}u_{2}\]
can be simplified, ending up with
\[(mR^{2}+I)\dot{v}^{1}=0\] \[J\dot{v}^{2}=0\] \[v^{3}=v^{4}=0\] \[\dot{q}=v^{1}u_{1}+v^{2}u_{2}\]
We will examine Hamel's constrained equations when the disk impacts the boundary of the inequality constraint
\[C=\{(x,y,\theta,\varphi)\ |\ y+R\sin\varphi\leqslant 10\}\]
at a constant angle \(\varphi=\frac{\pi}{2}\), where the disk makes a right angle with the wall.
In this case, the jump equations (10) are simply
\[(mR^{2}+I)(v^{1,+}-v^{1,-})= \lambda^{0}R(\sin\varphi+\cos\varphi)\] \[J(v^{2,+}-v^{2,-})= 0\] \[R(m-I)\cos\varphi(v^{1,+}-v^{1,-})= \lambda^{3}-\lambda^{0}2R^{2}\cos^{2}\varphi\] \[R(m-I)\sin\varphi(v^{1,+}-v^{1,-})= \lambda^{4}+\lambda^{0}(1-2R^{2}\cos\varphi\sin\varphi)\] \[\frac{mR^{2}+I}{2}(v^{1,+})^{2}+\frac{J}{2}(v^{2,-})^{2}\] \[= \frac{mR^{2}+I}{2}(v^{1,-})^{2}+\frac{J}{2}(v^{2,-})^{2}\] \[v^{3,+}=v^{4,+}= 0\]
However, as before, we can eliminate the third, fourth and sixth equations to end up simply with
\[(mR^{2}+I)(v^{1,+}-v^{1,-})= \lambda^{0}R(\sin\varphi+\cos\varphi)\] \[J(v^{2,+}-v^{2,-})= 0\] \[\frac{mR^{2}+I}{2}(v^{1,+})^{2}+\frac{J}{2}(v^{2,-})^{2}\] \[= \frac{mR^{2}+I}{2}(v^{1,-})^{2}+\frac{J}{2}(v^{2,-})^{2}\]
In fact, if the impact is perpendicular to the wall, i.e., \(\varphi=\frac{\pi}{2}\), then the unique admissible solution to the jump equations is \(v^{1,+}=-v^{1,-}\), \(v^{2,+}=v^{2,-}\) and \(\lambda_{0}=-\frac{2(mR^{2}+I)v^{1,-}}{R}\).
Finally, we simulate the system under this inequality constraint for 18 seconds, using \(N=180\) steps and a time-step of \(h=0.1\) (see Figure 4). The exact solution of Hamel's equations is known in this case and was used to draw the motion. We used the physical constants \(m=I=J=1\) and the initial conditions \(q_{0}=(0,0,0,\pi/2)\) and \(v_{0}=(1,0)\). Again, the first velocity component \(v^{1}(t)\) is discontinuous (Figure 5) and the energy is preserved along the motion (Figure 6).
|
2309.08683 | Shining Light on the Hosts of the Nano-Hertz Gravitational Wave Sources:
A Theoretical Perspective | The formation of supermassive black holes (SMBHs) in the Universe and its
role in the properties of the galaxies is one of the open questions in
astrophysics and cosmology. Though, traditionally, electromagnetic waves have
been instrumental in direct measurements of SMBHs, significantly influencing
our comprehension of galaxy formation, gravitational waves (GW) bring an
independent avenue to detect numerous binary SMBHs in the observable Universe
in the nano-Hertz range using the pulsar timing array observation. This brings
a new way to understand the connection between the formation of binary SMBHs
and galaxy formation if we can connect theoretical models with multi-messenger
observations namely GW data and galaxy surveys. Along these lines, we present
here the first paper on this series based on {\sc Romulus25} cosmological
simulation on the properties of the host galaxies of SMBHs and propose on how
this can be used to connect with observations of nano-Hertz GW signal and
galaxy surveys. We show that the most dominant contribution to the background
will arise from sources with high chirp masses which are likely to reside in
low redshift early-type galaxies with high stellar mass, largely old stellar
population, and low star formation rate, and that reside at centers of galaxy
groups and manifest evidence of recent mergers. The masses of the sources show
a correlation with the halo mass and stellar mass of the host galaxies. This
theoretical study will help in understanding the host properties of the GW
sources and can help in establishing a connection with observations. | Vida Saeedzadeh, Suvodip Mukherjee, Arif Babul, Michael Tremmel, Thomas R. Quinn | 2023-09-15T18:20:26Z | http://arxiv.org/abs/2309.08683v2 | # Shining Light on the Hosts of the Nano-Hertz Gravitational Wave Sources: A Theoretical Perspective
###### Abstract
The formation of supermassive black holes (SMBHs) in the Universe and its role in the properties of the galaxies is one of the open questions in astrophysics and cosmology. Though, traditionally, electromagnetic waves have been instrumental in direct measurements of SMBHs, significantly influencing our comprehension of galaxy formation, gravitational waves (GW) bring an independent avenue to detect numerous binary SMBHs in the observable Universe in the nano-Hertz range using pulsar timing array observations. This brings a new way to understand the connection between the formation of binary SMBHs and galaxy formation if we can connect theoretical models with multi-messenger observations, namely GW data and galaxy surveys. Along these lines, we present here the first paper in this series, based on the Romulus25 cosmological simulation, on the properties of the host galaxies of SMBHs, and propose how this can be used to connect with observations of the nano-Hertz GW signal and galaxy surveys. We show that the most dominant contribution to the background will arise from sources with high chirp masses which are likely to reside in low redshift early-type galaxies with high stellar mass, largely old stellar population, and low star formation rate, and that reside at centers of galaxy groups and manifest evidence of recent mergers. The masses of the sources show a correlation with the halo mass and stellar mass of the host galaxies. This theoretical study will help in understanding the host properties of the GW sources and can help in establishing a connection with observations.
keywords: gravitational waves-- galaxies: evolution--galaxies: formation
## 1 Introduction
The discovery of Gravitational Waves (GWs) by the LIGO-Virgo-KAGRA (LVK) Collaboration from coalescing compact object binaries of a few tens of solar masses inaugurated the era of gravitational-wave astronomy, enabling the observations of previously inaccessible astrophysical phenomena (Aasi et al., 2015; Abbott et al., 2016; Acernese et al., 2014, 2019; Abbott et al., 2018; Akutsu et al., 2019, 2020). Following this initial discovery, several more binary objects have been detected, one of which (GW170817) also had an electromagnetic (EM) counterpart and stands as the first multi-messenger measurement involving a GW signal (Abbott et al., 2016, 2017, 2021, 2021; Abbott et al., 2023; The LIGO Scientific Collaboration et al., 2021; Abbott et al., 2023). With the aid of ongoing and upcoming networks of GW detectors, several more detections of coalescing black hole binaries are likely over the frequency range of 10 Hz and above.
Along with the high-frequency GW signal, coalescing supermassive black holes (SMBHs) can also produce GW signals detectable at low-frequency bands, ranging from a few nano-Hertz to milli-Hertz range. In the milli-Hertz frequency range, upcoming GW detectors - such as Laser Interferometer Space Antenna (LISA; Amaro-Seoane et al., 2017; Baker et al., 2019) - can probe signal from the SMBHs of masses in the range approximately \(10^{4}\)- \(10^{7}\) M\({}_{\odot}\). The nano-Hertz GW signal from sources with masses above \(10^{8}\) M\({}_{\odot}\) can be detected and characterized using the timing data from several extremely well-studied millisecond pulsars (Foster and Backer, 1990). These signals are the target of the International Pulsar Timing Array (IPTA) collaboration (Antoniadis et al., 2022), comprising the European Pulsar Timing Array (EPTA; Desvignes et al., 2016), the North American Nanohertz Observatory for Gravitational Waves (NANOGrav; McLaughlin, 2013), the Indian Pulsar Timing Array Project (InPTA; Joshi et al., 2018) and the Parkes Pulsar Timing Array (PPTA; Manchester et al., 2013). Along with the IPTA Collaboration, the Chinese Pulsar Timing Array (CPTA; Xu et al., 2023) are also making measurements in this band. In the future with the operation of the Square Kilometer Array (SKA) (Terzian and Lazio, 2006; Janssen et al., 2014) more accurate measurement of this signal will be possible (Burke-Spolaor et al., 2019). The recent detection of the stochastic GW background (SGWB) in the nano-Hertz range by CPTA (Xu et al., 2023), EPTA+InPTA (Antoniadis et al., 2023), NANOGrav (Agazie et al., 2023) and PPTA (Zic et al., 2023) promises to open an exciting new window onto the evolving population of binary supermassive black holes (SMBBHs) in the Universe.
The presence of nano-Hertz GW signal leads to several interesting questions such as: What are the astrophysical properties of the host
galaxies of the source SMBBHs, and can one use conventional galaxy surveys to identify (if not uniquely detect) these host galaxies? Since SMBHs reside in the centers of galaxies, SMBBHs are expected to be byproducts of galaxy mergers. Consequently, SMBBH-host galaxy identifications can potentially shed light on the pathways leading to the formation of the SMBBHs, including their dynamical evolution from the time of first encounter, and more generally on the astrophysics of galaxy mergers. They can also potentially provide insights into the growth of SMBHs and the implications of SMBH-SMBH mergers on galaxy formation (Cattaneo et al., 2009), including conditions leading to the transition between radiative versus kinetic feedback modes (e.g. Narayan & Quataert, 2005; Merloni & Heinz, 2008; Benson & Babul, 2009; Babul et al., 2013). (See also O'Sullivan et al., 2012; Reynolds et al., 2014; Prasad et al., 2020 and O'Sullivan et al., 2021 for rare examples of observed SMBHs in an unexpected state). This would be an important step towards a new paradigm of multi-messenger science capable of addressing a broad spectrum of questions related to astrophysics and cosmology.
However, typical PTA localizations in the near term are expected to encompass several thousand (if not more) galaxies. Theoretical modeling offers a way to narrow the field of candidate host galaxies for more detailed observational scrutiny. In this study, we use results from a high-resolution Romulus cosmological simulation (Tremmel et al., 2017, 2019) to explore this possibility. As we discuss in §3, the Romulus suite of simulations is especially well suited for investigating SMBH/SMBBH-galaxy connections because of its unique approach to seeding, accretion, and especially the dynamics of supermassive black holes. Consequently, the simulations have previously been used to explore a variety of related topics, including the timescale for the formation of close SMBH pairs following galaxy mergers (Tremmel et al., 2018), the galaxy-SMBH coevolution (Ricarte et al., 2019), the origin and demographics of wandering black holes (Ricarte et al., 2021), and the demographics of dual active galactic nuclei (Saeedzadeh et al., 2023).
The paper is organized as follows: In §2, we briefly discuss the motivation behind the present study and in §3, we discuss the Romulus simulation. The expected SGWB based on the simulation and the astrophysical properties of the galaxies hosting SGWB sources are discussed in §4 and §5. Among the properties we consider are: gas density (\(\rho_{\rm gas}\)), star formation rate (\(\dot{\rm M}_{*}\) or SFR), stellar mass (M\({}_{*}\)), galaxy morphology, galaxy color, specific star formation rate (sSFR \(\equiv\dot{\rm M}_{*}/{\rm M}_{*}\)) and halo mass (M\({}_{\rm h}\), which we take to be M\({}_{500}\); see §3.4 for the definition of M\({}_{500}\)). Then, we discuss possible techniques to validate the connection between the SMBBHs and their host galaxies that we find in §6. Finally, we summarise our findings and discuss the future outlook in §7.
## 2 Motivation
On one hand, we have the recently detected SGWB in the nano-Hertz range from coalescing SMBHs of mass M \(>10^{7}\) M\({}_{\odot}\). On the other hand, we have spectroscopic/photometric galaxy surveys that are capable of detecting faint galaxies up to high redshifts, some of which will be hosts of the SMBBHs that are contributing to the nano-Hertz SGWB. The combination of these two opens up the prospect of a new multi-messenger science that can shed light on several key questions in astrophysics and cosmology. A limited list of these key questions is: (i) How do the SMBHs grow with time? (ii) How do SMBBHs form and is there a relationship between their formation and one or more properties of the host galaxies? (iii) Do the astrophysical properties of the host galaxies play a role in the coalescence of the SMBBHs? (iv) What is the occupation number of the SMBHs in galaxies (or halos) of different masses?
We are interested in understanding the theoretical dimensions of these questions and in identifying whether the key astrophysical properties of the host galaxies can be predicted based on our current understanding of galaxy formation. In this paper, we explore the astrophysical "tells" of galaxies that host SMBBHs in the Romulus25 simulation volume. We also investigate the properties of the halos of these galaxies. Although the Romulus simulations can track black holes across nearly three orders of magnitude in mass (\(10^{6}\)-\(10^{9}\) M\({}_{\odot}\)), in the present paper we focus primarily on coalescing binary black holes that can contribute to the stochastic gravitational wave background in the frequency band accessible to PTA. We perform a simulation-based study of the correlations between the SMBBHs and their host galaxies. The specific galaxy properties we focus on include their morphology, star formation rate, galaxy color, stellar mass, gas density, and halo mass. Uncovering a theoretical connection between the properties of the host galaxy and its SMBBH will help motivate observational and data analysis strategies aimed at identifying the host galaxies of the GW sources from the photometric/spectroscopic galaxy catalogs. This, in turn, can contribute to building a data-driven understanding of the evolution of SMBBHs in galaxies.
In future papers in this series, we will consider black holes accessible to LISA, examine possible connections between these and SMBBHs detectable with PTA, and apply this framework to the latest nano-Hertz observations (Xu et al., 2023; Antoniadis et al., 2023; Agazie et al., 2023; Zic et al., 2023) to identify the possible host candidates. For completeness, we note that there are several analytical and numerical simulation-based studies estimating the SGWB signal in the PTA frequency range (Rajagopal & Romani, 1995; Jaffe & Backer, 2003; Sesana et al., 2008; Volonteri et al., 2003; Kocsis & Sesana, 2011; Chen et al., 2017; Kelley et al., 2017; Volonteri et al., 2020; DeGraf et al., 2021; Muhamed Kozhikkal et al., 2023).
## 3 The Romulus Simulations
In this work, we present results from the analysis of the Romulus25 simulation, which is a (25 cMpc)\({}^{3}\) cosmological volume simulation from the Romulus suite (Tremmel et al., 2017, 2019; Butsky et al., 2019; Jung et al., 2022; Saeedzadeh et al., 2023).
The simulation was run using the Tree+Smoothed Particle Hydrodynamics (Tree+SPH) code CHANGa (Menon et al., 2015; Wadsley et al., 2017), with a Plummer equivalent gravitational force softening of 250 pc (or 350 pc spline kernel), a maximum SPH resolution of 70 pc, and gas and dark matter particle masses of \(2.12\times 10^{5}M_{\odot}\) and \(3.39\times 10^{5}M_{\odot}\), respectively. The background cosmology is a flat \(\Lambda\)CDM universe with cosmological parameters consistent with the Planck 2016 results (Ade et al., 2016): \(\Omega_{\rm m}=0.309\), \(\Omega_{\Lambda}=0.691\), \(\Omega_{\rm B}=0.0486\), \(H_{0}=67.8\,{\rm km\,s^{-1}\,Mpc^{-1}}\), and \(\sigma_{8}=0.82\).
The full details about the Romulus25 simulation, including a thorough discussion of the hydrodynamics code and the specifics of the Romulus galaxy formation model, the sub-grid physics incorporated therein, the various modeling choices made, and the simulation's many unique features, have been described in a number of published papers. In the interest of brevity, we do not repeat this information here and instead refer interested readers to Tremmel et al. (2015, 2017, 2019, 2020); Sanchez et al. (2019); Butsky et al. (2019); Chadayamuri et al. (2021); and Jung et al. (2022). The latter especially offers a concise yet complete summary.
There are, however, a few aspects of the Romulus25 simulation that are important to highlight as these are relevant to the present
discussion. These pertain to the treatment of SMBH seeding, growth, and dynamical evolution in Romulus25 (Tremmel et al., 2017).
### SMBH Seeding
The seeding of SMBHs depends only on the local gas properties and not on any prior knowledge of the host halo or the host galaxy (Tremmel et al., 2017). Specifically, unlike many of the other cosmological simulations (e.g., Schaye et al., 2015; Weinberger et al., 2017; Pillepich et al., 2018; Dave et al., 2019), the Romulus SMBH seed model does not necessitate a halo or a galaxy to exceed a certain mass threshold for a SMBH to form. As a result, the formation of SMBHs is not guaranteed within every halo. Additionally, one can also have multiple SMBHs arising in the same halo.
The criteria for converting a gas particle into a SMBH seed in Romulus are as follows: (i) The gas particle must be _both_ eligible and selected to form a star. The latter is a probabilistic process. (ii) The gas particle must have very low metallicity (\(Z<3\times 10^{-4}\)); (iii) its density must be very high, i.e. at least \(3~{}m_{p}/cc\); and (iv) its temperature must be within the range of \(9500-10000\) K. This seeding prescription resembles the direct collapse black hole scenario, where high temperatures and low metallicities suppress fragmentation and allow sizeable gas clouds to collapse directly into an SMBH seed (Lodato & Natarajan, 2007; Alexander & Natarajan, 2014; Natarajan, 2021). In Romulus25, the SMBHs are seeded with an initial mass of \(10^{6}\) M\({}_{\odot}\) to ensure that they are always more massive than dark matter and star particles to mitigate spurious scattering events (Tremmel et al., 2015).
Under the above scheme, the SMBHs are seeded primarily in low-mass galaxies at z \(>5\)(Tremmel et al., 2017) and Ricarte et al. (2019) show that the resulting SMBH occupation fraction at z=0 is consistent with current observations even on the scale of dwarf galaxies.
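For illustration only, the seeding criteria above can be collected into a simple predicate; the function name and argument conventions below are ours, and the gas density is expressed in \(m_{p}\)/cc as in the text.

```python
def forms_smbh_seed(selected_to_form_star, metallicity, density_mp_cc, temperature_K):
    """Sketch of the Romulus seeding test: a gas particle selected for star
    formation, with very low metallicity, high density, and T in 9500-10000 K."""
    return (selected_to_form_star
            and metallicity < 3e-4
            and density_mp_cc >= 3.0
            and 9500.0 <= temperature_K <= 10000.0)
```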
### SMBH Dynamics and Mergers
The Romulus25 simulation accurately tracks the dynamical evolution of SMBHs down to sub-kpc scales, which is highly advantageous for the present study. To achieve this, a sub-grid correction is employed that accounts for the unresolved dynamical friction from stars and dark matter that the SMBHs ought to be experiencing (Tremmel et al., 2015). For each SMBH in the simulation, this force is estimated by assuming a locally isotropic velocity distribution and integrating Chandrasekhar's equation (Chandrasekhar, 1943) from the 90-degree deflection radius (\(r_{90}\)) to the SMBH's gravitational softening length (\(\epsilon_{g}\)). The resulting acceleration is
\[\mathbf{a}_{DF}=-4\pi G^{2}~{}M_{\bullet}~{}\rho(v<v_{BH})~{}\ln\Lambda~{}\frac{\mathbf{v}_{BH}}{v_{BH}^{3}}. \tag{1}\]
In order for two SMBHs to merge, they must be within a distance of two gravitational softening lengths (0.7 kpc) and possess a low enough relative velocity to be mutually bound; i.e. \(\frac{1}{2}~{}\Delta\mathbf{v}^{2}<\Delta\mathbf{a}\cdot\Delta\mathbf{r}\), where \(\Delta\mathbf{v}\) and \(\Delta\mathbf{a}\) are the differences in velocity and acceleration of the two black holes, and \(\Delta\mathbf{r}\) is the distance between them (Bellovary et al., 2011; Tremmel et al., 2017)1. The separation limit of two gravitational softening lengths is deemed appropriate because once the separation drops below this limit, the simulation's ability to accurately track the SMBH pair's dynamics becomes less reliable.
Footnote 1: Note that there is a typographical error in the criterion for boundedness in Tremmel et al. (2017).
When a merger takes place, the resulting SMBH is assigned a velocity that conserves momentum, and its mass is the sum of the masses of its progenitors. Mergers are one of the two processes driving the growth of SMBHs.
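A schematic version of this merger test is sketched below. The helper name and the assumption of mutually consistent units (e.g. lengths in kpc) are ours; the criterion itself is the one quoted above.

```python
import numpy as np

def smbh_pair_merges(dr, dv, da, softening_kpc=0.35):
    """Sketch of the Romulus merger criterion: separation below two softening
    lengths (0.7 kpc) and mutual boundedness, 0.5*|dv|^2 < da . dr.
    dr, dv, da: relative position, velocity and acceleration vectors."""
    dr, dv, da = np.asarray(dr), np.asarray(dv), np.asarray(da)
    close_enough = np.linalg.norm(dr) < 2.0 * softening_kpc
    mutually_bound = 0.5 * np.dot(dv, dv) < np.dot(da, dr)
    return bool(close_enough and mutually_bound)
```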
### SMBH Growth and Feedback
The other process by which SMBHs grow is via the accretion of gas. In Romulus25, this accretion rate is estimated via a modified Bondi-Hoyle-Lyttleton prescription applied to the smoothed properties of the 32 nearest gas particles:
\[\dot{M}_{\bullet}=\alpha\times\begin{cases}\frac{\pi(GM_{\bullet})^{2}\rho_{\rm gas}}{(v_{bulk}^{2}+c_{s}^{2})^{3/2}}&\text{if $v_{bulk}>v_{\theta}$}\\ \frac{\pi(GM_{\bullet})^{2}\rho_{\rm gas}c_{s}}{(v_{\theta}^{2}+c_{s}^{2})^{2}}&\text{if $v_{bulk}<v_{\theta}$}\end{cases}, \tag{2}\]
where \(\rho_{\rm gas}\) is the ambient gas density, \(c_{s}\) is the ambient sound speed, \(v_{\theta}\) is the local rotational velocity of the surrounding gas, and \(v_{bulk}\) is the bulk velocity relative to the SMBH. All ambient quantities are calculated using the 32 nearest gas particles. The introduction of the \(v_{\theta}\) and \(v_{bulk}\) terms in the above aims to remedy the neglect of gas bulk motion and angular momentum in the original Bondi-Hoyle-Lyttleton formulation. Finally, the coefficient \(\alpha\) is introduced to correct for the suppression of the black hole accretion rate due to resolution effects. It is defined as
\[\alpha=\begin{cases}(\frac{n}{n_{th,*}})^{2}&\text{if $n\geq n_{th,*}$}\\ 1&\text{if $n\leq n_{th,*}$}\end{cases}, \tag{3}\]
where \(n_{th,*}\) is the star formation number density threshold (\(0.2~{}m_{p}/cc\)).
Gas accretion onto a SMBH results in energy release into the environment around the black hole. In Romulus25, it is assumed that this energy is electromagnetic and that a fraction of it will couple to the ambient gas and contribute to its internal energy. The thermal energy deposition rate is given by \(\dot{E}_{\bullet,th}=\epsilon_{r}\epsilon_{f}\dot{M}_{\bullet}c^{2}\), where \(\epsilon_{r}\) is the radiative efficiency (assumed to be 10%) and \(\epsilon_{f}\) is gas coupling efficiency (set to 2%). The thermal energy is imparted isotropically to the 32 nearest gas particles, with the energy being distributed among these gas particles according to the smoothing kernel. We refer readers to Tremmel et al. (2017) for further details.
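The accretion and feedback prescriptions above translate directly into a few lines of code. The sketch below is ours (not the CHANGa implementation) and assumes cgs units throughout.

```python
import numpy as np

G_CGS, C_CGS = 6.674e-8, 2.998e10   # gravitational constant and speed of light, cgs

def bondi_boost(n_mp_cc, n_threshold=0.2):
    """Resolution correction alpha of Eq. (3); densities in m_p/cc."""
    return (n_mp_cc / n_threshold) ** 2 if n_mp_cc >= n_threshold else 1.0

def smbh_accretion_rate(M_bh, rho_gas, c_s, v_bulk, v_theta, n_mp_cc):
    """Modified Bondi-Hoyle-Lyttleton rate of Eq. (2), multiplied by alpha."""
    alpha = bondi_boost(n_mp_cc)
    if v_bulk > v_theta:
        mdot = np.pi * (G_CGS * M_bh) ** 2 * rho_gas / (v_bulk ** 2 + c_s ** 2) ** 1.5
    else:
        mdot = np.pi * (G_CGS * M_bh) ** 2 * rho_gas * c_s / (v_theta ** 2 + c_s ** 2) ** 2
    return alpha * mdot

def thermal_feedback_rate(mdot, eps_r=0.10, eps_f=0.02):
    """Thermal energy deposition rate E_dot = eps_r * eps_f * Mdot * c^2."""
    return eps_r * eps_f * mdot * C_CGS ** 2
```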
### Selection of Halos and Binary SMBHs
The halos in Romulus simulations are extracted and processed using the Amiga Halo Finder (hereafter, AHF; Knebe et al., 2008; Knollmann & Knebe, 2009), and tracked across time with TANGOS (Pontzen & Tremmel, 2018).
The halos and subhalos exist in a nested hierarchy, where the halos are the primary structures and the subhalos are incorporated within them. To identify these structures, AHF first locates density peaks in an adaptively smoothed density field and identifies all the particles (dark matter, gas, stars, and black holes) that are gravitationally bound to these peaks. This process is repeated on successively larger scales until all the structures in the hierarchy have been found. Once the halos are identified, their centers are found by applying the shrinking
sphere approach (Power et al., 2003) to the distribution of bound particles associated with each of the halos.
The masses of the halos (\(M_{\Delta}\)) are determined by creating a sphere with a radius of \(R_{\Delta}\) around each halo center. This sphere is constructed so that the average density within it, \(\langle\rho_{{\rm m},\Delta}(z)\rangle\), is equal to \(\Delta\) times the critical cosmological density, \(\rho_{\rm crit}(z)=3{\rm H}^{2}(z)/8\pi{\rm G}\) (see, for example, Babul et al., 2002). In this study, we reference \((M_{200},R_{200})\) and \((M_{500},R_{500})\), which correspond to \(\Delta=200\) and \(\Delta=500\), respectively. For our assumed cosmology, \(M_{500}/M_{200}\approx 0.7\) and \(R_{500}/R_{200}\approx 0.68\).
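As a sketch of how such spherical-overdensity masses are obtained from particle data (our own helper, not AHF), one can sort the bound particles by radius and locate the largest radius within which the mean enclosed density still exceeds \(\Delta\,\rho_{\rm crit}(z)\):

```python
import numpy as np

def spherical_overdensity_mass(radii, masses, rho_crit, delta=500):
    """Return (M_delta, R_delta) for particles at distances `radii` from the
    halo centre with masses `masses`, given the critical density rho_crit."""
    order = np.argsort(radii)
    r = np.maximum(np.asarray(radii, dtype=float)[order], 1e-10)  # guard r = 0
    m_enc = np.cumsum(np.asarray(masses, dtype=float)[order])
    mean_density = m_enc / (4.0 / 3.0 * np.pi * r ** 3)
    inside = np.nonzero(mean_density >= delta * rho_crit)[0]
    if inside.size == 0:
        return 0.0, 0.0
    i = inside[-1]
    return m_enc[i], r[i]
```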
In the case of subhaloes, AHF tracks the local density profile from the peak center outward. At some point, the external gravitational field starts to dominate, altering the shape of the density profile. The distance from the peak to where this happens is taken to be the size of the subhalo, and the mass enclosed is recorded as the subhalo's mass.
We also track all the SMBHs in the Romulus25 simulation volume. We use the resulting information to construct merger trees for all the black holes. At each redshift, we then identify black holes that have experienced a merger during the immediately preceding timestep and flag the about-to-merge SMBH pairs as candidate sources of nano-Hertz SGWB. The typical separation of merging SMBH pairs is \(\sim 1\) kpc and their maximum separation is 2.8 kpc. For completeness, we also identify all black hole pairs separated by \(\leq 1.4\) kpc and which are not flagged as merging in the next time-step. We will refer to these as proximate pairs.
We emphasize that the SGWB from flagged SMBBHs with separation scale of \(\sim 1\) kpc cannot contribute to the nano-Hertz frequency band unless they coalesce down to sub-parsec (\(10^{-5}\) pc) scales. This journey of the SMBBHs from the scale of \(\sim 1\) kpc to \(\leq 10^{-5}\) pc is governed not only by GW emission but also by environmental effects such as dynamical friction, stellar loss-cone scattering, and viscous gas drag. These processes are not resolvable in Romulus25 or, for that matter, in any other cosmological simulation. We therefore need to model this coalescence separately.
## 4 Estimation of SGWB in the Nano-Hertz Band
### Modeling SGWB signal from coalescing SMBBHs
In order to calculate the contribution to the SGWB signal from the coalescing SMBBHs, we start with the expression for the characteristic strain of the GW signal \(h_{c}\) at frequency \(f\) for a source emitting at a rest-frame frequency \(f_{r}=(1+z)f\) (Rajagopal & Romani, 1995; Phinney, 2001; Sesana et al., 2008):
\[h_{c}^{2}(f)=\frac{4G}{c^{2}\pi f^{2}}\iiint dz\,dm_{1}\,dm_{2} \,\frac{d^{3}n_{GW}(m_{1},m_{2},z)}{dm_{1}dm_{2}dz} \tag{4}\] \[\times\,\frac{1}{1+z}\frac{dE_{\rm GW}(m_{1},m_{2},z)}{d\ln f_{r}},\]
where the distribution function, \(\frac{d^{3}n_{GW}(m_{1},m_{2},z)}{dm_{1}dm_{2}dz}\), is the number density of SMBBH GW sources with black hole masses in the range \([m_{1},\ m_{1}+dm_{1}]\) and \([m_{2},\ m_{2}+dm_{2}]\) at redshift \([z,\ z+dz]\) and determines the amplitude and spectral shape of the SGWB signal. The second term, \(\frac{dE_{\rm GW}(m_{1},m_{2},z)}{d\ln f_{r}}\), quantifies the amount of GW energy released per logarithmic rest-frame frequency by a binary of source masses \(m_{1}\) and \(m_{2}\) at redshift \(z\). The latter is the product of the GW energy emission rate (\(\frac{dE_{\rm GW}(m_{1},m_{2},z)}{dt_{r}}\)), and the residence time (i.e. the amount of time a source spends at a frequency: \(\frac{dt_{r}}{d\ln f_{r}}\)). Following Kelley et al. (2017, 2018), we write the energy released as
\[\frac{dE_{\rm GW}(m_{1},m_{2},z)}{d\ln f_{r}}= \frac{dE_{\rm GW}(m_{1},m_{2},z)}{d\ln f_{r}}\bigg{|}_{\rm GW} \frac{\tau_{h}}{\tau_{\rm GW}}(f), \tag{5}\] \[= f_{r}\,\frac{dE_{\rm GW}(m_{1},m_{2},z)}{df_{r}}\bigg{|}_{\rm GW }\frac{\tau_{h}}{\tau_{\rm GW}}(f),\]
where
\[\frac{dE_{\rm GW}(m_{1},m_{2},z)}{df_{r}}\bigg{|}_{\rm GW}=\frac{(\pi G)^{2/3}M_{c}^{5/3}}{3(1+z)f_{r}^{1/3}}, \tag{6}\]
for circular orbits emitting signals up to the innermost stable circular orbit (ISCO). Here, \(M_{c}=(m_{1}m_{2})^{3/5}/(m_{1}+m_{2})^{1/5}\) is the binary's chirp mass, and \(f\) is the frequency, which at ISCO is given by \(f_{r,\rm ISCO}=c^{3}/(6^{3/2}\pi G\,M_{\rm tot})\) in terms of the total mass of the binary \(M_{\rm tot}=m_{1}+m_{2}\). In the presence of higher harmonics, this equation modifies to a sum over the higher harmonics (Enoki & Nagashima, 2007).
As for the second term in Eq. (5), the ratio \(\frac{\tau_{h}}{\tau_{\rm GW}}(f)\) captures the residence time of the GW signal at a particular frequency. The numerator (\(\tau_{h}\equiv a/(da/dt_{r})\)) is the binary hardening time expressed in terms of the semi-major axis of the binary \(a\). Initially, this timescale depends on the environmental effects arising due to the interaction between the binaries and their local environment. These effects include (i) dynamical friction, (ii) stellar loss-cone scattering, and (iii) viscous drag. The impact of these environmental effects is among the major sources of uncertainty in the spectral shape of the signal, but typically these environmental effects reduce the residence time of the GW signal at a particular frequency and the ratio will be less than one.
As we have noted, the above environmental effects cannot be directly computed from the Romulus25 simulation. Moreover, from the point of view of EM observations, resolving galaxies on sub-parsec scales at cosmological distances is not possible with currently ongoing and upcoming surveys. However, we can determine the average astrophysical properties of a galaxy -- like gas density, stellar mass, halo mass, and other properties -- on kpc scales from cosmological simulations as well as observations. We therefore model the ratio, \(\frac{\tau_{h}}{\tau_{\rm GW}}\), in terms of the average astrophysical properties of the SMBBH host galaxies:
\[\frac{\tau_{h}}{\tau_{\rm GW}}(f)=\mathcal{E}(f,\dot{M}_{*},M_{*},M_{h},\rho_{ \rm gas},z). \tag{7}\]
In effect, we want to construct a framework that can relate the nano-Hertz GW signal detectable from PTA with the observable quantities of galaxies.
### Modelling the environmental effect
In this subsection, we discuss the model for \(\mathcal{E}(f,\dot{M}_{*},M_{*},M_{h},\rho_{\rm gas},z)\) in greater detail. But first we note that the impact of the environmental effects is greatest when the binaries are farther away from each other and are radiating at lower GW frequencies (Sesana et al., 2008; Volonteri et al., 2003; Kocsis & Sesana, 2011; Sampson et al., 2015; Chen et al., 2017; Kelley et al., 2017, 2018; Volonteri et al., 2020). As the SMBBHs inspiral and their separation decreases, they emit GW signals at increasingly higher frequencies. At frequencies of around 1 yr\({}^{-1}\) (or about a few \(\times 10^{-8}\) Hz), the environmental effects are no longer dominant. The SMBBHs' evolution then proceeds predominantly through GW emission, and the frequency-dependent part of the ratio \(\frac{\tau_{h}}{\tau_{\rm GW}}(f)\) approaches unity. The impact of these effects on the GW strain has also been modelled using parametric forms (Sampson et al., 2015; Chen et al., 2017).

As noted, the environmental effects of interest include (i) dynamical friction, (ii) stellar loss-cone scattering, and (iii) viscous drag. On kiloparsec scales, dynamical friction due to the interaction of the black holes with the dense stellar environment in the central regions of the galaxy is the primary hardening mechanism. These interactions lead to changes in the acceleration of the binaries (Kelley et al., 2017, 2017). On parsec scales, the dominant hardening mechanism is scattering between the black holes and individual stars, and the rate of evolution depends on the loss cone. On these scales, the evolution of the binary's semi-major axis and eccentricity depends on the stellar density profile at parsec scales and also on the velocity dispersion. Once the SMBBH separation shrinks to milliparsec scales (about \(10^{-3}\) pc), viscous drag from the ambient gas can also play a role.
None of the current generation of cosmological simulations have the resolution to follow the above processes below scales of a few hundred parsecs. Moreover, one also needs very high-resolution observations to determine the density profile of stars and gas at these scales. We therefore use a parametric equation to capture the environmental effect \(\mathcal{E}\):
\[\mathcal{E}(f,\dot{M}_{*},M_{*},\rho_{\rm gas},z)=\alpha\left[1+\beta\left( \frac{f}{f_{t}}\right)^{-\kappa}\right]^{-\gamma}, \tag{8}\]
where
\[\alpha= \alpha_{p}\bigg{(}\frac{\log(\rho_{\rm gas}=10^{7}M_{\odot}/{ \rm kpc}^{3})}{\log(\rho_{\rm gas})}\bigg{)}\] \[+\alpha_{M_{*}}\bigg{(}\frac{\log(M_{*}=10^{10}M_{\odot})}{\log(M _{*})}\bigg{)}\] \[+\alpha_{M_{*}}\bigg{(}\frac{\log(\dot{M}_{*}=10^{8}M_{\odot}/{ \rm Gyr})}{\log(\dot{M}_{*})}\bigg{)}, \tag{9}\] \[\beta= \beta_{p}\bigg{(}\frac{\log(\rho_{\rm gas}=10^{7}M_{\odot}/{\rm kpc }^{3})}{\log(\rho_{\rm gas})}\bigg{)}\] \[+\beta_{M_{*}}\bigg{(}\frac{\log(M_{*}=10^{10}M_{\odot})}{\log(M _{*})}\bigg{)}. \tag{10}\]
Here \(\alpha\) is a dimensionless quantity that captures the fraction of the GW sources that can reach the GW emission-dominated regime (down to about \(10^{-5}\) pc) from the kpc scale, depending on the environment close to the source. This term therefore controls the total number of SMBBHs that are present at kpc separations and can successfully merge within the age of the Universe; a value of \(\alpha<1\) is possible even in scenarios with no significant frequency-dependent environmental effects at high frequencies. In other words, \(\alpha\) sets the efficiency with which the SMBBHs identified at kpc separations in the simulation end up in a GW-driven phase. The term \(\beta\) is a dimensionless quantity that captures the frequency-dependent deviation from the GW-emission-only phase due to the various environmental effects. The term \(\kappa\) controls the spectral behavior of the environmental effects (Sampson et al., 2015); for some of the effects, such as stellar scattering, its value is \(10/3\), but the combination of various effects can lead to a different spectral index. Finally, the parameter \(\gamma\) controls the overall tilt of the environmental effects; for a fiducial GW-emission-only scenario, \(\gamma=1\) can be taken as the fiducial value, though there can be deviations due to astrophysical effects. The spectral shape of the signal is thus controlled by three parameters, namely \(\gamma\), \(\kappa\), and \(f_{t}\), where \(f_{t}\) is the transition frequency at which GW emission becomes dominant over the environmental effects. The transition frequency can be expressed in terms of the stellar density \(\rho_{*}\) (in units of M\({}_{\odot}\) pc\({}^{-3}\)), velocity dispersion \(\sigma_{*}\) (in units of km/s), eccentricity \(e\), and chirp mass \(M_{c}\) (in units of solar mass) as
\[f_{t}=f_{0}\bigg{(}\frac{\rho_{*}}{F(e)\sigma_{*}}\bigg{)}^{3/10}M_{c}^{-2/5}, \tag{11}\]
where \(F(e)=(1+(73/24)e^{2}+(37/96)e^{4})/((1-e^{2})^{7/2})\). \(f_{0}\) is a correction factor incorporating any effect that may not be captured by this simplistic approximate formula (such as the mass ratio). For \(\rho_{*}=100M_{\odot}\) pc\({}^{-3}\), \(\sigma_{*}=200\) km/s, \(M_{c}=10^{9}\) M\({}_{\odot}\), and \(f_{0}=1\), the value of \(f_{t}\) is around 0.4 nano-Hertz (Chen et al., 2017).
### SGWB estimation from SMBBHs in Romulus simulation
Having put in place the above elements, we can use Eq. (4), Eq. (5), and Eq. (7) to estimate the SGWB signal from discrete sources in a cosmological simulation as follows
\[h_{c}^{2}(f)=\frac{4G}{c^{2}\pi f^{2}V_{c}}\sum_{i}\frac{1}{(1+z^{i})}\left[\frac{dE_{\rm GW}(m_{1}^{i},m_{2}^{i},z^{i})}{d\ln f_{r}}\bigg{|}_{\rm GW}\frac{\tau_{h}}{\tau_{\rm GW}}(f)\right]_{i}, \tag{12}\]
where the sum runs over all the source SMBBHs (or equivalently, host galaxies in which coalescing SMBHs are present) in a simulation box of comoving volume \(V_{c}=(25{\rm Mpc})^{3}\) that contribute to the GW background.
We use the above relationship and the sources identified in the Romulus25 simulation to model the SGWB. The results are shown in Fig. 1. We show the results for the fiducial case (in salmon) corresponding to SMBHs with chirp mass \(M_{c}\geq 10^{8}\) M\({}_{\odot}\) and with parameter values \(\alpha=0.2\), \(f_{t}=5\) nano-Hertz, \(\beta=1\), \(\kappa=10/3\), and \(\gamma=1\). The value of the parameter \(\alpha\) (the fraction of the SMBBHs present within 1.4 kpc that can coalesce to sub-parsec scales and contribute in the PTA frequency range) is chosen such that the amplitude of the signal matches the observed nano-Hertz signal at frequency f=1 yr\({}^{-1}\) (Xu et al., 2023; Antoniadis et al., 2023; Agazie et al., 2023; Zic et al., 2023). The values of the other parameters, which control the shape of the signal, are taken at fiducial values based on the physical models discussed in the previous subsection (cf. §4.2). We also show the variations to the results for the fiducial case resulting from changes in the parameter values. Specifically, we show the results for \(f_{t}=8\) nano-Hertz (in purple), \(\kappa=7/3\) (in magenta), \(\beta=0.8\) (in brown), \(\gamma=0.5\) (in cyan), and \(\alpha=0.3\) (in indigo). The spectral shape of the signal changes moderately for the changes in the parameters considered here. We will explore the parameter estimation using galaxy catalogs in future work (in preparation). If the properties of the underlying host galaxy can be inferred, then the values of the parameters which control the environmental effects can be measured. We also show the results for SMBHs with chirp mass \(M_{c}<10^{8}\) M\({}_{\odot}\) (in yellow). An important point to note is that the contribution to the signal from BBHs with chirp mass \(M_{c}<10^{8}\) M\({}_{\odot}\) is not significant; they only contribute about 10% of the signal arising from sources with \(M_{c}\geq 10^{8}\) M\({}_{\odot}\).
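For concreteness, a minimal numerical sketch of Eqs. (6), (8), and (12) is given below. It assumes SI units, circular orbits, and a single set of environmental parameters applied to all sources; the helper names (`envelope`, `dE_dlnfr`, `hc_squared`) are ours for illustration and are not taken from the paper.

```python
import numpy as np

G, C = 6.674e-11, 2.998e8                 # SI units
MSUN, MPC, YR = 1.989e30, 3.086e22, 3.156e7

def envelope(f, alpha=0.2, beta=1.0, kappa=10.0 / 3.0, gamma=1.0, f_t=5e-9):
    """Parametric environmental suppression tau_h / tau_GW of Eq. (8)."""
    return alpha * (1.0 + beta * (f / f_t) ** (-kappa)) ** (-gamma)

def dE_dlnfr(m1_msun, m2_msun, z, f):
    """GW-only energy per logarithmic rest-frame frequency, Eqs. (5)-(6),
    for circular orbits, cut off at the ISCO frequency."""
    m1, m2 = m1_msun * MSUN, m2_msun * MSUN
    Mc = (m1 * m2) ** 0.6 / (m1 + m2) ** 0.2
    fr = (1.0 + z) * f
    fr_isco = C ** 3 / (6.0 ** 1.5 * np.pi * G * (m1 + m2))
    if fr >= fr_isco:
        return 0.0
    dE_dfr = (np.pi * G) ** (2.0 / 3.0) * Mc ** (5.0 / 3.0) / (3.0 * (1.0 + z) * fr ** (1.0 / 3.0))
    return fr * dE_dfr

def hc_squared(f, sources, volume_cmpc3=25.0 ** 3, **env_kwargs):
    """Characteristic strain squared of Eq. (12). `sources` is an iterable of
    (m1 [Msun], m2 [Msun], z) tuples in a comoving box of volume_cmpc3."""
    Vc = volume_cmpc3 * MPC ** 3
    total = sum(dE_dlnfr(m1, m2, z, f) / (1.0 + z) for (m1, m2, z) in sources)
    return 4.0 * G / (C ** 2 * np.pi * f ** 2 * Vc) * envelope(f, **env_kwargs) * total

# Example: one equal-mass 1e9 Msun binary at z = 0.5, evaluated at f = 1/yr.
print(np.sqrt(hc_squared(1.0 / YR, [(1e9, 1e9, 0.5)])))
```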
### Connecting SGWB signal with galaxy properties
In the case of the Romulus25 simulation, we have firsthand knowledge of \(\frac{d^{3}n_{\rm GW}(m_{1},m_{2},z)}{dm_{1}dm_{2}dz}\), the number density of GW sources. However, we
aim to find a map between the GW sources and the EM observations - specifically, the galaxies in a complete galaxy catalog.
To connect the EM and GW observational sectors, we assert that the total number of GW sources should equal the total number of GW source host galaxies
\[\int\,dz\,\frac{dV}{dz}\iint\,dm_{1}\,dm_{2}\,\frac{d^{3}n_{\rm GW}(m_{1},m_{2},z)}{dm_{1}dm_{2}dz}=\int\,dz\,\frac{dV}{dz}\iiiint\,d\dot{M}_{*}\,dM_{*}\,dM_{h}\,d\rho_{\rm gas}\,\frac{d^{4}n_{\rm EM}(\dot{M}_{*},M_{*},M_{h},\rho_{\rm gas},z)}{d\dot{M}_{*}dM_{*}dM_{h}d\rho_{\rm gas}}, \tag{13}\]

where \(\frac{d^{4}n_{\rm EM}}{d\dot{M}_{*}dM_{*}dM_{h}d\rho_{\rm gas}}\) is the number density of host galaxies per unit of their astrophysical properties. Combining this with Eq. (4), the SGWB can then be expressed in terms of the galaxy distribution function and an occupation fraction \(\eta\) as

\[h_{c}^{2}(f)=\frac{4G}{c^{2}\pi f^{2}}\int\,dz\iint\,dm_{1}\,dm_{2}\iiiint\,d\dot{M}_{*}\,dM_{*}\,dM_{h}\,d\rho_{\rm gas}\,\eta(z,m_{1},m_{2},\dot{M}_{*},M_{*},M_{h},\rho_{\rm gas})\,\frac{d^{4}n_{\rm gal}(\dot{M}_{*},M_{*},M_{h},\rho_{\rm gas},z)}{d\dot{M}_{*}dM_{*}dM_{h}d\rho_{\rm gas}}\,\frac{1}{1+z}\,\frac{dE_{\rm GW}(m_{1},m_{2},z)}{d\ln f_{r}}\,\Bigg{|}_{\rm GW}\frac{\tau_{h}}{\tau_{\rm GW}}(f). \tag{14}\]
The term \(\alpha\), coming from \(\frac{\tau_{h}}{\tau_{\rm GW}}(f)\) (cf. Eq. 8), and the occupation fraction \(\eta\) appear in the above as a multiplicative factor (\(\alpha\times\eta\)), which accounts for the overall occupation of the SMBBHs contributing to the SGWB in terms of both the black hole masses and the astrophysical properties of the galaxies. One can compare this with the observations and make an inference of this quantity.
Finally, we can also write the GW source distribution function, \(\frac{d^{3}n_{\rm GW}(m_{1},m_{2},z)}{dm_{1}dm_{2}dz}\), in terms of the halo mass function
\[\frac{d^{3}n_{\rm GW}(m_{1},m_{2},z)}{dm_{1}dm_{2}dz}=\int\,\frac{d^{4}n_{\rm GW }(m_{1},m_{2},z)}{dm_{1}dm_{2}dzdn_{\rm halo}(M_{h},z)}\,\frac{dn_{\rm halo}(M_ {h},z)}{dM_{h}}dM_{h}, \tag{15}\]
where \(\frac{d^{4}n_{\rm GW}(m_{1},m_{2},z)}{dm_{1}dm_{2}dz\,dn_{\rm halo}(M_{h},z)}\) is the SMBBH occupation number density in a halo of mass \(\rm M_{h}\) and \(\frac{dn_{\rm halo}(M_{h},z)}{dM_{h}}\) is the halo mass function, i.e. the number density of halos in the halo mass bin \(\rm M_{h}\). (We remind the reader that in the present study, we identify \(\rm M_{h}\) with \(\rm M_{500}\).)

Figure 1: We show the SGWB strain as a function of frequency for changes about the fiducial parameter values \(\alpha=0.2\), \(f_{t}=5\) nano-Hertz, \(\beta=1\), \(\kappa=10/3\), and \(\gamma=1\), and also for SMBBHs with chirp mass \(\rm M_{c}<10^{8}M_{\odot}\) and \(\rm M_{c}\geq 10^{8}M_{\odot}\).
From a simulation, we can estimate the population of SMBBHs which can contribute to the SGWB in the PTA frequency range, and also identify the mass and redshift of the host halo. This gives us the connection between the SMBHs and the halo mass \(M_{h}\) written in Eq. (15). Similarly, we can identify the astrophysical properties of the host galaxies of the coalescing binaries from simulations. These would also be accessible from EM observations. This gives us an avenue to connect the astrophysical properties of the host galaxies with the SMBH properties.
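As a schematic illustration of how the occupation term in Eq. (15) can be estimated from a simulation (the binning helper below is our own construction, not from the paper), one can compare the histogram of host-halo masses of the GW sources with that of all halos in the box:

```python
import numpy as np

def occupation_per_halo(source_halo_masses, all_halo_masses, bins):
    """Mean number of nano-Hertz SMBBH sources per halo, in bins of M_h."""
    n_src, _ = np.histogram(source_halo_masses, bins=bins)
    n_halo, _ = np.histogram(all_halo_masses, bins=bins)
    occupation = np.zeros_like(n_src, dtype=float)
    nonzero = n_halo > 0
    occupation[nonzero] = n_src[nonzero] / n_halo[nonzero]
    return occupation

# Example: log-spaced halo-mass bins (in units of Msun)
bins = np.logspace(11, 14, 7)
```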
## 5 Properties of the host galaxies of the nano-Hertz GW sources
The dynamics of the SMBBHs and their contribution to the nano-Hertz frequency band depend on the local astrophysical properties, as discussed in the previous section. However, to identify the key astrophysical properties of the host galaxies that can be recovered from observations, we also need to explore the large-scale properties of the host galaxy. In this section, we explore both of these aspects. The properties we consider include gas density, stellar mass, and star formation rate. While discussing the host galaxies, we also comment on the properties of the halos in which these galaxies reside.
### Local galactic properties
In this section, we focus on the characteristics of the gas and stars in the vicinity of the SMBBHs, and specifically, within a 1 kpc radius around the more massive black hole in SMBBHs, as the most massive BH drives the chirp mass and hence the strength of the signal. We also restrict ourselves to SMBBHs with a chirp mass \(M_{c}\geq 10^{8}\)\(\rm M_{\odot}\)2 and \(z\leq 2\). These SMBBHs account for \(\sim 90\%\) of the total SGWB power spectrum (see Fig. 1). Our analysis of the Romulus25 simulation reveals six nano-Hertz GW sources.
Footnote 2: \(M_{c}\geq 10^{8}\)\(\rm M_{\odot}\) implies that the most massive SMBH in the pair is _at least_\(1.15\times 10^{8}\)\(\rm M_{\odot}\) if both SMBHs are equal mass, or more realistically \(>2.5\times 10^{8}\)\(\rm M_{\odot}\) since the BH mass ratios are typically \(<0.2\).
Fig. 2 shows, from top to bottom, the gas density (\(\rho_{\rm gas}\)), star formation rate (SFR), and stellar mass (\(\rm M_{*}\)) vs redshift around the nano-Hertz GW SMBBH sources. No significant evolution with redshift is found for the local SFR and the local stellar mass. However, the top panel shows an increasing trend of \(\rho_{\rm gas}\) with redshift. Nonetheless, examining the properties around black holes solely in nano-Hertz GW sources doesn't provide a comprehensive view. Therefore, we compare these to the same properties surrounding the more massive of the two black holes in our full set of proximate and merging (cf. §3.4) pairs of black holes in the Romulus25 simulation. The resulting distributions for gas density, SFR, and stellar mass are shown as salmon histograms in Fig. 3. The corresponding quantities for nano-Hertz GW sources are denoted by a vertical line. We find that the local \(\rho_{\rm gas}\) and SFR around our nano-Hertz GW sources are typical. Specifically, the increasing gas density with redshift around the nano-Hertz GW sources simply reflects the fact that all systems have more gas at earlier epochs. However, the stellar mass in the vicinity of our nano-Hertz GW sources falls in the high mass tail of the corresponding distribution, as illustrated in Fig. 3. This indicates that nano-Hertz GW sources are more likely to be found in environments with high local stellar mass or, equivalently, stellar density. It is important to note, though, that this conclusion is based on only six events detectable in a simulation box of (25 cMpc)\({}^{3}\). A study based on a larger simulation box will help in better understanding the statistical properties.
### Global galactic and host halo properties
In this section, we consider the global galactic characteristics of the hosts of nano-Hertz GW sources, delving into the observable properties of these host galaxies. We also examine the properties of the halos of these host galaxies.
Figure 2: The local astrophysical properties of the galaxies: gas density (top), SFR (middle), and stellar mass (bottom) within 1 kpc of the more massive SMBH in merging SMBBHs contributing most of the SGWB signal (i.e. with chirp masses \(M_{c}\geq 10^{8}\) M\({}_{\odot}\)), shown as a function of the host redshift.

Figure 4 illustrates the galaxies' gas density (\(\rho_{\rm gas}\)), stellar mass (\(\rm M_{*}\)), star formation rate (SFR), and specific star formation rate (sSFR) as functions of redshift. Each of these quantities is calculated inside a 25 kpc sphere around the galaxy center. The first panel shows the galaxy gas density; no significant evolution with redshift is observed. On the other hand, the second and third panels clearly show a decrease in the stellar mass and an increase in SFR with redshift, respectively. Not surprisingly, this results in a rising sSFR with redshift, as shown in the lower right panel. Additionally, the stellar mass of the host galaxies suggests that at the redshifts of interest, these galaxies are among the most massive systems in the Romulus volume. Based on the analyses of Jung et al. (2022) and Saeedzadeh et al. (2023b), we surmise that these are the massive central galaxies in group-scale halos. We discuss this further later in this section. The specific star formation rate (sSFR) is \(\sim 10^{-3}\), which is consistent with the observed SFRs.
Figure 3: The vertical line in each panel shows the gas density (first and second row), star formation rate (third and fourth row) stellar mass (fifth and sixth row) within 1 kpc of the more massive SMBH in the merging SMBBH pairs identified as nano-Herz GW source (i.e. with chirp masses \(M_{c}\geq 10^{8}\) M\({}_{\odot}\)) at the host galaxy’s redshift. The histogram shows the corresponding quantities around the more massive black holes in all SMBH pairs (merging and proximate) in the Romulus25 simulation at the same redshift.
The sSFR is frequently used to classify galaxies as star forming or quenched. We show the evolution of the sSFR with redshift for the hosts of the SMBBHs with \(M_{c}\geq 10^{8}\) M\({}_{\odot}\) in the last panel of Fig. 4.
In this paper, we adopt the criteria from Genel et al. (2018), where they label a galaxy as "main sequence" if its sSFR is within \(\pm 0.5\) dex of the main sequence ridge and as "quenched" if its sSFR is at least 1 dex below the ridge. Genel et al. (2018) give a relationship between sSFR and \(M_{*}\) at a few redshifts. We linearly interpolate across these to determine the main sequence ridge sSFR at the redshifts and stellar masses of our host galaxies. In Fig. 5, we show \(\Delta\)log(sSFR) \(\equiv\) log(sSFR\({}_{\rm galaxy}\)) - log(sSFR\({}_{\rm ridge}\)) for our host galaxies. The shaded region shows the main sequence band and the dashed line corresponds to the threshold below which galaxies are classified as quenched. Five of our six hosts either lie in the quenched territory or are on the border. One galaxy, however, the \(z=1.061\) host, falls within the star-forming main sequence band.
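A minimal sketch of this classification step is given below. The ridge values are placeholders rather than the actual Genel et al. (2018) fit, and for brevity the interpolation is done in redshift only, whereas the procedure described above also uses the stellar mass.

```python
import numpy as np

# Placeholder main-sequence ridge log(sSFR) tabulated at a few redshifts;
# the real values come from the Genel et al. (2018) relation and also depend on M_*.
ridge_z = np.array([0.0, 0.5, 1.0, 2.0])
ridge_log_ssfr = np.array([-10.2, -9.8, -9.5, -9.0])   # hypothetical numbers

def classify(log_ssfr_galaxy, z):
    """Return 'main sequence', 'quenched', or 'intermediate' plus Delta log(sSFR)."""
    log_ssfr_ridge = np.interp(z, ridge_z, ridge_log_ssfr)  # linear interpolation of the ridge
    delta = log_ssfr_galaxy - log_ssfr_ridge
    if abs(delta) <= 0.5:        # within +/- 0.5 dex of the ridge
        label = "main sequence"
    elif delta <= -1.0:          # at least 1 dex below the ridge
        label = "quenched"
    else:
        label = "intermediate"
    return label, delta

print(classify(log_ssfr_galaxy=-11.2, z=0.3))
```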
Next, we examine the rest-frame U-V colors of the host galaxies. For completeness, we show in Fig. 6 (U-V)\({}_{\rm rest-frame}\) versus rest-frame V-band luminosity (L\({}_{\rm V}\)), rest-frame absolute magnitude (M\({}_{\rm V}\)), and stellar mass (M\({}_{*}\)). In these plots, large circles represent the host galaxies of nano-Hertz GW sources with \(M_{c}\geq 10^{8}\)M\({}_{\odot}\), while small circles delineate hosts of nano-Hertz GW sources with chirp masses ranging between \(10^{7}M_{\odot}<M_{c}<10^{8}M_{\odot}\). Generally, the latter tend to be bluer and less massive than the hosts of our \(M_{c}\geq 10^{8}\)M\({}_{\odot}\) sources. The four \(z<1\) host galaxies are consistent with the quenched, early-type galaxies in the Coma and Virgo clusters (Renzini, 2006) as well as the SDSS and the CANDELS Multi-Cycle Treasury Survey (Bell et al., 2012), while the two \(z>1\) hosts, one of which is star-forming and the other quenched according to the Genel et al. (2018) criterion, are both just slightly bluer than the population of quenched early-types at comparable redshifts in the CANDELS Multi-Cycle Treasury Survey (Bell et al., 2012). The comparison to galaxies at the same redshifts is important since the stellar population in higher redshift galaxies is inherently younger and therefore bluer.
To gain better insight into why the two higher-redshift hosts exhibit bluer colors, and to further clarify the nature of the host galaxies generally, we examine the images of the host galaxies. These are shown in Fig. 7. The top row shows the edge-on view of the galaxies and the bottom row shows the face-on view. The stars are colored
Figure 4: The global galactic properties of the galaxies hosting the merging SMBBH pairs identified as nano-Hertz GW sources (i.e. with chirp masses \(M_{c}\geq 10^{8}\) M\({}_{\odot}\)). The panels show gas density (first panel), stellar mass (second panel), SFR (third panel), and sSFR (fourth panel) as a function of redshift. All quantities are calculated inside a sphere with a 25 kpc radius around the galaxy center.
Figure 5: Categorising host galaxies of SMBBHs with \(M_{c}>10^{8}M_{\odot}\) as star-forming or quenched following the definition of Genel et al. (2018). Here we show \(\Delta\)log(sSFR) \(=\) log(sSFR\({}_{\rm galaxy}\)) - log(sSFR\({}_{\rm ridge}\)) for these host galaxies. The dashed line, which marks 1 dex below the ridge for each redshift, serves as the quenched threshold. The shaded region indicates the “main sequence”. See text for detailed definitions.
based on their magnitudes, determined by their age and metallicity. The magnitudes from the 'i' band influence the red component of the image, 'v' the green, and 'u' the blue. These channels are then combined to produce a multiband composite image of the galaxy\({}^{3}\). Visually, _all_ of the galaxies appear to be early types, and the majority of the stars in these galaxies are old in the sense that 50% had formed within the first 2.5-2.6 Gyr after the Big Bang. This is illustrated by the histograms in Fig. 8, which show the cosmic age (i.e. the age of the Universe) when the stars in the galaxies formed. Turning back to Fig. 7, we see that nearly all of the galaxies appear to have experienced recent merger(s). They all manifest features like stellar streams and shells (e.g. Fardal et al., 2007, and references therein). Some of the mergers are gas rich, as in the case of the host galaxy at \(z=1.06\) where one can clearly see an extended stream with ongoing star formation against a background of older stars. A less prominent stellar stream can also be seen in the host galaxy at \(z=1.671\). A detailed analysis of the morphology of the host galaxies of merging binary black holes by Bardati et al. (2023) shows that the dominant morphological signature of SMBH mergers is the presence of a classical bulge, which is also a sign of major mergers in these host galaxies. The panels in Fig. 8 provide additional information, including the fraction of stellar mass at the time of observation that has an age \(\leq 1\) Gyr. The latter varies from \(<1\)% to as much as 12% in the highest redshift system. This small fraction of very young stars is the likely explanation for the two \(z>1\) galaxies' bluer colors. As shown by Pipino et al. (2009), even a small fraction (\(\sim 1\)%) of young stars (\(\sim 0.1\) Gyr) can have a dramatic impact on UV-optical colors.
Footnote 3: The galaxy images are generated using the pynbody package (Pontzen et al., 2013).
For the SMBBHs, we show in Fig. 9 the time taken by each of the two SMBHs in a binary to grow the final 50% of the mass it possesses at the time of the merger, for sources with \(M_{c}\geq 10^{7}\) M\({}_{\odot}\). The distribution of this delay time for sources with M\({}_{c}\geq 10^{7}\) M\({}_{\odot}\) and M\({}_{c}\geq 10^{8}\) M\({}_{\odot}\) is shown in blue and orange, respectively, in Figure 9. The shortest delay time observed is 1.4 billion years, roughly 10% of the age of the Universe, for sources with M\({}_{c}\geq 10^{8}\) M\({}_{\odot}\), which contribute significantly to the SGWB. This indicates that mergers of the PTA sources are more likely to occur at low redshift than at high redshift, and that the corresponding host galaxies will tend to be older.
In the left panel of Fig. 10, we show the correlation between the chirp mass of SMBBHs and the stellar mass of their host galaxy. The right
Figure 6: The rest-frame U-V vs rest-frame V-band luminosity (\(L_{V}\)) (first panel), vs rest-frame absolute magnitude (\(M_{V}\)) (second panel), and vs stellar mass \(M_{*}\) (last panel) of the host galaxies of the SMBBHs with chirp mass \(M_{c}\geq 10^{8}\) M\({}_{\odot}\) (represented by big circles) and \(10^{7}\) M\({}_{\odot}\leq M_{c}\leq 10^{8}\) M\({}_{\odot}\) (represented by small circles). The data points are color-coded as a function of redshift.
Figure 7: Multi-band composite images of the host galaxies of SMBBHs with chirp mass \(M_{c}\geq 10^{8}\) M\({}_{\odot}\), shown with edge-on (top) and face-on (bottom) views at the redshifts at which they are detected.
panel shows the mass of the more massive black hole in the SMBBHs vs. the host galaxies' stellar mass. Sources with black hole mass ratio \(q<0.1\)\({}^{4}\) are shown by stars and ones with \(q>0.1\) are shown by circles. Large symbols correspond to SMBBHs with chirp mass \(M_{c}\geq 10^{8}\) M\({}_{\odot}\) and small symbols denote those with \(10^{7}\) M\({}_{\odot}\leq M_{c}\leq 10^{8}\) M\({}_{\odot}\). The plot indicates that sources with the highest chirp mass are primarily present at low redshift and are hosted in galaxies with stellar mass greater than \(10^{11}\) M\({}_{\odot}\). Furthermore, the heaviest black holes are found in pairs with chirp masses \(M_{c}\geq 10^{8}\) M\({}_{\odot}\), which are hosted by massive galaxies. These observations support the hypothesis we presented at the beginning of this section: that nano-Hertz GW sources predominantly inhabit massive, group-central galaxies. We will explore this further below.
Footnote 4: The mass ratio is defined as \(q\equiv m_{1}/m_{2}\) with \(m_{2}>m_{1}\).
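For reference (standard GW convention, restated here rather than taken from the text above), the chirp mass of a binary with component masses \(m_{1}\) and \(m_{2}\) is

\[M_{c}=\frac{(m_{1}m_{2})^{3/5}}{(m_{1}+m_{2})^{1/5}},\]

so that for an equal-mass pair each component has mass \(2^{1/5}M_{c}\simeq 1.15\,M_{c}\), and more unequal mass ratios require an even more massive primary at fixed \(M_{c}\).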
In Fig. 11, we compare the global properties (i.e. global stellar mass and sSFR) and the environment (i.e. the halo mass and location therein) of the host galaxies of nano-Hertz GW sources with \(M_{c}>10^{8}\)M\({}_{\odot}\) (vertical lines) against the properties of all galaxies in the simulations within halos with M\({}_{200}>4.5\times 10^{9}\) M\({}_{\odot}\)\({}^{5}\) (histograms). The plot supports the discussion we have presented above in the
Figure 8: The age of the Universe when stars in the host galaxies of SMBBHs with a chirp mass \(M_{c}\geq 10^{8}\) M\({}_{\odot}\) formed. The dashed line represents the age of the Universe at the time the SMBBHs are detected. The values in the top right box, from top to bottom, indicate the stellar mass within 25 kpc from the galaxy center, the median age of the stars, and the fraction of the stellar mass observed that is aged \(\leq 1\) Gyr.
Figure 9: The amount of time taken by each SMBH to grow the last 50% of the mass it possesses at the time of the merger is shown for sources with chirp mass \(M_{c}\geq 10^{7}\) M\({}_{\odot}\) (in blue) and \(M_{c}\geq 10^{8}\) M\({}_{\odot}\) (in orange). The distribution indicates that the SMBHs in pairs with chirp mass \(M_{c}\geq 10^{8}\) M\({}_{\odot}\) need at least 10% of the age of the Universe to grow, indicating that these objects are likely to be hosted in old galaxies.
context of the colors and quenched/star-forming status of the host galaxies, that the sSFR of these galaxies places them in the low sSFR tail of the sSFR distribution of the galaxies in the Romulus25 simulation volume. From Fig. 11 we deduce that our host galaxies are among the most massive galaxies in the Romulus25 simulation volume. The fact that the hosts of the nano-Hertz GW sources have high stellar mass and low sSFR makes them a unique class of objects.
This is further confirmed by comparing the host halo M\({}_{500}\) to the M\({}_{500}\) of all\({}^{6}\) halos in the Romulus simulation, shown as the histograms: the host halo masses are in the high-mass tail of the distribution (see the last two panels in Fig. 11). The halos in which the host galaxies reside are group-scale systems and, based on the results of Jung et al. (2022) and Saeedzadeh et al. (2023b), we expected - and have subsequently confirmed - that these galaxies are massive central group galaxies. Collectively, these findings strongly suggest that the nano-Hertz GW sources are hosted by massive early-type galaxies at the centers of groups and clusters. _However, we assert that the typical hosts of the nano-Hertz GW sources will be group-central galaxies_. For one, there are many more groups than clusters. Moreover, the lower velocity dispersion of group satellites makes dynamical friction in group halos more efficient, and consequently, group environments are much more conducive to mergers, especially between the satellite and the central galaxies (O'Sullivan et al., 2017; Oppenheimer et al., 2021, and references therein).
Footnote 6: All halos with M\({}_{200}>4.5\times 10^{9}\) M\({}_{\odot}\).
For completeness, in Fig. 12, we present the mass of the more massive black hole in nano-Hertz GW sources and the host halo mass of these sources in the top and bottom panels, respectively. As expected, these plots indicate that, as redshift increases, the halo mass decreases and the black hole mass in the pair also decreases.
In this section, we have elucidated the unique astrophysical properties of the host galaxies of the SMBBHs which contribute to the SGWB signal in comparison to all galaxies in the simulation. Our findings highlight that GW source hosts (with chirp mass \(M_{c}\geq 10^{8}\) M\({}_{\odot}\)) predominantly reside in galaxies characterized by lower star formation, higher stellar mass, and higher halo masses compared to most counterparts at a given redshift. Specifically, these hosts are located within group-scale halo systems, identifying as massive central group galaxies. These host galaxies are early-type galaxies, displaying a distinct trend in the color-magnitude diagram across redshifts. These astrophysical properties inferred theoretically about the SMBBHs make it possible to correlate electromagnetic observations of the galaxies with the GW sources. Exploring such connections, coupled with comparisons to theoretical models, offers insights into the interplay between galaxy formation and black hole formation.
## 6 Possible techniques to connect observations with theoretical models
In the last two sections, we have discussed a scheme to connect the global astrophysical properties of the galaxies with the spectrum of the SGWB signal in the PTA band and have applied it to the Romulus simulation to understand the underlying theoretical correlation. The next interesting step forward is to connect this with the observations available from currently ongoing/upcoming surveys (see Burke-Spolaor et al. (2019) for a review). The GW signal in the nano-Hertz range can be observed with PTAs as (i) an SGWB and (ii) GW signals from individual events. Both kinds of observation bring complementary information.
_SGWB:_ PTA observations provide a measurement of the spectrum of the SGWB signal. However, the properties of the host galaxies of the SMBBHs that contribute to the signal are still unclear. As we have shown in the previous section, the simulations indicate that the binaries are likely to form in galaxies with high stellar mass, high halo mass, and low SFR, mostly early-type galaxies that show signs of mergers in the not-too-distant past. We also showed that the host galaxies are central group galaxies. The hosts of the GW sources also show a trend in the color-magnitude diagram as a function of redshift.
Figure 10: Left panel: The chirp mass of SMBBHs plotted against the stellar mass of their host galaxy. Right panel: Mass of the more massive black hole in SMBBHs versus the stellar mass of their host galaxy. SMBBHs with a mass ratio \(q<0.1\) are represented by stars, while those with \(q>0.1\) are represented by circles. Large symbols correspond to SMBBHs with chirp mass \(M_{c}\geq 10^{8}\) M\({}_{\odot}\) and small symbols denote those with \(10^{7}\) M\({}_{\odot}\leq M_{c}\leq 10^{8}\) M\({}_{\odot}\). The data points are colored according to redshift.
Figure 11: The vertical line in each panel shows properties of the host galaxies and host halos of the merging SMBBH pairs identified as nano-Hertz GW sources (i.e. with chirp masses \(M_{c}\geq 10^{8}\) M\({}_{\odot}\)) at the corresponding redshift. The corresponding distribution for all galaxies or halos with M\({}_{200}>4.5\times 10^{9}\) M\({}_{\odot}\) in the Romulus25 simulation at the same redshift is shown as a background histogram. **First and second row**: display the stellar mass. **Third and fourth row:** present the sSFR. **Fifth and sixth row:** show M\({}_{500}\) of the host halos. All properties are measured at the host galaxy’s redshift. The stellar mass and sSFR are calculated within a 25 kpc sphere around the galaxy center.
Based on this understanding, we can classify galaxies from electromagnetic observations according to their color, stellar mass, halo mass, SFR, and galaxy type, and explore the spatial cross-correlation of the galaxy distribution with the anisotropic SGWB signal (Mingarelli et al., 2013; Hotini et al., 2019; Sato-Polito & Kamionkowski, 2023) and the cross-correlation between the two quantities (Mukherjee & Silk, 2020; Yang et al., 2020; Mukherjee & Silk, 2021; Yang et al., 2023). A detailed presentation of this formalism will follow in a companion paper. The cross-correlation of the SGWB signal with galaxies of different types will be maximal for the galaxy types that host the GW sources. The exploration of the cross-correlation signal will give us an understanding of the population of GW sources contributing to the background, and we can estimate the occupation number of SMBHs. This will be useful for interpreting the SGWB measurement, expressed in terms of the astrophysical properties of galaxies as in Eq. (14), based on observations. In future work, we will explore this aspect using the measurement of the SGWB signal and galaxies detected in optical and infrared surveys.
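A minimal sketch of such a cross-correlation measurement is given below, assuming an anisotropic SGWB map and a galaxy overdensity map are already available on a common HEALPix grid; both maps here are synthetic placeholders, not actual data products.

```python
import numpy as np
import healpy as hp

nside = 64
npix = hp.nside2npix(nside)

# Placeholder maps: in practice these would be the reconstructed SGWB
# intensity map and the overdensity map of a chosen galaxy subsample.
rng = np.random.default_rng(0)
sgwb_map = rng.normal(size=npix)
galaxy_overdensity = rng.normal(size=npix)

# Cross angular power spectrum C_ell between the two maps
cl_cross = hp.anafast(sgwb_map, galaxy_overdensity, lmax=3 * nside - 1)
ell = np.arange(cl_cross.size)
print(ell[:5], cl_cross[:5])
```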
_Signal from individual events:_ The measurement of the nano-Hertz GW signal from individual sources is likely to be possible with future arrays of radio antennae such as the Square Kilometer Array (SKA) (Ellis, 2013; Burke-Spolaor, 2013; Kharb et al., 2017). With such observations of individual GW signals, we can relate the frequency dependence of the GW signal to the astrophysical properties of the galaxies, and constrain the occupation number and the signature of environmental effects on the GW strain by directly comparing with the properties of the host galaxy such as gas density, stellar mass, halo mass, SFR, galaxy morphology, and color. Furthermore, an interesting avenue will be to perform a dedicated study of the hosts of the GW sources with high-resolution spectroscopic surveys to better understand their astrophysical properties.
## 7 Conclusion and future outlook
In this work, we explore the astrophysical properties of the host galaxies of the SMBBHs which can produce the nano-Hertz SGWB, using the Romulus cosmological simulation. Romulus is capable of modeling the astrophysical properties of galaxies, and its unique approach to seeding, accretion, and particularly the dynamics of SMBHs makes it especially well-suited for investigating SMBH/SMBBH-galaxy connections. Using this simulation, we have calculated the SGWB signal from the SMBBHs by modeling the environmental effects around the SMBHs. We studied the evolution with redshift of the astrophysical properties of the host galaxies, such as gas density, SFR, stellar mass, and halo mass. Our comparison of the host galaxies of nano-Hertz GW sources with other simulated galaxies reveals that these host galaxies possess a lower SFR, greater stellar mass, and larger halo masses compared to most counterparts at a given redshift. We demonstrate that these hosts are situated within group-scale halo systems and are, in fact, central group galaxies. These host galaxies are early-type galaxies, characterized predominantly by older stellar populations. They exhibit a distinct trend in the color-magnitude diagram across redshifts, which could be of particular interest to compare with observations.
The theoretical connection between the host-galaxy properties of the GW sources and the black hole masses indicates which kinds of galaxies, and which evolutionary histories, are linked with black hole mergers. The connection shown in this work will be a guideline for exploring these links observationally, by combining GW observations in the nano-Hertz band with optical and infrared galaxy observations, through the spatial cross-correlation between the anisotropic SGWB and galaxies as well as targeted searches of individual galaxies for nano-Hertz GW events in the SKA era.
A multi-messenger approach exploring the connection between the astrophysical properties of the host galaxies and the strain of the GW signal from the coalescing SMBHs will make it possible to establish from observations how the evolution of SMBBHs depends on the astrophysical properties of their galaxies. The occupation number of SMBHs in galaxies of different types will make it possible to test theoretical models using observations. In the future, with data from the ongoing International Pulsar Timing Array and the upcoming Square Kilometer Array (SKA) (Janssen et al., 2015), we will be able to make high-precision measurements of the nano-Hertz GW signal. In synergy with galaxy surveys up to high redshifts such as the Dark Energy Spectroscopic Instrument (DESI Collaboration et al., 2016), Euclid (Laureijs et al., 2011), the Vera Rubin Observatory (LSST Dark Energy Science Collaboration, 2012), and the Roman Telescope (Akeson et al., 2019), we will make joint estimates from GW and galaxy data to address the open question of the formation of SMBHs and its connection with galaxy evolution.
Acknowledgement. The work of SM is a part of the \(\langle\)data\(|\)theory\(\rangle\) Universe-Lab, which is supported by the TIFR and the Department of Atomic Energy, Government of India. VS and AB acknowledge support from the Natural Sciences and Engineering Research Council of Canada (NSERC) through its Discovery Grant program. AB acknowledges support from the Infosys Foundation via an endowed Infosys Visiting Chair Professorship at the Indian Institute of Science. MT was supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-2001810. AB, TQ, and MT were partially supported by NSF award AST-1514868.
Figure 12: Top panel: Mass of the most massive black hole in SMBBH pairs identified as nano-Hertz GW sources as a function of redshift. Bottom panel: The M\({}_{500}\) of host halos of nano-Hertz GW sources vs. redshift.
The Romulus simulation suite is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (via awards OCI-0725070, ACI-1238993, and OAC-1613674) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. Resources supporting this work were also provided by the (a) NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center; and (b) Extreme Science and Engineering Discovery Environment (XSEDE), supported by National Science Foundation grant number ACI-1548562. The analysis reported in this paper was enabled in part by WestGrid and Digital Research Alliance of Canada (alliancecan.ca) and on the cluster of \(\langle\)data\(|\)theory\(\rangle\) Universe-Lab supported by DAE. Our analysis was performed using the Python programming language (Python Software Foundation, [https://www.python.org](https://www.python.org)). The following packages were used throughout the analysis: numpy (Harris et al., 2020), matplotlib (Hunter, 2007), Pynbody (Pontzen et al., 2013), SciPy (Virtanen et al., 2020), and TANGOS (Pontzen and Tremmel, 2018).
Finally, VS and AB acknowledge the lək̓ʷəŋən peoples on whose traditional territory the University of Victoria stands, and the Songhees, Esquimalt and WSÁNEĆ peoples whose historical relationships with the land continue to this day.
## Data Availability
The data directly related to this article will be shared on reasonable request to the corresponding author. Galaxy database and particle data for Romulus is available upon request from Michael Tremmel.
|
2309.06876 | CASSISjuice: open-source pipeline and offline complete atlas of
Spitzer/IRS staring observations | Mid-infrared spectroscopy provides many important diagnostics on gas and dust
features in a wide variety of astrophysical objects. The Spitzer Infrared
Spectrograph observed more than 20000 targets with wavelengths as low as 5.2um
and as long as 38um, thereby complementing JWST/MIRI data for long wavelength
diagnostics and providing overall invaluable diagnostics together with JWST or
in view of future IR facilities. In order to maximize the science output of
Spitzer/IRS, the CASSIS atlas has provided reduced IRS spectra since 2011,
extracting and selecting the best spectrum from various methods.
We now present CASSISjuice, an offline version of the pipeline and atlas,
adding several hundred sources that had never cleared the pipeline in order to
make it complete for the first time. We updated the low- and high-resolution
pipelines in order to be able to process every IRS staring mode observation
(i.e., all observations but maps), and we also upgraded the high-resolution
pipeline to version 2. The new pipeline also associates the pointings within
"cluster" observations resulting in a single spectrum (possibly low- and
high-resolution) per position and therefore overall a single CASSISjuice ID per
targeted position.
The initial repositories are hosted at Zenodo, providing the open-source
pipeline code and the atlas itself with specific attention to producing the
smallest dataset possible. Version controlled repositories are also available
at GitLab, including Python notebooks to illustrate the offline manipulation of
the full atlas. The offline CASSISjuice atlas is meant to facilitate the
analysis of large samples and the ident | Vianney Lebouteiller | 2023-09-13T10:47:45Z | http://arxiv.org/abs/2309.06876v1 | # CASSISjuice: open-source pipeline and offline complete atlas of Spitzer/IRS staring observations
###### Abstract
Context: Mid-infrared spectroscopy provides many important diagnostics on gas and dust features in a wide variety of astrophysical objects. In the JWST era, it is important to maintain a durable database of observations with previous facilities, for preparing new observations or completing existing ones. The Spitzer Infrared Spectrograph observed more than 20 000 targets with wavelengths as low as \(\sim 5.2\,\mu\)m and as long as \(\sim 38.0\,\mu\)m, thereby complementing JWST/MIRI data for long wavelength diagnostics and providing overall invaluable diagnostics together with JWST or in view of future IR facilities.
Aims: In order to maximize the science output of Spitzer/IRS, the CASSIS atlas has provided reduced IRS spectra since 2011, extracting and selecting the best spectrum from various methods. We now present CASSISjuice, an offline version of the pipeline and atlas, adding several hundred sources that had never cleared the pipeline in order to make it complete for the first time.
Methods: We updated the low- and high-resolution pipelines in order to be able to process every IRS staring mode observation (i.e., all observations but maps), and we also upgraded the high-resolution pipeline to version 2. The new pipeline also associates the pointings within "cluster" observations resulting in a single spectrum (possibly low- and high-resolution) per position and therefore overall a single CASSISjuice ID per targeted position. Distinct observations at the same position are considered to be independent.
Results: The initial repositories are hosted at Zenodo, providing the open-source pipeline code and the atlas itself with specific attention to producing the smallest dataset possible. Version-controlled repositories are also available at GitLab, including Python notebooks to illustrate the offline manipulation of the full atlas.
Conclusions: The offline CASSISjuice atlas is meant to facilitate the analysis of large samples and the identification of potentially interesting or relevant spectra. We encourage redistribution and reuse of the atlas following the "Attribution-NonCommercial-ShareAlike" license. Citation guidelines are unchanged with the two seminal papers presenting the low- and high-resolution pipelines (Lebouteiller et al., 2011, 2015), possibly the present arXiv-only paper as well and, if necessary, the code itself associated with the specific version released (Lebouteiller, 2023a, b).
## 1 Introduction
Infrared spectroscopic observations performed with the Infrared Spectrograph (IRS; Houck et al., 2004) onboard the Spitzer Space Telescope (Werner et al., 2004) have led to significant progress and numerous publications in various astrophysical fields. The IRS performed about 15 400 observations ("AORkey") in staring mode (i.e., targeting single sources at two consecutive "nod" positions of the detector, as opposed to maps).
A fraction of these observations were done in "cluster" mode in which several pointings were targeted within a single observation, leading to a total of about 20 400 object positions and corresponding spectra. The observation setup includes all or a combination of the following:
* Low-resolution (LR) long-slits: Short-Low (SL) covering \(5.2-14.5\,\mu\)m and Long-Low (LL) covering \(14.0-38.0\,\mu\)m with a spectral resolving power \(R\sim 60-127\).
* High-resolution (HR) apertures: Short-High (SH) covering \(9.6-19.8\,\mu\)m and Long-High (LH) covering \(18.7-37.2\,\mu\)m with \(R\sim 600\).
Figure 1 shows the aperture/slit size and relative orientations. The IRS data thus covers somewhat longer wavelengths than JWST/MIRI between \(\approx 28-37\,\mu\)m and includes sources that may be too bright or too extended for JWST.
The immense legacy value of the IRS is well highlighted by the many studies still using the instrument data, even since the end of the cryogenic mission phase in 2009. Part of this success is due to the availability of publishable-quality spectra for most observations through the CASSIS atlas, released in two parts with the low-resolution atlas (Lebouteiller et al., 2011) and the high-resolution one (Lebouteiller et al., 2015). CASSIS provides automatically-selected products from various background-subtraction methods, spectral extraction methods, and flux calibration methods, along with many diagnostics to guide the user through the selection process. The spectra available through the web form ([http://cassis.sirff.com](http://cassis.sirff.com)) and described in the above papers correspond to version 7 for low-resolution and version 1 for high-resolution.
The availability of publishable-quality IRS spectra enabled projects such as IDEOS (the Infrared Database of Extragalactic Observables with Spitzer; Hernan-Caballero et al., 2016; Spoon et al., 2022; [http://ideos.astro.cornell.edu/](http://ideos.astro.cornell.edu/)). The IDEOS sample uses CASSIS to identify all extragalactic spectra obtained with the IRS low-resolution mode, with particular attention to identifying misclassified objects and to selecting the best spectra when several observations of the same object are available. Other steps include the stitching of the spectra from different modules and the calculation of the redshift.
Building upon CASSIS, CASSISjuice is meant to provide open-source, offline access to the full IRS spectral atlas, with the goal of facilitating the analysis of large samples and the identification of interesting/relevant spectra. Specific attention has been given to providing streamlined products in small repositories.
CASSISjuice provides the LR and HR pipeline codes (IDL core programs + Python wrappers) through a public repository and also upgrades the HR pipeline to version 2 with several minor improvements. The full CASSISjuice atlas is provided for download through another public repository. The CASSISjuice atlas contains all IRS spectra performed in staring mode (i.e., not including maps), adding several hundred spectra compared to the CASSIS website, mostly due to pipeline fixes related to observations that did not go exactly as planned. Another important update concerns the organization of the pointings for cluster observations, leading to a single ID for each targeted position with corresponding LR and/or HR spectra (compared to previous versions of the atlas where IDs for LR and for HR were mixed).
In summary, we provide:
* a public repository for the LR and HR pipeline codes which can be used for in-depth analysis of some spectral extractions and intermediary products, available as a version-controlled repository ([https://gitlab.com/cassisjuice/pipeline](https://gitlab.com/cassisjuice/pipeline)) including notebooks and as a citable repository corresponding to the present version ([https://zenodo.org/deposit/8339954](https://zenodo.org/deposit/8339954); Lebouteiller 2023b),
* a public repository with the CASSISjuice atlas that can be examined offline, containing the most important products as well as the CASSISjuice "concentrate" that includes the absolute minimal data set (i.e., the best spectra that were automatically selected by the pipeline), available as a version-controlled repository ([https://gitlab.com/cassisjuice/atlas](https://gitlab.com/cassisjuice/atlas)) including notebooks and as a citable repository corresponding to the present version ([https://zenodo.org/record/8339965](https://zenodo.org/record/8339965); Lebouteiller 2023a).
These repositories and the associated codes are released under the license "Attribution-NonCommercial-ShareAlike 4.0 International" (CC BY-NC-SA 4.0), i.e., it is possible to share (copy and redistribute the material in any medium or format) and adapt (remix, transform, and build upon the material) for non-commercial uses as long as proper credit is given and as long as the same license conditions propagate. Works using CASSISjuice data should cite the same reference publications: Lebouteiller et al. (2011) and Lebouteiller et al. (2015). For reproducibility, we advise either simply mentioning the version number of the pipelines (currently v7 for LR and v2 for HR) or else adding the specific code citations Lebouteiller (2023b) and Lebouteiller (2023a) for the present versions. For significant or extensive use of the data, we also suggest citing the seminal Spitzer (Werner et al. 2004) and IRS papers (Houck et al. 2004).
In the following, we briefly describe the main steps of the pipeline leading to the selection of the best methods, the atlas itself, and illustrations of possible applications.
## 2 Pipeline summary
CASSISjuice performs many steps starting from the detector images downloaded from the Spitzer Heritage Archive at IRSA ([https://irsa.ipac.caltech.edu/applications/Spitzer/SHA/](https://irsa.ipac.caltech.edu/applications/Spitzer/SHA/)): cleaning of bad pixels, combination of individual exposures, removal of the background (telescope + large-scale astrophysical emission), spectral extraction, combination of nod spectra, and flux calibration. Some additional steps include the removal of some artefacts either at the 2D or 1D level (e.g., fringes due to interferences from the light path within the instrument). We refer to Lebouteiller et al. (2011, 2015) for the details and describe in the following the most important steps. The pipeline extensively uses scripts from SMART (Higdon et al. 2004), SMART/AdOpt (Lebouteiller et al. 2010), IRSCLEAN (Ingalls 2011), and IRSFRINGE (Lahuis 2007).
### Background subtraction
For low-resolution observations, the background can be removed either by subtracting the detector images corresponding to the other nod position ("by nod"), to another spectral order ("by order"), or by estimating the local continuum at the source location
Figure 1: Low-resolution long slits and high-resolution apertures of the IRS.
("in situ"). The methods by nod/order ensure that the subtracted background corresponds exactly to the same location in the detector, thereby not only removing the background emission but also mitigating rogue pixels. The methods by nod/order can be applied as long there is no contaminating source at the offset position. The various methods are compared to each other by the pipeline in terms of signal-to-noise ratio (SNR) and potential contamination by other sources in the offset positions (see a comparison in Fig. 2).
The echelle spectroscopy used for high-resolution observations makes it impossible to subtract a clean image unless a dedicated offset background was observed. CASSISjuice does not associate observations with potential dedicated offset background positions and relies instead on other methods to remove the background. The first method consists in removing the local continuum simultaneously with the optimal extraction of the source (similar to the "in situ" method for low-resolution observations). While this effectively removes the large-scale astrophysical emission, rogue pixels may still cause issues. The second method, applicable strictly to point sources, uses the differential spatial profile between the two nods and effectively mitigates rogue pixels (Sect. 2.2).
### Spectral extraction
CASSISjuice then extracts the intended source around the requested position. The source may be point-like, in which case the optimal extraction (using the point-spread function (PSF) profile as weights) provides the best SNR, or spatially extended, in which case a simple flux integration is performed along the cross-dispersion direction of the slit/aperture. Figures 3 and 4 illustrate potential differences between integrating the flux and using optimal extraction. In both cases (point source or extended source), optimal extraction is used anyway as a source-finder algorithm to locate the source around the requested position and to identify potential sources in the offset images for background subtraction (Sect. 2.1).
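To make the distinction concrete, here is a minimal one-wavelength sketch of the two estimators (plain summation vs. a PSF-weighted, Horne-style optimal estimate); this illustrates the idea only and is not the pipeline implementation, which operates on the full 2D spectral images.

```python
import numpy as np

def tapered_column_flux(data):
    """Plain flux integration along the cross-dispersion direction."""
    return np.sum(data)

def optimal_flux(data, psf_profile, variance):
    """PSF-weighted (optimal) estimate of the source flux at one wavelength.

    data        : background-subtracted pixel values along the slit
    psf_profile : normalized PSF profile at the same pixels (sums to 1)
    variance    : per-pixel noise variance
    """
    weights = psf_profile / variance
    return np.sum(weights * data) / np.sum(weights * psf_profile)

# Toy example: a Gaussian-like source on a noisy slit profile
x = np.linspace(-3, 3, 25)
psf = np.exp(-0.5 * x**2)
psf /= psf.sum()
rng = np.random.default_rng(1)
data = 100.0 * psf + rng.normal(scale=0.5, size=x.size)
var = np.full(x.size, 0.25)
print(tapered_column_flux(data), optimal_flux(data, psf, var))
```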
If the source is point-like, several versions of the optimal extraction may be used depending on the mode. For low-resolution mode, the pipeline extracts the spectra at the two nod positions
Figure 3: Comparing tapered column extraction (point-source calibration) to optimal extraction for a point source. Here we use the default background subtraction. Optimal extraction usually leads to a larger signal-to-noise ratio (SNR) for faint sources (top; CASSISjuice ID 26864384_2) while little difference is seen for bright sources (bottom; CASSISjuice ID 25408000_0).
Figure 2: Comparing the background subtraction methods for optimal extraction (top) and tapered column extraction (bottom) for low-resolution mode (CASSISjuice ID 18147584_0). Usually the subtraction “by nod” provides the best signal-to-noise ratio (SNR) compared to “by order” because the nod background lies in the same detector exposure. The “in situ” background subtraction usually leads to a worse SNR but it may be the only choice if contaminating sources are found in the offset detector images by nod or by order.
on their respective wavelength grid, thereby gaining a slight increase in spectral sampling and resolution if both spectra are interleaved, at a slight expense of SNR. Both versions (common wavelength grid or finer wavelength grid) are available (Fig. 5). For high-resolution mode, the pipeline can either extract 1) the two nods simultaneously, leading to a single spectrum, 2) the two nods individually, leading to two spectra that are then combined, or 3) the differential profile obtained from the difference between the two nod detector images. In principle, the latter method provides the best SNR but may lead to systematic uncertainties if the source is not strictly point-like (Fig. 6). The flux calibration for optimal extraction is performed using theoretical and observed spectra of reference stars and does not require any kind of aperture corrections.
Whether the source is a point source or spatially extended, a "tapered column" extraction for low-resolution mode or a full-aperture extraction for high-resolution mode can be used to integrate the flux. For point sources, the flux calibration accounts for the light lost outside the slit/aperture (especially in the short, dispersion direction), while for "infinitely" extended sources this correction cancels because the fractions of light coming into and out of the slit/aperture balance. If the source is neither a point source nor an "infinitely" extended source (it is then referred to as "partially extended"), an attempt can be made to estimate a wavelength-dependent flux calibration that depends on the source spatial extent. Such a product is now proposed in CASSISjuice and is explicitly considered in the selection of the best spectrum.
While objects other than the requested one may show up in the detector images, corresponding to locations along the long slits or to locations within observations in another spectral order (low-resolution observations), there is currently no method to identify such sources apart from manually investigating the detector images at specific locations.
### Applications
CASSISjuice provides the code for both the LR and HR pipelines (written in IDL) through public repositories at GitLab and Zenodo, as well as the Python wrappers that were used to produce the atlas described in Section 3. The general user is not expected to run the pipeline and should use the atlas - described in the following - instead. Expert users may find the pipeline availability useful to understand the spectral products and their automatic selection, to diagnose some issues, to access intermediary products, or even to improve some steps in the pipeline itself. More generally, the availability of the pipeline code indirectly ensures the durability of the IRS spectral atlas and consequently the long-term legacy of the data. In that vein, we release the code under a license permitting copies and adaptations as long as proper credit is given.
Figure 4: Comparing the optimal extraction to the full aperture extraction for the high-resolution mode, for CASSISjuice ID 25646848_0. Full aperture extraction with point-source flux calibration can be used for point sources, but the signal-to-noise ratio (SNR) is usually best with optimal extraction.
Figure 5: Comparing optimal extraction of both nods on a common wavelength grid to both nods on their own wavelength grid (i.e., gaining a bit of spectral resolution) for CASSISjuice ID 25408000_0. Using a common wavelength grid (slightly) optimizes the signal-to-noise ratio while using the finer wavelength grid (slightly) optimizes the spectral sampling and resolution.
Figure 6: Comparing the various optimal extraction methods for CASSISjuice ID 125646848_0. ”nods” is the simultaneous extraction of both nods, ”nodcopub” is the combination of the two nod spectra extracted separately, and ”diff” is the extraction of the differential profile of nod 1 minus nod 2 (for pure point sources). The ”nods” version is always the default choice but the ”diff” version can be checked to optimize the signal-to-noise ratio (although it is valid only for strict point sources).
## 3 Atlas
Ultimately, the various methods presented in Section 2 are compared and a selection is made to choose the best spectrum for the final atlas. The atlas contains the full dataset of IRS spectra observed in staring mode and is provided through other, specific repositories.
### Catalog
From the pipeline described above, we produced a single table (Fig. 7) with each entry defined as a spectral dataset corresponding to a given targeted position either in "single" staring observation (i.e., one position per AORkey) or within a "cluster" staring observation (i.e., several positions/pointings per AORkey). The CASSISjuice ID is thus fully defined by the AORKey + a number corresponding to the pointing. For each ID, the table provides the extracted coordinates as well as resolved SIMBAD and NED object names, and potentially NED redshift if relevant. Each ID therefore corresponds to a single spectral dataset with either LR or HR spectra or both.
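Since a CASSISjuice ID is simply the AORkey followed by the pointing index (e.g. 25408000_0 in the figure captions above), a trivial helper can split it; this is purely illustrative and not part of the released notebooks.

```python
def parse_cassisjuice_id(cid: str) -> tuple[int, int]:
    """Split a CASSISjuice ID such as '25408000_0' into (AORkey, pointing index)."""
    aorkey, pointing = cid.split("_")
    return int(aorkey), int(pointing)

print(parse_cassisjuice_id("25408000_0"))   # (25408000, 0)
```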
Figure 8 shows the all-sky catalog of CASSISjuice data using Aladin (Baumann et al. 2022) as presented in the Python notebooks within the GitLab repository. The atlas repository also includes convenience scripts to find observations using an object name or coordinates as constraints (see applications in Sect. 4).
### Spectral dataset and ID cards
Each CASSISjuice ID is associated with a spectral dataset formatted as a binary product with metadata. Notebooks are provided to read and manipulate these files and potentially export them in other formats. In addition, we also propose a synthetic ID card that encompasses most of the useful diagnostics to compare the various methods for spectral extraction described in Section 2. An example of such ID cards is shown in Figure 9.
### Archives
We provide two versions of the CASSISjuice atlas for download, with the goal of ensuring the durability of the IRS spectral atlas and of making it possible for users to work easily on the full database, e.g., through blind scans.
The first atlas version contains the various spectra that use different background subtraction, extraction, and flux calibration methods. The labels for the various methods are provided in Fig. 10 and the various resulting spectra can be examined manually or, for instance, with the help of the ID card.
The second atlas version ("concentrate") contains only the default spectrum for each ID, i.e., following the pipeline automatic decision tree for the various methods mentioned above. The choice of the spectral extraction method depends on the source spatial extent, leading to a single spectral dataset for a given ID (see example in Fig. 11). CASSISjuice concentrate is meant to provide a simple and small archive of all IRS staring observations for users who do not need to confirm/change the automatic selection for the best spectra. However, we do encourage a systematic comparison of the various methods when possible.
## 4 Applications and illustrations
The online repositories provide several Python notebooks to illustrate how the CASSISjuice atlas may be used and manipulated. Apart from the obvious applications to obtain spectra of specific objects (as illustrated in Fig. 9), having access to the entire spectral database makes it possible to perform global calculations in order to, e.g., find spectra resembling others, find spectra with a particular feature, combine spectra to produce templates, etc.
### Finding spectra
The basic way to find spectra relies either on a set of coordinates or on object names. The notebooks in the online repositories show examples in each case. For object names, the match can be performed on the object name given by the observer and/or on the object name resolved from SIMBAD or NED at the observed coordinates.
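A minimal sketch of a coordinate-based search is given below, assuming the catalog table has been exported to a file with columns named ra, dec, and cassis_id; the file name, column names, and target position are all illustrative, not the actual repository layout.

```python
import pandas as pd
import astropy.units as u
from astropy.coordinates import SkyCoord

catalog = pd.read_csv("cassisjuice_catalog.csv")   # hypothetical export of the catalog table

target = SkyCoord(ra=83.822 * u.deg, dec=-5.391 * u.deg)   # example search position
entries = SkyCoord(ra=catalog["ra"].values * u.deg, dec=catalog["dec"].values * u.deg)

# Keep all catalog entries within 30 arcsec of the requested position
sep = target.separation(entries)
matches = catalog[sep < 30 * u.arcsec]
print(matches[["cassis_id", "ra", "dec"]])
```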
Another way to find spectra relies on the spectra themselves and some particular features. For instance, from a reference spectrum (be it a model, a JWST spectrum, or even another CASSISjuice spectrum), it is often useful to identify all spectra that show a resemblance, either for the full spectrum or for a given wavelength range (see example in Fig. 12). Another method consists in measuring some features on-the-fly and selecting the spectra that match some constraints (see example in Fig. 13). Such blind scans may reveal observations that have been ignored, e.g., due to cryptic object names or to the lack of a systematic search for a specific feature.
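The resemblance search can be sketched as below; the wavelength window, the normalization, and the chi-square-like metric are arbitrary illustrative choices, and the commented usage assumes a hypothetical load_spectrum helper rather than any function from the released notebooks.

```python
import numpy as np

def resemblance(wave_ref, flux_ref, wave, flux, wmin=5.5, wmax=14.0):
    """Distance between two spectra after interpolation onto a common grid
    and normalization by the median flux in the chosen window (smaller = more similar)."""
    grid = np.linspace(wmin, wmax, 200)
    f_ref = np.interp(grid, wave_ref, flux_ref)
    f = np.interp(grid, wave, flux)
    f_ref /= np.median(f_ref)
    f /= np.median(f)
    return np.mean((f - f_ref) ** 2)

# Hypothetical blind scan over the whole atlas:
# scores = {cid: resemblance(w_ref, f_ref, *load_spectrum(cid)) for cid in all_ids}
# best_matches = sorted(scores, key=scores.get)[:20]
```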
### Templates
From a pre-defined sample or from a sample built from observational constraints/parameters (for instance through spectral resemblance or specific features; Sect. 4.1), it is straightforward to produce combinations of spectra that may serve as high-SNR templates. We illustrate this application in Figures 14 and 15 where we show the low- and high-resolution spectral template for extragalactic CASSISjuice IDs (using the redshift from the NED-resolved object) at redshift \(\approx 0\), with no particular constraints on the object type or spectral shape.
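A median-stacking sketch for building such a template is shown below, assuming a list of (wavelength, flux) arrays already shifted to the rest frame; the wavelength grid and the normalization wavelength are arbitrary choices made only for the illustration.

```python
import numpy as np

def build_template(spectra, wmin=5.5, wmax=35.0, npoints=500, wnorm=15.0):
    """Median-combine rest-frame spectra into a high-SNR template.

    spectra : list of (wavelength, flux) pairs, already in the rest frame
    """
    grid = np.linspace(wmin, wmax, npoints)
    stack = []
    for wave, flux in spectra:
        f = np.interp(grid, wave, flux, left=np.nan, right=np.nan)  # NaN outside coverage
        norm = np.interp(wnorm, wave, flux)                         # normalize at a common wavelength
        stack.append(f / norm)
    template = np.nanmedian(np.array(stack), axis=0)
    return grid, template
```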
### Extragalactic sample
While the CASSISjuice catalog includes NED-resolved object names and redshifts if applicable, there may be confusion between the various sources within a given radius. The IDEOS project (Infrared Database of Extragalactic Observables with Spitzer; Hernan-Caballero et al. 2016; Spoon et al. 2022; [http://ideos.astro.cornell.edu/](http://ideos.astro.cornell.edu/)) aimed precisely at compiling a clean set of extragalactic low-resolution spectra and on-the-fly measurements (redshift, spectral feature fluxes...), using the same data as in CASSISjuice. IDEOS provides a database of extragalactic objects with the best spectrum possible (in particular when several observations are available for a given object) and with specific attention to homogeneity in the spectral feature measurements between the two low-resolution modules. We provide some examples of practical applications of CASSISjuice and IDEOS within the repositories and we refer to Spoon et al. (2022) for some astrophysical applications.
## 5 Going further
CASSISjuice extracts spectra at the requested position and either integrates the flux for extended sources or fits the PSF profile for point sources. In some cases the extended object shows a spatial structure indicative of several components with a potentially different spectral shape for each component. In such cases, we advise the use of the SMART/AdOpt software (Lebouteiller et al. 2010), which is specifically built to simultaneously extract spatial components in low-resolution observations. This may correspond, for instance, to supernovae embedded in a galactic disk, to an active galactic nucleus and the host galaxy, etc.
## 6 Conclusions
1. We present CASSISjuice, an open-source, offline version of the pipeline and atlas of Spitzer/IRS observations performed in staring mode (i.e., not including maps).
2. Repositories are provided for the pipeline code and the atlas itself with the cleanest structure and smallest size possible.
3. The pipeline has been updated in order to process all observations performed with the IRS, adding a few hundred sources. The high-resolution pipeline has been upgraded to version 2.
4. One CASSISjuice ID corresponds to one targeted position, with specific attention to associating the low- and high-resolution spectra within "cluster" observations.
5. Two versions of the atlas are provided, one with the various methods for background subtraction, spectral extraction, and flux calibration, and another "concentrate" one with only the spectrum corresponding to the best methods chosen automatically by the pipeline.
6. CASSISjuice is meant to facilitate the analysis of large samples and the identification of interesting/relevant spectra taken with the IRS. Several Python notebooks are provided to illustrate the manipulation of the data and of the full atlas in an offline way.
7. If CASSISjuice data is used, we encourage citing the pipeline version number (currently v7 for LR and v2 for HR) together with the two reference papers (Lebouteiller et al. 2011, 2015) and possibly the present arXiv-only paper as well. The pipeline code (Lebouteiller 2023b) and atlas (Lebouteiller 2023a) themselves may be cited as well as an alternative to simply mentioning the version number.
###### Acknowledgements.
This work was supported by the Programme National "Physique et Chimie du Milieu Interstellaire" (PCMI) and the Programme National de Physique Stellaire (PNPS) of the CNRS/INSU with INC/INP, co-funded by CEA and CNES. This research has made use of the NASA/IPAC Infrared Science Archive, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. This research has made use of the "Aladin sky atlas" developed at CDS, Strasbourg Observatory, France. This research has made use of the NASA/IPAC Extragalactic Database, which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France.
|
2309.06862 | Domain Decomposition Method for Poisson--Boltzmann Equations based on
Solvent Excluded Surface | In this paper, we develop a domain decomposition method for the nonlinear
Poisson-Boltzmann equation based on a solvent-excluded surface widely used in
computational chemistry. The model relies on a nonlinear equation defined in
$\mathbb{R}^3$ with a space-dependent dielectric permittivity and an
ion-exclusion function that accounts for steric effects. Potential theory
arguments transform the nonlinear equation into two coupled equations defined
in a bounded domain. Then, the Schwarz decomposition method is used to
formulate local problems by decomposing the cavity into overlapping balls and
only solving a set of coupled sub-equations in each ball. The main novelty of
the proposed method is the introduction of a hybrid linear-nonlinear solver
used to solve the equation. A series of numerical experiments are presented to
test the method and show the importance of the nonlinear model. | Abhinav Jha, Benjamin Stamm | 2023-09-13T10:17:43Z | http://arxiv.org/abs/2309.06862v2 | # Domain Decomposition Method for Poisson-Boltzmann Equations based on Solvent Excluded Surface
###### Abstract
In this paper, we develop a domain-decomposition method for the generalized Poisson-Boltzmann equation based on a solvent-excluded surface which is widely used in computational chemistry. The solver requires to solve a generalized screened Poisson (GSP) equation defined in \(\mathbb{R}^{3}\) with a space-dependent dielectric permittivity and an ion-exclusion function that accounts for Steric effects. Potential theory arguments transform the GSP equation into two-coupled equations defined in a bounded domain. Then, the Schwarz decomposition method is used to formulate local problems by decomposing the cavity into overlapping balls and only solving a set of coupled sub-equations in each ball in which, the spherical harmonics and the Legendre polynomials are used as basis functions in the angular and radial directions.
A series of numerical experiments are presented to test the method.
**Keywords:** Implicit Solvation Model, Poisson-Boltzmann Equation, Domain Decomposition Method, Solvent Excluded Surface, Stern Layer
## 1 Introduction
In computational chemistry, the nonlinear Poisson-Boltzmann (PB) equation is a widely used model for describing ionic effects on molecular systems. It belongs to the class of implicit solvation models, where the solute is treated microscopically and the solvent is treated through its macroscopic physical properties, such as dielectric permittivity and ionic strength. Because of this treatment, implicit solvation models are computationally efficient, require fewer parameters, and implicitly account for the sampling over the degrees of freedom of the solvent. For this reason, they are widely used in practice and are a popular computational approach to characterize solvent effects in the simulation of properties and processes of molecular systems [55, 49, 40, 54].
The history of the PB model can be traced back to the 1910s, when Gouy [18] and Chapman [9] independently used it to equate the chemical potential and the forces acting on small adjacent volumes in an ionic solution between two plates held at different voltages. Later, in 1923, Debye and Hückel generalized the concept by applying it to the theory of ionic solutions, leading to a successful interpretation of thermodynamic data [13]. A combination of both approaches, including a rigid layer close to the charged surface, called the Stern layer, and a Gouy-Chapman-type diffusive layer, was introduced in 1924 by Stern [53]. The PB equation that we consider in this paper is a realization of the Gouy-Chapman model with the possibility of including a Stern-layer correction [48].
In this paper, we consider the nonlinear Poisson-Boltzmann (NPB) equation, which describes the dimensionless electrostatic potential \(\psi(\mathbf{x})\) and is given by
\[-\nabla\cdot[\varepsilon(\mathbf{x})\nabla\psi(\mathbf{x})]+\lambda(\mathbf{x})\kappa^{2 }\varepsilon_{s}\sinh\left(\psi(\mathbf{x})\right)=\frac{1}{\beta\varepsilon_{ \mathrm{abs}}}\rho^{\mathrm{sol}}(\mathbf{x})\qquad\mathrm{in}\quad\Omega_{0}.\]
for the case of \(1:1\) electrolyte solvents. Here \(\varepsilon(\mathbf{x})\) represents the relative space-dependent dielectric permittivity, \(\varepsilon_{\rm abs}\) is the absolute dielectric permittivity of vacuum, \(\varepsilon_{s}\) is the relative dielectric permittivity of the solvent, \(\rho^{\rm sol}(\mathbf{x})\) is the charge distribution of the solute, \(\kappa\) is the Debye-Hückel screening constant of the ionic solution, \(\lambda(\mathbf{x})\) is the ion-exclusion function, which tends to zero inside the solute cavity (excluding ions there) and to one in the solvent region, and \(\beta=K_{\rm B}T/e\), where \(K_{\rm B}\) is the Boltzmann constant, \(T\) is the temperature in Kelvin (K), and \(e\) is the elementary charge.
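For small potentials (\(|\psi|\ll 1\)), \(\sinh(\psi)\approx\psi\), and the equation above reduces to the linearized Poisson-Boltzmann (LPB) equation referred to below,

\[-\nabla\cdot[\varepsilon(\mathbf{x})\nabla\psi(\mathbf{x})]+\lambda(\mathbf{x})\kappa^{2}\varepsilon_{s}\,\psi(\mathbf{x})=\frac{1}{\beta\varepsilon_{\rm abs}}\rho^{\rm sol}(\mathbf{x}),\]

which is the equation addressed by the ddLPB method discussed in the overview of domain decomposition approaches below.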
Some standard numerical methods for solving the PB equation include the finite difference method (FDM), the boundary element method (BEM), and the finite element method (FEM). We briefly overview them and mention some ongoing work in these areas. For an overview of the methods, we refer to [33].
The FDM is one of the most popular methods to solve the linear (LPB) or the nonlinear PB equation. It follows the standard finite difference approach, where a grid covers the region of interest, and then different boundary conditions are chosen. Some of the popular software packages using the FDM include UHBD [34], Delphi [27], MIBPB [10], and APBS [3, 14, 23]. One drawback of this approach is the increase in computational cost with respect to the grid dimension, making it challenging to reach high accuracy. Some recent developments in this area can be found in [37, 36, 12].
The BEM is another approach where the LPB equation is recast as an integral equation defined on a two-dimensional solute-solvent interface. This method can be optimized using the fast multipole method or the hierarchical treecode technique. The PAFMPB solver [32, 62] uses the former acceleration technique, whereas the TABI-PB [17, 56] uses the latter one. The PB-SAM solver developed by Head-Gordon et al. [31, 59, 60] discretizes the solute-solvent interface (such as the van der Waals (vdW) surface) with grid points on atomic spheres like a collocation method and solves the associated linear system by use of the fast multipole method. However, one drawback is that integral equation based methods cannot be generalized to NPB. Hybrid approaches combining the FDM and BEM exist [6].
The FEM approach is one of the most flexible approaches for solving the PB equation. It can solve both the linear and the nonlinear PB, providing more flexible mesh refinement and proper convergence analysis [11]. A posteriori error estimation also exists for this method [38, 24]. The SDPBS and SMPBS offer fast and efficient approximations of the size-modified PB equation [57, 61, 22, 58]. Recently a hybrid approach combining the FEM and BEM has been proposed in [7].
Now, we give a brief overview of the domain decomposition methods that have recently been proposed in the context of implicit solvation models. In [44], a domain decomposition algorithm for the LPB equation (ddLPB), based on a particular Schwarz domain decomposition method, was developed. A further linear scaling approach for computing the first derivatives, and eventually the forces, has been presented in [21] following the ideas from [35]. The ideas of the ddLPB method can be traced back to the domain decomposition methods proposed for the conductor-like screening model (COSMO), (ddCOSMO) [8, 28, 29, 30] and the polarizable continuum model (PCM), (ddPCM) [39, 51, 16]. These methods do not require any mesh or grid of the molecular surface, are easy to implement, and are about two orders of magnitude faster than the state of the art [29]. In particular, the ddCOSMO solver can perform up to thousands of times faster than equivalent existing algorithms [42, 43]. An open-source software ddX has been released which encompasses all these methods [20].
An essential feature of the implicit solvation model is the choice of the solute-solvent boundary. Most methods use the vdW cavity or the solvent-accessible surface (SAS) [26], as they are topologically simple, but these surfaces do not describe the solute-solvent interaction well. The solvent-excluded surface (SES), first developed in [47], is one of the few surfaces which captures the interaction quite well. In [42], a mathematical framework was provided for computing the SES. The ddLPB method was developed with the vdW cavity and can be extended to the SAS cavity. The ddPCM method with the SES cavity (ddPCM-SES) was proposed and studied in [43].
In this paper, we incorporate all the ideas mentioned previously. We first present a Schwarz domain decomposition method for the NPB equation that includes steric effects, i.e., the presence of a Stern layer. We use the SES for the solute-solvent boundary and introduce a continuous relative dielectric permittivity function and an ion-exclusion function. As the equation is nonlinear, we develop a nonlinear solver in a unit ball and use spectral methods for discretization using spherical harmonics and Legendre polynomials as basis functions.
The breakdown of the paper is as follows: In Sec. 2, we derive the NPB equation and introduce different solute-solvent boundaries. We also present a continuous relative dielectric permittivity function and an ion-exclusion function based on the SES. In Sec. 3, we transform the problem into different domains, introduce a global strategy for solving them, and lay out the domain decomposition method. In Sec. 4, we derive single-domain solvers for the homogeneous screened Poisson (HSP) and the generalized screened Poisson (GSP) equation in a unit ball. Next, in Sec. 5, we present a comprehensive numerical study of the ddPB-SES method for molecules ranging from one to 24 atoms. Lastly, in Sec. 6, we present a summary and an outlook.
## 2 Problem Statement
We represent the solvent by a polarizable and ionic continuum. The freedom of the movement of ions is modeled by Boltzmann statistics, i.e., the Boltzmann equation is used to calculate the local ion density \(c_{i}\) of the \(i^{\rm th}\) type of ion as follows
\[c_{i}=c_{i}^{\infty}\exp\left(\frac{-W_{i}}{K_{\rm B}T}\right), \tag{1}\]
where \(c_{i}^{\infty}\) is the bulk ion concentration at an infinite distance from the solute molecule and \(W_{i}\) is the work required to move the \(i^{\rm th}\) type of ion to a given position from an infinite distance.
The electrostatic potential \(\tilde{\psi}(\mathbf{x})\) of a general implicit solvation model is described by the Poisson equation as follows
\[-\nabla\cdot\left[\varepsilon_{\rm abs}\varepsilon(\mathbf{x})\nabla\tilde{\psi} (\mathbf{x})\right]=\rho^{\rm sol}(\mathbf{x})+\rho^{\rm ions}(\mathbf{x})\quad\mbox{in} \ \ \mathbb{R}^{3}, \tag{2}\]
where \(\tilde{\psi}(\mathbf{x})=\mathcal{O}\left(1/|\mathbf{x}|\right)\) as \(|\mathbf{x}|\to\infty\). Here \(\rho^{\rm ions}(\mathbf{x})\) is the charge distribution of the solvation system.
We derive the PB equation using Eq. (1) and Eq. (2) as
\[-\nabla\cdot\left[\varepsilon_{\rm abs}\varepsilon(\mathbf{x})\nabla\tilde{\psi} (\mathbf{x})\right]=\rho^{\rm sol}(\mathbf{x})+\lambda(\mathbf{x})\sum_{i=1}^{N_{\rm ions }}c_{i}^{\infty}z_{i}e\exp\left(\frac{-z_{i}e\tilde{\psi}(\mathbf{x})}{K_{\rm B}T }\right)\quad\mbox{in}\ \ \mathbb{R}^{3}, \tag{3}\]
where \(z_{i}\) is the partial charge of the \(i^{\rm th}\) type of ion.
In the case of a \(1:1\) electrolyte, there are two opposite charge ions (\(+e\) and \(-e\)) and we then get
\[\sum_{i=1}^{2}c_{i}^{\infty}z_{i}e\exp\left(\frac{-z_{i}e\tilde{ \psi}(\mathbf{x})}{K_{\rm B}T}\right) =ce\exp\left(\frac{-e\tilde{\psi}(\mathbf{x})}{K_{\rm B}T}\right)-ce \exp\left(\frac{e\tilde{\psi}(\mathbf{x})}{K_{\rm B}T}\right)\] \[=-2ce\sinh\left(\frac{e\tilde{\psi}(\mathbf{x})}{K_{\rm B}T}\right). \tag{4}\]
Then substituting Eq. (4) into Eq. (3) we obtain
\[-\nabla\cdot\left[\varepsilon_{\rm abs}\varepsilon(\mathbf{x})\nabla\tilde{\psi} (\mathbf{x})\right]+\lambda(\mathbf{x})2ce\sinh\left(\frac{e\tilde{\psi}(\mathbf{x})}{K_{ \rm B}T}\right)=\rho^{\rm sol}(\mathbf{x})\quad\mbox{in}\ \ \mathbb{R}^{3}, \tag{5}\]
the nonlinear Poisson-Boltzmann equation (NPB).
### 2.1 Solute Probe
One of the important properties for implicit solvation model is the choice of the solute probe and, accordingly, the solute-solvent boundary. A straightforward choice is using the van der Waals (vdW) surface, i.e., the topological boundary of the union of solute's vdW-atoms with experimentally fitted radii. Another choice is
the solvent-accessible surface (SAS), which is defined by tracing the center of an idealized (spherical) solvent probe (representing a solvent molecule) when rolling over the solute molecule. The region enclosed by the SAS is called the SAS-cavity, which we denote by \(\Omega_{\rm SAS}\) and its boundary by \(\Gamma_{\rm SAS}\).
The vdW surface and the SAS are not the topologically correct answer to the cavity problem, as they poorly describe the region that the solvent can actually reach. However, as they are topologically simple, they are widely used in numerical computations. Another solute-solvent boundary is the solvent-excluded surface (SES), which represents the boundary of the region where the probe has no access due to the presence of the solute. The region enclosed by the SES is the SES cavity, which we denote by \(\Omega_{\rm SES}\), with boundary \(\Gamma_{\rm SES}\). The mathematical characterization of this surface can be found in [42].
The PB equation does not take the finite size of the ions into account; hence, the ionic concentration can exceed the maximally allowed coverage near the surface. These are referred to as steric effects. To account for these steric effects, the PB model is modified to include a Stern layer [5]. We introduce two cavities, namely the solvent-excluded surface including the steric effect (SES-S) and the solvent-accessible surface including steric effects (SAS-S), which will be denoted by \(\Omega_{\rm SES-S}\) and \(\Omega_{\rm SAS-S}\) respectively, with the corresponding boundaries \(\Gamma_{\rm SES-S}\) and \(\Gamma_{\rm SAS-S}\). Fig. 1 represents all the molecular probes and the boundaries introduced.
Now, we set certain notations. We assume that the molecule is composed of \(M\) atoms and that the \(i^{\rm th}\) atom has center \(\mathbf{x}_{i}\) and vdW radius \(r_{i}\). The solvent probe radius is denoted by \(r_{p}\). Furthermore, for each atom, we define an "enlarged" ball \(\Omega_{i}\) with center \(\mathbf{x}_{i}\) and radius \(R_{i}=r_{i}+r_{p}+a+r_{0}\), where \(a\) is the Stern layer length and \(r_{0}\) is a non-negative constant used to control the nonlinear regime. For the inclusion of the steric effects we assume that the SAS cavity is formed with a probe of radius \(a+r_{p}\).
The SES cavity is entirely covered by the union \(\Omega_{0}\) of the enlarged balls, i.e.,
\[\Omega_{\rm SES}\subset\Omega_{0}:=\bigcup_{i=1}^{M}\Omega_{i},\quad\text{ where}\quad\Omega_{i}=B_{R_{i}}(\mathbf{x}_{i}).\]
We also denote the solvent region by \(\Omega_{\infty}:=\mathbb{R}^{3}\setminus\Omega_{0}=\Omega_{0}^{\rm C}\).
Let \(f_{\rm SAS}\) denote the distance function to \(\Omega_{\rm SAS}\) (i.e., negative inside the SAS cavity and positive outside the SAS cavity). We then have a mathematical characterization of the two cavities
\[\Omega_{\rm SES}=\left\{\mathbf{x}\in\mathbb{R}^{3}:\ f_{\rm SAS}(\mathbf{x})\leq-r_{ p}-a\right\}\ \ \text{and}\ \ \Omega_{0}=\left\{\mathbf{x}\in\mathbb{R}^{3}:\ f_{\rm SAS}(\mathbf{x})\leq r_{0} \right\}.\]
Also, we can characterize their boundary surfaces by
\[\Gamma_{\rm SES}=f_{\rm SAS}^{-1}(-r_{p}-a)\quad\text{and}\quad\Gamma_{0}=f_{ \rm SAS}^{-1}(r_{0}).\]
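For illustration, the characterizations above can be evaluated numerically once \(f_{\rm SAS}\) is available. The following Python sketch uses the common approximation of \(f_{\rm SAS}\) by the minimum of the signed distances to the individual SAS spheres, which is exact outside the SAS cavity but only approximate inside overlapping balls; the function names and the two-atom geometry are illustrative only.

```python
import numpy as np

def f_sas(x, centers, vdw_radii, r_p, a):
    """Approximate signed distance to the SAS built with probe radius r_p + a:
    negative inside the SAS cavity, positive outside.  For overlapping balls
    this min-over-spheres formula is only an approximation inside the cavity."""
    return np.min(np.linalg.norm(x - centers, axis=1) - (vdw_radii + r_p + a))

def in_ses(x, centers, vdw_radii, r_p, a):
    # Omega_SES = { x : f_SAS(x) <= -(r_p + a) }
    return f_sas(x, centers, vdw_radii, r_p, a) <= -(r_p + a)

def in_enlarged_cavity(x, centers, vdw_radii, r_p, a, r0):
    # Omega_0 = { x : f_SAS(x) <= r0 }
    return f_sas(x, centers, vdw_radii, r_p, a) <= r0

# toy two-atom geometry (lengths in A)
centers = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
radii = np.array([1.2, 1.7])
print(in_ses(np.zeros(3), centers, radii, r_p=1.4, a=0.0))                               # True
print(in_enlarged_cavity(np.array([6.0, 0.0, 0.0]), centers, radii, 1.4, 0.0, r0=1.0))   # False
```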
Figure 1: Solute probes and solute-solvent boundary for a molecule.
**Remark 1**: _It is reasonable to assume that the solute's charge distribution \(\rho^{\rm sol}\) is supported in \(\Omega_{0}\). In this paper we consider the classic description of \(\rho^{\rm sol}\) given by_
\[\rho^{\rm sol}(\mathbf{x})=\sum_{i=1}^{M}q_{i}\delta(\mathbf{x}-\mathbf{x}_{i}),\]
_where \(q_{i}\) denotes the (partial) charge carried by the \(i^{\rm th}\) atom, and \(\delta\) is the Dirac-delta function. For a quantum description of the solute, \(\rho^{\rm sol}\) comprises of a sum of nuclear charges and the electron charge density._
### 2.2 Dielectric Permittivity and Ion Exclusion Function
In this subsection we construct an SES-based dielectric permittivity function and an ion-exclusion function associated with \(f_{\rm SAS}\), following the ideas from [43].
It is assumed that the solvent dielectric permittivity is constant and equal to the bulk dielectric permittivity outside the SAS cavity. This is a reasonable assumption as the solvent density at positions far from the solute molecule is approximately the same, and therefore the dielectric permittivity is the same.
Taking the SES as the solute-solvent boundary implies that the dielectric permittivity in the SES-cavity is always one, i.e., the relative permittivity of vacuum, and that it is constant outside the SAS, with value \(\varepsilon_{s}\). Let us denote the layer between \(\Omega_{\rm SAS}\) and \(\Omega_{\rm SES}\) by \(\mathcal{L}_{\varepsilon}\), i.e., \(\mathcal{L}_{\varepsilon}:=\Omega_{\rm SAS}\setminus\Omega_{\rm SES}\). In a similar way, the ion-exclusion function is zero inside \(\Omega_{\rm SES-S}\) and one outside \(\Omega_{\rm SAS-S}\). We denote the layer between \(\Omega_{\rm SES-S}\) and \(\Omega_{\rm SAS-S}\) by \(\mathcal{L}_{\lambda}\), i.e., \(\mathcal{L}_{\lambda}=\Omega_{\rm SAS-S}\setminus\Omega_{\rm SES-S}\).
The remaining work is to determine \(\varepsilon(\mathbf{x})\) and \(\lambda(\mathbf{x})\) in the intermediate layer, \(\mathcal{L}_{\varepsilon}\) and \(\mathcal{L}_{\lambda}\), respectively. We choose the following modified definition of the permittivity function from [43],
\[\varepsilon(\mathbf{x})=\left\{\begin{array}{ll}1&\mathbf{x}\in\Omega_{\rm SES},\\ 1+(\varepsilon_{s}-1)\xi\left(\frac{f_{\rm SAS}(\mathbf{x})+r_{p}+a}{r_{p}+a} \right)&\mathbf{x}\in\mathcal{L}_{\varepsilon},\\ \varepsilon_{s}&\mbox{else},\end{array}\right. \tag{6}\]
and define the ion exclusion function as
\[\lambda(\mathbf{x})=\left\{\begin{array}{ll}0&\mathbf{x}\in\Omega_{\rm SES-S},\\ \xi\left(\frac{f_{\rm SAS}(\mathbf{x})+r_{p}}{r_{p}+a}\right)&\mathbf{x}\in\mathcal{L} _{\lambda},\\ 1&\mbox{else},\end{array}\right. \tag{7}\]
where \(\xi(\cdot)\) is a continuous function defined on \([0,1]\), satisfying \(\xi(0)=0\), \(\xi(1)=1\), \(\xi^{\prime}(0)=0\), and \(\xi^{\prime}(1)=0\). \(\varepsilon(\mathbf{x})\) and \(\lambda(\mathbf{x})\) can be seen as distance-dependent functions where the "distance" represents the signed distance to SAS, see Fig. 2 for a schematic diagram. The function \(\xi(\cdot)\) can be chosen in different ways. In [52] one possible choice is the error function, \(\mathtt{erf}(\cdot)\). In the numerical simulations we choose
\[\xi(t)=t^{3}\left(10+3t\left(-5+2t\right)\right),\qquad 0\leq t\leq 1.\]
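A direct evaluation of Eqs. (6)-(7) with this choice of \(\xi\) is straightforward once the value of \(f_{\rm SAS}\) at a point is known; the short Python sketch below is only an illustration (it assumes \(f_{\rm SAS}\) is supplied, e.g. by an approximation such as the one sketched in Sec. 2.1).

```python
import numpy as np

def xi(t):
    """Quintic switching function with xi(0)=0, xi(1)=1 and vanishing derivatives at 0 and 1."""
    t = np.clip(t, 0.0, 1.0)
    return t**3 * (10.0 + 3.0 * t * (-5.0 + 2.0 * t))

def eps(f_sas_val, eps_s, r_p, a):
    """Dielectric permittivity of Eq. (6) as a function of the value of f_SAS."""
    t = (np.asarray(f_sas_val, dtype=float) + r_p + a) / (r_p + a)
    return np.where(t <= 0.0, 1.0, np.where(t >= 1.0, eps_s, 1.0 + (eps_s - 1.0) * xi(t)))

def lam(f_sas_val, r_p, a):
    """Ion-exclusion function of Eq. (7) as a function of the value of f_SAS."""
    t = (np.asarray(f_sas_val, dtype=float) + r_p) / (r_p + a)
    return np.where(t <= 0.0, 0.0, np.where(t >= 1.0, 1.0, xi(t)))

print(eps(np.array([-3.0, -0.7, 0.5]), eps_s=78.54, r_p=1.4, a=0.0))  # [1.0, ~39.77, 78.54]
```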
**Remark 2**: _We also define an enlarged cavity \(\mathcal{L}:=\Omega_{0}\setminus\Omega_{\rm SES}\). By the definition of \(\mathcal{L}_{\varepsilon}\) and \(\mathcal{L}_{\lambda}\) we have an immediate consequence of \(\mathcal{L}_{\lambda}\subset\mathcal{L}\) and \(\mathcal{L}_{\varepsilon}\subset\mathcal{L}\)._
## 3 Formulation of the Problem
In this section we reduce our problem to different domains by introducing a new hybrid linear/nonlinear PB model, and we introduce the domain decomposition strategy used to solve the resulting equations.
In the previous section we introduced the dielectric permittivity function and the ion-exclusion function. Let \(\psi=e\tilde{\psi}/K_{\rm B}T\) denote the dimensionless electrostatic potential; then the fully nonlinear PB equation reduces to
\[-\nabla\cdot[\varepsilon_{\rm abs}\varepsilon(\mathbf{x})\nabla\beta \psi(\mathbf{x})]+\lambda(\mathbf{x})2ce\sinh\left(\psi( \mathbf{x})\right)=\rho^{\rm sol}(\mathbf{x})\quad\mbox{in }\ \ \mathbb{R}^{3}. \tag{8}\]
Since the support of \(\rho^{\rm sol}(\mathbf{x})\) is contained in \(\Omega_{0}\), i.e., \({\rm supp}(\rho^{\rm sol})\subset\Omega_{0}\), by using the definition of \(\varepsilon(\mathbf{x})\) and \(\lambda(\mathbf{x})\), Eq. (8) reduces to
\[-\varepsilon_{s}\varepsilon_{\rm abs}\beta\Delta\psi(\mathbf{x})+2 ce\sinh\left(\psi(\mathbf{x})\right)=0\quad\mbox{in }\ \ \Omega_{\infty}.\]
In the solvent region \(\Omega_{\infty}\), we can assume that the potential \(\psi\) satisfies the low potential condition, i.e., \(|\psi|\ll 1\), and hence we can linearise the above equation to get a homogeneous screened Poisson (HSP) equation as
\[-\Delta\psi(\mathbf{x})+\kappa^{2}\psi(\mathbf{x})=0\quad \mbox{in }\ \Omega_{\infty}, \tag{9}\]
where \(\kappa^{2}=2e^{2}c/(K_{\rm B}T\varepsilon_{s}\varepsilon_{\rm abs})\) is the square of the Debye Huckel screening constant.
Next we note that inside the cavity \(\Omega_{0}\) we still keep the nonlinear Poisson Boltzmann equation of the form
\[-\nabla\cdot[\varepsilon(\mathbf{x})\nabla\psi(\mathbf{x}) ]+\lambda(\mathbf{x})\kappa^{2}\varepsilon_{s}\sinh\left(\psi(\mathbf{x})\right)=\frac{1}{\beta\varepsilon_{\rm abs}}\rho^{\rm sol}( \mathbf{x})\quad\mbox{in }\ \ \Omega_{0}. \tag{10}\]
Along the solute-solvent boundary \(\Gamma_{0}\), equations (9)-(10) satisfy the jump conditions
\[[\![\psi]\!]=0,\qquad[\![\partial_{\mathbf{n}}\psi]\!]=0\quad\mbox{ on}\quad\Gamma_{0}:=\partial\Omega_{0}, \tag{11}\]
where \(\mathbf{n}\) is the unit normal vector on \(\Gamma_{0}\) pointing outward and \(\partial_{\mathbf{n}}\psi=\nabla\psi\cdot\mathbf{n}\) denotes the normal derivative. These jump conditions complete the hybrid linear/nonlinear model.
**Remark 3**: _We notice that we reduced our problem defined on the whole space \(\mathbb{R}^{3}\) to the solute cavity \(\Omega_{0}\) and the solvent region \(\Omega_{\infty}\) but with non-local coupling conditions. Fig. 3 shows the schematic diagram of PDEs in different regions._
Figure 2: Schematic diagram of the dielectric permittivity, \(\varepsilon(\mathbf{x})\) (left \(y\)-axis) and the ion-exclusion function, \(\lambda(\mathbf{x})\) (right \(y\)-axis) with respect to \(f_{\rm SAS}\).
### 3.1 Transformation of the Problem
We have two equations defined in \(\mathbb{R}^{3}\), one inside the cavity \(\Omega_{0}\) and one in the solvent region \(\Omega_{\infty}\), coupled through interface conditions. In this subsection we transform the problem and recast it as two coupled PDEs defined in \(\Omega_{0}\).
Let us denote \(\psi|_{\Omega_{\infty}}\) as the Dirichlet trace from \(H^{1}(\Omega_{\infty})\) to \(L^{2}(\partial\Omega_{\infty})\) in the sense of the trace operator, then by the linearity of Eq. (9) we notice that the free-space electrostatic potential \(\psi|_{\Omega_{\infty}}\) can be represented by a single-layer potential, \(\tilde{\mathcal{S}}_{\Gamma_{0}}:H^{-1/2}(\Gamma_{0})\to H^{1}(\mathbb{R}^{3}\setminus\Gamma_{0})\) as
\[\psi(\boldsymbol{x})|_{\Omega_{\infty}}=\tilde{\mathcal{S}}_{\Gamma_{0}} \sigma_{\mathrm{e}}(\boldsymbol{x})\quad\forall\ \boldsymbol{x}\in\Omega_{\infty},\]
where \(\sigma_{\mathrm{e}}(\boldsymbol{x})\) is a distribution in \(H^{-1/2}(\Gamma_{0})\). From the continuity of the single layer potential operator we can extend \(\psi|_{\Omega_{\infty}}\) to \(\Omega_{0}\) as follows:
\[\psi_{\mathrm{e}}(\boldsymbol{x})=\int_{\Gamma_{0}}\frac{\exp\left(-\kappa| \boldsymbol{x}-\boldsymbol{y}|\right)\sigma_{\mathrm{e}}(\boldsymbol{y})}{4 \pi|\boldsymbol{x}-\boldsymbol{y}|}d\boldsymbol{y}\quad\forall\ \boldsymbol{x}\in\Omega_{0},\]
where \(\psi_{\mathrm{e}}(\boldsymbol{x})\) denotes the extended potential and hence \(\psi_{\mathrm{e}}\) satisfies the HSP equation in \(\Omega_{\infty}\) and also satisfies
\[-\Delta\psi_{\mathrm{e}}(\boldsymbol{x})+\kappa^{2}\psi_{\mathrm{e}}( \boldsymbol{x})=0\quad\text{in}\ \ \Omega_{0}.\]
Here we also introduce the single-layer potential operator \(\mathcal{S}_{\Gamma_{0}}:H^{-1/2}(\Gamma_{0})\to H^{1/2}(\Gamma_{0})\) defined by
\[\mathcal{S}_{\Gamma_{0}}\sigma_{\mathrm{e}}(\boldsymbol{x}):=\int_{\Gamma_{0} }\frac{\exp\left(-\kappa|\boldsymbol{x}-\boldsymbol{y}|\right)\sigma_{\mathrm{ e}}(\boldsymbol{y})}{4\pi|\boldsymbol{x}-\boldsymbol{y}|}d\boldsymbol{y}\quad \forall\boldsymbol{x}\in\Gamma_{0}, \tag{12}\]
which is an invertible operator and hence \(\sigma_{\mathrm{e}}=\mathcal{S}_{\Gamma_{0}}^{-1}\psi|_{\Gamma_{0}}\).
From [50, Theorem 3.3.1] we also have the relationship
\[\sigma_{\mathrm{e}}=\partial_{\boldsymbol{n}}\psi_{\mathrm{e}}|_{\Omega_{0}} -\partial_{\boldsymbol{n}}\psi_{\mathrm{e}}|_{\Omega_{\infty}}\quad\text{on} \quad\Gamma_{0}.\]
By the jump condition (Eq. (11)) of the normal derivative we have
\[\partial_{\boldsymbol{n}}\psi|_{\Omega_{\infty}}-\partial_{\boldsymbol{n}} \psi|_{\Omega_{0}}=0\quad\text{on}\ \ \Gamma_{0},\]
Figure 3: PDEs defined in the solute cavity and the solvent region.
which implies \(\partial_{\mathbf{n}}\psi_{\rm e}|_{\Omega_{\infty}}=\partial_{\mathbf{n}}\psi|_{\Omega_{ 0}}\) on \(\Gamma_{0}\). Hence,
\[\sigma_{\rm e}=\partial_{\mathbf{n}}\left(\psi_{\rm e}|_{\Omega_{0}}-\psi|_{\Omega_ {0}}\right)\quad\text{on}\ \ \Gamma_{0}.\]
Also, by the jump condition of the potential we get \(\psi_{\rm e}=\psi|_{\Omega_{0}}\) on \(\Gamma_{0}\). Hence, the extended potential \(\psi_{\rm e}\) is defined on \(\Omega_{0}\) by
\[-\Delta\psi_{\rm e}(\mathbf{x})+\kappa^{2}\psi_{\rm e}(\mathbf{x}) =0 \text{in}\quad\Omega_{0}\] \[\psi_{\rm e}(\mathbf{x}) =\psi(\mathbf{x})\quad\text{on}\quad\Gamma_{0}.\]
Now, we move towards the generalized NPB equation. Let \(\psi_{0}(\mathbf{x})\) denote the potential generated by \(\rho^{\rm sol}(\mathbf{x})/\beta\varepsilon_{\rm abs}\) in vacuum, i.e.,
\[\psi_{0}(\mathbf{x})=\sum_{i=1}^{M}\frac{q_{i}}{4\pi\varepsilon_{\rm abs}\beta| \mathbf{x}-\mathbf{x}_{i}|},\]
satisfying
\[-\Delta\psi_{0}=\frac{1}{\beta\varepsilon_{\rm abs}}\rho^{\rm sol}(\mathbf{x}) \quad\text{in}\ \ \mathbb{R}^{3}. \tag{13}\]
Let us denote the reaction potential by \(\psi_{\rm r}:=\psi-\psi_{0}\), i.e., the difference between the electrostatic potential with and without the presence of the solvent. Then Eq. (10) can be equivalently written as
\[-\nabla\cdot\left[\varepsilon(\mathbf{x})\nabla\psi_{\rm r}(\mathbf{x})\right] +\lambda(\mathbf{x})\kappa^{2}\varepsilon_{s}\sinh\left(\left(\psi_{ \rm r}+\psi_{0}\right)(\mathbf{x})\right)\] \[=\frac{1}{\beta\varepsilon_{\rm abs}}\rho^{\rm sol}(\mathbf{x})+ \nabla\cdot\left[\varepsilon(\mathbf{x})\nabla\psi_{0}(\mathbf{x})\right]\qquad\text {in}\ \ \Omega_{0}. \tag{14}\]
We substitute Eq. (13) into Eq. (14) and denote \(\sinh(\Phi)\) by \(\mathcal{F}(\Phi)\Phi\), where
\[\mathcal{F}(\Phi)=\frac{\sinh(\Phi)}{\Phi},\]
for any nonvanishing function \(\Phi\). We further reduce the equation to
\[-\nabla\cdot\left[\varepsilon(\mathbf{x})\nabla\psi_{\rm r}(\mathbf{x})\right] +\lambda(\mathbf{x})\kappa^{2}\varepsilon_{s}\mathcal{F}\left(\left( \psi_{\rm r}+\psi_{0}\right)(\mathbf{x})\right)\left(\psi_{\rm r}+\psi_{0}\right) (\mathbf{x})\] \[=\nabla\cdot\left[\left(\varepsilon(\mathbf{x})-1\right)\nabla\psi_ {0}(\mathbf{x})\right]\qquad\qquad\qquad\qquad\text{in}\ \ \Omega_{0}.\]
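In a numerical implementation, \(\mathcal{F}(\Phi)=\sinh(\Phi)/\Phi\) has a removable singularity at \(\Phi=0\); a safe elementwise evaluation (an illustrative sketch, not part of any reference code) is:

```python
import numpy as np

def F(phi):
    """sinh(phi)/phi with the removable singularity at phi = 0 handled explicitly:
    for small |phi| the truncated Taylor series 1 + phi^2/6 + phi^4/120 is used."""
    phi = np.asarray(phi, dtype=float)
    small = np.abs(phi) < 1e-4
    out = np.empty_like(phi)
    out[small] = 1.0 + phi[small]**2 / 6.0 + phi[small]**4 / 120.0
    out[~small] = np.sinh(phi[~small]) / phi[~small]
    return out

print(F(np.array([0.0, 1e-6, 0.5, 3.0])))  # [1.0, ~1.0, ~1.0422, ~3.3393]
```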
From the jump condition \(\llbracket\psi\rrbracket=0\) we have \(\psi|_{\Omega_{0}}-\psi|_{\Omega_{\infty}}=0\), which implies
\[\psi_{\rm r}+\psi_{0}=\psi_{\rm e}\quad\text{on}\ \ \Gamma_{0},\]
and similarly
\[\sigma_{\rm e}=\partial_{\mathbf{n}}\psi_{\rm e}-\partial_{\mathbf{n}}\left(\psi_{\rm r }+\psi_{0}\right)\quad\text{on}\ \ \Gamma_{0}.\]
From Eq. (12) we also get the global coupling condition
\[\psi_{\rm e}(\mathbf{x})=\mathcal{S}_{\Gamma_{0}}\sigma_{\rm e}=\mathcal{S}_{ \Gamma_{0}}\left[\partial_{\mathbf{n}}\psi_{\rm e}-\partial_{\mathbf{n}}\left(\psi_{ \rm r}+\psi_{0}\right)\right]\quad\text{on}\ \ \Gamma_{0}.\]
In summary, our original problem reduces to the following coupled system in \(\Omega_{0}\):

\[-\nabla\cdot\left[\varepsilon(\mathbf{x})\nabla\psi_{\rm r}(\mathbf{x})\right]+\lambda(\mathbf{x})\kappa^{2}\varepsilon_{s}\mathcal{F}\left(\left(\psi_{\rm r}+\psi_{0}\right)(\mathbf{x})\right)\left(\psi_{\rm r}+\psi_{0}\right)(\mathbf{x})=\nabla\cdot\left[\left(\varepsilon(\mathbf{x})-1\right)\nabla\psi_{0}(\mathbf{x})\right]\ \ \text{in}\ \ \Omega_{0},\qquad\psi_{\rm r}(\mathbf{x})=\psi_{\rm e}(\mathbf{x})-\psi_{0}(\mathbf{x})\ \ \text{on}\ \ \Gamma_{0}, \tag{15}\]

\[-\Delta\psi_{\rm e}(\mathbf{x})+\kappa^{2}\psi_{\rm e}(\mathbf{x})=0\ \ \text{in}\ \ \Omega_{0},\qquad\psi_{\rm e}(\mathbf{x})=\mathcal{S}_{\Gamma_{0}}\left[\partial_{\mathbf{n}}\psi_{\rm e}-\partial_{\mathbf{n}}\left(\psi_{\rm r}+\psi_{0}\right)\right]\ \ \text{on}\ \ \Gamma_{0}. \tag{16}\]
**Remark 4**: _The PDEs defined in Eq. (15)-(16) encompass different solvation models. In the absence of a Stern layer, with a constant discontinuous dielectric permittivity function and the solute probe given by the vdW surface, the model reduces to the linear Poisson-Boltzmann (LPB) model. If \(\kappa\to 0\) then we recover the polarizable continuum model with SES boundary (PCM-SES). For the vdW surface, we get the classical PCM model. Lastly, for \(\kappa\to\infty\) the model tends to the conductor-like screening model (COSMO), which is reasonable as the solvent then becomes a perfect conductor as the ionic strength tends to \(\infty\) and screens any charge from the solute. Domain decomposition algorithms for all four methods, denoted by ddLPB, ddPCM-SES, ddPCM, and ddCOSMO, can be found in [44, 43, 51, 8], respectively._
### 3.2 Global Strategy
For solving Eq. (15)-(16) we follow the same ideas as prescribed in [43, 44]. Let \(g^{(0)}\) be an initial guess for the Dirichlet condition \(\psi_{\mathrm{e}}|_{\Gamma_{0}}\) and set \(\Bbbk=1\):
* Solve the following nonlinear Dirichlet boundary problem for \(\psi_{\mathrm{r}}^{(\Bbbk)}\): \[-\nabla\cdot\left[\varepsilon(\mathbf{x})\nabla\psi_{\mathrm{r}}^{( \Bbbk)}(\mathbf{x})\right] +\lambda(\mathbf{x})\kappa^{2}\varepsilon_{s}\mathcal{F}\left(( \psi_{\mathrm{r}}^{(\Bbbk)}+\psi_{0})\right)\left(\psi_{\mathrm{r}}^{( \Bbbk)}+\psi_{0}\right)(\mathbf{x})\] \[=\nabla\cdot\left[\left(\varepsilon(\mathbf{x})-1\right)\nabla\psi_{ 0}(\mathbf{x})\right] \text{in}\ \ \Omega_{0},\] \[\psi_{\mathrm{r}}^{(\Bbbk)}(\mathbf{x}) =g^{(\Bbbk-1)}-\psi_{0}(\mathbf{x}) \text{on}\ \ \Gamma_{0},\] and derive the Neumann trace \(\partial_{\mathbf{n}}\psi_{\mathrm{r}}^{(\Bbbk)}\) on \(\Gamma_{0}\).
* Solve the Dirichlet boundary problem for \(\psi_{\mathrm{e}}^{(\Bbbk)}\): \[-\Delta\psi_{\mathrm{e}}^{(\Bbbk)}(\mathbf{x})+\kappa^{2}\psi_{ \mathrm{e}}^{(\Bbbk)}(\mathbf{x}) =0 \text{in}\ \ \Omega_{0},\] \[\psi_{\mathrm{e}}^{(\Bbbk)}(\mathbf{x}) =g^{(\Bbbk-1)} \text{on}\ \ \Gamma_{0},\] and derive the Neumann trace \(\partial_{\mathbf{n}}\psi_{\mathrm{e}}^{(\Bbbk)}\) on \(\Gamma_{0}\).
* Build the charge density \(\sigma_{\mathrm{e}}^{(\Bbbk)}=\partial_{\mathbf{n}}\psi_{\mathrm{e}}^{(\Bbbk)} -\partial_{\mathbf{n}}\left(\psi_{0}+\psi_{\mathrm{r}}^{(\Bbbk)}\right)\) and compute a new Dirichlet condition \(g^{(\Bbbk)}=\mathcal{S}_{\Gamma_{0}}\sigma_{\mathrm{e}}^{(\Bbbk)}\).
* Compute the contribution \(E_{s}^{\Bbbk}\) to the solvation energy based on \(\psi_{\mathrm{r}}^{(\Bbbk)}\) at the \(\Bbbk^{\mathrm{th}}\) iteration step, set \(\Bbbk\to\Bbbk+1\), go back to the first step, and repeat until \(|E_{s}^{\Bbbk}-E_{s}^{\Bbbk-1}|/|E_{s}^{\Bbbk}|<\mathtt{tol}\) for \(\mathtt{tol}\ll 1\).
In the above algorithm \(E_{s}^{\Bbbk}\) denotes the electrostatic solvation energy at iteration \(\Bbbk\) which will be defined in Sec. 5.
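A schematic Python sketch of this outer loop is given below. The callables passed in (solve_gsp, solve_hsp, neumann_trace, single_layer, energy) are hypothetical placeholders for the solvers and operators described in the four steps above; they do not refer to any released code.

```python
def outer_iteration(g0, psi0, solve_gsp, solve_hsp, neumann_trace,
                    single_layer, energy, tol=1e-6, max_outer=50):
    """Schematic outer loop of the global strategy (all solver callables are placeholders)."""
    g, E_prev, E = g0, None, None
    psi_r = psi_e = None
    for _ in range(max_outer):
        psi_r = solve_gsp(g - psi0)            # first step: nonlinear problem in Omega_0
        psi_e = solve_hsp(g)                   # second step: HSP problem in Omega_0
        sigma_e = neumann_trace(psi_e) - neumann_trace(psi_r + psi0)
        g = single_layer(sigma_e)              # third step: new Dirichlet datum on Gamma_0
        E = energy(psi_r)                      # fourth step: solvation-energy contribution
        if E_prev is not None and abs(E - E_prev) <= tol * abs(E):
            break
        E_prev = E
    return psi_r, psi_e, E
```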
**Remark 5**: _For choosing a suitable guess to \(g^{(0)}\) (defined on \(\Gamma_{0}\)) we consider the (unrealistic) situation when the whole space \(\mathbb{R}^{3}\) is covered by the solvent medium. Then the electrostatic potential \(\psi\) would be given by_
\[\psi(\mathbf{x})=\sum_{i=1}^{M}\frac{q_{i}}{\beta\varepsilon_{s}}\frac{\exp\left( -\kappa|\mathbf{x}-\mathbf{x}_{i}|\right)}{4\pi\varepsilon_{\mathrm{abs}}|\mathbf{x}-\bm {x}_{i}|}\quad\forall\ \ \mathbf{x}\in\mathbb{R}^{3},\]
_see [25, Sec 1.3.2]. Hence \(g^{(0)}\) is chosen as this potential restricted to \(\Gamma_{0}\)._
**Remark 6**: _We note that the global strategy is an iterative process. The final convergent solution satisfies, after discretisation, a global nonlinear system that can be solved. Unlike in the LPB approach, the solution of the nonlinear problem in the first step needs to be treated carefully; this is discussed in Remark 8._
### 3.3 Domain Decomposition Strategy
In this work we will consider the Schwarz domain decomposition method [45], as it is aimed at solving PDEs defined on a complex domain that can be decomposed into a union of overlapping, simple sub-domains. The main idea is to solve in each sub-domain the same equation, but with boundary conditions that depend on the global boundary conditions and on the solutions in the neighboring sub-domains.
We recall that we have a natural decomposition of \(\Omega_{0}\) as follows
\[\Omega_{0}=\bigcup_{j=1}^{M}\Omega_{j},\quad\Omega_{j}=B_{R_{j}}(\mathbf{x}_{j}),\]
where \(\mathbf{x}_{j}\) is the center of ball \(\Omega_{j}\) and \(R_{j}=r_{j}+r_{p}+r_{0}+a\). We replace the global equation (15)-(16) by the following coupled equations, each restricted to \(\Omega_{j}\),
\[-\nabla\cdot\left[\varepsilon(\mathbf{x})\nabla\psi_{r}|_{\Omega_{j} }(\mathbf{x})\right] +\lambda(\mathbf{x})\kappa^{2}\varepsilon_{s}\mathcal{F}\left((\psi_{ r}|_{\Omega_{j}}+\psi_{0})\right)\left(\psi_{r}|_{\Omega_{j}}+\psi_{0}\right)( \mathbf{x})\] \[=\nabla\cdot\left[(\varepsilon(\mathbf{x})-1)\,\nabla\psi_{0}(\mathbf{x})\right] \text{in}\ \ \Omega_{j},\] \[\psi_{r}|_{\Omega_{j}}(\mathbf{x}) =h_{r,j} \text{on}\ \ \Gamma_{j}, \tag{20}\]
with
\[h_{r,j}=\left\{\begin{array}{ll}\psi_{r}&\text{on}\ \ \ \Gamma_{j}^{i}\\ g-\psi_{0}&\text{on}\ \ \Gamma_{j}^{e}.\end{array}\right. \tag{21}\]
Here, \(\Gamma_{j}^{e}\) is the external part of \(\Gamma_{j}\) not contained in any other \(\Omega_{i}\) (\(i\neq j\)), i.e., \(\Gamma_{j}^{e}=\Gamma_{0}\cap\Gamma_{j}\), and \(\Gamma_{j}^{i}\) is the internal part of \(\Gamma_{j}\), i.e., \(\Gamma_{j}^{i}=\Omega_{0}\cap\Gamma_{j}\) (see Fig. 4).
Similarly the equations for the extended potential are
\[-\Delta\psi_{\mathrm{e}}|_{\Omega_{j}}(\mathbf{x})+\kappa^{2}\psi_{ \mathrm{e}}|_{\Omega_{j}}(\mathbf{x}) =0 \text{in}\ \ \Omega_{j},\] \[\psi_{\mathrm{e}}|_{\Omega_{j}}(\mathbf{x}) =h_{\mathrm{e},j} \text{on}\ \ \Gamma_{j}, \tag{22}\]
with
\[h_{\mathrm{e},j}=\left\{\begin{array}{ll}\psi_{\mathrm{e}}&\text{on}\ \ \ \Gamma_{j}^{i}\\ g&\text{on}\ \ \ \Gamma_{j}^{e}.\end{array}\right. \tag{23}\]
**Remark 7**: _The Dirichlet boundary conditions in Eq. (20)-(22) are implicit since \(\psi_{r}\) (respectively \(\psi_{\mathrm{e}}\)) is not known on \(\Gamma_{j}^{i}\). Hence, one needs to use an iterative scheme to solve Eq. (20)-(21) (respectively Eq. (22)-(23)), such as the parallel Schwarz algorithm or the alternating Schwarz algorithm, as presented for ddCOSMO in [8]. A visual overview of the domain decomposition algorithm is presented in Fig. 5._
**Remark 8**: _We note that in Eq. (20) we have a nonlinearity because of the term_
\[\lambda(\mathbf{x})\kappa^{2}\varepsilon_{s}\mathcal{F}\left((\psi_{\mathrm{r}}|_{ \Omega_{j}}+\psi_{0})\right)\left(\psi_{\mathrm{r}}|_{\Omega_{j}}+\psi_{0} \right)(\mathbf{x}). \tag{24}\]
_A standard way of solving such nonlinear equations after discretisation is to use a fixed point technique and to replace Eq. (24) by_
\[\lambda(\mathbf{x})\kappa^{2}\varepsilon_{s}\mathcal{F}\left(\left(\psi_{\mathrm{r}} |_{\Omega_{j}}^{(\nu-1)}+\psi_{0}\right)\right)\left(\psi_{\mathrm{r}}|_{\Omega _{j}}^{(\nu)}+\psi_{0}\right)(\mathbf{x}).\]
_where \(\psi_{\mathrm{r}}^{(\nu-1)}(\mathbf{x})\) denotes the solution at the \((\nu-1)^{\mathrm{th}}\) iterative step. As \(\psi_{\mathrm{r}}^{(\nu-1)}(\mathbf{x})\) is known throughout the derivation we replace Eq. (20) by the linear counterpart_
\[-\nabla\cdot\left[\varepsilon(\mathbf{x})\nabla\psi_{\mathrm{r}}|_{ \Omega_{j}}(\mathbf{x})\right] +\lambda(\mathbf{x})\kappa^{2}\varepsilon_{s}\mathcal{F}\left(( \overline{\psi}_{\mathrm{r}}|_{\Omega_{j}}+\psi_{0})\right)\left(\psi_{ \mathrm{r}}|_{\Omega_{j}}+\psi_{0}\right)(\mathbf{x})\] \[=\nabla\cdot\left[\left(\varepsilon(\mathbf{x})-1\right)\nabla\psi_{ 0}(\mathbf{x})\right] \text{in}\ \ \Omega_{j}\] \[\psi_{\mathrm{r}}|_{\Omega_{j}}(\mathbf{x}) =h_{\mathrm{r},j} \text{on}\ \ \Gamma_{j}, \tag{25}\]
_where we have dropped the \((\nu)^{\mathrm{th}}\) iterative step and denote \((\nu-1)^{\mathrm{th}}\) solution of \(\psi_{\mathrm{r}}(\mathbf{x})\) by \(\overline{\psi_{\mathrm{r}}}(\mathbf{x})\). We refer to Eq. (25) as generalized screened Poisson (GSP) equation._
## 4 Single Domain Solvers
In this section we will develop two single domain solvers in the unit ball for solving Eq. (22) and Eq. (25), respectively. Without loss of generality, we will consider a unit ball centered at the origin for developing the solvers.
Figure 5: Schematic diagram of the domain decomposition algorithm for the Poisson–Boltzmann equation.
### 4.1 HSP Solver
In [44] an HSP solver was developed for the ddLPB method. One can use the same solver for ddPB as well. For completeness we outline the main ideas in this subsection.
The HSP equation in the unit ball is given by
\[-\Delta u_{\rm e}+\kappa^{2}u_{\rm e} =0\qquad\mbox{ in }\quad B_{1}({\bf 0}),\] \[u_{\rm e} =\phi_{\rm e}\qquad\mbox{ on }\quad\mathbb{S}^{2}, \tag{26}\]
where for \(j=1,\ldots,M\), \(\phi_{\rm e}(\mathbf{x})=h_{\rm e,j}(\mathbf{x}_{j}+R_{j} \mathbf{x})\) for the HSP equation in the sub-domain \(\Omega_{j}\).
The solution of Eq. (26) in \(H^{1}(B_{1}({\bf 0}))\) can be written as
\[u_{\rm e}(r,\theta,\varphi)=\sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell}[\phi_ {\rm e}]_{\ell}^{m}\,\frac{i_{\ell}(r)}{i_{\ell}(1)}Y_{\ell}^{m}(\theta, \varphi)\ \ 0\leq r\leq 1,\ \ 0\leq\theta\leq\pi,\quad 0\leq\varphi\leq 2\pi,\]
where \(i_{\ell}\) is the modified spherical Bessel function of the first kind, see [2, Chapter 14], \(Y_{\ell}^{m}\) denotes the (real orthonormal) spherical harmonics of degree \(\ell\) and order \(m\) defined on \(\mathbb{S}^{2}\), and
\[[\phi_{\rm e}]_{\ell}^{m}=\int_{\mathbb{S}^{2}}\phi_{\rm e}(\mathbf{ s})Y_{\ell}^{m}(\mathbf{s})d\mathbf{s},\]
is the real coefficient of \(u_{\rm e}\) corresponding to the mode \(Y_{\ell}^{m}\). Now \(u_{\rm e}\) can be numerically approximated by \(\overline{u}_{\rm e}\) in the discretisation space spanned by truncated basis of spherical harmonics \(\{Y_{\ell}^{m}\}_{0\leq\ell\leq\ell_{\rm max},-\ell\leq m\leq\ell}\), defined by
\[\overline{u}_{\rm e}(r,\theta,\varphi)=\sum_{\ell,m}\left[\tilde{\phi}_{\rm e }\right]_{\ell}^{m}\frac{i_{\ell}(r)}{i_{\ell}(1)}Y_{\ell}^{m}(\theta,\varphi )\quad 0\leq r\leq 1,\quad 0\leq\theta\leq\pi,\quad 0\leq\varphi\leq 2\pi, \tag{27}\]
where
\[\sum_{\ell,m}=\sum_{\ell=0}^{\ell_{\rm max}}\sum_{m=-\ell}^{\ell},\]
\(\ell_{\rm max}\) denotes the maximum degree of spherical harmonics, and
\[\left[\tilde{\phi}_{\rm e}\right]_{\ell}^{m}=\sum_{n=1}^{N_{\rm leb}}\omega_{n}^{\rm leb}\phi_{\rm e}(\mathbf{s}_{n})Y_{\ell}^{m}(\mathbf{s}_{n}).\]
To approximate the integration we use the Lebedev quadrature formula [19] where, \(\mathbf{s}_{n}\in\mathbb{S}^{2}\) are the Lebedev quadrature points [19], \(\omega_{n}^{\rm leb}\) are the corresponding weights, and \(N_{\rm leb}\) is the number of Lebedev quadrature points.
**Remark 9**: _The modified spherical Bessel function of the first kind is given by_
\[i_{\ell}(r)=\sqrt{\frac{\pi}{2\kappa r}}I_{\ell+\frac{1}{2}}\left(\kappa r \right),\]
_where \(I_{\alpha}(r)\) are the modified Bessel function of the first kind [1]._
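The radial factor \(i_{\ell}(r)/i_{\ell}(1)\) can be evaluated with SciPy: with the convention of Remark 9, \(i_{\ell}(r)\) coincides with scipy.special.spherical_in evaluated at \(\kappa r\). A minimal sketch:

```python
import numpy as np
from scipy.special import spherical_in

def radial_factor(ell, r, kappa):
    """i_ell(r)/i_ell(1), using that sqrt(pi/(2*kappa*r)) * I_{ell+1/2}(kappa*r)
    equals the modified spherical Bessel function spherical_in(ell, kappa*r)."""
    return spherical_in(ell, kappa * r) / spherical_in(ell, kappa)

r = np.linspace(1e-3, 1.0, 5)
print(radial_factor(0, r, kappa=0.5))  # increases monotonically towards 1 at r = 1
```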
### 4.2 GSP Solver
For the GSP equation (25) consider the following problem in the unit ball
\[-\nabla\cdot[\tilde{\varepsilon}(\mathbf{x})\nabla u( \mathbf{x})]+\tilde{\lambda}(\mathbf{x})\widetilde{\cal F} \left(\overline{u}(\mathbf{x})\right)u(\mathbf{x}) =f(\mathbf{x})\qquad\mbox{ in }\quad B_{1}({\bf 0}),\] \[u(\mathbf{x}) =\phi_{\rm r}(\mathbf{x})\qquad\mbox{ on }\ \partial B_{1}({\bf 0}), \tag{28}\]
where for \(j=1,\ldots,M\), \(\tilde{\varepsilon}(\mathbf{x})=\varepsilon(\mathbf{x}_{j}+R_{j}\mathbf{x})\), \(\tilde{\lambda}(\mathbf{x})=\lambda(\mathbf{x}_{j}+R_{j}\mathbf{x})\), \(\widetilde{\mathcal{F}}\left(\overline{u}(\mathbf{x})\right)=\kappa^{2}\varepsilon_{s}\mathcal{F}\left(\left(\overline{\psi_{\mathrm{r}}}+\psi_{0}\right)\left(\mathbf{x}_{j}+R_{j}\mathbf{x}\right)\right)\), \(\phi_{\mathrm{r}}(\mathbf{x})=h_{\mathrm{r},j}(\mathbf{x}_{j}+R_{j}\mathbf{x})\), and
\[f(\mathbf{x})=\nabla\cdot\left[\left(\varepsilon(\mathbf{x}_{j}+R_{j}\mathbf{x})-1\right)\nabla\psi_{0}\left(\mathbf{x}_{j}+R_{j}\mathbf{x}\right)\right]-\tilde{\lambda}(\mathbf{x})\,\widetilde{\mathcal{F}}\left(\overline{u}(\mathbf{x})\right)\psi_{0}(\mathbf{x}_{j}+R_{j}\mathbf{x})\]
for the GSP equation in the sub-domain \(\Omega_{j}\).
Let \(\mathbf{n}_{\delta}\) be the unit normal vector pointing outward on the ball \(\partial B_{\delta}(\mathbf{0})\) with respect to the ball \(B_{\delta}(\mathbf{0})\). As a consequence we compute the normal derivative \(\partial_{\mathbf{n}_{\delta}}w=\nabla w\cdot\mathbf{n}_{\delta}\) on \(\partial B_{\delta}(\mathbf{0})\) as
\[\left(\mathcal{T}w\right)|_{B_{\delta}(\mathbf{0})}(\delta,\theta,\varphi) =\partial_{\mathbf{n}_{\delta}}w(\delta,\theta,\varphi)\] \[=\sum_{\ell,m}\gamma_{\ell m}\left(\frac{\ell}{\delta}\right)Y_{ \ell}^{m}(\theta,\varphi),\ \ \ \ \ 0\leq\theta\leq\pi;\ \ \ 0\leq\varphi\leq 2\pi. \tag{34}\]
**Remark 10**: _The bilinear form on the left side of Eq. (31),_
\[a(\vartheta,\varphi)=\int_{\mathcal{D}}\varepsilon(\mathbf{x})\nabla\vartheta( \mathbf{x})\nabla\varphi(\mathbf{x})+\int_{\mathcal{D}}\tilde{\lambda}(\mathbf{x}) \widetilde{\mathcal{F}}\left(\overline{\vartheta}\right)\vartheta(\mathbf{x}) \varphi(\mathbf{x})+\int_{\partial B_{\delta}(\mathbf{0})}\left(\mathcal{T}\vartheta( \mathbf{x})\right)\varphi(\mathbf{x}), \tag{35}\]
_is positive definite and symmetric by the definition of \(\widetilde{\mathcal{F}}\) and the Dirichlet-to-Neumann operator \(\mathcal{T}\) for given \(\overline{\vartheta}\)._
#### 4.2.1 Galerkin Solution in Unit Ball
To construct functions belonging to \(H^{1}_{0,\delta}(\mathcal{D})\), we introduce the radial functions
\[\varrho_{i}(r)=(1-r)L_{i}^{\prime}\left(\frac{2(r-\delta)}{1-\delta}-1\right),\]
i.e., \(\varrho_{i}(1)=0\). Here \(L_{i}\) denotes the Legendre polynomial of degree \(i\). We then discretise both the radial and the spherical part of the unknown \(w\) by linear combinations of the basis elements \(\{\varrho_{i}Y_{\ell}^{m}\}\) with \(1\leq i\leq N\), \(0\leq\ell\leq\ell_{\text{max}}\), and \(-\ell\leq m\leq\ell\), where \(N\) denotes the maximum degree of the Legendre polynomials and \(\ell_{\text{max}}\) denotes the maximum degree of the spherical harmonics. We denote the space spanned by these elements by \(\mathcal{B}_{N,\ell_{\text{max}}}(\mathcal{D})\), which is defined as
\[\mathcal{B}_{N,\ell_{\text{max}}}(\mathcal{D}) =\text{span}\left\{\varrho_{i}(r)Y_{\ell}^{m}(\theta,\varphi):1\leq i \leq N,\ \ \ 0\leq\ell\leq\ell_{\text{max}},\ \ \ -\ell\leq m\leq\ell\right\}\] \[\subset H^{1}_{0,\delta}(\mathcal{D}).\]
Then, the Galerkin discretisation of the variational formulation (31) reads: Find \(w_{\mathcal{B}}\in\mathcal{B}_{N,\ell_{\text{max}}}(\mathcal{D})\), such that
\[a\left(w_{\mathcal{B}},\varrho_{Y}\right)=\int_{\mathcal{D}}\tilde{f}\varrho_{ Y}\ \ \ \forall\ \ \varrho_{Y}\in\mathcal{B}_{N,\ell_{\text{max}}}(\mathcal{D}), \tag{36}\]
where \(a(\cdot,\cdot)\) is given by Eq. (35). Since \(w_{\mathcal{B}}\in\mathcal{B}_{N,\ell_{\text{max}}}(\mathcal{D})\), we can write \(w_{\mathcal{B}}\) in the form
\[w_{\mathcal{B}}(r,\theta,\varphi)=\sum_{i=1}^{N}\sum_{\ell,m}\left[\phi_{\mathrm{r}}\right]_{i\ell}^{m}\varrho_{i}(r)Y_{\ell}^{m}(\theta,\varphi)\ \ \ \forall\ \ \delta\leq r\leq 1;\ \ \ 0\leq\theta\leq\pi;\ \ \ 0\leq\varphi\leq 2\pi, \tag{37}\]
and consequently
\[\mathcal{T}w_{\mathcal{B}}|_{B_{\delta}(\mathbf{0})}(\delta,\theta,\varphi)=\sum_{i=1}^{N}\sum_{\ell,m}\left[\phi_{\mathrm{r}}\right]_{i\ell}^{m}\left(\frac{\ell}{\delta}\right)\varrho_{i}(\delta)Y_{\ell}^{m}(\theta,\varphi)\ \ \ \forall\ \ 0\leq\theta\leq\pi;\ \ \ 0\leq\varphi\leq 2\pi, \tag{38}\]
where \(\left[\phi_{\mathrm{r}}\right]_{i\ell}^{m}\) is the real coefficient of \(w_{\mathcal{B}}\) corresponding to the mode \(\varrho_{i}Y_{\ell}^{m}\).
Substituting Eq. (37) and Eq. (38) in Eq. (36) and taking the test function \(\varrho_{Y}=\varrho_{j}(r)Y_{\ell^{\prime}}^{m^{\prime}}(\theta,\varphi)\), we then obtain a system of equations for all \(1\leq j\leq N,0\leq\ell^{\prime}\leq\ell_{\text{max}}\), and \(-\ell^{\prime}\leq m^{\prime}\leq\ell^{\prime}\)
\[\sum_{i=1}^{N}\sum_{\ell,m}\left[\phi_{\mathrm{r}}\right]_{i\ell}^{m}\left(\int_{\mathcal{D}}\tilde{\varepsilon}(\mathbf{x})\nabla\left(\varrho_{i}Y_{\ell}^{m}\right)\cdot\nabla\left(\varrho_{j}Y_{\ell^{\prime}}^{m^{\prime}}\right)+\int_{\mathcal{D}}\tilde{\lambda}(\mathbf{x})\widetilde{\mathcal{F}}\left(\overline{w}_{\mathcal{B}}(\mathbf{x})\right)\varrho_{i}Y_{\ell}^{m}\varrho_{j}Y_{\ell^{\prime}}^{m^{\prime}}+\frac{\ell}{\delta}\int_{\partial B_{\delta}(\mathbf{0})}\varrho_{i}Y_{\ell}^{m}\varrho_{j}Y_{\ell^{\prime}}^{m^{\prime}}\right)=\int_{\mathcal{D}}\tilde{f}\varrho_{j}Y_{\ell^{\prime}}^{m^{\prime}}. \tag{39}\]
In order to write a system of equations, we define the index
\[k=N\left(\ell^{2}+\ell+m\right)+i\in\{1,2,\ldots,N\left(\ell_{\max}+1\right)^{2}\},\]
which corresponds to the triple \((i,\ell,m)\). Let \(k\) correspond to \((i,\ell,m)\) and \(k^{\prime}\) to \((j,\ell^{\prime},m^{\prime})\). Then Eq. (39) can be recast as
\[\overline{\mathbf{A}}\ \overline{X}_{\mathrm{r}}=\overline{\mathbf{F}}, \tag{40}\]
where \(\overline{\mathbf{A}}\) is a matrix of dimension \(N(\ell_{\max}+1)^{2}\times N(\ell_{\max}+1)^{2}\) with elements \(\left(\overline{\mathbf{A}}\right)_{k,k^{\prime}}\) for all \(1\leq k,k^{\prime}\leq N(\ell_{\max}+1)^{2}\), defined by
\[\left(\overline{\mathbf{A}}\right)_{k,k^{\prime}}=\int_{\mathcal{D}}\tilde{\varepsilon}(\mathbf{x})\nabla\left(\varrho_{i}Y_{\ell}^{m}\right)\cdot\nabla\left(\varrho_{j}Y_{\ell^{\prime}}^{m^{\prime}}\right)+\int_{\mathcal{D}}\tilde{\lambda}(\mathbf{x})\widetilde{\mathcal{F}}\left(\overline{w}_{\mathcal{B}}(\mathbf{x})\right)\varrho_{i}Y_{\ell}^{m}\varrho_{j}Y_{\ell^{\prime}}^{m^{\prime}}+\frac{\ell}{\delta}\int_{\partial B_{\delta}(\mathbf{0})}\varrho_{i}Y_{\ell}^{m}\varrho_{j}Y_{\ell^{\prime}}^{m^{\prime}}, \tag{41}\]
\(\overline{X}_{\mathrm{r}}\) is the column vector of \(N\left(\ell_{\max}+1\right)^{2}\) unknowns \(\left[\phi_{\mathrm{r}}\right]_{i\ell}^{m}\), i.e.,
\[\left(\overline{X}_{\mathrm{r}}\right)_{k}=\left[\phi_{\mathrm{r}}\right]_{i \ell}^{m}\quad\forall\ \ k\in\{1,\ldots,N(\ell_{\max}+1)^{2}\}, \tag{42}\]
and \(\overline{\mathbf{F}}\) is a column vector with \(N(\ell_{\max}+1)^{2}\) entries defined by
\[\left(\overline{\mathbf{F}}\right)_{k^{\prime}}=\int_{\mathcal{D}}\tilde{f}\varrho_{j}Y_{\ell^{\prime}}^{m^{\prime}}\quad\forall\ \ k^{\prime}\in\{1,\ldots,N(\ell_{\max}+1)^{2}\}. \tag{43}\]
To summarize, in order to solve Eq. (31) we need to solve Eq. (40) to obtain \(\left[\phi_{\mathrm{r}}\right]_{i\ell}^{m}\) and then assemble the approximate solution \(w_{\mathcal{B}}(r,\theta,\varphi)\in\mathcal{B}_{N,\ell_{\max}}(\mathcal{D})\) according to Eq. (37). Since \(w_{\mathcal{B}}\) is only defined on \(\mathcal{D}\), it is extended harmonically into the ball \(B_{\delta}(\mathbf{0})\) following Eq. (33), and hence we obtain an approximate solution to Eq. (30) defined on all of \(B_{1}(\mathbf{0})\).
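The flattening of the triple \((i,\ell,m)\) into the single index \(k\) can be checked with a few lines of Python (flat_index is an illustrative helper name, assuming the ordering \(k=N(\ell^{2}+\ell+m)+i\) used above):

```python
def flat_index(i, ell, m, N):
    """Map (i, ell, m) with 1 <= i <= N, 0 <= ell, -ell <= m <= ell to a flat index."""
    return N * (ell**2 + ell + m) + i

# bijectivity check for N = 3 and ell_max = 4
N, ell_max = 3, 4
ks = [flat_index(i, ell, m, N)
      for ell in range(ell_max + 1)
      for m in range(-ell, ell + 1)
      for i in range(1, N + 1)]
assert sorted(ks) == list(range(1, N * (ell_max + 1)**2 + 1))
```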
**Remark 11**: _The final thing remaining in the discretisation process is the evaluation of the integrals in \(\left[\mathbf{A}^{i_{0}}\right]\) (see Eq. (41)) for \(i_{0}=1,\ldots,M\). We have integrals over the torus \(\mathcal{D}_{i_{0}}\) and the boundary of \(B_{\delta_{i_{0}}}(\mathbf{x}_{i_{0}})\). Here we follow the ideas from [43]. For simplicity we take the example of unit ball. The integral over \(\partial B_{\delta}(\mathbf{0})\) is given by_
\[\frac{\ell}{\delta}\int_{\partial B_{\delta}(\mathbf{0})}\varrho_{i}Y_{\ell}^{m}\varrho_{j}Y_{\ell^{\prime}}^{m^{\prime}}=\ell\delta\varrho_{i}(\delta)\varrho_{j}(\delta)\int_{\mathbb{S}^{2}}Y_{\ell}^{m}Y_{\ell^{\prime}}^{m^{\prime}}=\ell\delta\varrho_{i}(\delta)\varrho_{j}(\delta)\delta_{\ell\ell^{\prime}}\delta_{mm^{\prime}}.\]
_The integral over \(\mathcal{D}\) can be divided into two parts, radial and spherical. Let \(h\in\mathcal{L}^{1}\left(B_{1}(\mathbf{0})\right)\) then the integral of \(h\) over \(\mathcal{D}\) can be written separately as_
\[\int_{\mathcal{D}}h(\mathbf{x})d\mathbf{x}=\int_{\delta}^{1}r^{2}\int_{\mathbb{S}^{2}} h(r,\mathbf{s})d\mathbf{s}dr,\quad\mathbf{s}\in\mathbb{S}^{2},\]
_and \(\mathbf{x}=r\mathbf{s}\). The spherical part can be computed using the Lebedev quadrature [19]. For the radial part we use the Legendre-Gauss-Lobatto (LGL) quadrature rule [41], defined by quadrature points \(x_{m}\in[-1,1]\) and quadrature weights \(\omega_{m}^{\mathrm{lgl}}\), \(1\leq m\leq N_{\mathrm{lgl}}\), for \(N_{\mathrm{lgl}}\) quadrature points. Using the change of variable_
\[r=\tfrac{1-\delta}{2}\left(x+1\right)+\delta,\quad x\in[-1,1],\]
_we approximate the integral by the following quadrature rule_
\[\int_{\mathcal{D}}h(\mathbf{x})d\mathbf{x}\approx\tfrac{1-\delta}{2}\sum_{m=1}^{N_{\mathrm{lgl}}}\sum_{n=1}^{N_{\mathrm{leb}}}\omega_{m}^{\mathrm{lgl}}\omega_{n}^{\mathrm{leb}}\left(\tfrac{1-\delta}{2}(x_{m}+1)+\delta\right)^{2}h\left(\tfrac{1-\delta}{2}(x_{m}+1)+\delta,\mathbf{s}_{n}\right).\]
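The radial change of variables can be illustrated with a few lines of Python. The sketch below uses NumPy's Gauss-Legendre rule as a stand-in for the LGL rule of [41] (whose nodes are not available in NumPy) and restricts itself to a purely radial integrand, so that the Lebedev sum reduces to a factor \(4\pi\); it only illustrates the mapping \(r=\tfrac{1-\delta}{2}(x+1)+\delta\).

```python
import numpy as np

def shell_integral_radial(h, delta, n_quad=50):
    """Approximate 4*pi * int_delta^1 r^2 h(r) dr, i.e. the integral of a purely
    radial function h over the shell delta <= |x| <= 1, using Gauss-Legendre
    nodes on [-1, 1] mapped to [delta, 1] (a stand-in for the LGL rule)."""
    x, w = np.polynomial.legendre.leggauss(n_quad)
    r = 0.5 * (1.0 - delta) * (x + 1.0) + delta
    return 4.0 * np.pi * 0.5 * (1.0 - delta) * np.sum(w * r**2 * h(r))

delta = 0.3
print(shell_integral_radial(lambda r: np.ones_like(r), delta))  # volume of the shell
print(4.0 / 3.0 * np.pi * (1.0 - delta**3))                     # analytic value, ~4.0758
```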
## 5 Numerical Simulations
In this section, we present some numerical studies for the ddPB-SES method.
We first introduce the electrostatic solvation energy for the PB equation. For the nonlinear PB, the solvation energy becomes more involved compared to the linear PB. We follow the definition of the energy from [52], which is given by
\[E_{s}=\frac{\beta}{2}\int_{\Omega}\rho^{\text{sol}}(\mathbf{x})\psi_{\text{r}}(\mathbf{x })+\frac{\beta^{2}\kappa^{2}\varepsilon_{s}}{8\pi}\int_{\Omega}\lambda(\mathbf{x}) \left(\psi_{\text{r}}(\mathbf{x})\sinh\left(\psi_{\text{r}}(\mathbf{x})\right)-2\cosh \left(\psi_{\text{r}}(\mathbf{x})\right)\right)\]
In the case of the linear Poisson-Boltzmann equation, the last two terms in the energy cancel each other.
By default, we assume the solute cavity to be in vacuum and the solvent to be water. Hence, the relative dielectric permittivity of the solute cavity is one and \(\varepsilon_{s}=78.54\) at room temperature \(298.15\) K. Further, we set the Debye Huckel constant to \(\kappa=0.104\) A\({}^{-1}\), corresponding to an ionic strength of \(0.1\) molar. We use the Hartree unit system and thus \(4\pi\varepsilon_{\rm abs}=1\). We read the input files in A units but internally convert them to Hartree units, and hence distances are reported in atomic units (a.u.).
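The quoted value of \(\kappa\) can be verified directly from \(\kappa^{2}=2e^{2}c/(K_{\rm B}T\varepsilon_{s}\varepsilon_{\rm abs})\) using SciPy's physical constants:

```python
import numpy as np
from scipy.constants import N_A, e, epsilon_0, k as k_B

eps_s = 78.54                    # relative permittivity of water
T = 298.15                       # temperature in K
c = 0.1 * N_A * 1e3              # 0.1 mol/L of a 1:1 electrolyte in ions per m^3
kappa = np.sqrt(2.0 * c * e**2 / (epsilon_0 * eps_s * k_B * T))  # in 1/m
print(kappa * 1e-10)             # ~0.104 per Angstrom
```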
The atomic centres, charges, and the vdW radii are obtained from the PDB files [4] and the PDB2PQR package [14, 15].
Now, we provide more details about solving the system of nonlinear equations. We notice that in the matrix \(\overline{\mathbf{A}}\), the first and third part in Eq. (41) are constant throughout the iterative process; hence they need to be computed only once and can be reused later. For computing the new solution we follow a damping approach in which
\[X^{(\nu)}=X^{(\nu-1)}+\omega\left(X^{\text{aux}}-X^{(\nu-1)}\right),\]
where \(X^{\text{aux}}\) is obtained by solving Eq. (40) and \(\omega\) is a damping parameter. For the simulations we fix \(\omega=0.25\), a value chosen empirically.
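The damping scheme itself is generic; the toy example below applies it to a scalar fixed-point problem, with the solve of Eq. (40) playing the role of \(g\) in the actual method:

```python
import math

def damped_fixed_point(g, x0, omega=0.25, tol=1e-8, max_iter=500):
    """Damped iteration x <- x + omega*(g(x) - x), stopped at a small relative increment."""
    x = x0
    for _ in range(max_iter):
        x_new = x + omega * (g(x) - x)
        if abs(x_new - x) <= tol * max(abs(x_new), 1e-30):
            return x_new
        x = x_new
    return x

print(damped_fixed_point(math.cos, 1.0))  # ~0.739085, the fixed point of cos(x)
```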
We have three different iteration loops in our method. We refer to the global iteration process as outer iterations (indexed by k), and the convergence is reached when the relative increment satisfies
\[\texttt{inc}_{\texttt{k}}:=\frac{|E_{s}^{\texttt{k}}-E_{s}^{\texttt{k}-1}| }{|E_{s}^{\texttt{k}}|}\leq\texttt{tol},\qquad\text{for}\ \ \texttt{k}\geq 1,\]
and a given tolerance, tol. The second is the dd-iterative loop for the GSP solver (Eq. (20)-(21)); here, the stopping criterion is the relative reduction of the reaction potential, with a relaxed tolerance of \(10\times\texttt{tol}\). The last is the nonlinear loop for solving Eq. (40), where we again use the relative reduction of the solution vector with a stopping tolerance of \(100\times\texttt{tol}\). For the outer iterative loop, we start with the zero solution as the initial iterate, but we use the solution from the previous iterative step for subsequent iterations. We follow the same procedure for the other iterative loops as well. Unless specified otherwise, we set \(\texttt{tol}=10^{-6}\), and we solve the system of equations using the LU decomposition method.
Finally, if not mentioned otherwise we set the probe radius \(r_{p}=1.4\) A. All the simulations were performed on the MATLAB code ddPB-SES.
### 5.1 GSP Solver
We first present results for the GSP solver presented in Sec. 4.2 in one ball. We assume a \(0.1\) charge at the origin with a vdW radius of \(2\) A, i.e., \(r_{1}=2\) A, and \(r_{0}=1\) A. We also assume the absence of a Stern layer and hence set \(a=0\) A. As we have a single atom, the system is rotationally symmetric and
hence we define the dielectric permittivity and the ion-exclusion function as functions of the radial variable only, i.e.,
\[\varepsilon(r) =\left\{\begin{array}{ll}1&r<r_{1},\\ 1+(\varepsilon_{s}-1)\xi\left(\frac{r-r_{1}}{r_{p}}\right)&r_{1}\leq r\leq r_{1} +r_{p}\\ \varepsilon_{s}&r>r_{1}+r_{p}.\end{array}\right.,\] \[\lambda(r) =\left\{\begin{array}{ll}0&r<r_{1},\\ \xi\left(\frac{r-r_{1}}{r_{p}}\right)&r_{1}\leq r\leq r_{1}+r_{p}\\ 1&r>r_{1}+r_{p}.\end{array}\right..\]
Throughout this example, unless mentioned otherwise, we set the discretisation parameters \(N=20\) and \(N_{\rm lgl}=200\).
Fig. 6 shows various potentials with respect to the radial component. The Bessel extension is obtained by extending \(\psi_{\rm e}\) on the outer boundary using the modified spherical Bessel function of the second kind, and the harmonic extension is obtained by extending \(w_{\mathcal{B}}\) (see Eq. (37)) on the inner boundary. We notice that both extensions are continuous and that the jump of the normal derivative vanishes.
Next, we show the variation between the LPB and the PB equation. For this, we define a function, \(\mbox{Var}_{\psi}\), given by
\[\mbox{Var}_{\psi}(r):=|\psi_{\rm PB}(r)-\psi_{\rm LPB}(r)|\qquad\forall r_{1} \leq r\leq R_{1},\]
where \(\psi_{\rm PB}\) and \(\psi_{\rm LPB}\) refer to the reaction potential of the Poisson-Boltzmann and the linear Poisson-Boltzmann equation, respectively. We expect that beyond a certain radius, \(\mbox{Var}_{\psi}\leq 10^{-3}\). We set \(r_{0}=10\) A and notice in Fig. 7 that for \(r\geq 8\) A (\(r=15.11780\) a.u.) we have \(\mbox{Var}_{\psi}\leq 10^{-3}\), i.e., the solution is a close approximation to the linear solution; hence the ddPB-SES solution converges to the ddLPB-SES solution, and the two become closer as \(r_{0}\) increases. This example shows the importance of having a nonlinear region in the PB equation.
Finally, we present results obtained by varying the discretisation parameters. For this we set \(r_{0}=5\) A. We compute a reference ("exact") solution using the discretisation parameters \(N=30\) and \(N_{\rm lgl}=300\). Fig. 8 presents results with varying \(N\) (left) and \(N_{\rm lgl}\) (right), respectively. We notice that the solvation energy improves as we increase the discretisation parameters.
Figure 6: Example 5.1: Electrostatic potential along the radial direction.
### 5.2 Convergence of Global Strategy
In this example, we consider the caffeine molecule to show the convergence of the global strategy introduced in Sec. 3.2. Caffeine, being a relatively large molecule (\(24\) atoms), gives a good impression of the behaviour of the method.
The Schwarz domain decomposition method, which is used to solve Eq. (20) and Eq. (22), is well studied and its convergence can be guaranteed in a continuous setting [46]. To study the convergence of our global strategy we set the discretisation parameters as \(\ell_{\max}=9\), \(N_{\mathrm{leb}}=350\), \(N=15\), and \(N_{\mathrm{lgl}}=50\). With this we compute an "exact" solvation energy \(E_{s}^{\infty}\) using \(15\) outer iterations and then define an error function as
\[\mathtt{Error}_{N_{\mathrm{it}}}:=|E_{s}^{\infty}-E_{s}^{N_{\mathrm{it}}}|,\]
where \(N_{\mathrm{it}}\) are the number of outer iterations. The geometric parameters are fixed as \(r_{0}=5\) A, and \(a=1\) A.
We observe in Fig. 9 that \(E_{s}\) converges with respect to \(N_{\mathrm{it}}\) (left) and that the process stops when the desired tolerance is reached. The error also decreases monotonically with increasing \(N_{\mathrm{it}}\) (middle). We finally present the number of domain decomposition (\(N_{\mathrm{dd}}\)) loops required to solve Eq. (20) in the global iterative process (right). We notice that as the number of outer loops increases, the number of dd loops decreases monotonically. This example also gives a good indication for choosing the maximum number of outer iterations, which is around \(5\) in this case.
### 5.3 Effect of Discretisation Parameters
In Example 5.1 we studied the effect of radial discretisation parameters on \(E_{s}\). In this example we study the effects of both the radial and the spherical discretisation parameters for the formaldehyde molecule.
Figure 8: Example 5.1: Electrostatic solvation energy contribution with respect to \(N\) (left) by setting \(N_{\mathrm{lgl}}=300\), and with respect to \(N_{\mathrm{lgl}}\) (right) by setting \(N=30\).
Figure 7: Example 5.1: Variation of \(\psi\) with respect to \(r\).
The geometric parameters are set as \(r_{p}=1.4\) A, \(r_{0}=2\) A, and \(a=1\) A. For the reference ("exact") solvation energy we use the discretisation parameters \(\ell_{\max}=11\), \(N_{\text{leb}}=1202\), \(N=15\), and \(N_{\text{lgl}}=80\).
We first present the results with respect to the spherical parameters. In Fig. 10, \(\ell_{\max}\) is varied from \(1\) to \(9\) (left) and \(N_{\text{leb}}\) is varied from \(100\) to \(974\) (right). We can observe that the proposed algorithm improves systematically as the discretisation parameters are increased.
Similarly, in Fig. 11 we present the results for the radial discretisation parameters. We have similar
Figure 11: Example 5.3: Electrostatic solvation energy contribution with respect to \(N\) (left) by setting \(N_{\text{lgl}}=770\), and with respect to \(N_{\text{lgl}}\) (right) by setting \(N=9\).
Figure 10: Example 5.3: Electrostatic solvation energy contribution with respect to \(\ell_{\max}\) (left) by setting \(N_{\text{leb}}=770\), and with respect to \(N_{\text{leb}}\) (right) by setting \(\ell_{\max}=9\).
Figure 9: Example 5.2: Electrostatic solvation energy (left), error (middle), and number of dd loops (right) for the caffeine molecule with respect to number of outer iterations.
observations, i.e., the approximations are improved with an increase in the number of parameters.
This example provides good guidance for selecting the discretisation parameters.
### 5.4 Stern Layer Length
Until now we have not discussed the effects of the Stern layer length. In this example, we study its effect. We consider the hydrogen fluoride molecule with \(r_{0}=2\) A. In this example we set the discretisation parameters as \(\ell_{\max}=8\), \(N_{\mathrm{leb}}=1202\), \(N=15\), and \(N_{\mathrm{lgl}}=50\).
We vary \(a\) from \(0\) to \(5\) A. For \(a=0\) A there is no Stern layer and hence the ions can come close to the SES surface. In Fig. 12 we notice that as \(a\) increases the solvation energy decreases. The reason is that the solvent ions are farther away from the SES-surface and the layer \(\mathcal{L}_{\lambda}\) decreases in width, and hence the dominant contribution is from \(\rho^{\mathrm{sol}}(\mathbf{x})\).
### 5.5 Rotational Symmetry
We now study the rotational symmetry of the ddPB-SES method. For this we fix the hydrogen atom at the center and rotate the fluorine atom around the hydrogen atom.
The geometric parameters are set to \(r_{0}=2\) A and \(a=1\) A. The radial discretisation parameters are set to \(N=15\) and \(N_{\mathrm{lgl}}=50\). We present results with respect to two sets of spherical discretisation parameters, \(\ell_{\max}=7\) and \(10\). The number of Lebedev quadrature points is set according to [8] so that we get exact quadrature. For \(\ell_{\max}=7\) we set \(N_{\mathrm{leb}}=86\) and for \(\ell_{\max}=10\), \(N_{\mathrm{leb}}=302\).
In Fig. 13 we notice that the variation is systematically controlled, with a variance of \(6\%\) for \(\ell_{\max}=7\) and \(2\%\) for \(\ell_{\max}=10\). Hence, the variation of the energy under rotational symmetry is systematically controlled
Figure 12: Example 5.4: Electrostatic solvation energy with respect to \(a\).
Figure 13: Example 5.5: Variation of the electrostatic solvation energy of the hydrogen-fluoride molecule with respect to the angle of rotating fluorine atom.
as it decreases with an increase in number of spherical harmonics.
### 5.6 Visualisation of Reaction Potential
In this example, we visualize the reaction potential on the enlarged cavity \(\Omega_{0}\). Fig. 14 shows the reaction potential for the hydrogen-fluoride (left) and the formaldehyde (right) molecule, respectively, with the discretisation parameters \(\ell_{\rm max}=11\), \(N_{\rm leb}=1202\), \(N=15\), and \(N_{\rm lgl}=50\). The geometric parameters are set to \(r_{p}=1.5\) A, \(r_{0}=1\) A, and \(a=0.5\) A. We observe the rotational symmetry for the hydrogen-fluoride molecule and the mirror symmetry for the formaldehyde molecule.
Fig. 15 presents results for the caffeine molecule, with 24 atoms. Here we use the discretisation parameters \(\ell_{\rm max}=9\), \(N_{\rm leb}=350\), \(N=15\), and \(N_{\rm lgl}=30\). The geometric parameters are set as for the previous molecules.
### 5.7 Variation of Debye-Huckel Screening Constant
In this example we study the effect of \(\kappa\) on the electrostatic solvation energy. We consider the hydrogen fluoride molecule with \(r_{0}=0\) A, and \(a=0\) A; and set the discretisation parameters as \(\ell_{\rm max}=7\), \(N_{\rm leb}=86\), \(N=15\), and \(N_{\rm lgl}=30\).
Figure 16: Example 5.7: Electrostatic solvation energy with respect to \(\kappa\).
Figure 14: Example 5.6: Reaction potential for hydrogen-fluoride molecule (left) and formaldehyde molecule (right) on \(\Omega_{0}\).
Figure 15: Example 5.6: Reaction potential for caffeine on \(\Omega_{0}\).
We vary \(\kappa\) from \(10^{-6}\) to \(10^{-4}\). On the continuous level the ddPB-SES model converges to the ddPCM-SES model as \(\kappa\to 0\) (see [43]). We observe the same behaviour numerically, see Fig. 16: as \(\kappa\) decreases, the energy becomes constant and converges to a value. For comparison, we obtain the ddPCM-SES result from [43] using \(r_{0}=0\) Å.
## 6 Summary and Outlook
This paper proposes a new method for solving the Poisson-Boltzmann equation using the domain decomposition paradigm for the solvent-excluded surface.
The original problem defined in \(\mathbb{R}^{3}\) is transformed into two-coupled equations described in the bounded solute cavity based on potential theory arguments. An enlarged cavity was defined for each atom encompassing the Stern layer. Then, the Schwarz domain decomposition method was used to solve these two problems by decomposing them into balls. We develop two single-domain solvers for solving the GSP and HSP equations in a unit ball. The GSP solver was a nonlinear solver that used spherical harmonics for angular direction and Legendre polynomials for radial direction. An SES-based continuous dielectric permittivity function and an ion-exclusion function were proposed and were encompassed in the GSP solver. The novelty of the method is that the nonlinearity is incorporated only in the proximity of the molecule whereas in the solvent region the linear model was used. A series of numerical results have been presented to show the performance of the ddPB-SES method.
In the future, we would like to implement this method in our open-source software ddX [20] to simulate bigger molecules. We would also like to study the nonlinear solver further, using Newton-type methods and incorporating acceleration techniques such as dynamic damping and Anderson acceleration.
|
2309.09589 | Maximum-likelihood fits of piece-wise Pareto distributions with finite
and non-zero core | We discuss multiple classes of piece-wise Pareto-like power law probability
density functions $p(x)$ with two regimes, a non-pathological core with
non-zero, finite values for support $0\leq x\leq x_{\mathrm{min}}$ and a
power-law tail with exponent $-\alpha$ for $x>x_{\mathrm{min}}$. The cores take
the respective shapes (i) $p(x)\propto (x/x_{\mathrm{min}})^\beta$, (ii)
$p(x)\propto\exp(-\beta[x/x_{\mathrm{min}}-1])$, and (iii) $p(x)\propto
[2-(x/x_{\mathrm{min}})^\beta]$, including the special case $\beta=0$ leading
to core $p(x)=\mathrm{const}$. We derive explicit maximum-likelihood estimators
and/or efficient numerical methods to find the best-fit parameter values for
empirical data. Solutions for the special cases $\alpha=\beta$ are presented,
as well. The results are made available as a Python package. | Benjamin F. Maier | 2023-09-18T08:53:40Z | http://arxiv.org/abs/2309.09589v1 | # Maximum-likelihood fits of piece-wise Pareto distributions with finite and non-zero core
###### Abstract
We discuss multiple classes of piece-wise Pareto-like power law probability density functions \(p(x)\) with two regimes, a non-pathological core with non-zero, finite values for support \(0\leq x\leq x_{\min}\) and a power-law tail with exponent \(-\alpha\) for \(x>x_{\min}\). The cores take the respective shapes (i) \(p(x)\propto(x/x_{\min})^{\beta}\), (ii) \(p(x)\propto\exp(-\beta[x/x_{\min}-1])\), and (iii) \(p(x)\propto[2-(x/x_{\min})^{\beta}]\), including the special case \(\beta=0\) leading to core \(p(x)=\) const. We derive explicit maximum-likelihood estimators and/or efficient numerical methods to find the best-fit parameter values for empirical data. Solutions for the special cases \(\alpha=\beta\) are presented, as well. The results are made available as a Python package.
## I Introduction
Non-negative data that is distributed with a "heavy tail" is abundant [1; 2; 3; 4; 5]. There is a whole zoo of theoretical distributions used to describe such data [3; 4; 5], one of them being the Pareto distribution with a power-law tail \(p(x)\propto x^{-\alpha}\) where \(\alpha>1\) [3]. This distribution is only non-zero, however, for support \(x\geq x_{\min}\) because of the function's pathological behavior at \(x=0\). Yet, more often than not, real-world data does not follow a "lower cutoff" behavior; instead, the empirical probability density function (pdf) attains finite values for smaller data points [4; 6].
Previous work derived and presented maximum-likelihood estimation methods to fit the tail observed in the distributions of empirical data, disregarding values below a threshold \(x_{\min}\)[4; 7]. Doing so is entirely reasonable because the shape of the distribution's tail strongly determines the outcome of dynamical systems [1; 8] or is an indicator for a system's criticality [9]. Nonetheless, there might be situations in which the finite, non-zero core of an empirical distribution might be of interest. One such example is the accurate estimation of an empirical contact distribution's first and second moment which are important for epidemic threshold estimations [8] and can both be heavily skewed by outliers. Here, robustly estimating the entire distribution by means of maximum-likelihood estimation may be a more resilient method than simply computing the moments from the data itself.
In the following, we therefore discuss multiple piece-wise distribution functions that have non-pathological behavior below a threshold, i.e. that take non-zero and finite values on the support \(0\leq x\leq x_{\min}\). Each of the distributions takes an additional shape parameter \(\beta\) to describe the behavior of the core. For each of the models, we define the log-likelihood and semi-analytical methods to find the parameter values that maximize it given a set of data points. Note that we only present solutions for continuous support \(\{x\in\mathbb{R}:x\geq 0\}\).
We make the results available as an open-source Python package [10; 11].
## II Methods
### Definitions
We mainly discuss piece-wise Pareto distributions of the form
\[p(x)=\begin{cases}C\gamma(x,x_{\min},\beta),&0\leq x\leq x_{\min}\\ C\left(x_{\min}/x\right)^{\alpha},&x>x_{\min}\end{cases} \tag{1}\]
on the support \(x\in\{x^{\prime}\in\mathbb{R}:x^{\prime}\geq 0\}\) (with a few exceptions where \(x>0\) is required). The parameters are bounded as \(\alpha>1\) and \(x_{\min}>0\). Other bounds will be discussed when appropriate. The respective functions \(\gamma\) will be referred to as 'core' hereinafter. The normalization constant can be found as
\[C(\alpha,x_{\min},\beta)=\left(\frac{x_{\min}}{\alpha-1}+\int\limits_{0}^{x_{\min}}\gamma(x,x_{\min},\beta)\,\mathrm{d}x\right)^{-1}. \tag{2}\]
Given a set of observations \(x_{i}\in\Omega\) and a threshold \(x_{\min}\), we split the data into a set \(S\subseteq\Omega\) of observations that are smaller than or equal to \(x_{\min}\), and a set \(\Lambda\subseteq\Omega\) that contains observations larger than the threshold, i.e. \(\Lambda=\{x_{i}\in\Omega:x_{i}>x_{\min}\}\) and \(S=\Omega-\Lambda\), with \(n_{S}\equiv|S|\) and \(n_{\Lambda}\equiv|\Lambda|\), as well as \(n=|\Omega|=n_{S}+n_{\Lambda}\).
The likelihood of a set of observations given a distribution and a parameter set \(\{\alpha,x_{\min},\beta\}\) is therefore
\[\mathcal{L}(\Omega|\alpha,x_{\min},\beta)=C^{n}(\alpha,x_{\min},\beta)\prod_{ i=1}^{n_{\Lambda}}\left(\frac{x_{\min}}{x_{i}}\right)^{\alpha}\prod_{i=1}^{n_{S}} \gamma(x_{i},x_{\min},\beta). \tag{3}\]
Consequently, the log-likelihood is given as
\[\ln\mathcal{L} = n\ln C(\alpha,x_{\min},\beta)-\alpha n_{\Lambda}\left\langle\ln\left(\frac{x}{x_{\min}}\right)\right\rangle_{\Lambda}+n_{S}\left\langle\ln\gamma(x,x_{\min},\beta)\right\rangle_{S} \tag{4}\]
where we denote as \(\langle f(x)\rangle_{X}=(1/n_{X})\sum_{i\in X}f(x_{i})\) the average over observations in set \(X\). Please note that \(\langle\ln\left(x/x_{\min}\right)\rangle_{\Lambda}=\langle\ln x\rangle_{ \Lambda}-\ln x_{\min}\), i.e. the average can be taken independently from \(x_{\min}\) for support regions where the sets \(\Lambda\) and \(S\) are constant. Furthermore, we have \(\langle\ln\left(x/x_{\min}\right)\rangle_{\Lambda}>0\) as per the definition of \(\Lambda\).
### General outline of the procedure to find best-fit parameters
To find parameter values \(\hat{\alpha}\), \(\hat{x}_{\min}\), and \(\hat{\beta}\) that maximize the likelihood of a model given a dataset, we proceed as follows.
Typically, we first assume that \(x_{\min}\) and \(\beta\) are constant and known. Then we solve the equation \(\partial\ln\mathcal{L}/\partial\alpha=0\), finding \(\hat{\alpha}\). In a next step, we solve \(\partial\ln\mathcal{L}/\partial\beta=0\) for \(\alpha\) to find \(\hat{\alpha}_{\beta}\). Then, \(\hat{\beta}\) is given as the solution of \(\hat{\alpha}=\hat{\alpha}_{\beta}\).
Now, consider the following. Define as \(Y=(y_{1},y_{2},\ldots,y_{m})\) the ordered tuple of _unique_ elements of \(\Omega\). For \(x_{\min}\in[y_{j},y_{j+1})\), the sets \(\Lambda\) and \(S\) are constant. Therefore, we can use \(\partial\ln\mathcal{L}/\partial x_{\min}=0\) in concurrence with the previous solutions to find (\(\hat{\alpha}\), \(\hat{x}_{\min}\), \(\hat{\beta}\)), under the condition that \(y_{j}\leq\hat{x}_{\min}<y_{j+1}\) and \(\hat{\alpha}>1\), as well as conditions for \(\beta\). In general, it is possible that the maximum likelihood is located at boundary value \(\hat{x}_{\min}=y_{j}\) with \(\partial\ln\mathcal{L}/\partial x_{\min}\neq 0\).
With this in mind, iterate through \(x_{\min}=y_{1},\ldots,y_{m-1}\), construct the respective sets \(\Lambda\) and \(S\), then compute \(\hat{\alpha}\) and \(\hat{\beta}\) under the assumption that \(x_{\min}=y_{j}=\text{const.}\), afterwards attempt to find a local maximum on the interval \(x_{\min}\in[y_{j},y_{j+1})\).
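To make this procedure concrete, the following minimal sketch (using NumPy; the function names are illustrative and are not the interface of the released package [10; 11]) shows the bookkeeping shared by all models discussed below: splitting the observations at a candidate threshold and enumerating the candidate thresholds \(y_{j}\).

```python
import numpy as np

def split_at_threshold(data, xmin):
    """Split observations into the core set S (x <= xmin) and the tail set Lambda (x > xmin)."""
    data = np.asarray(data, dtype=float)
    return data[data <= xmin], data[data > xmin]

def candidate_thresholds(data):
    """Ordered tuple Y of unique observation values, used as candidate values for xmin."""
    return np.unique(np.asarray(data, dtype=float))

# Usage sketch: iterate over intervals [y_j, y_{j+1}) and collect the per-interval statistics
# that appear in the estimators below (assumes strictly positive observations).
# for y in candidate_thresholds(data)[:-1]:
#     S, Lam = split_at_threshold(data, y)
#     mean_log_core = np.mean(np.log(S))     # <ln x>_S
#     mean_log_tail = np.mean(np.log(Lam))   # <ln x>_Lambda
```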
### Software
The methods developed herein have been implemented in the Python programming language and are available at [https://github.com/benmaier/fincoretails](https://github.com/benmaier/fincoretails) and [https://zenodo.org/record/8349920](https://zenodo.org/record/8349920) [10; 11].
## III Results
### Power-law core
#### iii.1.1 General case
The "pow-Pareto" pdf is given as
\[p(x)=\begin{cases}C\left(x/x_{\min}\right)^{\beta},&0\leq x\leq x_{\min}\\ C\left(x_{\min}/x\right)^{\alpha},&x>x_{\min}\end{cases} \tag{5}\]
for \(\beta>-1\) and with normalization constant
\[C=\frac{(\alpha-1)\left(\beta+1\right)}{x_{\min}\left(\alpha+\beta\right)}. \tag{6}\]
Note that the pdf has a singularity at \(x=0\) for \(-1<\beta<0\) but is normalizable nonetheless. We discuss the special case \(\beta=0\) in Sec. III.4. The log-likelihood is
\[\ln\mathcal{L}=n\ln\left(\frac{(\alpha-1)\left(\beta+1\right)}{x_{\min}\left(\alpha+\beta\right)}\right)-\alpha n_{\Lambda}\left\langle\ln\left(\frac{x}{x_{\min}}\right)\right\rangle_{\Lambda}-\beta n_{S}\left\langle\ln\left(\frac{x_{\min}}{x}\right)\right\rangle_{S}, \tag{7}\]
with derivative
\[\frac{\partial\ln\mathcal{L}}{\partial\alpha}=\frac{(\beta+1)n}{(\alpha-1)( \alpha+\beta)}-n_{\Lambda}\left\langle\ln\left(\frac{x}{x_{\min}}\right) \right\rangle_{\Lambda}, \tag{8}\]
For constant \(\beta\) and \(x_{\min}\), this yields
\[\hat{\alpha}=\frac{1}{2}\left(1-\beta+(1+\beta)\sqrt{1+\frac{4n}{(\beta+1)n_{ \Lambda}\left\langle\ln\left(\frac{x}{x_{\min}}\right)_{\Lambda}\right\rangle }}\right). \tag{9}\]
The second solution leads to values \(\hat{\alpha}\leq 1\) and can therefore be ignored.
Figure 1: Example distributions for the three Pareto-like pdf classes discussed in this paper. Vertically marked is the position of the transition point \(x_{\min}\). **(a)** Power-law-core Pareto distribution Eq. (5), discussed in Sec. III.1. **(b)** Exponential-core Pareto distribution Eq. (26), discussed in Sec. III.2. **(c)** Algebraic-core Pareto distribution Eq. (45), discussed in Sec. III.3. Note that \(\beta=0\) corresponds to the uniform-core Pareto distribution Eq. (57), discussed in Sec. III.4.
For varying \(\beta\) we have
\[\frac{\partial\ln\mathcal{L}}{\partial\beta}=\frac{(\alpha-1)n}{(\beta+1)(\alpha+ \beta)}-n_{S}\left\langle\ln\left(\frac{x_{\min}}{x}\right)\right\rangle_{S} \tag{10}\]
which yields
\[\hat{\alpha}=\frac{n+\beta(\beta+1)n_{S}\left\langle\ln\left(\frac{x_{\min}}{ x}\right)\right\rangle_{S}}{n-(\beta+1)n_{S}\left\langle\ln\left(\frac{x_{\min}}{ x}\right)\right\rangle_{S}} \tag{11}\]
and consequently, by equating Eq. (9) and Eq. (11) the two solutions
\[\hat{\beta}_{+}=-1+\frac{n}{n_{S}\left\langle\ln\left(\frac{x_{\min}}{x} \right)\right\rangle_{S}+\sqrt{n_{\Lambda}n_{S}\left\langle\ln\left(\frac{x_{ \min}}{x}\right)\right\rangle_{S}\left\langle\ln\left(\frac{x}{x_{\min}} \right)\right\rangle_{\Lambda}}} \tag{12}\]
\[\hat{\beta}_{-}=-1+\frac{n}{n_{S}\left\langle\ln\left(\frac{x_{\min}}{x} \right)\right\rangle_{S}-\sqrt{n_{\Lambda}n_{S}\left\langle\ln\left(\frac{x_{ \min}}{x}\right)\right\rangle_{S}\left\langle\ln\left(\frac{x}{x_{\min}} \right)\right\rangle_{\Lambda}}} \tag{13}\]
Only \(\hat{\beta}_{+}\) generally meets the condition \(\beta>-1\); nonetheless, it is computationally cheap to check both solutions.
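For fixed \(x_{\min}\), Eqs. (9) and (12) translate directly into code. The following sketch assumes NumPy and strictly positive observations; the function names are illustrative.

```python
import numpy as np

def powpareto_alpha_hat(data, xmin, beta):
    """Eq. (9): maximum-likelihood exponent alpha at fixed beta and xmin."""
    data = np.asarray(data, dtype=float)
    Lam = data[data > xmin]
    n, n_lam = data.size, Lam.size
    L = np.mean(np.log(Lam / xmin))                       # <ln(x/xmin)>_Lambda
    return 0.5 * (1 - beta + (1 + beta) * np.sqrt(1 + 4 * n / ((beta + 1) * n_lam * L)))

def powpareto_beta_hat(data, xmin):
    """Eq. (12): the beta candidate compatible with beta > -1, at fixed xmin."""
    data = np.asarray(data, dtype=float)
    S, Lam = data[data <= xmin], data[data > xmin]
    n, n_s, n_lam = data.size, S.size, Lam.size
    L = np.mean(np.log(Lam / xmin))                       # <ln(x/xmin)>_Lambda
    M = np.mean(np.log(xmin / S))                         # <ln(xmin/x)>_S
    return -1 + n / (n_s * M + np.sqrt(n_lam * n_s * M * L))

# Usage sketch for a fixed threshold:
# beta_hat = powpareto_beta_hat(data, xmin)
# alpha_hat = powpareto_alpha_hat(data, xmin, beta_hat)
```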
For regions \(x_{\min}\in[y_{j},y_{j+1})\), the sets \(\Lambda\) and \(S\) are constant. We have
\[\frac{\partial\ln\mathcal{L}}{\partial x_{\min}}=-\frac{n-\alpha n_{\Lambda}+ \beta n_{S}}{x_{\min}} \tag{14}\]
and therefore
\[\hat{\alpha}=\frac{n(\beta+1)-\beta n_{\Lambda}}{n_{\Lambda}} \tag{15}\]
where we used \(n_{S}=n-n_{\Lambda}\). We put this in Eq. (8) and solve for \(\ln x_{\min}\) to find
\[\ln\hat{x}_{\min}=\left\langle\ln x\right\rangle_{\Lambda}-\frac{n_{\Lambda}}{(\beta+1)(n-n_{\Lambda})}. \tag{16}\]
Both these results only depend on \(\beta\), which is why we can use them in Eq. (10) to find
\[\hat{\beta}=-1+\frac{n}{n_{S}\left(\left\langle\ln x\right\rangle_{\Lambda}-\left\langle\ln x\right\rangle_{S}\right)}. \tag{17}\]
Note that \(\left\langle\ln x\right\rangle_{S}<\left\langle\ln x\right\rangle_{\Lambda}\) holds by construction of the two sets, so this solution automatically satisfies the condition \(\beta>-1\). As described in the Methods section, we can iterate through the ordered tuple \(Y\) to find the maximum either at the boundaries of an interval \([y_{j},y_{j+1})\) or within the interval.
Figure 2: Random variates sampled from distributions discussed in this paper, their respective empirical pdfs (squares) and maximum-likelihood fits (solid), using \(x_{\min}=10\) and \(\alpha=2\). Vertical lines mark the inferred maximum-likelihood parameter \(\hat{x}_{\min}\). **(a)** Power-law-core Pareto distribution Eq. (5), discussed in Sec. III.1 with \(\beta=-1/2\) and \(\beta=1\)**(b)** Exponential-core Pareto distribution Eq. (26), discussed in Sec. III.2 with \(\beta=-1/2\) and \(\beta=1\). **(c)** Algebraic-core Pareto distribution Eq. (45), discussed in Sec. III.3 with \(\beta=0\) and \(\beta=1\). **(d)** Forced power-law-core Pareto distribution Eq. (18), discussed in Sec. III.1 with \(\beta=\alpha=2\). **(e)** Forced exponential-core Pareto distribution Eq. (38), discussed in Sec. III.2 with \(\beta=\alpha=2\). **(f)** Forced algebraic-core Pareto distribution Eq. (53), discussed in Sec. III.3 with \(\beta=\alpha=2\).
#### ii.1.2 Special case \(\beta=\alpha\)
The "forced pow-Pareto" pdf is given as
\[p(x)=\begin{cases}C\left(x/x_{\min}\right)^{\alpha},&0\leq x\leq x_{\min}\\ C\left(x_{\min}/x\right)^{\alpha},&x>x_{\min}\end{cases} \tag{18}\]
with
\[C=\frac{\alpha^{2}-1}{2x_{\min}\alpha}. \tag{19}\]
The log-likelihood is
\[\ln\mathcal{L}=n\ln\left(\frac{\alpha^{2}-1}{2x_{\min}\alpha}\right)-\alpha n_{\Lambda}\left\langle\ln\left(\frac{x}{x_{\min}}\right)\right\rangle_{\Lambda}-\alpha n_{S}\left\langle\ln\left(\frac{x_{\min}}{x}\right)\right\rangle_{S}, \tag{20}\]
with derivative
\[\frac{\partial\ln\mathcal{L}}{\partial\alpha}=\frac{\left(\alpha^ {2}+1\right)n}{\alpha^{3}-\alpha}-n_{\Lambda}\left\langle\ln\left(\frac{x}{x_ {\min}}\right)\right\rangle_{\Lambda}-n_{S}\left\langle\ln\left(\frac{x_{\min} }{x}\right)\right\rangle_{S}, \tag{21}\]
While this equation is in principle solvable, the three solutions are not very insightful. We can compute the second derivative
\[\frac{\partial^{2}\ln\mathcal{L}}{\partial\alpha^{2}}=-\frac{ \left(\alpha^{4}+4\alpha^{2}-1\right)n}{\alpha^{2}\left(\alpha^{2}-1\right)^{2}} \tag{22}\]
and use both in Newton's method to find \(\hat{\alpha}\) as the root of Eq. (21).
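A minimal Newton iteration for this special case, assuming NumPy, could look as follows; the starting value and the damping of inadmissible steps are illustrative choices.

```python
import numpy as np

def forced_powpareto_alpha_hat(data, xmin, alpha0=2.0, tol=1e-10, max_iter=100):
    """Newton's method for the root of Eq. (21), using the second derivative Eq. (22)."""
    data = np.asarray(data, dtype=float)
    S, Lam = data[data <= xmin], data[data > xmin]
    n, n_s, n_lam = data.size, S.size, Lam.size
    L = np.mean(np.log(Lam / xmin))           # <ln(x/xmin)>_Lambda
    M = np.mean(np.log(xmin / S))             # <ln(xmin/x)>_S
    a = alpha0
    for _ in range(max_iter):
        d1 = (a**2 + 1) * n / (a**3 - a) - n_lam * L - n_s * M            # Eq. (21)
        d2 = -(a**4 + 4 * a**2 - 1) * n / (a**2 * (a**2 - 1)**2)          # Eq. (22)
        a_new = a - d1 / d2
        if a_new <= 1.0:                      # keep the iterate in the admissible region alpha > 1
            a_new = 0.5 * (a + 1.0)
        if abs(a_new - a) < tol:
            return a_new
        a = a_new
    return a
```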
For regions \(x_{\min}\in[y_{j},y_{j+1})\), the sets \(\Lambda\) and \(S\) are constant. We have
\[\frac{\partial\ln\mathcal{L}}{\partial x_{\min}}=-\frac{n-\alpha n _{\Lambda}+\alpha n_{S}}{x_{\min}} \tag{23}\]
and therefore
\[\hat{\alpha}=\frac{n}{n_{\Lambda}-n_{S}}. \tag{24}\]
Note that (i) for \(n_{\Lambda}=n_{S}\) there is no zero, (ii) for \(n_{S}>n_{\Lambda}\) the solution for \(\hat{\alpha}\) becomes negative and is therefore out of our range. We put this result in Eq. (21) and solve for \(\ln x_{\min}\) to find
\[\ln\hat{x}_{\min}=1-\frac{n^{2}}{2n_{\Lambda}n_{S}}+\frac{n_{S} \left\langle\ln x\right\rangle_{S}-n_{\Lambda}\left\langle\ln x\right\rangle _{\Lambda}}{n_{S}-n_{\Lambda}}. \tag{25}\]
We can proceed as above, iterating through the ordered tuple \(Y\) to find the maximum as either of the boundaries of an interval \([y_{j},y_{j+1})\) or within the interval.
### Exponential core
#### ii.2.1 General case
The "exp-Pareto" pdf is defined as
\[p(x)=\begin{cases}C\exp\left[-\beta(x/x_{\min}-1)\right],&0\leq x \leq x_{\min}\\ C\left(x_{\min}/x\right)^{\alpha},&x>x_{\min}\end{cases} \tag{26}\]
for \(x_{\min}>0\), \(\beta\neq 0\), and \(\beta\neq\alpha\) and with normalization constant
\[C=\frac{\beta(\alpha-1)}{x_{\min}\left(\left(\alpha-1\right)(e^{\beta}-1)+ \beta\right)}. \tag{27}\]
The log-likelihood is given as
\[\ln\mathcal{L}=n\left[\ln\beta+\ln(\alpha-1)-\ln x_{\min}-\ln \left(\left(\alpha-1\right)(e^{\beta}-1)+\beta\right)\right]\] \[-\alpha n_{\Lambda}\left\langle\ln\frac{x}{x_{\min}}\right\rangle _{\Lambda}-\beta n_{S}\left(\frac{\left\langle x\right\rangle_{S}}{x_{\min}}-1\right) \tag{28}\]
with derivative
\[\frac{\partial\ln\mathcal{L}}{\partial\alpha}=\frac{n}{\alpha-1} -\frac{n\left(e^{\beta}-1\right)}{\left(\alpha-1\right)(e^{\beta}-1)+\beta}- n_{\Lambda}\left\langle\ln\frac{x}{x_{\min}}\right\rangle_{\Lambda}, \tag{29}\]
such that for constant \(\beta\) and \(x_{\min}\) we find
\[\hat{\alpha}=1-\frac{\beta}{2\left(e^{\beta}-1\right)}\left(1- \sqrt{1+\frac{4n\left(e^{\beta}-1\right)}{\beta n_{\Lambda}\left\langle\ln(x /x_{\min})\right\rangle_{\Lambda}}}\right). \tag{30}\]
The second solution for \(\alpha\) would give \(\hat{\alpha}<1\), which we have ruled out: since \(\beta/(e^{\beta}-1)>0\) for both negative and positive \(\beta\), the solution with a plus sign preceding the root always yields \(\hat{\alpha}<1\). Moreover, we have
\[\frac{\partial\ln\mathcal{L}}{\partial\beta}=-\frac{\left(\alpha-1\right) \left(e^{\beta}(\beta-1)+1\right)n}{\beta\left(\left(\alpha-1\right)e^{\beta}- \alpha+\beta+1\right)}-n_{S}\left(\frac{\left\langle x\right\rangle_{S}}{x_{ \min}}-1\right). \tag{31}\]
For a fixed value of \(x_{\min}\), one may find \(\hat{\beta}\) numerically as the solution to \(\partial\ln\mathcal{L}/\partial\beta\big{|}_{\alpha=\hat{\alpha}}=0\).
If we allow \(x_{\min}\) to vary, too, we have
\[\frac{\partial\ln\mathcal{L}}{\partial x_{\min}}=\frac{-nx_{\min}+n_{\Lambda}x _{\min}\alpha+n_{S}\left\langle x\right\rangle_{S}\beta}{x_{\min}^{2}} \tag{32}\]
for values of \(x_{\min}\) where the sets \(\Lambda\) and \(S\) are constant. Setting Eqs. (31) and (32) equal to zero and solving for \(x_{\min}\) gives the conditions
\[\hat{x}_{\min}=\frac{\beta n_{S}\left\langle x\right\rangle_{S} \left(\left(\alpha-1\right)e^{\beta}-\alpha+\beta+1\right)}{\beta n_{S}\left( \left(\alpha-1\right)e^{\beta}-\alpha+\beta+1\right)-\left(\alpha-1\right) \left(e^{\beta}(\beta-1)+1\right)n}, \tag{33}\] \[\hat{x}_{\min}=\frac{\beta n_{S}\left\langle x\right\rangle_{S}}{n- \alpha n_{\Lambda}}. \tag{34}\]
Equating both and using the identity \(n=n_{S}+n_{\Lambda}\), we can find \(\hat{\alpha}\) as
\[\hat{\alpha}=1+\frac{n_{S}}{n_{\Lambda}}\frac{\beta}{e^{\beta}-1}. \tag{35}\]
The second solution gives \(\alpha=\beta\), which we have ruled out. The solution above (Eq. (34)) gives
\[\hat{x}_{\min}=\left\langle x\right\rangle_{S}\frac{\beta}{1-\beta/(e^{\beta}-1 )}. \tag{36}\]
Using this expression in Eq. (30) and equating the resulting expression with Eq. (35), we find the function
\[z(\beta)=-2n+n_{\Lambda}\left(1+\sqrt{1+\frac{4\left(e^{\beta}-1\right)n}{\beta n_{\Lambda}\left[\left\langle\ln x\right\rangle_{\Lambda}-\ln\left\langle x\right\rangle_{S}+\ln\left(\frac{1}{\beta}+\frac{1}{1-e^{\beta}}\right)\right]}}\right) \tag{37}\]
the root of which gives \(\hat{\beta}\).
To find the global maximum of the log-likelihood, one iterates over all intervals \([y_{j},y_{j+1})\) checking both at the left boundary and within the interval.
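For a fixed split of the data (i.e. inside one interval \([y_{j},y_{j+1})\)), the root of Eq. (37) can be found with a standard bracketing routine, after which Eqs. (35) and (36) give \(\hat{\alpha}\) and \(\hat{x}_{\min}\). The following sketch assumes SciPy; the bracket for \(\beta\) is an illustrative choice and must enclose a sign change, and one still has to check that the returned \(\hat{x}_{\min}\) actually lies in the interval under consideration.

```python
import numpy as np
from scipy.optimize import brentq

def exppareto_interval_fit(S, Lam, beta_lo=0.01, beta_hi=20.0):
    """Solve z(beta) = 0 (Eq. 37) for a fixed split (S, Lambda), then evaluate Eqs. (35)-(36)."""
    S, Lam = np.asarray(S, dtype=float), np.asarray(Lam, dtype=float)
    n_s, n_lam = S.size, Lam.size
    n = n_s + n_lam
    mean_ln_lam = np.mean(np.log(Lam))        # <ln x>_Lambda
    mean_S = np.mean(S)                       # <x>_S

    def z(b):
        bracket = mean_ln_lam - np.log(mean_S) + np.log(1 / b + 1 / (1 - np.exp(b)))
        return -2 * n + n_lam * (1 + np.sqrt(1 + 4 * (np.exp(b) - 1) * n / (b * n_lam * bracket)))

    beta = brentq(z, beta_lo, beta_hi)
    alpha = 1 + (n_s / n_lam) * beta / (np.exp(beta) - 1)                 # Eq. (35)
    xmin = mean_S * beta / (1 - beta / (np.exp(beta) - 1))                # Eq. (36)
    return alpha, xmin, beta
```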
#### iii.2.2 Special case \(\beta=\alpha\)
In [4], an additional model for non-pathological cores was introduced, namely
\[p(x)=\begin{cases}C\exp\left[-\alpha(x/x_{\min}-1)\right],&0\leq x\leq x_{ \min}\\ C\left(x_{\min}/x\right)^{\alpha},&x>x_{\min}\end{cases} \tag{38}\]
(or "forced exp-Pareto") with normalization constant
\[C=\frac{\alpha(\alpha-1)}{x_{\min}\left((\alpha-1)e^{\alpha}+1\right)}. \tag{39}\]
Note that for this case we have both continuity in \(p(x)\) as well as its first derivative at \(x=x_{\min}\).
The log-likelihood is given as
\[\ln\mathcal{L}=n\ln C-\alpha n_{\Lambda}\left(\left\langle\ln x\right\rangle _{\Lambda}-\ln x_{\min}\right)-\alpha n_{S}\left(\frac{1}{x_{\min}}\left\langle x \right\rangle_{S}-1\right). \tag{40}\]
with \(\Lambda\) and \(S\) defined as above. The derivative for \(\alpha\) can be found as
\[\frac{\partial\ln\mathcal{L}}{\partial\alpha} =n\frac{1}{\alpha}+n\frac{1}{\alpha-1}-n\frac{\alpha e^{\alpha}} {(\alpha-1)e^{\alpha}+1}-n_{\Lambda}\left\langle\ln\frac{x}{x_{\min}}\right\rangle _{\Lambda}\] \[-n_{S}\left(\frac{\left\langle x\right\rangle_{S}}{x_{\min}}-1 \right), \tag{41}\]
the zero of which can be found numerically.
For regions where the sets \(\Lambda\) and \(S\) are constant, the derivative by \(x_{\min}\) is given as
\[\frac{\partial\ln\mathcal{L}}{\partial x_{\min}}=-\frac{n}{x_{\min}}+\frac{ \alpha n_{\Lambda}}{x_{\min}}+\frac{\alpha n_{S}\left\langle x\right\rangle_ {S}}{x_{\min}^{2}} \tag{42}\]
and therefore
\[\hat{\alpha}=\frac{nx_{\min}}{n_{\Lambda}x_{\min}+n_{S}\left\langle x\right\rangle_{S}}. \tag{43}\]
Now we can proceed similarly to above, iterating through the ordered set of unique observation values \(y_{j}\in Y\). For each pair of observations \((y_{j},y_{j+1})\) we can find the numerical solution of the equation
\[\frac{\partial\ln\mathcal{L}}{\partial\alpha}\Big{|}_{\alpha=\hat{\alpha}}=0 \tag{44}\]
using the bisection method on the interval \(x_{\min}\in[y_{j},y_{j+1}]\) to find \(\hat{x}_{\min}\).
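In code, the per-interval search could be sketched as below (assuming SciPy; the routine returns `None` when Eq. (44) has no sign change on the interval, and guards for the admissibility condition \(\alpha>1\) are omitted for brevity).

```python
import numpy as np
from scipy.optimize import brentq

def forced_exppareto_interval_fit(data, y_lo, y_hi):
    """Solve Eq. (44) for xmin on the interval [y_lo, y_hi], with alpha given by Eq. (43)."""
    data = np.asarray(data, dtype=float)
    S, Lam = data[data <= y_lo], data[data > y_lo]     # sets are constant on the interval
    n, n_s, n_lam = data.size, S.size, Lam.size
    mean_ln_lam = np.mean(np.log(Lam))
    mean_S = np.mean(S)

    def alpha_of(xmin):                                # Eq. (43)
        return n * xmin / (n_lam * xmin + n_s * mean_S)

    def dlnL_dalpha(xmin):                             # Eq. (41) evaluated at alpha = alpha_of(xmin)
        a = alpha_of(xmin)
        return (n / a + n / (a - 1)
                - n * a * np.exp(a) / ((a - 1) * np.exp(a) + 1)
                - n_lam * (mean_ln_lam - np.log(xmin))
                - n_s * (mean_S / xmin - 1))

    if dlnL_dalpha(y_lo) * dlnL_dalpha(y_hi) > 0:
        return None
    xmin = brentq(dlnL_dalpha, y_lo, y_hi)
    return alpha_of(xmin), xmin
```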
### Algebraic core
#### iii.3.1 General case
We define the "alg-Pareto" pdf as
\[p(x)=\begin{cases}C\left[2-(x/x_{\min})^{\beta}\right],&0\leq x\leq x_{\min} \\ C\left(x_{\min}/x\right)^{\alpha},&x>x_{\min}\end{cases} \tag{45}\]
for \(\beta>0\) and with normalization constant
\[C=\frac{1}{x_{\min}}\frac{(\alpha-1)\left(\beta+1\right)}{2\alpha\beta-\beta +\alpha}. \tag{46}\]
Note that, unlike the pow-Pareto core, this core remains finite at \(x=0\) for all admissible \(\beta>0\). We discuss the special case \(\beta=0\) in Sec. III.4.
The log-likelihood is given as
\[\ln\mathcal{L}=n\ln C-\alpha n_{\Lambda}\left\langle\ln\left(\frac{x}{x_{\min }}\right)\right\rangle_{\Lambda}+n_{S}\left\langle\ln\left[2-\left(\frac{x}{x _{\min}}\right)^{\beta}\right]\right\rangle_{S}, \tag{47}\]
In order to find the value of \(\alpha\) that maximizes \(\mathcal{L}\) given \(x_{\min}\) and \(\beta\), we compute
\[\frac{\partial}{\partial\alpha}\ln\mathcal{L}=-n_{\Lambda}\left\langle\ln \left(\frac{x}{x_{\min}}\right)\right\rangle_{\Lambda}+n\frac{\beta+1}{(\alpha -1)(2\alpha\beta-\beta+\alpha)}. \tag{48}\]
The zero of this expression gives us our estimator for the exponent
\[\hat{\alpha} =\frac{3\beta+1}{4\beta+2}+\frac{1}{4\beta+2}\sqrt{\frac{(1+\beta )\left(4+\lambda+\beta(8+\lambda)\right)}{\lambda}},\text{ with} \tag{49}\] \[\lambda =\frac{n_{\Lambda}}{n}\left\langle\ln\left(\frac{x}{x_{\min}} \right)\right\rangle_{\Lambda} \tag{50}\]
Note that \((3\beta+1)/(4\beta+2)<1\) for \(\beta>-1\), which is why we disregard the second solution of the above equation that would give us exponents \(\hat{\alpha}<1\), i.e. in a parameter regime we exclude.
Next, we want to find the maximum-likelihood estimator for \(x_{\min}\) for support regions where \(\Lambda\) and \(S\) are constant. Then, we find
\[\frac{\partial}{\partial x_{\min}}\ln\mathcal{L}=-\frac{n}{x_{\min}}+\frac{ \alpha n_{\Lambda}}{x_{\min}}+\frac{n_{S}}{x_{\min}}\left\langle\frac{1}{2(x_{ \min}/x)^{\beta}-1}\right\rangle_{S}, \tag{51}\]
and thus \(\hat{x}_{\min}\) is determined by the equation
\[\left\langle\frac{1}{2(\hat{x}_{\min}/x)^{\beta}-1}\right\rangle_{S}-\frac{n}{n_{S} }-\frac{n_{\Lambda}}{n_{S}}\left(\frac{3\beta+1}{4\beta+2}+\frac{1}{4\beta+2} \sqrt{\frac{(1+\beta)(4+\lambda+\beta(8+\lambda))}{\lambda}}\right)=0 \tag{52}\]
where \(\hat{\alpha}\) is given by Eq. (49). Now we can iterate over intervals \(x_{\min}\in[y_{j},y_{j+1})\) to find the maximum either at interval boundaries or within, using the bisection method.
Unfortunately, for varying \(\beta\) we have to resort to maximizing Eq. (47) numerically. Since we do have a method to find \(\hat{x}_{\min}\) and \(\hat{\alpha}\) given \(\beta\), we can use a one-dimensional Nelder-Mead method to find \(\hat{\beta}\).
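The inner closed form, Eqs. (49)-(50), is a one-liner; the following sketch (assuming NumPy; names are illustrative) implements it. The outer one-dimensional search over \(\beta\) can then be handed to an off-the-shelf optimizer, e.g. `scipy.optimize.minimize` with `method="Nelder-Mead"` applied to the negative log-likelihood Eq. (47).

```python
import numpy as np

def algpareto_alpha_hat(data, xmin, beta):
    """Eqs. (49)-(50): maximum-likelihood alpha for the alg-Pareto model at fixed beta and xmin."""
    data = np.asarray(data, dtype=float)
    Lam = data[data > xmin]
    n, n_lam = data.size, Lam.size
    lam = (n_lam / n) * np.mean(np.log(Lam / xmin))          # Eq. (50)
    return ((3 * beta + 1) / (4 * beta + 2)
            + np.sqrt((1 + beta) * (4 + lam + beta * (8 + lam)) / lam) / (4 * beta + 2))
```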
#### iii.3.2 Special case \(\beta=\alpha\)
The "forced alg-Pareto" pdf is given as
\[p(x) =\begin{cases}C\left[2-(x/x_{\min})^{\alpha}\right],&0\leq x \leq x_{\min}\\ C\left(x_{\min}/x\right)^{\alpha},&x>x_{\min}\end{cases} \tag{53}\] \[C =\frac{1}{x_{\min}}\frac{\alpha^{2}-1}{2\alpha^{2}}. \tag{54}\]
Note that for this case we have both continuity in \(p(x)\) as well as its first derivative at \(x=x_{\min}\).
The log-likelihood is given by
\[\ln\mathcal{L} =-n\ln x_{\min}+n\ln(\alpha^{2}-1)-n(\ln 2+2\ln\alpha)\] \[-\alpha n_{\Lambda}\left\langle\ln\left(\frac{x}{x_{\min}}\right) \right\rangle_{\Lambda}+n_{S}\left\langle\ln\left[2-\left(\frac{x}{x_{\min}} \right)^{\alpha}\right]\right\rangle_{S}. \tag{55}\]
The derivative by \(\alpha\) (for pre-determined \(x_{\min}\)) is
\[\frac{\partial\ln\mathcal{L}}{\partial\alpha} =-n_{\Lambda}\left\langle\ln\left(\frac{x}{x_{\min}}\right) \right\rangle_{\Lambda}+\frac{2n\alpha}{\alpha^{2}-1}-\frac{2n}{\alpha}\] \[-n_{S}\left\langle\frac{\ln\left(x/x_{\min}\right)}{2\left(x/x_{ \min}\right)^{-\alpha}-1}\right\rangle_{S}. \tag{56}\]
The zero of this equation can be found using the Newton-Raphson method. Since, again, the sets \(\Lambda\) and \(S\) are constant on the interval \(x_{\min}\in[y_{j},y_{j+1})\), one may proceed by iterating through all data intervals and finding the maximum within each interval with the Nelder-Mead method.
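For completeness, the following sketch finds the zero of Eq. (56) for a fixed \(x_{\min}\); for brevity it uses a simple bracketing routine instead of Newton-Raphson, and the bracket is an illustrative choice that must enclose a sign change (assuming SciPy and strictly positive core observations).

```python
import numpy as np
from scipy.optimize import brentq

def forced_algpareto_alpha_hat(data, xmin, a_lo=1.001, a_hi=50.0):
    """Root of Eq. (56) at fixed xmin, found by bracketing on [a_lo, a_hi]."""
    data = np.asarray(data, dtype=float)
    S, Lam = data[data <= xmin], data[data > xmin]
    n, n_s, n_lam = data.size, S.size, Lam.size
    L = np.mean(np.log(Lam / xmin))                          # <ln(x/xmin)>_Lambda
    r = S / xmin                                             # core ratios x/xmin, 0 < r <= 1

    def dlnL_dalpha(a):
        core = np.mean(np.log(r) / (2 * r ** (-a) - 1))
        return -n_lam * L + 2 * n * a / (a**2 - 1) - 2 * n / a - n_s * core

    return brentq(dlnL_dalpha, a_lo, a_hi)
```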
### Uniform core
We define the "uni-Pareto" pdf as
\[p(x) =\begin{cases}C&x\leq x_{\min}\\ C\left(x_{\min}/x\right)^{\alpha}&x>x_{\min}\end{cases} \tag{57}\] \[C =\frac{\alpha-1}{\alpha}\frac{1}{x_{\min}}. \tag{58}\]
We compute the log-likelihood as
\[\ln\mathcal{L}=n\ln\left(\frac{\alpha-1}{\alpha}\frac{1}{x_{\min}}\right)- \alpha n_{\Lambda}\left\langle\ln\frac{x}{x_{\min}}\right\rangle_{\Lambda} \tag{59}\]
with derivative
\[\frac{\partial}{\partial\alpha}\ln\mathcal{L}=n\left(\frac{1}{\alpha-1}-\frac {1}{\alpha}\right)-n_{\Lambda}\left\langle\ln\frac{x}{x_{\min}}\right\rangle_{\Lambda} \tag{60}\]
for constant \(x_{\min}\). Demanding \(\partial\ln\mathcal{L}/\partial\alpha=0\) and \(\alpha>1\) gives us the estimator
\[\hat{\alpha}=\frac{1}{2}+\sqrt{\frac{1}{4}+\frac{n}{n_{\Lambda}\left\langle\ln \frac{x}{x_{\min}}\right\rangle_{\Lambda}}}. \tag{61}\]
For regions where \(\Lambda\) and \(S\) are constant, we find
\[\frac{\partial}{\partial x_{\min}}\ln\mathcal{L}=-n\frac{1}{x_{\min}}+\alpha n _{\Lambda}\frac{1}{x_{\min}}, \tag{62}\]
so we have to solve
\[0 =\frac{n}{(\alpha-1)\alpha}-n_{\Lambda}\left\langle\ln x\right\rangle _{\Lambda}+n_{\Lambda}\ln x_{\min} \tag{63}\] \[0 =\frac{\alpha n_{\Lambda}-n}{x_{\min}}. \tag{64}\]
Then
\[\hat{\alpha}=\frac{n}{n_{\Lambda}} \tag{65}\]
solves the second equation. Putting this solution in the first equation gives
\[\ln\hat{x}_{\min}=\left\langle\ln x\right\rangle_{\Lambda}-\frac{n_{\Lambda}} {n-n_{\Lambda}}. \tag{66}\]
The determinant of the Hessian of this problem gives
\[\det(H)=-n_{\Lambda}^{2}\exp\left(\dots\right), \tag{67}\]
which, for a two-dimensional problem like this, means that the only extremum \((\hat{\alpha},\hat{x}_{\min})\) is a saddle point, i.e. one cannot find a better maximum than one given by \(x_{\min}\in Y\), which in turn means that iterating over unique observation values and then finding \(\hat{x}_{\min}=y_{j}\) that maximizes \(\ln\mathcal{L}\) suffices.
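Because the scan over \(x_{\min}\in Y\) suffices here, the complete uni-Pareto fit is only a few lines. The sketch below assumes NumPy and strictly positive observations; the names are illustrative and do not reflect the interface of the released package [10; 11].

```python
import numpy as np

def unipareto_fit(data):
    """Fit the uni-Pareto model: scan xmin over unique observation values, use the closed-form
    alpha of Eq. (61), and keep the candidate that maximizes the log-likelihood Eq. (59)."""
    data = np.asarray(data, dtype=float)
    n = data.size
    best_logL, best_alpha, best_xmin = -np.inf, None, None
    for xmin in np.unique(data)[:-1]:            # exclude the largest value so Lambda is non-empty
        if xmin <= 0:
            continue
        Lam = data[data > xmin]
        n_lam = Lam.size
        L = np.mean(np.log(Lam / xmin))          # <ln(x/xmin)>_Lambda
        alpha = 0.5 + np.sqrt(0.25 + n / (n_lam * L))                         # Eq. (61)
        logL = n * np.log((alpha - 1) / (alpha * xmin)) - alpha * n_lam * L   # Eq. (59)
        if logL > best_logL:
            best_logL, best_alpha, best_xmin = logL, alpha, xmin
    return best_alpha, best_xmin
```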
## IV Discussion and conclusion
We discussed how to fit three classes of piece-wise Pareto-like distributions to data using maximum-likelihood estimation, with the distributions reaching non-zero and finite values for support region \(0\leq x\leq x_{\min}\).
The results presented in this study are neither particularly insightful nor exciting. Nonetheless, they might be of use to future analyses dealing with data that is distributed according to any of the proposed shapes.
In the future and if the use case demands it, the results may be extended to discrete random variates.
Furthermore, in a future analysis, the principle of splitting observations into sets of values below and above a threshold \(x_{\min}\) might be used for efficiently minimizing the Kullback-Leibler divergence between piece-wise finite-core models and data if the data is only accessible in binned form.
###### Acknowledgements.
BFM expresses his gratitude to Antonio Desiderio, Sune Lehmann, and Aaron Clauset for helpful comments and discussions regarding this manuscript. BFM received funding through Grant CF20-0044, HOPE: How Democracies Cope with Covid-19 from the Carlsberg Foundation.
|
2303.17934 | Conflict-Averse Gradient Optimization of Ensembles for Effective Offline
Model-Based Optimization | Data-driven offline model-based optimization (MBO) is an established
practical approach to black-box computational design problems for which the
true objective function is unknown and expensive to query. However, the
standard approach which optimizes designs against a learned proxy model of the
ground truth objective can suffer from distributional shift. Specifically, in
high-dimensional design spaces where valid designs lie on a narrow manifold,
the standard approach is susceptible to producing out-of-distribution, invalid
designs that "fool" the learned proxy model into outputting a high value. Using
an ensemble rather than a single model as the learned proxy can help mitigate
distribution shift, but naive formulations for combining gradient information
from the ensemble, such as minimum or mean gradient, are still suboptimal and
often hampered by non-convergent behavior.
In this work, we explore alternate approaches for combining gradient
information from the ensemble that are robust to distribution shift without
compromising optimality of the produced designs. More specifically, we explore
two functions, formulated as convex optimization problems, for combining
gradient information: multiple gradient descent algorithm (MGDA) and
conflict-averse gradient descent (CAGrad). We evaluate these algorithms on a
diverse set of five computational design tasks. We compare performance of
ensemble MBO with MGDA and ensemble MBO with CAGrad with three naive baseline
algorithms: (a) standard single-model MBO, (b) ensemble MBO with mean gradient,
and (c) ensemble MBO with minimum gradient.
Our results suggest that MGDA and CAGrad strike a desirable balance between
conservatism and optimality and can help robustify data-driven offline MBO
without compromising optimality of designs. | Sathvik Kolli | 2023-03-31T10:00:27Z | http://arxiv.org/abs/2303.17934v1 | # Conflict-Averse Gradient Optimization of Ensembles for Effective Offline Model-Based Optimization
###### Abstract
Data-driven offline model-based optimization (MBO) is an established practical approach to black-box computational design problems for which the true objective function is unknown and expensive to query. However, the standard approach which optimizes designs against a learned proxy model of the ground truth objective can suffer from distributional shift. Specifically, in high-dimensional design spaces where valid designs lie on a narrow manifold, the standard approach is susceptible to producing out-of-distribution, invalid designs that "fool" the learned proxy model into outputting a high value. Using an ensemble rather than a single model as the learned proxy can help mitigate distribution shift, but naive formulations for combining gradient information from the ensemble, such as minimum or mean gradient, are still suboptimal and often hampered by non-convergent behavior.
In this work, we explore alternate approaches for combining gradient information from the ensemble that are robust to distribution shift without compromising optimality of the produced designs. More specifically, we explore two functions, formulated as convex optimization problems, for combining gradient information: multiple gradient descent algorithm (MGDA) [12] and conflict-averse gradient descent (CAGrad) [11]. We evaluate these algorithms on a diverse set of five computational design tasks [13]. We compare performance of ensemble MBO with MGDA and ensemble MBO with CAGrad with three naive baseline algorithms: (a) standard single-model MBO, (b) ensemble MBO with mean gradient, and (c) ensemble MBO with minimum gradient.
Each algorithm produces 128 optimized designs, and we report performance of these designs under three metrics: (a) max ground truth score, (b) average ground truth score, (c) 50th percentile ground truth score.
* For the max ground truth score, we find that MGDA is in the top 2 best-performing algorithms on 4 tasks and is the best-performing algorithm on 2 tasks. CAGrad is in the top 2 best-performing algorithms on 3 tasks and is the best-performing algorithm on 2 tasks.
* For the 50th percentile ground truth score, MGDA is the best-performing algorithm on 1 task. CAGrad is in the top 2 best-performing algorithms on 3 tasks and is the best-performing algorithm on 2 tasks.
* Finally, for the average ground truth score, MGDA is the best-performing algorithm on 2 tasks. CAGrad is in the top 2 best-performing algorithms on 3 tasks and is the best-performing algorithm on 2 tasks.
Our results suggest that MGDA and CAGrad strike a desirable balance between conservatism and optimality. In general, we noticed that MGDA and CAGrad performed as well as, if not better than, other algorithms on the max ground truth score. However, both algorithms lead to significant improvement on the average and 50th percentile ground truth scores, suggesting they may be more conservative and less susceptible to being "fooled" by invalid designs. Our results demonstrate that MGDA and CAGrad can help robustify data-driven offline MBO without compromising optimality of designs.
Introduction
We study the problem of computational design, which arises in settings ranging from synthetic biology to robot design. Specifically, we focus on the setting of black-box optimization, which attempts to generate optimal designs where the objective function and constraints are unknown. Put simply, we want to find the optimal design, \(x\), that maximizes some unknown objective function, \(f(x)\):
\[\operatorname*{arg\,max}_{x}\ \ f(x)\]
Examples of black-box optimization problems include optimizing robot morphologies, biological sequences (proteins, genes), computer chips, neural network architectures, or superconducting materials.
### Offline Model Based Optimization (MBO)
One promising approach to solving black-box optimization problems is data-driven MBO, where a proxy model of the unknown objective function is learned from empirically collected data and used to guide the design procedure.
In order to model the true objective function with high fidelity, it is often critical to actively collect additional data during the training procedure [1]. However, in many design problems, active real-world data collection is expensive (e.g. requires synthesizing protein structures for protein optimization or building and testing a robot for robot design) or dangerous (e.g. when optimizing over aircraft designs). Thus, we focus instead on the more practical setting of offline MBO, where we are given a static dataset of designs and cannot make any queries to the ground truth.
In essence, when we use offline MBO to solve black-box optimization problems, we are trying to solve the problem
\[\operatorname*{arg\,max}_{x}\ \ f(x)\]
with two key assumptions:
1. Black-box assumption: \(f(x)\) is an unknown function
2. Offline assumption: \(f(x)\) is expensive to query
The general offline MBO workflow is illustrated in Figure 1.
### Distribution Shift in MBO
The most basic approach to offline, data-driven model-based optimization involves the following steps [11]:
1. We have a static dataset \(D\) of input designs and their corresponding objective values: \[\{(x_{1},y_{1}),\dots,(x_{N},y_{N})\}\] We assume this paired data was generated from a true, unknown objective function, \(y=f(x)\).
2. Using the dataset \(D\), we learn a proxy model \(\hat{f}_{\theta}(x)\) of the unknown objective function \(f(x)\), via supervised regression on the training dataset.
3. Finally, we find an optimal generated design \(x^{*}\) by optimizing a data point \(x\in D\) against the learned model \(\hat{f}_{\theta}(x)\) (e.g. using \(T\) gradient ascent/descent steps on the learned function): \[x_{k+1}\gets x_{k}+\alpha\nabla_{x}\hat{f}_{\theta}(x)|_{x=x_{k}},\text{ for }k\in[1,T]\]
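A minimal sketch of step (3) is shown below; `grad_fn` is assumed to be a user-supplied callable returning \(\nabla_{x}\hat{f}_{\theta}(x)\) (e.g. obtained via automatic differentiation of the trained proxy), not a fixed API of any particular library.

```python
import numpy as np

def gradient_ascent_design(x0, grad_fn, steps=200, lr=0.01):
    """Optimize a starting design x0 against the learned proxy via T gradient ascent steps."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + lr * grad_fn(x)
    return x
```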
As described, the above approach does not perform well in high-dimensional input spaces, where the space of valid input designs lies on a narrow manifold, because overestimation errors in the proxy model \(\hat{f}_{\theta}(x)\) would cause the optimization procedure in step (3) to produce out-of-distribution, invalid, and low-valued designs. Consequently, for the above method to work, it is critical that we ensure that the proxy model, \(\hat{f}_{\theta}(x)\), does not overestimate the objective value of out-of-distribution points.
Some existing approaches to prevent overestimation of out-of-distribution inputs include generative modeling, explicit density estimation, or regularization techniques that incentivize conservatism in regions with limited data (known as Conservative Objective Models [11]).
Figure 1: A typical offline MBO workflow [11]. We are given a static dataset of designs, which we use to learn a proxy model, \(\hat{f}_{\theta}\), of the true objective. Then, our design procedure is guided by the learned proxy model.
### Ensemble MBO
One simple approach to addressing the issue of distribution shift is to use an ensemble rather than a single model as a proxy for the true objective function. The rationale behind this is that it is less likely for multiple different models to be "fooled" by the same out-of-distribution input than it is for a single model to suffer from overestimation errors.
The ensemble can have individual models with varied architectures and regularization techniques. Although we don't study this in this project, future work can try including density models and/or Conservative Objective Models with varying levels of conservatism in the ensemble.
When we used a single proxy model, we used gradient ascent on the model in order to generate new designs:
\[x_{k+1}\gets x_{k}+\alpha\nabla_{x}\hat{f}_{\theta}(x_{k})\]
Now, that we use an ensemble \(\{\hat{f}_{1}(x),\ldots,\hat{f}_{m}(x)\}\), we need to update our design using gradient information from all the models in the ensemble. Thus, we get the following update:
\[x_{k+1}\gets x_{k}+\alpha g(\nabla_{x}\hat{f}_{1}(x_{k}),\ldots,\nabla_{x }\hat{f}_{m}(x_{k}))\]
where \(g\) is some function of all the gradients.
Two naive approaches for the function \(g\) are:
* Mean Gradient: \[\nabla_{x}\left(\sum_{i=1}^{m}f_{i}(x)\right)\] The benefit of this approach is that it captures gradient information from all the models in the ensemble in each gradient step. However, while this approach may lead to optimization that is more robust to distribution shift, it is still possible for the optimization to be "fooled" by an out-of-distribution input, particularly when a model or group of models within the ensemble dominates the update. Furthermore, it is also possible for the optimization to get stuck and fail to optimize further (sometimes becoming stuck in an oscillating manner) due to conflicting gradients.
* Minimum Gradient: \[\nabla_{x}\min\left(f_{1}(x),f_{2}(x),\ldots,f_{m}(x)\right)\] The benefit of this approach is that it is conservative and is less likely to be "fooled" by an out-of-distribution input. In fact, we can interpret the minimum gradient update as searching for a design for which _all_ the models in the ensemble assign a high score. By definition, this algorithm would produce an out-of-distribution point if and only if every model in the ensemble overestimated the value of that point. The drawback of this method is that it has poor convergence guarantees and is susceptible to oscillatory behavior.
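The two naive combiners are easy to state in code. In the sketch below (NumPy), `preds` and `grads` hold the ensemble's predictions and gradients at the current design; how they are computed is left to the caller and is an assumption of the sketch.

```python
import numpy as np

def mean_gradient(grads):
    """Mean-gradient combiner: average the per-model gradients (shape (m, d) -> (d,))."""
    return np.mean(grads, axis=0)

def min_gradient(preds, grads):
    """Minimum-gradient combiner: follow the gradient of the model with the lowest
    prediction at the current design, i.e. the gradient of min_i f_i(x)."""
    return grads[int(np.argmin(preds))]

# Usage sketch inside the ascent loop (hypothetical ensemble callables):
# preds = np.array([f_i(x) for f_i in ensemble_fns])
# grads = np.stack([g_i(x) for g_i in ensemble_grad_fns])
# x = x + lr * min_gradient(preds, grads)
```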
### Multiple Gradient Descent Algorithm (MGDA) and Conflict-Averse Gradient Descent (CAGrad)
We consider two alternative functions for \(g\) that combine the gradient information from the models in the ensemble: multiple gradient descent algorithm (MGDA) [10] and conflict-averse gradient descent (CAGrad) [12]. Both functions are formulated as convex optimization problems. MGDA was developed in the setting of multi-objective optimization, while CAGrad was developed for multi-task learning.
For MGDA, the gradient is defined in terms of the following convex optimization problem:
\[\max_{d}\ \min_{i}\ \langle d,g_{i}\rangle-\frac{1}{2}\left\|d\right\|^{2}\]
where
\[g_{i}=\nabla_{x}\hat{f}_{i}(x),\ \ i=1,\ldots,m\]
For CAGrad, the gradient is defined in terms of the following convex optimization problem:
\[\max_{d}\ \min_{i}\ \langle d,g_{i}\rangle\ \text{s.t.}\ \ \left\|d-g_{0} \right\|\leq c\left\|g_{0}\right\|\]
where
\[g_{i}=\nabla_{x}\hat{f}_{i}(x),\ \ i=1,\ldots,m\]
\(c\in[0,1)\) is a hyper-parameter, and \(g_{0}=\frac{1}{m}\sum_{i=1}^{m}\nabla_{x}\hat{f}_{i}(x)\) is the average gradient.
The high-level intuition behind these methods is that they search for a design with a high model-predicted objective value, while leveraging the worst local improvement of the models in the ensemble to regularize the optimization trajectory.
More specifically, assume we update our design \(x\) by \(x\gets x+\alpha d\), where \(\alpha\) is a step size and \(d\) is the update vector. Then, the minimum improvement rate across the models in the ensemble is given by:
\[R(x,d) =\min_{i\in\{1,\ldots,m\}}\left(\frac{1}{\alpha}(f_{i}(x+\alpha d )-f_{i}(x))\right)\] \[\approx\min_{i\in\{1,\ldots,m\}}\langle g_{i},d\rangle\]
where we use the first-order Taylor approximation, assuming \(\alpha\) is small.
We can view both MGDA and CAGrad as looking for the "best" update vector \(d\) within a local ball, where we define the "best" update vector as the one that maximizes the worst improvement rate. The difference between the two is the local ball within which we search for the optimal update vector, where MGDA is centered at zero, while CAGrad is centered at the average gradient \(g_{0}\).
In theory, MGDA provably converges to an arbitrary point on the Pareto set [10], while CAGrad provably converges to a stationary point of the average proxy objective, when \(0\leq c<1\)[11].
The differences between mean gradient, MGDA, and CAGrad are illustrated in Figure 2, taken from [11].
## 2 Methods
We evaluate the performance of five algorithms
* Single Proxy Model, Naive Gradient Ascent
* Ensemble Proxy Model, Mean Gradient Ascent
* Ensemble Proxy Model, Min Gradient Ascent
* Ensemble Proxy Model, MGDA
* Ensemble Proxy Model, CAGrad
on five diverse benchmark tasks selected from [12]. The tasks and relevant details are listed in Table 1.
### General Procedure
Each task has a corresponding dataset, which we refer to as the "total dataset" for the task. In order to evaluate our MBO algorithms, we take a subset of the total dataset, which we call the "MBO dataset."
The general procedure we use for evaluation is as follows:
1. The design/proxy model(s) are trained on the MBO dataset, which is a subset (bottom K% of target values) of the total dataset for the task.
2. We take the top 128 (i.e. highest target values) inputs in the MBO dataset and optimize each one using our algorithm to produce 128 designs.
3. Next, we calculate the (a) max ground-truth score, (b) the 50th percentile ground-truth score, and (c) the average ground-truth score of the 128 designs.
For all the ensemble proxy model methods, we use six ensemble models, which all have the same architecture, but are trained and validated on different subsets of the MBO dataset.
The motivation for the above approach is that, in real design scenarios, it is usually impractical to empirically test and use every design produced by a given algorithm. Instead, we would only select the top-scored designs according to our algorithm and experimentally validate them and use them. Thus, ideally, we want an algorithm whose best set of designs is actually the best under the true objective function.
### Oracles
For the third step in the above procedure, some tasks have an exact oracle, meaning that the ground truth
Figure 2: Comparison of methods from “Conflict-Averse Gradient Descent for Multi-task Learning” [11]. We compare naive gradient descent (GD, top) to MGDA (middle) and CAGrad (bottom).
for every possible permutation of inputs is provided in a lookup table or that individual designs are cheap to evaluate, while other tasks use a learned oracle (e.g. neural network, random forest model) as a proxy for the ground truth. In this case, we train a separate oracle model on the total dataset for the task. Details on the oracles for each task are listed in Table 1.
### Dual Formulations
When we use the primal formulation of MGDA and CAGrad, which are presented above, the number of decision variables is equal to the dimensionality of the input designs. In some cases, this is computationally feasible. However, for some of our tasks, the inputs are high-dimensional, and we need to use the dual formulation of MGDA and CAGrad, where the number of decision variables is equal to the number of models in the ensemble (i.e. 6). For both MGDA and CAGrad, since the primal problem is convex and Slater's condition holds, we have strong duality.
As before, we define:
\[g_{i}=\nabla_{x}\hat{f}_{i}(x),\;\;i=1,\ldots,m\]
\(c\in[0,1)\) is a hyper-parameter for CAGrad, and \(g_{0}=\frac{1}{m}\sum_{i=1}^{m}\nabla_{x}\hat{f}_{i}(x)\) is the average gradient.
The dual formulation of MGDA is
\[\min_{w}\frac{1}{2}\left\|\sum_{i=1}^{m}w_{i}g_{i}\right\|^{2}\;\;\text{s.t.}\;\;\sum_{i=1}^{m}w_{i}=1\;\;\text{and}\;\;\forall i,w_{i}\geq 0\]
[Liu+21].
The dual formulation of CAGrad is
\[\min_{w}g_{w}^{T}g_{0}+\sqrt{\phi}||g_{w}||\;\;\text{s.t.}\;\;\sum_{i=1}^{m}w_{i}=1\;\;\text{and}\;\;\forall i,w_{i}\geq 0\]
where \(g_{w}=\sum_{i}w_{i}g_{i}\) and \(\phi=c^{2}||g_{0}||^{2}\). The optimal update vector is \(d^{*}=g_{0}+g_{w^{*}}/\lambda^{*}\), where \(\lambda^{*}=\left\|g_{w^{*}}\right\|/\sqrt{\phi}\), [Liu+21].
We can interpret the dual formulations as finding the weights for a weighted average of the gradients from each model in the ensemble.
Details regarding which formulation we use for each benchmark task are listed in Table 1.
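As an illustration of the dual formulations, the sketch below solves both small convex problems over the probability simplex with SciPy's SLSQP solver and forms the corresponding update vectors. This is one straightforward way to solve these \(m\)-dimensional problems and is not necessarily the solver used in our experiments; the function names are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def _solve_on_simplex(objective, m):
    """Minimize an objective over {w : w >= 0, sum_i w_i = 1} with SLSQP."""
    w0 = np.full(m, 1.0 / m)
    res = minimize(objective, w0, method="SLSQP", bounds=[(0.0, 1.0)] * m,
                   constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}])
    return res.x

def mgda_update(grads):
    """Dual MGDA: min-norm convex combination of the gradients; the update is g_{w*}."""
    G = np.asarray(grads, dtype=float)                     # shape (m, d)
    w = _solve_on_simplex(lambda w: 0.5 * np.sum((w @ G) ** 2), G.shape[0])
    return w @ G

def cagrad_update(grads, c=0.5):
    """Dual CAGrad: minimize g_w^T g_0 + sqrt(phi) ||g_w||, then form d* = g_0 + g_{w*}/lambda*."""
    G = np.asarray(grads, dtype=float)
    g0 = G.mean(axis=0)
    sqrt_phi = c * np.linalg.norm(g0)
    w = _solve_on_simplex(lambda w: (w @ G) @ g0 + sqrt_phi * np.linalg.norm(w @ G), G.shape[0])
    gw = w @ G
    lam = np.linalg.norm(gw) / max(sqrt_phi, 1e-12)        # lambda* = ||g_{w*}|| / sqrt(phi)
    return g0 + gw / max(lam, 1e-12)
```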
### Gradient Ascent Procedure
For discrete tasks, we perform gradient updates in one-hot space. After each update, we map the sequence back to discrete space by taking an argmax.
For continuous tasks, we normalize the inputs, so that each position has zero mean and unit variance. We then perform gradient updates in this space.
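For the discrete tasks, one update therefore consists of a step in the relaxed one-hot space followed by a projection back to a hard sequence. A minimal sketch (NumPy; shapes and names are illustrative):

```python
import numpy as np

def discrete_ascent_step(x_onehot, grad, lr):
    """x_onehot has shape (sequence_length, vocab_size); grad is the proxy gradient w.r.t. it."""
    x_soft = x_onehot + lr * grad                    # gradient step in one-hot space
    tokens = np.argmax(x_soft, axis=-1)              # map back to discrete space via argmax
    x_hard = np.zeros_like(x_onehot)
    x_hard[np.arange(x_onehot.shape[0]), tokens] = 1.0
    return x_hard, tokens
```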
### Hyperparameter Selection
Each task has three relevant hyperparameters:
* Design Model Architecture and Training Parameters
* Gradient Ascent Parameters: (Number of Gradient Update Steps, Learning Rate)
* CAGrad Hyperparameter
\begin{table}
\begin{tabular}{l l l l l l l} & \multicolumn{1}{c}{**Total Dataset Size**} & \multicolumn{1}{c}{**MBO Dataset Size**} & \multicolumn{1}{c}{**Dimensions**} & \multicolumn{1}{c}{**Type**} & \multicolumn{1}{c}{**Oracle (Spear. Correlation)**} & \multicolumn{1}{c}{**Primal or Dual? (\# of Decision Variables)**} \\ \hline
**TF Bind 8** & 65536 & 32898 & (8, 4) & Discrete & Lookup Table & Primal (32) \\
**TF Bind 10** & 1048576 & 50000 & (10, 4) & Discrete & Lookup Table & Primal (40) \\
**ChEMBL** & 1093 & 546 & (31, 591) & Discrete & Random Forest (0.792) & Dual (6) \\
**Hopper Controller** & 3200 & 3200 & 5126 & Continuous & Exact & Dual (6) \\
**Ant Morphology** & 25009 & 15005 & 60 & Continuous & Exact & Primal (60) \\ \hline \hline & \multicolumn{1}{c}{**Grad. Asc. Parameters (\# of rounds, \(\alpha\))**} & \multicolumn{1}{c}{**Design Model Architecture (\# of parameters)**} & \multicolumn{1}{c}{**Design Model Results (Val. Spear., Val. Loss)**} & \multicolumn{1}{c}{**CAGrad Parameter**} \\ \hline
**TF Bind 8** & (200, 10) & FullyConnected (1140801) & (0.43, 0.15) & c=0.5 \\
**TF Bind 10** & (200, 50) & FullyConnected (2122753) & (0.97, 0.00) & c=0.5 \\
**ChEMBL** & (200, 100) & FullyConnected (6582401) & (0.77, 0.13) & c=0.5 \\
**Hopper Controller** & (200, 1) & FullyConnected (5252097) & (0.87, 0.28) & c=0.3 \\
**Ant Morphology** & (200, 0.03) & FullyConnected (8460289) & (0.57, 0.33) & c=0.2 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Comparison of the TF Bind 8, TF Bind 10, ChEMBL, Hopper, and Ant Morphology tasks.
In order to select and train the design model architecture, we randomly split the MBO Dataset into a training set and validation set, and we selected the model with the best validation spearman correlation and validation mean-squared loss.
For the gradient ascent parameters, we fixed the number of gradient update steps to 200, and we selected the learning rate by visually analyzing optimization trajectories (i.e. plot of proxy model(s) prediction vs. number of rounds of mutation). We selected the CAGrad hyperparameter \(c\) in a similar fashion.
It is important that hyperparameter tuning should be done purely offline, without any access to the ground truth objective or oracle.
Details regarding the hyperparameters we use for each task are listed in Table 1.
### Discrete Benchmark Tasks
We detail the three discrete tasks on which we perform evaluation.
The **TF Bind 8** task [1] is based on an empirical dataset of measurements of binding activity between a variety of human transcription factors and every possible length-8 DNA sequence. The optimization goal is to identify DNA sequences that maximizing the binding activity score for each TF. The design space for sequences is comprised of four categorical variables, one representing each nucleotide (A, T, C, or G). The oracle for TF Bind 8 is exact (a lookup table containing ground-truth values for every possible permutation of inputs).
The **TF Bind 10** task [1] is a neural network produced dataset of predicted estimates of the relative binding affinities between all unique length-10 DNA sequences and each of two protein targets. The optimization goal is to identify DNA sequences that maximize the predicted binding affinity to targets. The design space for sequences is A, T, C, and G as before. The oracle for TF Bind 10 is exact (a lookup table containing ground-truth values for every possible permutation of inputs).
The **ChEMBL** task [1] is a dataset of pairs of molecules and assays which test for specific functional properties of those molecules. The optimization goal is to design a molecule that achieves a high functional property score on a specific assay. The design space for molecules is based on SMILES encodings (rather than amino acids), resulting in a design space of categorical variables that take one of 591 values, for sequences of length 31. The oracle for ChEMBL is a random forest, which achieves a spearman correlation of 0.792 on the dataset.
### Continuous Benchmark Tasks
We detail the two continuous tasks on which we perform evaluation.
The **Hopper Controller** task is an OpenAI gym locomotion task [1]. The optimization goal is to design a set of weights for a controller neural network (representing a policy) that will optimize for expected return on the locomotion task. While Hopper is typically a reinforcement learning task, we formulate it as offline MBO by utilizing a supervised dataset of neural network controlled weights matched with expected return values. There are 5126 continuous variables corresponding to the flattened weights of this neural network. In order to evaluate the ground truth score for a design, we simply load in the weights of the neural network controller and run 1000 steps of simulation in the MuJoCo simulator [1] used with this environment.
The **Ant Morphology** task is an OpenAI gym task [1]. The goal is to optimize the morphology (e.g. size, orientation, location of limbs) of Ant, a simulated robot whose goal is to run fast (i.e. a locomotion task) in its environment. There are 60 continuous variables corresponding to these morphological parameters. We obtain a design's ground truth score by running robotic simulation in the MuJoCo simulator [1] for 100 time steps, averaging 16 independent trials.
## 3 Results
We report three metrics from the top 128 designs of each algorithm: (a) max ground truth score (Table 2), (b) 50th percentile ground truth score (Table 3), (c) average ground truth score (Table 4).
In order to report performance on the same order of magnitude across tasks, we normalize the max ground truth score and the 50th percentile ground truth scores using the formula
\[y_{\text{normalized}}(y)=\frac{y-y_{\text{min}}}{y_{\text{max}}-y_{\text{min}}}\]
where \(y_{\text{max}}\) and \(y_{\text{min}}\) are the maximum and minimum objective values in the total dataset for each task. By definition, a normalized score greater than 1 means that we have designed an input that is better than any input in the total dataset for the task.
We do not normalize the average ground truth scores.
Here are the results we observed:
* For the max ground truth score, we find that MGDA is in the top 2 best-performing algorithms on 4 tasks and is the best-performing algorithm on 2 tasks.
CAGrad is in the top 2 best-performing algorithms on 2 tasks.
* For the 50th percentile ground truth score, MGDA is the best-performing algorithm on 1 task. CAGrad is in the top 2 best-performing algorithms on 3 tasks and is the best-performing algorithm on 2 tasks.
* Finally, for the average ground truth score, MGDA is the best-performing algorithm on 2 tasks. CAGrad is in the top 2 best-performing algorithms on 3 tasks and is the best-performing algorithm on 2 tasks.
In general, we observe that MGDA and CAGrad performed roughly as well as, if not better than, other algorithms on the max ground truth score. However, when considering the 50th percentile and average ground truth scores, we found that MGDA performed much better than other algorithms on the continuous tasks and CAGrad performed much better than other algorithms on the discrete tasks. This suggests that MGDA and CAGrad are more conservative and less susceptible to being "fooled" by invalid designs.
## 4 Discussion and Future Work
### Interpretation of Results
Based on our results, MGDA and CAGrad seem to robustify data-driven offline MBO without compromising optimality of designs. In real-world design scenarios, utilizing MGDA or CAGrad over mean or minimum gradient could be well-motivated in contexts where we care about generating a diverse dataset of good designs rather than one-off good designs. Often, computational design projects involve repeated iteration between design generation and experimental validation. Usually, it is more practical and efficient to experimentally validate many proposed good designs at a time (e.g. a dataset), rather than repeated iteration over a single design.
### Future Work
We faced some key challenges in this work that present opportunities for future research. First, hyperparameter selection in a purely offline manner is difficult, and future work should explore better, more rigorous methods for offline hyperparameter tuning.
Second, there are a lot of different approaches for using a gradient ascent optimizer in discrete space: gradient normalization, alternating between updates in soft-space and updates in hard-space, and more. In our experimentation, we found that certain methods, such as gradient normalization, improved the performance of MGDA on discrete tasks significantly, but for consistency, we present results for a relatively simple gradient ascent optimizer. Future work should study alternate methods for gradient ascent in discrete space.
Third, a key challenge we faced was with the ChEMBL
| | **TF Bind 8** | **TF Bind 10** | **ChEMBL (Random Forest Oracle)** | **Hopper Controller** | **Ant Morphology** |
| --- | --- | --- | --- | --- | --- |
| **dataset** | 0.439 | 0.240 | 0.635 | 1.0 | 0.747 |
| **single model** | 0.976 | 0.682 | 0.808 | 1.544 | 0.807 |
| **ensemble, mean** | 0.973 | 0.754 | 0.777 | 2.829 | 0.944 |
| **ensemble, min** | 0.976 | 0.726 | 0.788 | 3.040 | 0.977 |
| **ensemble, MGDA** | 0.979 | 0.734 | 0.800 | 3.579 | 0.949 |
| **ensemble, CAGrad** | 0.976 | 0.735 | 0.774 | 2.815 | 0.924 |

Table 2: **Max (Normalized) ground-truth score of the top 128 generated designs.** For each task, the best score is green, and the second best score is blue. The “dataset” row in yellow is the normalized score of the best design in the starting offline MBO dataset.
| | **TF Bind 8** | **TF Bind 10** | **ChEMBL (Random Forest Oracle)** | **Hopper Controller** | **Ant Morphology** |
| --- | --- | --- | --- | --- | --- |
| **single model** | 0.758 | 0.568 | 0.740 | 0.658 | 0.415 |
| **ensemble, mean** | 0.750 | 0.685 | 0.770 | 0.655 | 0.689 |
| **ensemble, min** | 0.737 | 0.688 | 0.685 | 0.646 | 0.707 |
| **ensemble, MGDA** | 0.683 | 0.686 | 0.760 | 0.650 | 0.735 |
| **ensemble, CAGrad** | 0.811 | 0.692 | 0.768 | 0.629 | 0.704 |

Table 3: **50th Percentile (Normalized) ground-truth score of the top 128 generated designs.** For each task, the best score is green, and the second best score is blue.
task and other tasks we tried out which don't have an exact oracle. Using a learned oracle to evaluate how robust our design algorithms are to out-of-distribution designs is unreliable, because learned oracles usually suffer from the same distribution shift problem as the proxy design models. Finding a way to reliably evaluate design algorithms using learned oracles is an important area for future research, because many real-world design tasks don't have exact oracles.
Finally, although we use a diverse set of tasks, future work can study CAGrad and MGDA on more tasks, especially with a focus on the unique characteristics of tasks that may make MGDA more suitable than CAGrad, or vice versa.
| | **TF Bind 8** | **TF Bind 10** | **ChEMBL (Random Forest Oracle)** | **Hopper Controller** | **Ant Morphology** |
| --- | --- | --- | --- | --- | --- |
| **single model** | 1.524 | 0.404 | 0.547 | 563.45 | 19.426 |
| **ensemble, mean** | 1.494 | 1.112 | 0.706 | 578.11 | 251.980 |
| **ensemble, min** | 1.410 | 1.138 | 0.285 | 598.55 | 281.266 |
| **ensemble, MGDA** | 1.055 | 1.117 | 0.615 | 602.12 | 306.254 |
| **ensemble, CAGrad** | 1.895 | 1.150 | 0.695 | 564.72 | 236.802 |

Table 4: **Average (Unnormalized) ground-truth score of the top 128 generated designs.** For each task, the best score is green, and the second best score is blue. |
2309.10966 | MBR and QE Finetuning: Training-time Distillation of the Best and Most
Expensive Decoding Methods | Recent research in decoding methods for Natural Language Generation (NLG)
tasks has shown that MAP decoding is not optimal, because model probabilities
do not always align with human preferences. Stronger decoding methods,
including Quality Estimation (QE) reranking and Minimum Bayes' Risk (MBR)
decoding, have since been proposed to mitigate the model-perplexity-vs-quality
mismatch. While these decoding methods achieve state-of-the-art performance,
they are prohibitively expensive to compute. In this work, we propose MBR
finetuning and QE finetuning which distill the quality gains from these
decoding methods at training time, while using an efficient decoding algorithm
at inference time. Using the canonical NLG task of Neural Machine Translation
(NMT), we show that even with self-training, these finetuning methods
significantly outperform the base model. Moreover, when using an external LLM
as a teacher model, these finetuning methods outperform finetuning on
human-generated references. These findings suggest new ways to leverage
monolingual data to achieve improvements in model quality that are on par with,
or even exceed, improvements from human-curated data, while maintaining maximum
efficiency during decoding. | Mara Finkelstein, Subhajit Naskar, Mehdi Mirzazadeh, Apurva Shah, Markus Freitag | 2023-09-19T23:39:07Z | http://arxiv.org/abs/2309.10966v6 | # MBR and QE Finetuning: Training-time Distillation of the Best and Most Expensive Decoding Methods
###### Abstract
Recent research in decoding methods for Natural Language Generation (NLG) tasks has shown that MAP decoding is not optimal, because model probabilities do not always align with human preferences. Stronger decoding methods, including Quality Estimation (QE) reranking and Minimum Bayes' Risk (MBR) decoding, have since been proposed to mitigate the model-perplexity-vs-quality mismatch. While these decoding methods achieve state-of-the-art performance, they are prohibitively expensive to compute. In this work, we propose _MBR finetuning_ and _QE finetuning_, which distill the quality gains from these decoding methods at training time, while using an efficient decoding algorithm at inference time. Using the canonical NLG task of Neural Machine Translation (NMT), we show that even with self-training, these finetuning methods significantly outperform the base model. Moreover, when using an external LLM as a teacher model, these finetuning methods outperform finetuning on human-generated references. These findings suggest new ways to leverage monolingual data to achieve improvements in model quality that are on par with, or even exceed, improvements from human-curated data, while maintaining maximum efficiency during decoding.
## 1 Introduction
Beam search and greedy decoding are the most common decoding methods used for Natural Language Generation (NLG) tasks. However, Eikema and Aziz (2020) showed that maximum _a posteriori_ (MAP) decoding methods, which approximate the most likely prediction based on model probabilities, may be suboptimal due to misaligned probability distributions. They instead proposed Minimum Bayes' Risk (MBR) decoding as an alternative decoding method. Unlike MAP decoding, MBR decoding does not aim to produce the prediction with the highest estimated model probability. Instead, it chooses the prediction that is estimated to have the highest quality with respect to a utility metric. A follow-up study by Freitag et al. (2022) showed that MBR decoding with neural utility metrics like _BLEURT_(Sellam et al., 2020) or _COMET_(Rei et al., 2020) significantly outperforms beam search decoding, according to expert-based human evaluation.
However, the main drawback of MBR decoding is that it is prohibitively expensive. In particular, the algorithm requires that, for every input query, a large number \(n\) of candidates be generated from the model, and then an (expensive) scoring function be computed on every pair of distinct candidates \((n_{i},n_{j})\), for a total of \(O(n^{2})\) computations.
Given the significant quality improvements afforded by MBR decoding relative to beam search, we propose to distill the MBR quality gains at training time, without affecting decoding speed or resource usage. Despite its quality advantages, however, the slow inference speed of MBR decoding remains a limitation even when generating distillation data. Instead of MBR decoding, we can therefore rerank the same candidate model predictions using a neural quality estimation (QE) metric. Reranking is faster than MBR decoding, because its inference speed scales linearly with the number of candidate predictions, rather than quadratically. Fernandes et al. (2022) showed that reranking with neural metrics produces better predictions than beam search, and that it has similar benefits to MBR decoding.
In this work, we focus on the NLG task of Neural Machine Translation (NMT), and show that we can benefit from the quality gains of MBR decoding and QE reranking by finetuning NMT models on MBR-decoded and QE-reranked sentences generated from monolingual sources, either via self-training or using an external teacher model, and then using a more efficient decoding method (such as beam search) at inference time.
Our contributions can be summarized as follows:
* We propose two finetuning methods, _MBR finetuning_ and _QE finetuning_, each of which distills performance gains from MBR decoding and QE reranking, respectively, into the same model while avoiding expensive decoding at inference time.
* Using the task of NMT, we show that these finetuning methods significantly outperform the base model across two language pairs, while fine-tuning on beam search output degrades quality.
* We show that both MBR and QE finetuning on top of a model finetuned on human translations yields additional quality improvements.
* We show that using an LLM as the teacher model substantially outperforms using a self-teacher for MBR and QE finetuning.
* Moreover, we show that both MBR and QE finetuning from the base student model using an LLM teacher even outperform finetuning on human translations.
## 2 MBR Decoding and QE Reranking
Broadly, both MBR decoding and QE reranking can be decomposed into two steps:
1. Given a source segment, generate a list of candidate model outputs. In this work, we use sampling to generate the candidate translations from an NMT model.
2. Choose the best output based on a utility function. In this work, we use either a neural QE metric or a neural reference-based metric as the utility function.
### Candidate list generation
The first step is identical for both decoding strategies. We generate candidate translations using epsilon sampling (setting \(\epsilon\)=0.02), which was shown to be the best sampling method for MBR decoding in Freitag et al. (2023).
### Minimum Bayes' Risk (MBR) scoring
MBR scoring uses the set \(\mathcal{H}\) of samples obtained from the first step both as candidate translations and as "pseudo-references", then uses a reference-based utility metric to estimate the expected utility of each candidate translation with respect to the set of pseudo-references. The candidate with the highest expected utility is chosen as the best translation.
That is, given a utility metric \(u(h,r)\) which estimates the quality of a candidate translation \(h\) conditioned on a reference translation \(r\), we select the Minimum Bayes' Risk (MBR) translation \(h^{mbr}\) from a set of hypotheses \(\mathcal{H}\) as
\[h^{mbr}=\operatorname*{arg\,max}_{h\in\mathcal{H}}\frac{1}{|\mathcal{H}|}\sum _{y\in\mathcal{H}}u(h,y)\]
Freitag et al. (2022) showed that neural utility metrics outperform lexical overlap-based metrics, so we use _BLEURT v0.2_(Sellam et al., 2020) as the utility function \(u\).
Note that the number of forward passes through the utility function required to compute \(h^{mbr}\) is quadratic in the size of the candidate set \(\mathcal{H}\). In practice, this means that MBR decoding is prohibitively expensive.
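A minimal sketch of this selection rule, with `utility` standing in for a reference-based metric such as _BLEURT_; the double loop makes the quadratic cost explicit:

```python
def mbr_select(candidates, utility):
    """Return the candidate with the highest expected utility over all pseudo-references."""
    best, best_score = None, float("-inf")
    for h in candidates:
        # Average utility of h against every sampled candidate, including itself.
        expected = sum(utility(h, y) for y in candidates) / len(candidates)
        if expected > best_score:
            best, best_score = h, expected
    return best
```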
### Scoring with a QE metric
Alternatively, QE reranking uses a reference-free utility metric to score the candidate translations, so that rather than requiring an average over all pseudo-references to compute each candidate's utility, only a single forward pass through the metric model is required. Thus, QE reranking is linear, rather than quadratic, in the candidate size.
Formally, a QE utility metric \(u(h,s)\) estimates the quality of a candidate translation \(h\) conditioned on the source \(s\), rather than on the reference. We select the best QE translation \(h^{qe}\) of the source \(s\) from a set of hypotheses \(\mathcal{H}\) as
\[h^{qe}=\operatorname*{arg\,max}_{h\in\mathcal{H}}u(h,s)\]
There is no QE version of _BLEURT_, so we instead use _MetricX-XXL-QE_ as our utility function. This metric has the same architecture and was trained on the same human judgements data as _MetricX-XXL_, the winning submission to the WMT 2022 Metrics Shared Task (Freitag et al., 2022). To make it a QE metric, we pass the source segment as input to the metric instead of the reference.
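The corresponding QE selection needs only one metric call per candidate; `qe_utility` stands in for a reference-free metric such as _MetricX-XXL-QE_:

```python
def qe_rerank(candidates, source, qe_utility):
    """Return the candidate with the highest source-conditioned QE score (linear cost)."""
    return max(candidates, key=lambda h: qe_utility(h, source))
```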
Table 1 shows the meta-evaluation of the (reference-based) _BLEURT_ metric against the (reference-free) _MetricX-XXL-QE_. We measure the metric performance with respect to ground-truth translation quality ratings using the benchmark MQM dataset from WMT 2022 on en\(\rightarrow\)de. At the segment-level, we report the "group-by-item" variant of the pairwise accuracy meta-evaluation metric proposed by Deutsch et al. (2023). Note that a random metric would achieve \(33\%\) accuracy. We also report system-level pairwise accuracy. According to both the segment-level and system-level
meta-evaluation metrics, _BLEURT_ and _MetricX-XXL-QE_ perform comparably.
See Figure 2 for a comparison of the ranking of all sampled translations of a single source sentence by MBR scoring with _BLEURT_ (as described in SS2.2) versus QE scoring with _MetricX-XXL-QE_ (as described in SS2.3), and see Figure 3 for examples of the MBR and QE score distributions of all sampled candidates versus the top-1 candidate.
## 3 MBR and QE Finetuning
MBR decoding and QE reranking are both inference-time solutions to the "model-perplexity-versus-quality mismatch". We propose _MBR and QE finetuning_ to instead ameliorate this mismatch at training time via direct adaptation of the model weights, without having to incur high costs and resource usage at inference time.
Concretely, the method of MBR finetuning can be decomposed into the following steps:
1. Dataset generation: Generate MBR-decoded translations (see SS2.2) for some monolingual data source using a teacher model \(T\).
2. Finetuning: Finetune student model \(S\) on the dataset generated in step 1.
    1. _Variant 1: Self-training (\(T=S\))_. The finetuning is initialized from the same model checkpoint used to generate the MBR-decoded dataset.
    2. _Variant 2: Distillation (\(T\neq S\))_. The student model \(S\) is finetuned on the dataset generated by the teacher model \(T\).
3. Finally, at inference time, decode with the finetuned student model \(S\) using a more efficient decoding method such as beam search, greedy search, or sampling.
QE finetuning is analogous to MBR finetuning, with the only difference being that QE reranking (SS2.3) is used as the decoding strategy instead of MBR decoding during dataset creation in step 1.
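Both recipes can be summarized in one sketch; `teacher.sample`, `student.finetune`, and `student.beam_search` are placeholder hooks rather than real library calls, and `select_fn` is either the MBR or QE selection from SS2:

```python
def build_distillation_data(sources, teacher, select_fn, n_samples=256):
    """Step 1: decode each monolingual source with MBR or QE selection over sampled candidates."""
    data = []
    for src in sources:
        candidates = [teacher.sample(src, epsilon=0.02) for _ in range(n_samples)]
        data.append((src, select_fn(candidates, src)))
    return data

def mbr_or_qe_finetune(student, teacher, monolingual_sources, select_fn):
    """Steps 2-3: finetune the student on the distilled pairs, then decode cheaply at inference."""
    data = build_distillation_data(monolingual_sources, teacher, select_fn)
    student.finetune(data)
    return lambda src: student.beam_search(src)
```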
## 4 Related Work
MBR decoding with neural metrics was first proposed by Freitag et al. (2022), where it was shown to outperform QE reranking when using _BLEURT v0.2_(Sellam et al., 2020) as the MBR utility metric and _COMET-QE-21_(Rei et al., 2021) as the QE utility metric.
Fernandes et al. (2022) systematically explored different ranking strategies for so-called _quality-aware decoding_ and also found that (top-1) QE reranking underperformed the beam search baseline. However, _tuned reranking_, where four different QE metrics are linearly combined with weights learned so as to maximize a reference-based metric on a validation set, showed strong improvements over the beam search baseline.
Despite the strong performance of MBR decoding and QE reranking on average, Amrhein and Sennrich (2022) showed that these decoding methods suffer from failure modes which can be attributed to underlying problems with the utility function. They performed the study using _COMET_(Rei et al., 2020) as the utility function, and showed that this metric is not sensitive enough to discrepancies in numbers and named entities. Our proposed methods of MBR and QE finetuning, on the other hand, would not necessarily be as susceptible to these same failure modes since, even though the model would be exposed to these extreme examples, its representation would be smoothed out over the course of training.
Shen et al. (2015) propose a training-time approach to directly optimize evaluation metrics, which they call _Minimum Bayes Risk Training_. They initialize from a model already trained on the maximum likelihood estimation (MLE) objective and then use an alternative loss function which aligns the model's probability with the evaluation metric. In particular, for each source sentence, they score \(N\) samples from the model using the evaluation metric, with the target side of the training example used as the reference. Then the risk for each sample is calculated by multiplying the metric score with the sample's model probability, and the loss for this example is computed as the negative sum of the risks of all samples.
So rather than using the single best sample according to the evaluation metric (as in MBR finetuning), they weight the loss by the risk of all generated samples. Also, unlike MBR finetuning which uses a static teacher, their (self-)teacher evolves
| **Metric** | **Seg-Level** \(\mathrm{acc}^{*}\) | **Sys-Level** \(\mathrm{acc}\) |
| --- | --- | --- |
| BLEURT | 57.5 | 76.9 |
| MetricX-XXL-QE | 59.1 | 76.9 |

Table 1: Segment-level and system-level meta-evaluation results on the WMT’22 MQM en\(\rightarrow\)de test set.
over the course of training as model weights are updated. Note that they only consider _BLEU_(Papineni et al., 2002) as their evaluation metric (and do not consider neural metrics). Also, unlike MBR finetuning, their method requires references, though using Bayes' Risk scoring instead would be a natural alternative. The quality improvements achieved with this method were modest, and came at the cost of unstable and expensive training.
Gulcehre et al. (2023) proposed a method for offline reinforcement learning-based finetuning called _ReST_, which generates a finetuning dataset by sampling from the "policy" (i.e. model), scoring each sample with a QE metric, then iteratively fine-tuning on these samples using a reward-weighted loss based on the QE scores (after removing samples with scores below a fixed threshold, which becomes stricter at later iterations). The method of QE finetuning proposed in this paper, on the other hand, only selects the top-1 sample based on QE score. Moreover, _ReST_ finetuning uses ancestral, rather than epsilon, sampling and samples from the original base training dataset, without exploring the effect of the dataset used on model performance.
Extensive prior work in unsupervised and semi-supervised machine translation has also investigated how monolingual data can be used to improve translation model quality. The techniques of forward and backward translation have resulted from this line of research (Sennrich et al., 2016), but the use of alternative decoding algorithms to generate this data has not been fully explored.
A related line of work has focused on distilling large teacher models into smaller student models, by penalizing differences between the translations produced by the teacher and the student (Hinton et al., 2015). This work has shown that the quality and style of teacher models can be transferred to students (Freitag and Al-Onaizan, 2016).
## 5 Experimental Setup
### Datasets
We perform experiments on two language pairs, English-German (high-resource) and English-Japanese (medium-resource).
#### 5.1.1 English-German (en\(\rightarrow\)de)
**Base training data.** The base model was trained on the English-German WMT 2021 training data (Akhbardeh et al., 2021). We used all parallel data, after filtering based on length heuristics (removing source sentences longer than 250 tokens and sentence pairs with a source/target ratio exceeding 1.5) and language id, performing deduplication, and normalizing punctuation. We also used the WMT 2021 monolingual NewsCrawl data for backtranslation (with a WMT German-English backtranslation model). After filtering, the training dataset had 89.4 million sentences.
**Finetuning data.** We used previous WMT test sets from the years 2009-2019 as finetuning data (Akhbardeh et al., 2021). Our finetuning dataset had a total of 30,426 sentences. The target side of this dataset is human-translated references, and we finetuned both on these human references and on MBR-decoded and QE-reranked translations generated from the source-side sentences.
#### 5.1.2 English-Japanese (en\(\rightarrow\)ja)
**Base training data.** The base model was trained on the English-Japanese WMT 2022 parallel training data (Kocmi et al., 2022). The data was deduped and filtered using CDS (Wang et al., 2018), where the top \(30\%\) of the data by CDS score was preserved. After filtering, our training dataset had 8 million sentences.
**Finetuning data.** We used the WMT 2020 test set, comprised of 1000 sentences, as finetuning data (Kocmi et al., 2022). As with English-German, we finetuned both on the provided references, and on MBR and QE translations generated from this dataset's source sentences.
#### 5.1.3 Monolingual data
The finetuning datasets for en\(\rightarrow\)de and en\(\rightarrow\)ja only have 30k and 1k examples, respectively, as these datasets are constrained by the requirement of high-quality human reference translations. Since MBR and QE finetuning do not require references, we also experiment with using monolingual data-sources. For that, we take a random sample of 200k English sentences from Common Crawl1.
Footnote 1: [https://commoncrawl.org](https://commoncrawl.org)
#### 5.1.4 MBR and QE data
We generate MBR and QE translations both from the en\(\rightarrow\)de and en\(\rightarrow\)ja finetuning datasets (SS5.1.1 and SS5.1.2) and from the monolingual CommonCrawl dataset (SS5.1.3). We generate 256 candidate translations per source via epsilon sampling (setting \(\epsilon\)=\(0.02\)), and use the same set of sampled translations to generate both the MBR and QE data.
We use _BLEURT_(Sellam et al., 2020) as the MBR utility metric and _MetricX-XXL-QE_, a QE version of _MetricX-XXL_(Freitag et al., 2022) with the same architecture and trained on the same human judgements data, as the QE utility metric. In addition to generating MBR and QE translations as finetuning data, we also generate beam search translations from the same datasets as baselines. We use beam size of 4 and length penalty as described in Equation 10 in Wu et al. (2016) with \(\alpha=0.5\).
#### 5.1.5 Development and test sets
For both language pairs, we used newstest2021 as our development set for checkpoint picking, and report all results on the generalMT2022 test set (Kocmi et al., 2022).
### Models and training recipe
**Student model.** For both language pairs (en\(\rightarrow\)de and en\(\rightarrow\)ja), we used a \(375.4\) million parameter Transformer encoder-decoder architecture, implemented in _lingvo_(Shen et al., 2019). The model architecture is similar to the _transformer-big_ setting in Vaswani et al. (2017), with 6 encoder and 6 decoder layers, model dimension of 1024, hidden dimension of 8192, and 16 attention heads. The en\(\rightarrow\)de model used a bilingual vocabulary of 32k subword units and the en\(\rightarrow\)ja model used a multilingual vocabulary of 256k subword units (Kudo and Richardson, 2018). The models were trained without label smoothing to avoid distorting the probability distributions when sampling translations for MBR and QE data generation, as it has been found that label smoothing negatively impacts model fit (Eikema and Aziz, 2021). The best (base and incremental) checkpoints were chosen to maximize _BLEURT_ on the development set.
**Teacher models.** In addition to self-training, we also experimented with using an LLM as the teacher model. Instead of using a prompt-based approach, we finetuned _PaLM-2 Bison_(Anil et al., 2023) on the (reference-based) finetuning datasets (see SS5.1.1 and SS5.1.2) to maximize translation quality.
### Evaluation
We evaluate our models on four automatic metrics: _BLEURT_(Sellam et al., 2020), _MetricX-XXL_(Freitag et al., 2022), _Comet20_(Rei et al., 2020), and the _MetricX-23-c_ submission to the WMT 2023 Metrics Shared Task, which resembles _MetricX-XXL_ but uses a _PaLM-2_ backbone. _MetricX-23-c_ reports scores in the MQM range of \(-25\) to \(0\) (higher is better), where a score of \(-1\) corresponds to 1 minor error and a score of \(-5\), to 1 major error.
Since the MBR data is generated using _BLEURT_ as the utility function, and the QE data is generated using _MetricX-XXL-QE_, the MBR-decoded and MBR-finetuned models may overfit to the _BLEURT_ metric, while the QE-reranked and QE-finetuned models may overfit to the _MetricX-XXL_ metric. So while these metrics can provide useful signals about the effectiveness of the distillation (e.g. from the MBR-decoded teacher to the MBR-finetuned student), they cannot be trusted as unbiased measures of model quality.
To measure model quality, we instead use _Comet20_. Note that _BLEURT_ and _Comet20_ were finetuned on the same human judgements data, as were _MetricX-XXL_ and _MetricX-23-c_, so these pairs of metrics may tend to be highly correlated with each other. Thus, we also report _MetricX-23-c_ as a counterbalance to any _Comet20_-specific biases, which may tend to favor models optimized for _BLEURT_ (e.g. MBR-decoded and MBR-finetuned models). We also verify the trustworthiness of _Comet20_ with a human evaluation study for en\(\rightarrow\)de (see Section 5.3.1). Unless otherwise indicated, all of the evaluation results for the finetuned models are reported using beam search as the decoding strategy (with the same hyperparameters as in SS5.1.4).
#### 5.3.1 Human Evaluation
We hired 9 professional translators and measured translation quality with a document context version of MQM (Lommel et al., 2014) which mimics the setup proposed in Freitag et al. (2021). This includes using the same error categories, severity levels and error weighting schema. As suggested in the study, we weight each major error with \(5\) and each minor error with \(1\), except for minor punctuation errors which get a score of \(0.1\). The final segment-level score is an average over scores from all annotators. We refer the reader to Freitag et al. (2021) for the details on error categories and annotator instructions.
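For clarity, the segment-level computation just described amounts to the following sketch; the data layout (one list of `(severity, category)` error pairs per annotator) is an assumption of this example:

```python
def mqm_segment_score(annotations):
    """Average weighted error counts over annotators: 5 per major error,
    1 per minor error, 0.1 per minor punctuation error."""
    def weight(severity, category):
        if severity == "major":
            return 5.0
        return 0.1 if category == "punctuation" else 1.0
    per_annotator = [sum(weight(sev, cat) for sev, cat in errors) for errors in annotations]
    return sum(per_annotator) / len(per_annotator)
```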
### Experiment Arms
We performed experiments in three phases:
**Phase 1.** Finetune from the base checkpoint using the finetuning datasets described in SS5.1. This phase allows for a direct comparison of MBR and QE finetuning against finetuning on references. In the remaining phases, large corpora of unpaired monolingual data are used to generate the MBR and QE datasets, so no comparison against reference-based finetuning is possible.
**Phase 2.** Finetune from both base and incremental (reference-finetuned) checkpoints using self-MBR and self-QE data generated from the monolingual Common Crawl corpus described in SS5.1.3. In this phase, we investigate whether using a larger corpus of source-side data to generate MBR and QE translations yields additional quality improvements over the MBR and QE-finetuned models from **Phase 1**, and the extent to which this closes the performance gap with finetuning on references.
**Phase 3.** Finetune from both base and incremental (reference-finetuned) checkpoints using MBR and QE data generated from the _PaLM-2 Bison_ teacher model. In this phase, we investigate whether using a teacher model which is stronger than the student affords further quality gains.
## 6 Results
### en\(\rightarrow\)de
The results from all experiment phases are summarized in Table 2, and the _Comet20_ results by domain are shown in Table 3. We compare our results against the WMT 2022 _Ref-B_ human reference (evaluated with respect to _Ref-A_) and the winning WMT 2022 submission.
We observe that MBR decoding (2c) and QE reranking (2b) using the base checkpoint both outperform beam search decoding (2a), according to _Comet20_. This establishes the premise that the MBR and QE "teachers" used to generate the MBR and QE finetuning data are stronger than the beam search baseline.
**Phase 1.** Finetuning on references achieves the highest _Comet20_ score (with a gain of \(3.16\) over the base model), though MBR and QE finetuning still achieve gains of \(0.23\) and \(1.84\)_Comet20_ against the base model, respectively, in contrast to beam finetuning which degrades performance. In fact, the QE-finetuned student is stronger than its QE-reranked teacher by 0.42 _Comet20_, while the MBR-finetuned student is somewhat weaker than its MBR-decoded teacher. Note that according to _BLEURT_, the MBR-finetuned model outperforms the QE-finetuned model, while according to _MetricX-XXL_, the opposite is true. Thus we do observe overfitting to the metric used as the utility function for MBR and QE data generation.
**Phase 2.** MBR and QE finetuning from the base checkpoint on the larger Common Crawl dataset both yield gains against the base model, and outperform the **Phase 1** finetuning on the smaller dataset. In this phase, MBR finetuning even outperforms MBR decoding (\(59.94\) vs \(59.35\)_Comet20_, respectively). For QE finetuning, the gains in this phase are \(0.86\)_Comet20_ larger than in **Phase 1**, representing a \(2.70\)_Comet20_ improvement over the base model.
Moreover, performing a second round of finetuning from the reference-finetuned checkpoint on either self-QE-Common-Crawl or self-MBR-Common-Crawl data achieves further gains in _Comet20_ (by \(0.16\) and \(0.54\) points, respectively). So MBR and QE finetuning from a large monolingual corpus can complement finetuning on a small dataset of high-quality human references. Also note that beam finetuning (using the same source-side Common Crawl dataset) degrades performance relative to the initial reference-finetuned checkpoint.
Table 2: Results on English-German WMT’22 test set.
**Phase 3.** Using QE and MBR finetuning data generated by the _PaLM-2 Bison_ teacher model (with the same source-side Common Crawl dataset as in **Phase 2**) outperforms using the self-teacher model by a large margin (of \(1.85\) and \(3.57\)_Comet20_ points, respectively) and, in fact, outperforms finetuning on references (by \(1.39\) and \(2.56\)_Comet20_ points, respectively). Recall that the _PaLM-2 Bison_ teacher model was finetuned on references, but the student model never directly saw them.
Moreover, performing a second round of QE or MBR finetuning from the reference-finetuned checkpoint using the _PaLM-2 Bison_ teacher yields further performance improvements, with the MBR-finetuned model achieving an additional large gain of \(2.67\)_Comet20_ points on top of the reference-finetuned model, outperforming all other models including the winning WMT 2022 submission. As a baseline, note that finetuning from the reference-finetuned checkpoint on _PaLM-2 Bison_ greedy (rather than QE or MBR) translations achieves a performance improvement of \(0.66\)_Comet20_, which is slightly stronger than the performance of self-MBR and self-QE finetuning using the same source-side dataset. This highlights the importance of the teacher model.
**Human Evaluation: MQM Study.** We perform a human evaluation to compare the "from-ref" QE-finetuned systems 4d) and 5c) in Table 2 (using a self-teacher and _PaLM-2_ teacher, respectively), against the beam search decoded and QE-reranked base model, as well as the reference-finetuned model from which QE finetuning was initialized. As shown in Table 4, the ranking of the systems according to human evaluation agrees with the ranking according to _Comet20_. Both QE-finetuned systems outperform the reference-finetuned baseline, and QE finetuning using the _PaLM-2_ teacher outperforms using the self-teacher.
### en\(\rightarrow\)ja
We then investigate whether the gains from QE finetuning also extend to a lower-resource setting. Recall that the reference-based finetuning dataset for English-Japanese only has 1k sentences, while monolingual English data is widely available (to use for generating Japanese MBR and QE translations). The results for all en\(\rightarrow\)ja experiment phases are summarized in Table 5, and the trends align with those observed for en\(\rightarrow\)de.
**Phase 1.** QE finetuning achieves a \(4.35\)_Comet20_ performance improvement over the base model and, even more notably, outperforms the reference-finetuned model by \(0.43\)_Comet20_. The baseline of beam finetuning, on the other hand, degrades performance relative to the base model. Note that for beam finetuning, the max-_BLEURT_ checkpoint was the initial checkpoint, so no finetuned checkpoint was evaluated, as per our checkpoint selection procedure.
**Phase 2.** QE finetuning from the base checkpoint on the larger Common Crawl dataset (200k sentences, versus 1k from **Phase 1**) achieves even stronger performance (better by \(2.34\)_Comet20_), outperforming the reference-finetuned model by \(2.77\)_Comet20_. Moreover, QE finetuning from the reference-finetuned checkpoint achieves additional large performance improvements, beating the reference-finetuned model by \(5.73\)_Comet20_.
**Phase 3.** As with en\(\rightarrow\)de, the largest performance improvements are observed when using the _PaLM-2 Bison_ teacher model, rather than the self-teacher setup from the preceding phases. Here, we see that QE finetuning from the base checkpoint using the _PaLM-2 Bison_ teacher model dramatically outperforms finetuning on references by 9.67_Comet20_, while finetuning from the reference-finetuned checkpoint does not afford any additional performance improvements.
| **Model** | **Comet20** \(\uparrow\) | **MQM score** \(\downarrow\) |
| --- | --- | --- |
| 2a) Beam search decoding (base model) | 57.79 | 1.724 |
| 2b) QE reranking (base model) | 59.21 | 1.666 |
| 3d) Reference-finetuned | 60.95 | 1.546 |
| 4d) Self-QE-from-ref-finetuned | 61.11 | 1.476 |
| 5c) PaLM-2-QE-from-ref-finetuned | 62.35 | 1.473 |

Table 4: Results of MQM study on WMT’22 test set. Lower MQM scores are better (indicating fewer errors).
\begin{table}
\begin{tabular}{l|c|c|c} \hline \hline
**Model** & **MEKK** & **Mork-XX-XX** & **CMCT20** & **Mork-XX-2a** \\ \hline In WMT 2022 Piso Shannon (175) & 69.49 & 83.54 & 64.35 & -0.62 \\ \hline
2a Bison search decoding (base model) & 64.49 & 77.28 & 47.24 & -0.45 \\
2a) QE reranking (base model) & 68.15 & 83.15 & 60.10\({}^{-0.4}\) & -0.63 \\
3a) Reference-finetuned & 65.70 & 78.13 & 50.10\({}^{-0.4}\) & -0.92 \\
3b) QE-finetuned & 67.17 & 78.35 & 51.39\({}^{-0.4}\) & -0.89 \\
3a) Room-based & 68.71 & 88.15 & 60.10\({}^{-0.4}\) & -0.89 \\
3b) QE-finetuned & 68.71 & 88.15 & 60.10\({}^{-0.4}\) & -0.89 \\
3a) SQL-free-finetuned & 65.52 & 78.36 & 53.73\({}^{-0.4}\) & -0.84 \\
4b) Self-QE-from-ref-finetuned & 66.56 & 79.31 & 56.69\({}^{-0.4}\) & -0.81 \\ \hline
5b) PaLM-2 QE-from-ref-finetuned & 68.02 & 80.51 & 60.63\({}^{-0.4}\) & -0.47 \\
5b) PaLM-2 QE-from-ref-finetuned & 67.94 & 80.35 & 60.63\({}^{-0.4}\) & -0.78 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Results on English-Japanese WMT’22 test set. Asterisks in the _Comet20_ column indicate scores which are significantly better than their baseline. For models finetuned from the base checkpoint, the baseline is row 2a), and for models finetuned from the reference-finetuned checkpoint, the baseline is row 3a).
### Ablation Studies
There are many source-side and target-side variables which may affect the performance of MBR and QE finetuning. We perform ablation studies (on en\(\rightarrow\)de) to isolate the effects of several of these variables. On the source side, we consider the effect of the type and size of monolingual dataset used for MBR and QE data generation. On the target side, we investigate the effect of candidate size used for MBR and QE data generation on the performance of the finetuned model. We also investigate alternative finetuning regimes, including finetuning on back-translated MBR and QE data, mixtures of MBR and QE data, as well as iterative finetuning. Finally, we investigate the effect of the decoding strategy used by the finetuned model at inference time.
#### 6.3.1 Does the monolingual dataset used for MBR and QE finetuning matter?
To probe the effect of source-side dataset on the downstream performance of MBR and QE finetuning, we sample 30k sentences from the NewsCrawl (Akhbardeh et al., 2021) and CommonCrawl (SS5.1.3) datasets, then generate MBR and QE translations from the en\(\rightarrow\)de reference-finetuned checkpoint, which are used for a second round of finetuning. As shown in Table 6, choice of dataset matters very little. The NewsCrawl dataset has a small advantage over CommonCrawl in _Comet20_ scores for the MBR-finetuned model, while the reverse is true for the QE-finetuned model.
We also investigate the extent to which dataset size affects model performance, by finetuning on the entire Common Crawl (SS5.1.3) MBR and QE dataset (generated from the reference-finetuned checkpoint) of 200k sentences, versus finetuning on a sample of 30k sentences. The results are shown in Table 7. Reducing the dataset to only \(15\%\) of its original size leads to a consistent, but small, decrease in performance across all metrics.
|
2301.13495 | Dimension-free estimates on distances between subsets of volume
$\varepsilon$ inside a unit-volume body | Average distance between two points in a unit-volume body $K \subset
\mathbb{R}^n$ tends to infinity as $n \to \infty$. However, for two small
subsets of volume $\varepsilon > 0$ the situation is different. For unit-volume
cubes and euclidean balls the largest distance is of order $\sqrt{-\ln
\varepsilon}$, for simplexes and hyperoctahedrons $-$ of order $-\ln
\varepsilon$, for $\ell_p$ balls with $p \in [1;2]$ $-$ of order $(-\ln
\varepsilon)^{\frac{1}{p}}$. These estimates are not dependent on the
dimensionality $n$. The goal of the paper is to study this phenomenon.
Isoperimetric inequalities will play a key role in our approach. | Abdulamin Ismailov, Alexei Kanel-Belov, Fyodor Ivlev | 2023-01-31T09:22:09Z | http://arxiv.org/abs/2301.13495v1 | ###### Abstract
Average distance between two points in a unit-volume body \(K\subset\mathbb{R}^{n}\) tends to infinity as \(n\to\infty\). However, for two small subsets of volume \(\varepsilon>0\) the situation is different. For unit-volume cubes and euclidean balls the largest distance is of order \(\sqrt{-\ln\varepsilon}\), for simplexes and hyperoctahedrons - of order \(-\ln\varepsilon\), for \(\ell_{p}\) balls with \(p\in[1;2]\) - of order \((-\ln\varepsilon)^{\frac{1}{p}}\). These estimates are not dependent on the dimensionality \(n\). The goal of the paper is to study this phenomenon. Isoperimetric inequalities will play a key role in our approach.
Dimension-free estimates on distances between subsets of volume \(\varepsilon\) inside a unit-volume body
Abdulamin Ismailov1
Footnote 1: E-mail: [email protected]
Alexei Kanel-Belov
Fyodor Ivlev
###### Contents
* 1 Introduction.
* 2 Preliminaries.
* 3 Euclidean balls.
* 4 Unit cubes.
* 5 Simplexes and \(\ell_{p}\)-balls.
* 5.1 Simplexes.
* 5.2 \(\ell_{p}\)-balls.
* 6 Lower bounds.
* 7 Discrete isoperimetric problem.
* 8 Conclusions.
* A Asymptotic behavior of \(\Phi^{-1}\).
* B Function \(x(-\log x)^{1-\frac{1}{p}}\) is increasing on \((0;\frac{1}{2}]\).
* C Functions \(V_{n}(x)\) and \(S_{n}(x)\) in limit.
* D Average distance.
## 1 Introduction.
In high dimensions we observe a variety of different phenomena. For example, Vladimir Igorevich Arnold liked to ask his students the following question: "What percent of the overall mass is occupied by the pulp of a 100-dimensional watermelon of diameter 1 meter, if the crust is of width 1 centimeter?" The answer is approximately \(1-e^{-1}\). This question demonstrates, in a simple way, the concentration of measure phenomenon: most of the mass of a body can lie inside a thin shell. Here is another example: the volume of a euclidean ball of radius 2023 tends to 0 as the dimensionality goes to infinity. More generally, we have the isodiametric inequality, which implies that in high dimensions the diameter of a unit-volume body becomes arbitrarily large.
The goal of this paper is to achieve a better understanding of how things work in high-dimensional spaces by studying the following phenomenon: two points in a unit-volume convex body could be at an arbitrarily large distance from each other; consider, for example, the unit cubes \((0;1)^{n}\) - as \(n\) tends to \(+\infty\) the diameter, equal to \(\sqrt{n}\), also tends to infinity, and similarly the average distance would be of order at least \(\sqrt{n}\) (see Appendix D); even the distance between a point and a subset of some fixed volume \(\varepsilon<1\) could be arbitrarily large. However, it turns out that the distance between two subsets of some fixed volume \(\varepsilon>0\) in the unit cube is bounded above by some constant dependent on \(\varepsilon\) but not on the dimension \(n\). What about convex bodies other than the unit cubes?
Consider a family of unit-volume bounded convex bodies \(K_{n}\). For each \(K_{n}\) it makes sense to consider the supremum of all possible distances between two subsets of some fixed volume \(\varepsilon\in(0;\frac{1}{2})\). Denote this value by \(d_{n}(\varepsilon)\).
By \(\Phi\) we mean the function
\[\Phi(a)=\int_{-\infty}^{a}e^{-\pi x^{2}}dx\]
Function \(\Phi^{-1}(\varepsilon)\) is asymptotically equivalent to
\[-\frac{1}{\sqrt{\pi}}\sqrt{-\ln\varepsilon}\]
as \(\varepsilon\) tends to 0(see Appendix A).
**Theorem 6.2**: _When \(K_{n}\) are the unit-volume euclidean balls_
\[\lim_{n\to\infty}d_{n}(\varepsilon)=-2\frac{1}{\sqrt{e}}\Phi^{-1}(\varepsilon)\]
**Theorems 4.3 and 6.3**: _When \(K_{n}\) are the unit cubes we have_
\[-2\sqrt{\frac{\pi}{6}}\Phi^{-1}(\varepsilon)\leq\liminf_{n\to\infty}d_{n}( \varepsilon)\leq\limsup_{n\to\infty}d_{n}(\varepsilon)\leq-2\Phi^{-1}(\varepsilon)\]
**Theorems 5.5 and 6.4**: _When \(K_{n}\) are the unit-volume simplexes we have_
\[-\frac{\sqrt{2}}{e}\ln(2\varepsilon)\leq\liminf_{n\to\infty}d_{n}(\varepsilon) \leq\limsup_{n\to\infty}d_{n}(\varepsilon)\leq-c\ln\varepsilon\]
_for some universal constant \(c>0\) independent of \(n\) and \(\varepsilon\)._
**Theorems 5.8 and 6.5**: _Fix some \(p\in[1;2]\). When \(K_{n}\) are the unit-volume \(\ell_{p}\) balls_
\[-2\Psi_{p}^{-1}(\varepsilon)\leq\liminf_{n\to\infty}d_{n}(\varepsilon)\leq \limsup_{n\to\infty}d_{n}(\varepsilon)\leq C_{p}(-\ln\varepsilon)^{\frac{1}{p }},\]
_where \(C_{p}\) is some universal constant determined by \(p\), and function \(-2\Psi_{p}^{-1}(\varepsilon)\)(see Appendix C) is asymptotically equivalent to_
\[\frac{1}{e^{\frac{1}{p}}\Gamma\left(1+\frac{1}{p}\right)}(-\ln\varepsilon)^{ \frac{1}{p}}\]
_as \(\varepsilon\to 0\)._
A version of our problem, in which the euclidean distance is replaced by the Manhattan distance, can be approached by discretization.
**Theorem 7.3**: _If by \(d_{n}(\varepsilon)\) we denote the largest Manhattan distance between two bodies of volume \(\varepsilon\in(0;\frac{1}{2})\) in the unit cube \([0;1]^{n}\), then_
\[\lim_{n\to\infty}\frac{d_{n}(\varepsilon)}{\sqrt{n}}=-2\sqrt{\frac{\pi}{6}} \Phi^{-1}(\varepsilon)\]
We also establish a sort of a general lower bound, showing that in a way euclidean balls are optimal in regard to our problem.
**Theorem 6.6**.: _When \(K_{n}\) are unit-volume centrally symmetric bounded convex bodies_
\[-2\frac{1}{\sqrt{e}}\Phi^{-1}(\varepsilon)\leq\liminf_{n\to\infty}d_{n}(\varepsilon)\]
Lower bounds on our problem could be derived simply by considering some hyperplane cuts. But how can we bound above the distance between two subsets \(A\) and \(B\) in a unit-volume convex body?
Well, first, we observe that if both \(A\) and \(B\) are of volume at least \(\frac{1}{2}\), then the distance between them is zero(see Lemma 2.1). That is why we assume that both \(A\) and \(B\) are of some volume \(\varepsilon\in(0;\frac{1}{2})\). Next we introduce the concept of a \(\delta\)-enlargement of a body defined as the set of all points at a distance at most \(\delta\) from our body, i. e.
\[A_{\delta}=\{x\in X\mid\exists y\in A\colon d(x,y)\leq\delta\}\]
What happens if we replace \(A\) with its \(\delta\)-enlargement for a small enough value of \(\delta\)? Roughly speaking, a layer of width \(\delta\) will be added to our body. The volume of this layer \(A_{\delta}\setminus A\) could be approximated as \(\delta\cdot S(A)\), where \(S(A)\) is the surface area of the body \(A\). So the volume of \(A\) increases by approximately \(\delta\cdot S(A)\), while the distance between the bodies \(A\) and \(B\) decreases exactly by \(\delta\) after we enlarge \(A\) (or becomes zero). To estimate the distance between \(A\) and \(B\) we will slowly enlarge both of them simultaneously until each has volume at least \(\frac{1}{2}\), at which point the distance between them is already zero. Twice the amount of time it took both bodies to reach volume \(\frac{1}{2}\) is then an upper bound on the distance between them.
This was just a rough description of how we approach the problem. To make this idea work we are going to need more. We have not said anything about what our bodies may look like; at this point they could be arbitrary subsets of volume \(\varepsilon\), which may present a problem, since we plan to rely on concepts such as surface area. In part, these issues might be mitigated by the following observation: after enlarging both \(A\) and \(B\) by a small \(\delta\), distances and volumes would not change much, but smoothness properties might improve. In any case, throughout this whole text we assume that the bodies we are dealing with are as smooth as needed.
Now consider the process of a slow enlargement of a body \(A\) at its very beginning. Instead of talking about the approximate volume of the layer \(A_{\delta}\setminus A\) it would be better to take the right derivative at the point \(\delta=0\)
What we will get is called the Minkowski-Steiner formula for the free surface area
\[\mu^{+}(A)=\lim_{\delta\to 0^{+}}\frac{\mu(A_{\delta})-\mu(A)}{\delta}\]
Thus it is vital to our approach to be able to estimate this surface area. But we are only aware of the initial volumes of \(A\) and \(B\), which leads us to the isoperimetric problem: given the information about the initial volume of a body, find a lower bound on its surface area.
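In other words, if every region of measure \(t\) is known to have surface area at least \(I(t)\), then (assuming enough regularity for \(\mu(A_{\delta})\) to be differentiable in \(\delta\)) the enlargement argument above gives a dimension-free bound
\[\frac{d}{d\delta}\mu(A_{\delta})\geq I(\mu(A_{\delta}))\quad\Longrightarrow\quad\operatorname{dist}(A,B)\leq 2\int_{\varepsilon}^{\frac{1}{2}}\frac{dt}{I(t)},\]
since once \(\delta\) exceeds the integral on the right both \(A_{\delta}\) and \(B_{\delta}\) are of measure at least \(\frac{1}{2}\), and hence at distance zero from each other.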
Euclidean balls have really good symmetry properties. Symmetrization techniques could be applied. Isoperimetric regions inside the euclidean balls have been completely classified([16, Theorems 1 and 5]). This allows us to get tight enough estimates that lead to the proof of Theorem 6.2.
Unit cubes, however, are not as good as euclidean balls in that regard. That is why instead of dealing with the interior of the unit cube we perform a transfer(Lemma 4.1) to a different space, where the situation with the isoperimetric problem is better, and by doing so derive the lower bounds on the initial space([16, Theorem 7]).
Finally, to derive lower bounds in the case of the simplex, new ideas and methods need to be introduced. Here we follow the approach from an article by Sasha Sodin [19], where an isoperimetric inequality for \(\ell_{p}\) balls with \(p\in[1;2]\) was proven. In particular, in the case \(p=1\) we get hyperoctahedrons. Theorem 5.8 is an immediate consequence of the isoperimetric inequality established in article [19].
Even though our method does provide asymptotically correct estimates, we should not expect it to lead to exact constants. We bound the growth of \(\mu(A_{\delta})\) below by considering the isoperimetric problem for volume \(\mu(A_{\delta})\), but that might lead to suboptimal estimates, since as \(\delta\) varies \(A_{\delta}\) does not have to look like an optimal isoperimetric region.
Figure 1: (a) region \(A\); (b) region \(A_{\delta}\); (c) optimal region of volume \(\mu(A_{\delta})\).
## 2 Preliminaries.
Assume that we are working in the space \(X\) with metric \(d\) and probability measure \(\mu\), i. e. \(\mu(X)=1\). In this section we are going to introduce some basic concepts related to our problem.
**Definition 1**.: _The distance between a pair of non-empty subsets \(A,B\subseteq X\) is the infimum of distances between points from \(A\) and \(B\)_
\[\operatorname{dist}(A,B)=\inf_{x\in A,y\in B}d(x,y)\]
**Definition 2**.: _A point \(x\in X\) belongs to the \(\delta\)-enlargement of a subset \(A\subseteq X\) if it is at distance at most \(\delta\) from some point of \(A\)_
\[A_{\delta}=\{x\in X\mid\exists y\in A\colon d(x,y)\leq\delta\}\]
We want to know how far apart from each other two subsets \(A,B\subset X\) of measure \(\mu(A)=\mu(B)=\varepsilon>0\) could be. To that end, note that, if their \(\delta\)-enlargements intersect, then the distance is bounded above by \(2\delta\)
\[A_{\delta}\cap B_{\delta}\neq\varnothing\Rightarrow\operatorname{dist}(A,B) \leq 2\delta\]
Our problem is concerned with the case of \(X\) being an open convex bounded subset of \(\mathbb{R}^{n}\) of unit volume, \(d\) being the euclidean metric, and \(\mu\) being the Lebesgue measure. In that case the following lemma holds.
**Lemma 2.1**.: _If \(A\) and \(B\) are two subsets of \(X\) with \(\mu(A)+\mu(B)\geq 1\), then they are at a distance \(0\) from each other_
\[\operatorname{dist}(A,B)=0\]
Proof.: Assume the contrary. Let the distance between \(A\) and \(B\) be a positive number
\[r=\operatorname{dist}(A,B)>0\]
This would mean that our subsets do not intersect, and thus
\[\mu(A)+\mu(B)=\mu(A\cap B)+\mu(A\cup B)=\mu(A\cup B)=1\]
Pick a pair of points \(a\in A\) and \(b\in B\) at a distance less than \(2r\)
\[d(a,b)<2r\]
By \(c\) denote the midpoint of the segment between \(a\) and \(b\). The distance from \(c\) to both subsets \(A\) and \(B\) is strictly less than \(r\), so the point \(c\) does not belong to either of our subsets. Now consider a \(\delta\)-neighborhood of \(c\) that lies inside \(X\) with \(\delta<r-\frac{d(a,b)}{2}\). Clearly, it intersects neither \(A\) nor \(B\), and at the same time it has a non-zero measure, so
\[\mu(X\setminus(A\cup B))>0,\]
which leads to contradiction.
To ensure the nonemptiness of the intersection of \(A_{\delta}\) and \(B_{\delta}\) the following condition would suffice
\[\mu(A_{\delta})\geq\frac{1}{2}\text{ and }\mu(B_{\delta})\geq\frac{1}{2}\]
Thus we are interested in the growth of \(\mu(A_{\varepsilon})\) considered as a function of \(\varepsilon\), since that might lead to an upper bound on \(\delta\) and consequently on \(\operatorname{dist}(A,B)\). The derivative of \(\mu(A_{\varepsilon})\) at \(\varepsilon=0\) gives us
**Definition 3**.: _By the surface area of \(A\subseteq X\) we mean the following limit_
\[\mu^{+}(A)=\lim_{\varepsilon\to 0^{+}}\frac{\mu(A_{\varepsilon})-\mu(A)}{\varepsilon}\]
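For example, if \(A\) is a euclidean ball of radius \(r\) whose small enlargements still lie inside \(X\), then \(\mu(A_{\varepsilon})=\left(1+\frac{\varepsilon}{r}\right)^{n}\mu(A)\) and the definition recovers the usual surface area
\[\mu^{+}(A)=\lim_{\varepsilon\to 0^{+}}\frac{\mu(A)}{\varepsilon}\left(\left(1+\frac{\varepsilon}{r}\right)^{n}-1\right)=\frac{n}{r}\,\mu(A)\]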
Lower bounds on \(\mu^{+}(A)\) might allow us to get results on the growth of \(\mu(A_{\varepsilon})\), but all we know is the measure \(\mu(A)\) of our subset \(A\). So we want to know the least possible value of \(\mu^{+}(A)\) when \(\mu(A)\) is fixed, or at least bound \(\mu^{+}(A)\) below.
**Definition 4**.: _By the isoperimetric profile we mean a function that maps \(t\) to the infimum of possible values that \(\mu^{+}(A)\) could take when \(\mu(A)=t\)._
\[I_{\mu}(t)=\inf_{\mu(A)=t}\mu^{+}(A)\]
We are no longer interested in the growth of \(\mu(A_{\varepsilon})\) after we reach the measure of one half, also our initial \(\mu(A)\) is greater than \(0\). This means that we are only interested in the values of \(I_{\mu}(t)\) when \(0<t<\frac{1}{2}\). That is why throughout this paper by default the domain of the isoperimetric profile is the interval \((0;\frac{1}{2})\).
A region which has the minimal surface area amongst all regions of the same measure is called an isoperimetric region, and its boundary is called an isoperimetric hypersurface.
## 3 Euclidean balls.
**Introduction.** The euclidean ball is a perfect candidate for applying the symmetrization techniques. An argument([16, Theorems 1 and 5]) involving them completely classifies the optimal isoperimetric regions of the euclidean ball. Lower bounds on the isoperimetric profile thus could be extracted by considering these optimal regions.
By \(B^{n}\) we denote the unit \(n\)-ball. Its volume is
\[\frac{\sqrt{\pi}^{n}}{\Gamma\left(\frac{n}{2}+1\right)}\]
So the unit-volume \(n\)-ball will be of radius
\[\omega_{n}=\frac{\Gamma\left(\frac{n}{2}+1\right)^{\frac{1}{n}}}{\sqrt{\pi}} \sim\sqrt{\frac{n}{2\pi e}}\]
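The asymptotics here follow from Stirling's formula \(\Gamma\left(\frac{n}{2}+1\right)\sim\sqrt{\pi n}\left(\frac{n}{2e}\right)^{\frac{n}{2}}\), since
\[\omega_{n}=\frac{\Gamma\left(\frac{n}{2}+1\right)^{\frac{1}{n}}}{\sqrt{\pi}}\sim\frac{(\pi n)^{\frac{1}{2n}}}{\sqrt{\pi}}\sqrt{\frac{n}{2e}}\sim\sqrt{\frac{n}{2\pi e}}\]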
By \(\mu\) denote the Lebesgue measure on \(\omega_{n}B^{n}\).
Combination of theorems 1 and 5 from [16] provides a classification of optimal isoperimetric regions in \(\omega_{n}B^{n}\).
**Theorem 3.1** ([16, Theorems 1 and 5]).: _Isoperimetric hypersurfaces in a ball are either hyperplanes passing through the origin or spherical caps which are orthogonal to the surface of \(\omega_{n}B^{n}\)._
We would like to find lower bounds on the isoperimetric profile \(I_{\mu}\) of \(\omega_{n}B^{n}\); by Theorem 3.1 it suffices to consider intersections with balls orthogonal to \(\omega_{n}B^{n}\).
By \(\Psi(x)\) we denote \(\Phi(\sqrt{e}x)\). Note that function \(\Psi(x)\) has a finite Lipschitz constant \(C>0\), since its derivative is a bounded function. On the interval \((-\infty;0)\) both \(\Psi(x)\) and \(\Psi^{\prime}(x)\) are increasing functions.
**Theorem 3.2**.: _For every \(\varepsilon_{0}\in(0;\frac{1}{2})\) and \(\tau>0\) there is a number \(N\) such that the isoperimetric inequality_
\[I_{\mu}(\Psi(t)+\tau)\geq\Psi^{\prime}(t)\]
_would hold for all \(n>N\) and \(\Psi(t)\in(\varepsilon_{0};\frac{1}{2}-\tau)\)._
Proof.: First, note that \(I_{\mu}\) is a non-decreasing function on the interval \((0;\frac{1}{2})\). Indeed, if one takes a ball orthogonal to \(\omega_{n}B^{n}\) whose intersection with \(\omega_{n}B^{n}\) is of volume \(\varepsilon\in(0;\frac{1}{2})\) and replaces it with a ball that has the same radius but whose center is further away from the center of \(\omega_{n}B^{n}\), one gets a region of \(\omega_{n}B^{n}\) of smaller volume and smaller surface area.
Here we are going to prove that for any \(D,\delta_{1},\delta_{2}>0\) there is a number \(N\) such that for every \(n>N\) and \(0<d\leq D\) there is an optimal isoperimetric region in \(\omega_{n}B^{n}\) whose volume \(V\) and surface area \(S\) satisfy
\[\Psi(-d)+\delta_{1}\geq V\]
\[S\geq\Psi^{\prime}(-d-\delta_{2})\]
Note that our theorem follows from this last claim. Indeed, in the statement of the theorem we require \(\Psi(t)\) to be in range \((\varepsilon_{0};\frac{1}{2}-\tau)\), which means that \(t\geq\Psi^{-1}(\varepsilon_{0})\). Set \(D=-\Psi^{-1}(\varepsilon_{0})\) and pick \(\delta_{1},\delta_{2}>0\) so that
\[C\delta_{2}+\delta_{1}\leq\tau\]
\[t<\Psi^{-1}\left(\frac{1}{2}-\tau\right)\leq-\delta_{2}\]
By our claim there will be a number \(N\) such that for every \(n>N\) and \(d\in(0;D]\) there would be an optimal isoperimetric region in \(\omega_{n}B^{n}\) of volume \(V\) not greater than \(\Psi(-d)+\delta_{1}\) and surface area \(S\) at least \(\Psi^{\prime}(-d-\delta_{2})\). Since this region is optimal,
\[I_{\mu}(V)=S\]
And our bounds imply
\[I_{\mu}(\Psi(-d)+\delta_{1})\geq I_{\mu}(V)=S\geq\Psi^{\prime}(-d-\delta_{2}) \tag{1}\]
For an arbitrary \(t\) satisfying \(\Psi(t)\in(\varepsilon_{0};\frac{1}{2}-\tau)\) we can set \(d=-t-\delta_{2}\in(0;D]\), but then
\[\Psi(-d)+\delta_{1}=\Psi(t+\delta_{2})+\delta_{1}\leq\Psi(t)+C\delta_{2}+ \delta_{1}\leq\Psi(t)+\tau\]
We combine this with (1) and get the desired isoperimetric inequality
\[I_{\mu}(\Psi(t)+\tau)\geq I_{\mu}(\Psi(-d)+\delta_{1})\geq\Psi^{\prime}(-d- \delta_{2})=\Psi^{\prime}(t)\]
Now we need to prove our claim. Fix \(D>0\). We will consider the intersection of \(\omega_{n}B^{n}\) with a ball \(B^{n}_{r}(A)\) orthogonal to it such that the distance from the origin, which we denote here by \(O\), to \(B_{r}^{n}(A)\) is some number \(\frac{d}{2}\) in the range \((0;\frac{D}{2}]\).
Two balls are orthogonal if their centers together with an arbitrary point on the intersection of the corresponding spheres form a right triangle, with the right angle at that point.
In Figure 2 the ball \(\omega_{n}B^{n}\) corresponds to the circle with center at \(O\) and radius \(OX\), and the ball \(B_{r}^{n}(A)\) to the circle with center at \(A\) and radius \(AX\). Since we have a right triangle,
\[OA=\sqrt{OX^{2}+AX^{2}}=\sqrt{\omega_{n}^{2}+r^{2}}\]
We require that \(ON=\frac{d}{2}\)
\[ON=OA-AN=\sqrt{\omega_{n}^{2}+r^{2}}-r=\frac{d}{2}\]
\[\sqrt{\omega_{n}^{2}+r^{2}}=r+\frac{d}{2}\]
\[\omega_{n}^{2}=rd+\frac{d^{2}}{4}\]
Figure 2: The right triangle described above
\[r=\frac{\omega_{n}^{2}}{d}-\frac{d}{4}\]
Consider the altitude \(XH\) of the right triangle \(OAX\) and note that
\[OH=\frac{OX^{2}}{OA}=\frac{\omega_{n}^{2}}{\sqrt{\omega_{n}^{2}+r^{ 2}}}=\frac{1}{\sqrt{\frac{1}{\omega_{n}^{2}}+\left(\frac{r}{\omega_{n}^{2}} \right)^{2}}}=\frac{1}{\sqrt{\frac{1}{\omega_{n}^{2}}+\left(\frac{1}{d}-\frac {d}{4\omega_{n}^{2}}\right)^{2}}}\\ =\frac{1}{\sqrt{\frac{1}{d^{2}}+\frac{1}{2\omega_{n}^{2}}+\frac{d ^{2}}{16\omega_{n}^{4}}}}=\frac{1}{\sqrt{\left(\frac{1}{d}+\frac{d}{4\omega_{n }^{2}}\right)^{2}}}=\frac{1}{\frac{1}{d}+\frac{d}{4\omega_{n}^{2}}}=\frac{d}{1 +\frac{d^{2}}{4\omega_{n}^{2}}}\leq d \tag{2}\]
Let \(P\) be a hyperplane at a distance \(x\) from the origin. By \(S_{n}(x)\) denote the volume of the hyperplane section of \(\omega_{n}B^{n}\) by \(P\). Hyperplane \(P\) cuts \(\omega_{n}B^{n}\) into two parts, at least one of which is of volume not greater than \(\frac{1}{2}\); denote that volume by \(V_{n}(x)\). Both \(V_{n}(x)\) and \(S_{n}(x)\) are decreasing functions defined on \([0;\infty)\).
It follows from the proof of theorem 1 from [20] that the sequence of functions \(V_{n}(x)\) uniformly converges to \(\Psi(-x)\) and that the sequence of functions \(S_{n}(x)\) uniformly converges to \(\Psi^{\prime}(-x)\) (see Appendix C).
On the interval \([0;D]\) positive continuous function \(\Psi^{\prime}(-x)-\Psi^{\prime}(-x-\delta_{2})\) reaches its minimum value \(\varepsilon_{1}>0\). By uniform convergence for all sufficiently large \(n\) we shall have
\[S_{n}(x)\geq\Psi^{\prime}(-x)-\varepsilon_{1},\]
which gives us
\[S_{n}(OH)\geq S_{n}(d)\geq\Psi^{\prime}(-d)-\varepsilon_{1}\geq\Psi^{\prime}( -d-\delta_{2})\]
for all \(d\in(0;D]\).
We also know that \(S_{n}(OH)\leq S_{n}(0)\) and that \(S_{n}(0)\) converges to \(\Psi^{\prime}(0)\). Thus \(S_{n}(OH)\) is always bounded above by some constant \(S_{0}\).
Hyperplane passing through the point \(H\) orthogonal to \(OA\) divides the intersection of two balls \(\omega_{n}B^{n}\) and \(B_{r}^{n}(A)\) into two spherical domes: \(\Omega_{1}\) belonging to \(\omega_{n}B^{n}\) and \(\Omega_{2}\) belonging to \(B_{r}^{n}(A)\). The volume of \(\omega_{n}B^{n}\cap B_{r}^{n}(A)\) is equal to the sum of volumes of \(\Omega_{1}\) and \(\Omega_{2}\).
We would like to bound the volume of \(\Omega_{2}\) from above. Its base, the hyperplane section passing through \(H\), has area not greater than \(S_{0}\). Fix some number \(\varepsilon_{2}\in(0;1)\). On the segment \(HN\) pick a point \(M\) such that \(HM\colon HN=\varepsilon_{2}\).
By \(S_{1}\) denote the area of the hyperplane section of \(B_{r}^{n}(A)\) passing through \(M\) orthogonal to \(OA\). Volume of \(\Omega_{2}\) could be bounded above as
\[S_{1}NM+S_{0}MH=S_{1}NH(1-\varepsilon_{2})+S_{0}NH\varepsilon_{2} \tag{3}\]
The radius of the \((n-1)\)-ball corresponding to \(S_{1}\) is equal to
\[\sqrt{AX^{2}-AM^{2}}=\sqrt{AX^{2}-(AH+HM)^{2}}\\ =\sqrt{(AX^{2}-AH^{2})-2AH\cdot HM-HM^{2}}\leq\sqrt{XH^{2}-2 \varepsilon_{2}AH\cdot HN}\]
The radius of the \((n-1)\)-ball corresponding to \(S_{0}\) is \(XH\), so the ratio of the two radii is at most
\[\sqrt{1-2\varepsilon_{2}\frac{AH}{XH^{2}}HN}\]
Since \(XH\) is the altitude in the right triangle, \(XH^{2}\) is equal to \(OH\cdot HA\), and this bound could be rewritten as
\[\sqrt{1-2\varepsilon_{2}\frac{HN}{OH}}\]
Note that \(HN=OH-ON\) and that by formula (2)
\[\sqrt{1-2\varepsilon_{2}\frac{HN}{OH}}\;=\;\sqrt{1-2\varepsilon_{2}\left(1- \frac{ON}{\frac{2ON}{1+\frac{d^{2}}{4\omega_{n}^{2}}}}\right)}\;=\;\sqrt{1-2 \varepsilon_{2}\left(1-\frac{1+\frac{d^{2}}{4\omega_{n}^{2}}}{2}\right)}\]
For all sufficiently large \(n\)
\[\frac{d^{2}}{4\omega_{n}^{2}}\leq\frac{1}{2}\Rightarrow\sqrt{1-2\varepsilon_{2 }\left(1-\frac{1+\frac{d^{2}}{4\omega_{n}^{2}}}{2}\right)}\leq\sqrt{1-\frac{ \varepsilon_{2}}{2}}\]
We conclude
\[S_{1}\leq S_{0}\left(1-\frac{\varepsilon_{2}}{2}\right)^{\frac{n-1}{2}}\]
Clearly, \(NH\leq OH\leq d\leq D\), and by (3) the volume of \(\Omega_{2}\) is not greater than
\[DS_{0}\left(\left(1-\frac{\varepsilon_{2}}{2}\right)^{\frac{n-1}{2}}\left(1- \varepsilon_{2}\right)+\varepsilon_{2}\right)\]
Note that we could pick \(\varepsilon_{2}\in(0;1)\) so that for all sufficiently large \(n\)
\[DS_{0}\left(\left(1-\frac{\varepsilon_{2}}{2}\right)^{\frac{n-1}{2}}(1- \varepsilon_{2})+\varepsilon_{2}\right)\leq\frac{\delta_{1}}{2} \tag{4}\]
The volume of \(\Omega_{1}\) is \(V_{n}(OH)\). And by (2)
\[d-OH=d-\frac{d}{1+\frac{d^{2}}{4\omega_{n}^{2}}}=d\cdot\frac{\frac{d^{2}}{4 \omega_{n}^{2}}}{1+\frac{d^{2}}{4\omega_{n}^{2}}}\leq\frac{D^{3}}{4\omega_{n}^ {2}}\]
We noted that the sequence of functions \(V_{n}(x)\) uniformly converges to \(\Psi(-x)\). Thus for all sufficiently large \(n\)
\[\Psi(-OH)+\frac{\delta_{1}}{4}\geq V_{n}(OH)\]
Note that for all \(x\geq 0\)
\[0<\Psi^{\prime}(-x)\leq\Psi^{\prime}(0),\]
which means
\[\Psi(-d)+\Psi^{\prime}(0)\frac{D^{3}}{4\omega_{n}^{2}}+\frac{\delta_{1}}{4} \geq\Psi(-OH)+\frac{\delta_{1}}{4}\geq V_{n}(OH) \tag{5}\]
And for large enough \(n\) we have
\[\Psi^{\prime}(0)\frac{D^{3}}{4\omega_{n}^{2}}\leq\frac{\delta_{1}}{4} \tag{6}\]
We conclude that the volume \(V\) of \(\omega_{n}B^{n}\cap B_{r}^{n}(A)\) is equal to the sum of volumes of \(\Omega_{1}\) and \(\Omega_{2}\), and thus by inequalities (4), (5), (6)
\[\Psi(-d)+\delta_{1}\geq V\]
By Theorem 3.1 the intersection between \(\omega_{n}B^{n}\) and \(B_{r}^{n}\) is an optimal isoperimetric region in \(\omega_{n}B^{n}\). Its surface area \(S\) is equal to the surface area of the spherical cap corresponding to \(\Omega_{2}\), which can be bounded below by the area of the base of \(\Omega_{2}\), i. e. \(S_{n}(OH)\). We have thus shown that the volume \(V\) and the free surface area \(S\) of \(\omega_{n}B^{n}\cap B_{r}^{n}(A)\) satisfy
\[\Psi(-d)+\delta_{1}\geq V\]
\[S\geq S_{n}(OH)\geq\Psi^{\prime}(-d-\delta_{2})\]
for all sufficiently large \(n\), and that the number \(d\) here can be chosen as an arbitrary number from \((0;D]\), which proves our claim.
By \(d_{n}(\varepsilon)\) denote the supremum of all possible distances between two subsets of volume \(\varepsilon\in(0;\frac{1}{2})\) inside \(\omega_{n}B^{n}\).
Using our isoperimetric inequality we derive
**Theorem 3.3**.: _For every \(\varepsilon\in(0;\frac{1}{2})\)_
\[\limsup_{n\to\infty}d_{n}(\varepsilon)\leq-2\frac{1}{\sqrt{e}}\Phi^{-1}( \varepsilon),\]
_where the function \(-2\frac{1}{\sqrt{e}}\Phi^{-1}(\varepsilon)\) is asymptotically equivalent to_
\[-2\frac{1}{\sqrt{\pi e}}\sqrt{-\ln\varepsilon}\]
_as \(\varepsilon\to 0\)._
Proof.: Pick any \(\varepsilon_{0}\in(0;\varepsilon)\) and \(\tau\in(0;\varepsilon)\). By Theorem 3.2 there is a number \(N\) such that inequality
\[I_{\mu}(\Psi(t)+\tau)\geq\Psi^{\prime}(t) \tag{7}\]
holds for all \(n>N\) and \(t\) such that \(\Psi(t)\in(\varepsilon_{0};\frac{1}{2}-\tau)\).
Assume that \(n>N\). We consider two bodies \(A\) and \(B\) of volume \(\varepsilon\) inside the unit-volume euclidean ball \(\omega_{n}B^{n}\).
We are interested in the least values \(\delta_{A},\delta_{B}\) such that the \(\delta_{A}\)-enlargement of body \(A\) in \(\omega_{n}B^{n}\) will be of volume \(\frac{1}{2}\) and \(\delta_{B}\)-enlargement of body \(B\) will be of volume \(\frac{1}{2}\) too. For these enlargements we shall have
\[\operatorname{dist}(A_{\delta_{A}},B_{\delta_{B}})=0,\]
from which
\[\operatorname{dist}(A,B)\leq\delta_{A}+\delta_{B}\]
follows.
Isoperimetric inequality (7) provides an estimate on the growth of \(\delta\)-enlargements of our bodies:
\[\partial_{+}\mu(A_{\delta})\geq\Psi^{\prime}(\Psi^{-1}(\mu(A_{\delta})-\tau)) \tag{8}\]
when \(\mu(A_{\delta})\leq\frac{1}{2}\).
By \(\delta_{0}\) denote \(\Psi^{-1}(\varepsilon-\tau)\). Now consider the function
\[y(\delta)=\Psi(\delta_{0}+\delta)+\tau\]
By \(\delta_{M}\) denote the moment when \(y\) reaches \(\frac{1}{2}\), i. e. \(y(\delta_{M})=\frac{1}{2}\). Assume that \(\delta_{M}<\delta_{A}\). Functions \(\mu(A_{\delta})\) and \(y(\delta)\) coincide at \(\delta=0\). Furthermore, because of inequality (8), we should have
\[\mu(A_{\delta})\geq y(\delta)\text{ and }\partial_{+}\mu(A_{ \delta})\geq\Psi^{\prime}(\Psi^{-1}(\mu(A_{\delta})-\tau))\\ \geq\Psi^{\prime}(\Psi^{-1}(y(\delta)-\tau))=\Psi^{\prime}(\Psi^{ -1}(\Psi(\delta_{0}+\delta)))=\partial_{+}y(\delta)\]
for all \(\delta\in[0;\delta_{M}]\). But then we have a contradiction
\[\frac{1}{2}=\mu(A_{\delta_{A}})>\mu(A_{\delta_{M}})\geq y(\delta_{M})=\frac{1 }{2}\]
Thus \(\delta_{A}\) and similarly \(\delta_{B}\) are bounded above by \(\delta_{M}\), which implies
\[\text{dist}(A,B)\leq\delta_{A}+\delta_{B}\leq 2\delta_{M}\]
The value \(\delta_{M}\) satisfies
\[\frac{1}{2}=y(\delta_{M})=\Psi(\delta_{0}+\delta_{M})+\tau\]
\[\frac{1}{2}-\tau=\Psi(\delta_{0}+\delta_{M})\]
\[\delta_{M}=\Psi^{-1}\left(\frac{1}{2}-\tau\right)-\Psi^{-1}\left(\varepsilon-\tau\right)\]
As \(\tau\) tends to \(0\)
\[\Psi^{-1}\left(\frac{1}{2}-\tau\right)-\Psi^{-1}\left(\varepsilon-\tau\right) \rightarrow-\Psi^{-1}(\varepsilon)=-\frac{1}{\sqrt{e}}\Phi^{-1}(\varepsilon)\]
And since we can choose \(\tau\) to be an arbitrary number in \((0;\varepsilon)\)
\[\limsup_{n\rightarrow\infty}d_{n}(\varepsilon)\leq-2\frac{1}{\sqrt{e}}\Phi^{- 1}(\varepsilon)\]
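A small numerical illustration of this bound (a sketch only, with ad hoc helper names): it evaluates \(-\frac{2}{\sqrt{e}}\Phi^{-1}(\varepsilon)\) by bisection and compares it with the asymptotic expression from the statement of Theorem 3.3.

```python
# Compare the bound -2/sqrt(e) * Phi^{-1}(eps) with its asymptotic form.
from math import erf, sqrt, log, pi, e

def Phi(a: float) -> float:
    return 0.5 * (1.0 + erf(sqrt(pi) * a))

def Phi_inv(p: float, lo: float = -50.0, hi: float = 50.0) -> float:
    # simple bisection; Phi is strictly increasing
    for _ in range(200):
        mid = (lo + hi) / 2
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for eps in (1e-2, 1e-4, 1e-8, 1e-12):
    bound = -2 / sqrt(e) * Phi_inv(eps)
    asymptotic = 2 / sqrt(pi * e) * sqrt(-log(eps))
    print(f"eps={eps:.0e}  bound={bound:.4f}  asymptotic={asymptotic:.4f}")
```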
**Conclusions.** Symmetrization techniques lead to the solution of the isoperimetric problem in a number of other cases: the classical isoperimetric problem in \(\mathbb{R}^{n}\); the isoperimetric inequality on the sphere ([8, Appendix], [10, Theorem 2.2.1]), from which the gaussian isoperimetric inequality can be derived ([10, Theorem 2.2.3], [16, Theorem 20]).
Unit cubes.
**Introduction.** Unlike the euclidean ball the cube does not have <<many>> symmetries. To derive lower bounds on the isoperimetric profile we are going to perform a <<transfer>> to a different space. Descriptions of this idea could be found in [16, Theorem 7], [13, Proposition 2.8]. In this section we will follow the approach presented in [16].
Consider an \(n\)-dimensional unit cube \((0;1)^{n}\). We can think of it as of a space with Lebesgue measure \(\mu\) and Euclidean metric. Now we would like to be able to show estimates on the isoperimetric profile of \(\mu\). However, it is quite unclear how to deal with the corresponding space. For example, the cube only has a finite number of symmetries, so symmetrization methods would not get us far. That is why it makes sense to consider a way to transfer to a different, <<better>> space, an idea that plays a key role in our approach.
**Proposition 4.1** ([16, Proposition 1]).: _Assume that for a pair of spaces \(M\) and \(M^{\prime}\) with measures \(\upsilon\) and \(\upsilon^{\prime}\), respectively, we have a map \(\phi:M\to M^{\prime}\) which transforms measure \(\upsilon\) into \(\upsilon^{\prime}\), i. e. \(\upsilon^{\prime}(A)=\upsilon(\phi^{-1}(A))\), and that is also \(c\)-Lipschitz for some \(c>0\), i. e. a pair of points in \(M\) at a distance \(d\) has images at distance at most \(c\cdot d\). The following inequality holds_
\[I_{\upsilon}\leq c\cdot I_{\upsilon^{\prime}}\]
Proof.: Consider a closed \(R^{\prime}\subseteq M^{\prime}\) and its preimage \(R=\phi^{-1}(R^{\prime})\). Since \(\phi\) transforms \(\upsilon\) into \(\upsilon^{\prime}\), we shall have \(\upsilon^{\prime}(R^{\prime})=\upsilon(R)\). The fact that \(\phi\) is \(c\)-Lipschitz gives us \(\phi(R_{\varepsilon})\subseteq R^{\prime}_{c\varepsilon}\), from which it follows that
\[\upsilon^{\prime}(R^{\prime}_{c\varepsilon})=\upsilon(\phi^{-1}(R^{\prime}_{c \varepsilon}))\geq\upsilon(\phi^{-1}(\phi(R_{\varepsilon})))\geq\upsilon(R_ {\varepsilon})\]
By combining this inequality with \(\upsilon^{\prime}(R^{\prime})=\upsilon(R)\) we get
\[\frac{\upsilon^{\prime}(R^{\prime}_{c\varepsilon})-\upsilon^{\prime}(R^{ \prime})}{c\varepsilon}\geq\frac{\upsilon(R_{\varepsilon})-\upsilon(R)}{c\varepsilon}\]
And by taking limit \(\varepsilon\to 0\) we reach conclusion
\[(\upsilon^{\prime})^{+}(R^{\prime})\geq\frac{1}{c}\upsilon^{+}(R)\]
So for every closed \(R^{\prime}\subseteq M^{\prime}\) we can find \(R\subseteq M\) that has the same measure and whose surface area is at most \(c\) times the surface area of \(R^{\prime}\). Thus we shall have
\[I_{\upsilon}(t)\leq cI_{\upsilon^{\prime}}(t)\]
We are going to apply the above lemma to get lower bounds on \(I_{\mu}\). The role of \(M^{\prime}\) will be played by our cube with the Lebesgue measure \(\mu\) on it. The role of \(M\) will be played by the space \(\mathbb{R}^{n}\) with Gaussian measure \(\gamma_{n}\) defined by its density at a point \(x=(x_{1},\ldots,x_{n})\) as
\[\frac{d\gamma_{n}}{dx}=e^{-\pi(x_{1}^{2}+\ldots+x_{n}^{2})}\]
In the one-dimensional case the map \(\Phi\) defined by
\[\Phi(a)=\int_{-\infty}^{a}e^{-\pi x^{2}}dx\]
transforms \((-\infty;+\infty)\) into \((0;1)\). Also \(\Phi\) turns Gaussian measure \(\gamma_{1}\) on \((-\infty;+\infty)\) into the Lebesgue measure on \((0;1)\). Indeed, the Gaussian measure of the segment \([a;b]\) is equal to the integral
\[\int_{a}^{b}e^{-\pi x^{2}}dx\]
of its density, which in turn is equal to \(\Phi(b)-\Phi(a)\), but the image of \([a;b]\) under our map \(\Phi\) is \([\Phi(a);\Phi(b)]\), whose Lebesgue measure is equal to \(\Phi(b)-\Phi(a)\).
Now the role of \(\phi\) in the above lemma will be played by
\[\phi(x_{1},\ldots,x_{n})=(\Phi(x_{1}),\ldots,\Phi(x_{n})),\]
i. e. we are applying \(\Phi\) coordinatewise. It indeed transforms \((-\infty;+\infty)^{n}=\mathbb{R}^{n}\) into \((0;1)^{n}\). For every box \([a_{1};b_{1}]\times\ldots\times[a_{n};b_{n}]\) we could note that
\[\gamma_{n}([a_{1};b_{1}]\times\ldots\times[a_{n};b_{n}])=\int_{a_{1}}^{b_{1}} \ldots\int_{a_{n}}^{b_{n}}e^{-\pi x_{1}^{2}}\cdot\ldots\cdot e^{-\pi x_{n}^{2} }dx_{n}\ldots dx_{1}\]
\[=\left(\int_{a_{1}}^{b_{1}}e^{-\pi x_{1}^{2}}dx_{1}\right)\ldots\left(\int_{a _{n}}^{b_{n}}e^{-\pi x_{n}^{2}}dx_{n}\right)\]
\[=(\Phi(b_{1})-\Phi(a_{1}))\ldots(\Phi(b_{n})-\Phi(a_{n}))=\mu(\phi([a_{1};b_{1 }]\times\ldots\times[a_{n};b_{n}])),\]
so \(\phi\) turns measure \(\gamma_{n}\) into \(\mu\). And finally, map \(\phi\) is 1-Lipschitz since \(|\Phi^{\prime}(x)|=|e^{-\pi x^{2}}|\leq 1\) and \(\phi\) applies \(\Phi\) coordinatewise.
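The pushforward property can also be illustrated numerically. The following Monte-Carlo sketch (an illustration only; the helper names are ad hoc) samples each coordinate from the density \(e^{-\pi x^{2}}\), which is a normal law with variance \(\frac{1}{2\pi}\), applies \(\Phi\) coordinatewise and checks that the resulting points look uniform in the cube.

```python
# Monte-Carlo sketch: phi = (Phi, ..., Phi) maps the Gaussian measure gamma_n
# to the uniform (Lebesgue) measure on the unit cube.
import random
from math import erf, sqrt, pi

random.seed(0)

def Phi(a: float) -> float:
    return 0.5 * (1.0 + erf(sqrt(pi) * a))

def phi(point):
    # coordinatewise application of Phi, as in the text
    return [Phi(c) for c in point]

n, samples = 5, 100_000
sigma = 1.0 / sqrt(2 * pi)            # the density exp(-pi*x^2) is N(0, 1/(2*pi))
images = [phi([random.gauss(0.0, sigma) for _ in range(n)]) for _ in range(samples)]

# Empirical means of the coordinates should all be close to 1/2.
means = [sum(p[i] for p in images) / samples for i in range(n)]
print([round(m, 3) for m in means])
```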
As we see the requirements of Proposition 4.1 are met. So the isoperimetric profile \(I_{\mu}\) of our unit cube could be bounded below by \(I_{\gamma_{n}}\). But what do we know about \(I_{\gamma_{n}}\)? Well, there are tight inequalities on the isoperimetric profile of Gaussian measures, but here we would only need the following theorem.
**Theorem 4.1** ([10, Lemma 2.2.2]).: _Let \(\gamma^{n}\) be the standard Gaussian measure defined by its density at a point \(x=(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\) as_
\[\frac{d\gamma^{n}}{dx}=\frac{1}{\sqrt{2\pi}^{n}}e^{-\frac{1}{2}(x_{1}^{2}+ \ldots+x_{n}^{2})}\]
_Amongst all subsets \(A\subset\mathbb{R}^{n}\) with fixed measure \(\gamma^{n}(A)\in(0;\frac{1}{2})\) the minimum surface area is attained at half-spaces._
In general, a Gaussian measure \(\gamma_{\mu,\sigma^{2}}^{n}\) is a measure defined by its density at a point \(x\in\mathbb{R}^{n}\) as
\[\frac{d\gamma_{\mu,\sigma^{2}}^{n}}{dx}=\frac{1}{\sqrt{2\pi\sigma^{2}}^{n}}e^ {-\frac{1}{2\sigma^{2}}\|x-\mu\|^{2}}\]
But Gaussian measures are equivalent to each other under translation and scaling. For example, if we shrink the standard Gaussian measure \(\gamma^{n}\) by a factor of \(\sqrt{2\pi}\), the density \(\rho^{\prime}\) of the resulting measure is related to the density \(\rho\) of \(\gamma^{n}\) as
\[\rho^{\prime}(x)=\sqrt{2\pi}^{n}\rho(\sqrt{2\pi}x)=\sqrt{2\pi}^{n}\frac{1}{ \sqrt{2\pi}^{n}}e^{-\frac{1}{2}\|\sqrt{2\pi}x\|^{2}}=\frac{d\gamma_{n}}{dx}\]
And we could also note that for \(A\subset\mathbb{R}^{n}\)
\[\gamma^{n}(A)=\gamma_{n}\left(\frac{1}{\sqrt{2\pi}}A\right)\quad(\gamma^{n})^ {+}(A)=\frac{1}{\sqrt{2\pi}}\gamma_{n}^{+}\left(\frac{1}{\sqrt{2\pi}}A\right)\]
So shrinking everything by a factor of \(\sqrt{2\pi}\) in Theorem 4.1 would not change the fact that half-spaces are optimal solutions to the isoperimetric problem. And that is why to figure out lower bounds on \(I_{\gamma_{n}}\) we would only need to consider half-spaces.
The density of \(\gamma_{n}\) at a point \(x\) only depends on \(\|x\|\), so our measure is rotation-invariant, which means that we could only consider half-spaces \(H_{a}\) defined by \(x_{n}\leq a\) for some \(a\). First, we could note that
\[\gamma_{n}(H_{a})=\gamma_{n}(\underbrace{(-\infty;+\infty)\times \ldots\times(-\infty;+\infty)}_{n-1\text{ times}}\times(-\infty;a])\\ =\underbrace{\gamma_{1}((-\infty;+\infty))\times\ldots\times \gamma_{1}((-\infty;+\infty))}_{n-1\text{ times}}\times\gamma_{1}((-\infty;a])\\ =\gamma_{1}((-\infty;a])=\Phi(a)\]
And, because of this last observation,
\[\gamma_{n}^{+}(H_{a})=\lim_{\varepsilon\to 0}\frac{\gamma_{n}((H_{a})_{\varepsilon})-\gamma_{n}(H_{a})}{\varepsilon}=\lim_{\varepsilon\to 0}\frac{\gamma_{n}(H_{a+\varepsilon})-\gamma_{n}(H_{a})}{\varepsilon}\\ =\lim_{\varepsilon\to 0}\frac{\gamma_{1}((-\infty;a+\varepsilon])-\gamma_{1}((-\infty;a])}{\varepsilon}\\ =\lim_{\varepsilon\to 0}\frac{\gamma_{1}((-\infty;a]_{\varepsilon})-\gamma_{1}((-\infty;a])}{\varepsilon}=\gamma_{1}^{+}((-\infty;a])\]
Equalities
\[\gamma_{n}(H_{a})=\gamma_{1}((-\infty;a])\quad\gamma_{n}^{+}(H_{a})=\gamma_{1 }^{+}((-\infty;a])\]
imply
\[I_{\gamma_{n}}=I_{\gamma_{1}}\]
And now we only have to estimate \(I_{\gamma_{1}}\). By Theorem 4.1 with \(n=1\) and the rescaling remark above, it suffices to consider intervals \((-\infty;a]\) for \(a<0\). The measure of such an interval is \(\Phi(a)\) and the surface area is \(\Phi^{\prime}(a)=e^{-\pi a^{2}}\), from which we get
\[I_{\gamma_{1}}(\Phi(a))=e^{-\pi a^{2}}\]
By combining our observations we conclude
**Theorem 4.2** ([16, Theorem 7]).: _For the Lebesgue measure \(\mu\) on the unit cube \((0;1)^{n}\) isoperimetric inequality_
\[I_{\mu}(t)\geq e^{-\pi\Phi^{-1}(t)^{2}}\]
_holds for all \(t\in(0;\frac{1}{2})\)._
Proof.: Note that for all \(a<0\)
\[I_{\mu}(\Phi(a))\geq I_{\gamma_{n}}(\Phi(a))=I_{\gamma_{1}}(\Phi(a))=e^{-\pi a ^{2}}\]
Using this isoperimetric inequality we derive
**Theorem 4.3**.: _Inside a unit cube \((0;1)^{n}\) two bodies \(A\) and \(B\) of volume \(\varepsilon\in(0;\frac{1}{2})\) are at a distance at most_
\[-2\Phi^{-1}(\varepsilon)\]
_Function \(-2\Phi^{-1}(\varepsilon)\) is asymptotically equivalent to_
\[\frac{2}{\sqrt{\pi}}\sqrt{-\ln\varepsilon}\]
_as \(\varepsilon\to 0\)._
Proof.: We are interested in the least values \(\delta_{A},\delta_{B}\) such that the \(\delta_{A}\)-enlargement of body \(A\) in the unit cube \((0;1)^{n}\) will be of volume \(\frac{1}{2}\) and the \(\delta_{B}\)-enlargement of body \(B\) will be of volume \(\frac{1}{2}\) too. For these enlargements we shall have
\[\operatorname{dist}(A_{\delta_{A}},B_{\delta_{B}})=0,\]
from which
\[\operatorname{dist}(A,B)\leq\delta_{A}+\delta_{B}\]
follows.
Isoperimetric inequality from Theorem 4.2 provides an estimate on the growth of \(\delta\)-enlargements of our bodies:
\[\partial_{+}\mu(A_{\delta})\geq e^{-\pi\Phi^{-1}(\mu(A_{\delta}))^{2}} \tag{9}\]
when \(\mu(A_{\delta})<\frac{1}{2}\).
By \(\delta_{M}\) denote \(-\Phi^{-1}(\varepsilon)\). Now consider the function
\[y(\delta)=\Phi(-\delta_{M}+\delta)\]
Assume that \(\delta_{M}<\delta_{A}\). Functions \(\mu(A_{\delta})\) and \(y(\delta)\) coincide at \(\delta=0\). Furthermore, because of inequality (9), we should have
\[\mu(A_{\delta})\geq y(\delta)\text{ and }\partial_{+}\mu(A_{ \delta})\geq e^{-\pi\Phi^{-1}(\mu(A_{\delta}))^{2}}\geq e^{-\pi\Phi^{-1}(y( \delta))^{2}}\\ =e^{-\pi(-\delta_{M}+\delta)^{2}}=\partial_{+}y(\delta)\]
for all \(\delta\leq\delta_{M}\). But then we have a contradiction
\[\frac{1}{2}=\mu(A_{\delta_{A}})>\mu(A_{\delta_{M}})\geq y(\delta_{M})=\Phi(0) =\frac{1}{2}\]
Thus \(\delta_{A}\) and similarly \(\delta_{B}\) are bounded above by \(\delta_{M}\), which implies
\[\operatorname{dist}(A,B)\leq\delta_{A}+\delta_{B}\leq 2\delta_{M}=-2\Phi^{-1}(\varepsilon)\]
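The enlargement argument above admits a direct numerical illustration (a sketch only, not part of the proof): integrating the comparison equation \(y^{\prime}=e^{-\pi\Phi^{-1}(y)^{2}}\) from \(y(0)=\varepsilon\), the level \(\frac{1}{2}\) is reached at \(\delta=-\Phi^{-1}(\varepsilon)\), which is exactly \(\delta_{M}\).

```python
# Euler integration of dy/d(delta) = exp(-pi * Phi^{-1}(y)^2) starting from eps;
# the hitting time of 1/2 should agree with -Phi^{-1}(eps).
from math import erf, exp, sqrt, pi

def Phi(a: float) -> float:
    return 0.5 * (1.0 + erf(sqrt(pi) * a))

def Phi_inv(p: float, lo: float = -40.0, hi: float = 40.0) -> float:
    for _ in range(100):                      # bisection; Phi is increasing
        mid = (lo + hi) / 2
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

eps = 0.01
y, delta, step = eps, 0.0, 1e-4
while y < 0.5:
    y += exp(-pi * Phi_inv(y) ** 2) * step    # Euler step along the lower bound
    delta += step

print(delta, -Phi_inv(eps))                   # both approximately 0.93
```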
**Conclusions.** The transfer from one space to another might have led to some loss of accuracy, so there is not much we can say about how precise our estimates are. The problem of finding optimal hypersurfaces in the \(n\)-cube also seems to be quite complicated. The reader may find more details about the isoperimetric inequalities in a cube and their applications in section 1.5 of [16].
In a similar way we could derive lower bounds on the isoperimetric profile of the euclidean ball, since the transition to the space \(\mathbb{R}^{n}\) with gaussian measure is possible (see [13, Proposition 2.9]).
Simplexes and \(\ell_{p}\)-balls.
**Introduction.** In this section we are going to describe the approach used in [19] to prove an isoperimetric inequality for \(\ell_{p}\) balls (\(p\in[1;2]\)) by presenting a very similar proof of an isoperimetric inequality for simplexes. Much like in the case of the unit cube we will be performing a transfer to a different space (see Lemmas 5.1, 5.2). But to address the problems with Lipschitz continuity (see Lemma 5.3) a number of new ideas and methods need to be introduced.
### Simplexes.
By \(\mathbb{R}_{+}\) we denote the interval \((0;+\infty)\). Consider a regular simplex \(\Delta_{n}\) defined as
\[\Delta_{n}=\{(x_{1},\ldots,x_{n})\in\mathbb{R}_{+}^{n}\mid x_{1}+\ldots+x_{n}=1\}\]
By \(\mu\) we will denote the normalized Lebesgue measure on \(\Delta_{n}\). Note that \(\mu(\Delta_{n})=1\). Simple calculations show that the area of \(\Delta_{n}\) is equal to \(\frac{n\sqrt{n}}{n!}\). So if we set
\[\omega_{n}=\left(\frac{n!}{n\sqrt{n}}\right)^{\frac{1}{n-1}}\sim\frac{n}{e},\]
the area of \(\omega_{n}\Delta_{n}\) will be equal to \(1\). By \(\lambda\) denote the Lebesgue measure on \(\omega_{n}\Delta_{n}\). One could note
\[\mu^{+}\left(\frac{1}{\omega_{n}}A\right)=\lim_{\varepsilon\to 0}\frac{\mu \left(\left(\frac{1}{\omega_{n}}A\right)_{\varepsilon}\right)-\mu\left(\frac{ 1}{\omega_{n}}A\right)}{\varepsilon}\\ =\lim_{\varepsilon\to 0}\frac{\lambda(A_{\omega_{n} \varepsilon})-\lambda(A)}{\varepsilon}=\omega_{n}\lambda^{+}(A) \tag{10}\]
for \(A\subseteq\omega_{n}\Delta_{n}\).
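As a quick numerical check of the scaling factor introduced above (an illustration only, with ad hoc helper names):

```python
# omega_n = (n! / (n*sqrt(n)))^(1/(n-1)) for the unit-volume simplex, vs. n/e.
from math import lgamma, exp, log, e

def omega_simplex(n: int) -> float:
    log_factorial = lgamma(n + 1)                      # log(n!)
    return exp((log_factorial - 1.5 * log(n)) / (n - 1))

for n in (10, 100, 1000, 10000):
    print(f"n={n:6d}  omega_n={omega_simplex(n):10.3f}  n/e={n / e:10.3f}")
```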
In this section we will be using a slightly different notion of an isoperimetric profile.
**Definition 5**.: _By the isoperimetric function we mean a function that maps \(t\in(0;\frac{1}{2})\) to the infimum of possible values that \(\mu^{+}(A)\) could take when \(t\leq\mu(A)<\frac{1}{2}\)_
\[\mathcal{I}_{\mu}(t)=\inf_{t\leq\mu(A)<\frac{1}{2}}\mu^{+}(A)\]
Our observation (10) implies
\[\mathcal{I}_{\mu}=\omega_{n}\mathcal{I}_{\lambda}, \tag{11}\]
i. e. the isoperimetric functions are proportional.
To solve our problem we would need estimates on \(\mathcal{I}_{\lambda}\), but for the sake of simplicity we would be working with \(\mathcal{I}_{\mu}\) instead. Yet again it is quite unclear how to deal with \(\Delta_{n}\) as a space, so we would like to be able to transfer to a <<better>> space.
The map \(T\colon\mathbb{R}_{+}^{n}\to\Delta_{n}\) defined as
\[T(x_{1},\ldots,x_{n})=\left(\frac{x_{1}}{x_{1}+\ldots+x_{n}},\ldots,\frac{x_{n }}{x_{1}+\ldots+x_{n}}\right)\]
transforms the measure \(\nu_{n}\) on \(\mathbb{R}_{+}^{n}\) defined by its density at a point \((x_{1},\ldots,x_{n})\in\mathbb{R}_{+}^{n}\) as
\[\frac{d\nu_{n}}{dx}=e^{-x_{1}-\ldots-x_{n}}\]
into the normalized Lebesgue measure \(\mu\) on \(\Delta_{n}\) as a corollary of the following lemma.
**Lemma 5.1** ([18, Lemma 2.1]).: _Let \(X_{1},\ldots,X_{n}\) be independent random variables each with density function \(\frac{1}{2}e^{-|t|}\) and put \(S=\sum_{i}|X_{i}|\). Then \((\frac{X_{1}}{S},\ldots,\frac{X_{n}}{S})\) induces the normalized Lebesgue measure on the surface of \(\ell_{1}^{n}\) ball. Moreover, \((\frac{X_{1}}{S},\ldots,\frac{X_{n}}{S})\) is independent of \(S\)._
Indeed, because of the symmetry amongst the orthants of \(\mathbb{R}^{n}\),2 we can restrict our attention to the positive orthant \(\mathbb{R}_{+}^{n}\) in Lemma 5.1 and reach the desired conclusion.
Footnote 2: the orthants of \(\mathbb{R}^{n}\) are multidimensional analogues of the quadrants of \(\mathbb{R}^{2}\)
But what do we know about the isoperimetric profile \(I_{\nu_{n}}\)? The following lemma completely determines it in the one-dimensional case.
**Lemma 5.2** ([21, Remark 1]).: _By \(\nu\) denote \(\nu_{1}\), then_
\[I_{\nu}(t)=\min(t,1-t),\]
_where the domain of \(I_{\nu}\) is the whole interval \((0;1)\)._
For a measure \(\upsilon\) we could consider its isoperimetric constant - the largest value \(Is(\upsilon)\) for which the following holds for all subsets \(A\) with \(\upsilon(A)\in(0;1)\)
\[\upsilon^{+}(A)\geq Is(\upsilon)\min(\upsilon(A),1-\upsilon(A))\]
And by Lemma 5.2 we have \(Is(\nu)=1\). Now we could note that3\(\nu_{n}=\nu^{n}\) since the density of \(\nu_{n}\) at a point \((x_{1},\ldots,x_{n})\in\mathbb{R}_{+}^{n}\) could be written as a product
Footnote 3: by this we mean the product measure \(\underbrace{\nu\times\ldots\times\nu}_{n\text{ times}}\)
\[e^{-x_{1}}\ldots e^{-x_{n}}\]
And thus the following theorem
**Theorem 5.1** ([2, Theorem 1.1]).: _For a triple \((X,d,\psi)\) of a space, a metric and a measure,_
\[Is(\psi^{n})\geq\frac{1}{2\sqrt{6}}Is(\psi)\]
gives us
\[I_{\nu^{n}}(t)\geq\frac{1}{2\sqrt{6}}\min(t,1-t)\]
or4
Footnote 4: recall that, generally, when we talk about isoperimetric profiles we are only interested in the values of \(t\in(0;\frac{1}{2})\), i. e. the domain of \(I_{\nu_{n}}\) is \((0;\frac{1}{2})\)
\[I_{\nu_{n}}(t)\geq\frac{1}{2\sqrt{6}}t \tag{12}\]
So we have a map \(T\colon\mathbb{R}_{+}^{n}\to\Delta_{n}\) that transforms \(\nu_{n}\) into \(\mu\) and a lower bound on \(I_{\nu_{n}}(t)\). But to use Proposition 4.1 we would also need our mapping \(T\) to be Lipschitz continuous.
In the neighborhood of a point \(x\in\mathbb{R}_{+}^{n}\) the behavior of our map \(T\) could be described by a linear operator defined by the matrix, whose entries are
\[\frac{\partial T_{j}(x)}{\partial x_{i}},\]
where \(T_{j}(x)\) is the \(j\)-th coordinate of \(T(x)\). We are interested in the norm of this linear operator. The next lemma gives an upper bound
**Lemma 5.3** (corresponds to Lemma 1 from [19]).: \[\left\|\frac{\partial T_{j}(x)}{\partial x_{i}}\right\|_{2}\leq\frac{1}{\|x \|_{1}}(1+\sqrt{n}\|T(x)\|_{2})\]
Proof.: First, we will calculate the entries of our matrix, i. e. the partial derivatives
\[\frac{\partial}{\partial x_{i}}T_{j}(x)=\frac{\partial}{\partial x _{i}}\frac{x_{j}}{x_{1}+\ldots+x_{n}}\\ =\frac{(x_{1}+\ldots+x_{n})\frac{\partial}{\partial x_{i}}x_{j}-x _{j}\frac{\partial}{\partial x_{i}}(x_{1}+\ldots+x_{n})}{(x_{1}+\ldots+x_{n})^ {2}}\\ =\frac{1}{x_{1}+\ldots+x_{n}}\left(\delta_{ij}-\frac{x_{j}}{x_{1} +\ldots+x_{n}}\right)\]
If \(\Delta y\) is the image of \(\Delta x\), then
\[\Delta y_{j}=\sum_{i}\frac{\partial T_{j}}{\partial x_{i}}\Delta x_{i}=\frac{ 1}{\|x\|_{1}}\left(\Delta x_{j}-\frac{x_{j}}{\|x\|_{1}}\sum_{i}\Delta x_{i}\right) \tag{13}\]
The length of the vector, whose coordinates are \(\frac{x_{j}}{\|x\|_{1}}\sum_{i}\Delta x_{i}\), could be estimated as
\[\|T(x)\|_{2}\left|\sum_{i}\Delta x_{i}\right|\leq\sqrt{n}\|T(x)\|_{2}\|\Delta x \|_{2}\]
since coordinates \(\frac{x_{j}}{\|x\|_{1}}\) define \(T(x)\) and, clearly,
\[\sum_{i}|\Delta x_{i}|\leq\sqrt{n}\left(\sum_{i}(\Delta x_{i})^{2}\right)^{ \frac{1}{2}}\]
And now by triangle inequality from (13) we get
\[\|\Delta y\|_{2}\leq\frac{1}{\|x\|_{1}}\left(\|\Delta x\|_{2}+ \sqrt{n}\|T(x)\|_{2}\|\Delta x\|_{2}\right)\\ =\frac{1}{\|x\|_{1}}\left(1+\sqrt{n}\|T(x)\|_{2}\right)\|\Delta x \|_{2},\]
from which the statement of the lemma follows.
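The bound of Lemma 5.3 is easy to probe numerically. The sketch below (assuming numpy is available; illustration only) evaluates the Jacobian of \(T\) at random points of \(\mathbb{R}_{+}^{n}\) and compares its spectral norm with \(\frac{1}{\|x\|_{1}}(1+\sqrt{n}\|T(x)\|_{2})\).

```python
# Spot-check of Lemma 5.3 at random points of the positive orthant.
import numpy as np

rng = np.random.default_rng(1)
n = 20

def T(x):
    return x / x.sum()

def jacobian(x):
    s = x.sum()
    # entries (delta_ij - x_j / s) / s, written as a matrix
    return (np.eye(n) - np.outer(x / s, np.ones(n))) / s

for _ in range(5):
    x = rng.exponential(1.0, size=n)                    # a random point of R_+^n
    lhs = np.linalg.norm(jacobian(x), 2)                # operator (spectral) norm
    rhs = (1 + np.sqrt(n) * np.linalg.norm(T(x))) / np.linalg.norm(x, 1)
    print(f"{lhs:.4f} <= {rhs:.4f}: {lhs <= rhs + 1e-12}")
```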
Indeed, we cannot apply Proposition 4.1 directly in our case since there are places where the mapping \(T\) varies wildly, for example, near the simplex in \(\mathbb{R}_{+}^{n}\) defined by \(x_{1}+\ldots+x_{n}=\delta\), where \(\delta\) is a very small number.
However, the upper bound from Lemma 5.3
\[\left\|\frac{\partial T_{j}(x)}{\partial x_{i}}\right\|_{2}\leq\frac{1}{\|x\|_{1 }}(1+\sqrt{n}\|T(x)\|_{2})\]
tells us that the only parts of \(\mathbb{R}_{+}^{n}\) where the variance of \(T\) could be high are the regions where \(\|x\|_{1}\) is too small, which corresponds to a little corner of the orthant \(\mathbb{R}_{+}^{n}\), or where \(\|T(x)\|_{2}\) is too large; and since the distance from the origin to the center of \(\Delta_{n}\), which is \(\frac{1}{\sqrt{n}}\), is much smaller than the distance from the origin to the vertices of \(\Delta_{n}\), which is \(1\), the region where \(\|T(x)\|_{2}\) is too large corresponds to the little corners of \(\Delta_{n}\) near the vertices.5
Footnote 5: here we are talking about the image of \(T\)
Our last observation suggests that the parts of our space, where \(T\) does not behave the way we want it to, i. e. high variance, might be negligible. And now we are going to present an argument that will allow us to <<get rid>> of these regions, perform the transfer to the space \(\mathbb{R}_{+}^{n}\) with measure \(\nu_{n}\) and then apply the lower bound for \(I_{\nu_{n}}\).
But, first of all, since we want to estimate \(\mathcal{I}_{\mu}(t)\) on the whole interval \((0;\frac{1}{2})\), including the small values of \(t\), and since we intend to remove some negligible parts of our space in the main argument, we need to employ a different approach for those values of \(t\) that might be smaller than the measure of the regions we are getting rid of. In other words, we need an estimate on the surface area of the <<small>> subsets of \(\Delta_{n}\).
This could be done with the help of the following theorem.
**Theorem 5.2** ([1, Theorem 1.1]).: _Let \(\psi\) be a log-concave probability measure on \(\mathbb{R}^{n}\). For all measurable sets \(A\subset\mathbb{R}^{n}\), for every point \(x_{0}\in\mathbb{R}^{n}\) and every number \(r>0\),_
\[\psi^{+}(A)\geq\frac{1}{2r}\Big{(}\psi(A)\ln\frac{1}{\psi(A)}+(1- \psi(A))\ln\frac{1}{1-\psi(A)}\\ +\ln\psi(\{|x-x_{0}|\leq r\})\Big{)}\]
Convexity of \(\Delta_{n}\) ensures that the normalized Lebesgue measure \(\mu\) on it is log-concave.
In the application of the above theorem to the measure \(\mu\) on \(\Delta_{n}\) it is possible to set \(x_{0}=(0,\ldots,0)\), even though \(x_{0}\notin\Delta_{n}\). Indeed, every \(x\in\Delta_{n}\) satisfies \(|x|^{2}=|x-x_{1}|^{2}+\frac{1}{n}\), where \(x_{1}=(\frac{1}{n},\ldots,\frac{1}{n})\) is the center of \(\Delta_{n}\), so for \(r>\frac{1}{\sqrt{n}}\) and \(r^{\prime}=\sqrt{r^{2}-\frac{1}{n}}\) we shall have
\[\ln\mu(\{|x|\leq r\})=\ln\mu(\{|x-x_{1}|\leq r^{\prime}\})\]
\[\frac{1}{2r^{\prime}}\geq\frac{1}{2r},\]
so the inequality of Theorem 5.2 with center \(x_{0}=0\) and radius \(r\) follows from the inequality with center \(x_{1}\in\Delta_{n}\) and radius \(r^{\prime}\).
One question that arises after examining the lower bound is: how do we estimate \(\psi(\{|x-x_{0}|\leq r\})\) - the measure of the ball with center at \(x_{0}\) and radius \(r\)? In terms of Lemma 5.1 we have the following result
**Theorem 5.3** ([18, Theorem 2.2]).: _There are absolute positive constants \(T,c\) such that for all \(t>\frac{T}{\sqrt{n}}\), putting \(X=(X_{1},\ldots,X_{n})\) and \(S=X_{1}+\ldots+X_{n}\),_
\[\Pr\left(\frac{\|X\|_{2}}{S}>t\right)\leq e^{-ctn}\]
Again, because of the symmetry, we can restrict our attention to the positive orthant \(\mathbb{R}^{n}_{+}\).
By combining Theorem 5.3 with Lemma 5.1 we get
\[\mu(\{|x|\leq r\})\geq 1-e^{-cnr}\]
for \(r>\frac{T}{\sqrt{n}}\). Now note
\[r>\frac{T}{\sqrt{n}}\Rightarrow cnr\geq c\sqrt{n}T\geq cT\Rightarrow e^{-cnr }\leq e^{-cT},\]
which implies that for \(r>\frac{T}{\sqrt{n}}\)
\[\ln\mu(\{|x|\leq r\})\geq\ln(1-e^{-cnr})\geq-Ce^{-cnr}\]
for some constant \(C>0\).
If we assume that \(\mu(A)<c^{\prime}\) for some constant \(0<c^{\prime}<1\), then we will have
\[(1-\mu(A))\ln\left(\frac{1}{1-\mu(A)}\right)\geq C_{1}\mu(A)\]
for some constant \(C_{1}>0\).
We would like the sum of the two last terms of
\[\mu(A)\ln\frac{1}{\mu(A)}+(1-\mu(A))\ln\frac{1}{1-\mu(A)}+\ln\mu(\{|x-x_{0}| \leq r\})\]
to be non-negative, for this the following will be sufficient
\[C_{1}\mu(A)\geq Ce^{-cnr},\]
which could be rewritten as
\[\ln\frac{C_{1}\mu(A)}{C}\geq-cnr\Leftrightarrow r\geq\frac{1}{cn}\ln\frac{C}{C_ {1}\mu(A)}=\frac{1}{cn}\left(\ln\frac{1}{\mu(A)}+\ln\frac{C}{C_{1}}\right)\]
If we replace \(\geq\) above with equality and apply Theorem 5.2 with \(x_{0}=0\), we will get
\[\mu^{+}(A)\geq\frac{1}{2}cn\mu(A)\frac{\ln\frac{1}{\mu(A)}}{\ln\frac{1}{\mu(A )}+\ln\frac{C}{C_{1}}}\]
The condition \(r>\frac{T}{\sqrt{n}}\) means that we need
\[\ln\frac{1}{\mu(A)}+\ln\frac{C}{C_{1}}>cT\sqrt{n}\]
to apply Theorem 5.2 here. And so these last inequalities imply
**Proposition 5.1**.: _There are universal constants \(c_{s},C>0\) such that 6_
Footnote 6: here for the sake of simplicity we forget about the auxiliary constants \(c^{\prime},C,C_{1}\) introduced during the proof of this proposition
\[\mu^{+}(A)\geq c_{s}n\mu(A)\]
_for all subsets \(A\subset\Delta_{n}\) with \(\mu(A)<e^{-C\sqrt{n}}\)._
Now that we have dealt with the case of <<small>> sets, we can proceed to the main argument.
**Definition 6**.: _The gradient modulus \(\|\nabla f\|_{2}\) of a locally Lipschitz function \(f\) is_
\[\|\nabla f(x)\|_{2}=\limsup_{\|x-y\|_{2}\to 0^{+}}\frac{\|f(x)-f(y)\|_{2}}{\|x-y \|_{2}}\]
The next lemma would allow us to switch between the two equivalents of the isoperimetric problem.
**Lemma 5.4** ([19, Proposition A]).: _Let \(\mu\) be a probability measure, \(0<a<\frac{1}{2}\) and \(b>0\). The following are equivalent_
_(a) \(\mathcal{I}_{\mu}(a)\geq b\)_
_(b) for any locally Lipschitz function \(\phi:\operatorname{supp}\mu\to[0;1]\) such that \(\mu\{\phi=0\}\geq\frac{1}{2}\) and \(\mu\{\phi=1\}\geq a\),_
\[\int\|\nabla\phi\|_{2}d\mu\geq b\]
**Remark 1**.: _In this theorem and its applications in this article we can replace <<locally Lipschitz>> in \((b)\) with just <<Lipschitz>>, since it would not affect the implication \((a)\Rightarrow(b)\), and in the proof([19]) of the implication \((b)\Rightarrow(a)\) only Lipschitz functions \(\phi\colon\operatorname{supp}\mu\to[0;1]\) were considered._
The idea of <<getting rid>> of unwanted parts of our space would be realized through the so-called cut-off functions. A cut-off function maps our space to \([0;1]\), where \(0\) corresponds to the regions we are getting rid of, \(1\) - to the regions we want to keep, and we would also need our function to take values in between \(0\) and \(1\) to ensure continuity.
The following lemma shows how a cut-off function \(h\) can be applied.
**Lemma 5.5** ([19, Lemma 2]).: _If \(k,h\colon\mathbb{R}^{n}\to[0;1]\) are two locally Lipschitz functions, then_
\[\|\nabla k\|_{2}\geq\|\nabla(kh)\|_{2}-\|\nabla h\|_{2}\]
By Lemma 5.4 we can speak about the isoperimetric problem in terms of the integral of gradient modulus \(\|\nabla\phi\|_{2}\). Lemma 5.5 would allow us to pass from \(\phi\) to function \(\phi\cdot h\), which vanishes on the unwanted region of our space, at the cost of an error term \(\|\nabla h\|_{2}\)
\[\int\|\nabla\phi\|_{2}d\mu\geq\int\|\nabla(\phi\cdot h)\|_{2}d\mu-\int\|\nabla h \|_{2}d\mu\]
Recall that our upper bound on gradient modulus of mapping \(T:\mathbb{R}^{n}_{+}\to\Delta_{n}\) is
\[\|\nabla T\|_{2}=\left\|\frac{\partial T_{j}(x)}{\partial x_{i}}\right\|_{2} \leq\frac{1}{\|x\|_{1}}(1+\sqrt{n}\|T(x)\|_{2})\]
So we are going to need two cut-off functions: one for parts of \(\Delta_{n}\) that are too far from the origin will be of the form
\[h_{1}\colon\mathbb{R}^{n}\to[0;1],\quad h_{1}(x)=\max(0,\min(1,2-c_{1}\sqrt{n }\|x\|_{2})),\]
and will take care of large values of \(\|T(x)\|_{2}\); another for the region of \(\mathbb{R}^{n}\) with low \(\|x\|_{1}\) will be of the form
\[h_{2}\colon\mathbb{R}^{n}\to[0;1],\quad h_{2}(x)=\max(0,\min(1,c_{2}n^{-1}\|x\|_ {1}-1)).\]
Constants \(c_{1}\) and \(c_{2}\) will be chosen later. Note that the two cut-off functions are meant for different domains: \(h_{1}\) for \(\Delta_{n}\) and \(h_{2}\) for \(\mathbb{R}^{n}_{+}\), but since both \(\mathbb{R}^{n}_{+}\) and \(\Delta_{n}\) lie inside \(\mathbb{R}^{n}\) we choose \(\mathbb{R}^{n}\) as their domain of definition.
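For concreteness, here is a small sketch of the two cut-off functions (the values of \(c_{1}\) and \(c_{2}\) below are placeholders only; in the argument they are absolute constants fixed at the end of the proof of Proposition 5.2).

```python
# The two cut-off functions h1 (on Delta_n) and h2 (on R_+^n); c1, c2 are placeholders.
from math import sqrt

c1, c2 = 0.5, 100.0   # illustrative values only

def h1(x, n):
    # kills the part of Delta_n that is too far from the origin
    norm2 = sqrt(sum(t * t for t in x))
    return max(0.0, min(1.0, 2.0 - c1 * sqrt(n) * norm2))

def h2(x, n):
    # kills the region of R_+^n where the l1-norm is too small
    norm1 = sum(abs(t) for t in x)
    return max(0.0, min(1.0, c2 * norm1 / n - 1.0))

n = 100
print(h1([1.0 / n] * n, n))   # center of Delta_n: ||x||_2 = 1/sqrt(n) <= 1/(c1*sqrt(n)), so h1 = 1
print(h2([2.0] * n, n))       # ||x||_1 = 2n >= 2n/c2, so h2 = 1
```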
To employ Lemma 5.5 in our argument we are going to need some estimates related to the arising error terms. In the following lemma some properties of our cut-off functions and their gradients will be established.
**Lemma 5.6** (corresponds to Lemma 3 from [19]).: _The cut-off function \(h_{1}\) has the following properties_
\[h_{1}(x)=1\Leftrightarrow\|x\|_{2}\leq\frac{1}{c_{1}\sqrt{n}} \tag{14}\]
\[h_{1}(x)=0\Leftrightarrow\|x\|_{2}\geq\frac{2}{c_{1}\sqrt{n}} \tag{15}\]
\[\|\nabla h_{1}\|_{2}\leq c_{1}\sqrt{n} \tag{16}\]
_The cut-off function \(h_{2}\) has the following properties_
\[h_{2}(x)=1\Leftrightarrow\|x\|_{1}\geq\frac{2}{c_{2}}n \tag{17}\]
\[h_{2}(x)=0\Leftrightarrow\|x\|_{1}\leq\frac{n}{c_{2}} \tag{18}\]
\[\|\nabla h_{2}\|_{2}\leq\frac{c_{2}}{\sqrt{n}} \tag{19}\]
Proof.: Properties (14), (15), (17), (18) immediately follow from the definition of our cut-off functions.
By the triangle inequality we note that the gradient modulus of \(\|x\|_{2}\) considered as a function from \(\mathbb{R}^{n}_{+}\) to \(\mathbb{R}_{+}\) is not greater than 1, thus
\[\|\nabla h_{1}\|_{2}\leq c_{1}\sqrt{n}\]
Inequalities
\[\|x+\Delta x\|_{1}\leq\|x\|_{1}+\|\Delta x\|_{1},\]
\[\|\Delta x\|_{1}\leq\sqrt{n}\|\Delta x\|_{2}\]
imply that the gradient modulus of \(\|x\|_{1}\) is not greater than \(\sqrt{n}\), from which we derive
\[\|\nabla h_{2}\|_{2}\leq\frac{c_{2}}{\sqrt{n}}\]
**Lemma 5.7** (corresponds to Lemma 4 from [19]).: _For \(\alpha\geq 0\) we have_
\[\nu^{n}\{\|x\|_{1}\leq\alpha n\}\leq\frac{1}{\sqrt{2\pi n}}(\alpha e)^{n}\]
_And for every \(\alpha>T\) by Theorem 5.3 we have_
\[\mu\left\{\|x\|_{2}\geq\frac{\alpha}{\sqrt{n}}\right\}\leq e^{-\alpha c\sqrt{n}}\]
Proof.: Since the density of \(\nu^{n}\) everywhere in \(\mathbb{R}_{+}^{n}\) is not greater than \(1\), we can bound \(\nu^{n}\{\|x\|_{1}\leq\alpha n\}\) above by the volume of the region of \(\mathbb{R}_{+}^{n}\) defined by \(\|x\|_{1}\leq\alpha n\), which is equal to
\[\frac{1}{n!}(\alpha n)^{n}\]
By Stirling's approximation
\[n!\geq\sqrt{2\pi n}\left(\frac{n}{e}\right)^{n}e^{\frac{1}{12n+1}}\]
And so we arrive at
\[\nu^{n}\{\|x\|_{1}\leq\alpha n\}\leq\frac{1}{n!}(\alpha n)^{n}\leq\frac{1}{ \sqrt{2\pi n}}\left(\frac{e}{n}\right)^{n}(\alpha n)^{n}e^{-\frac{1}{12n+1}} \leq\frac{1}{\sqrt{2\pi n}}(\alpha e)^{n}\]
Note that \(\frac{\alpha}{\sqrt{n}}>\frac{T}{\sqrt{n}}\), so by Theorem 5.3
\[\mu\left\{\|x\|_{2}\geq\frac{\alpha}{\sqrt{n}}\right\}\leq e^{-c\frac{\alpha} {\sqrt{n}}n}=e^{-\alpha c\sqrt{n}}\]
Now we are ready to present the main argument.
**Proposition 5.2** (corresponds to Proposition 2 from [19]).: _There is a universal constant \(c_{b}>0\) such that for all \(e^{-C\sqrt{n}}\leq t<\frac{1}{2}\)_
\[\mathcal{I}_{\mu}(t)\geq c_{b}nt.\]
Proof.: Pick \(e^{-C\sqrt{n}}\leq a<\frac{1}{2}\). According to Lemma 5.4 and Remark 1 the problem of finding lower bounds on \(\mathcal{I}_{\mu}(a)\) is equivalent to the estimation of
\[\int_{\Delta_{n}}\|\nabla f\|_{2}d\mu\]
for a Lipschitz function \(f\colon\Delta_{n}\to[0;1]\) such that
\[\mu\{f=0\}\geq\frac{1}{2}\text{ and }\mu\{f=1\}\geq a \tag{20}\]
To <<get rid>> of the parts of \(\Delta_{n}\) that are too far from the origin we can use our cut-off function \(h_{1}\) and by Lemma 5.5 we will get
\[\int_{\Delta_{n}}\|\nabla f\|_{2}d\mu\geq\int_{\Delta_{n}}\|\nabla(fh_{1})\|_{ 2}d\mu-\int_{\Delta_{n}}\|\nabla h_{1}\|_{2}d\mu \tag{21}\]
Here by Lemma 5.6 we can estimate the error term as
\[\int_{\Delta_{n}}\|\nabla h_{1}\|_{2}d\mu\leq c_{1}\sqrt{n}\,\mu\left\{\|x\|_ {2}\geq\frac{1}{c_{1}\sqrt{n}}\right\} \tag{22}\]
Mapping \(T\colon\mathbb{R}_{+}^{n}\to\Delta_{n}\) transforms measure \(\nu^{n}\) into \(\mu\), which allows us to replace integrals over \(\Delta_{n}\) with integrals over \(\mathbb{R}_{+}^{n}\) as follows
\[\int_{\Delta_{n}}w\,d\mu=\int_{\mathbb{R}_{+}^{n}}(w\circ T)d\nu^{n}\]
Denote \((fh_{1})\circ T\) by \(g\). As we already noted
\[\int_{\Delta_{n}}\|\nabla(fh_{1})\|_{2}d\mu=\int_{\mathbb{R}_{+}^{n}}\|\nabla (fh_{1})\circ T\|_{2}d\nu^{n} \tag{23}\]
An observation similar to the chain rule of differentiation could be made
\[\|\nabla(a\circ b)\|_{2}\leq\|(\nabla a)\circ b\|_{2}\cdot\|\nabla b\|_{2},\]
which in our case would mean that
\[\int_{\mathbb{R}^{n}_{+}}\|\nabla(fh_{1})\circ T\|_{2}d\nu^{n}\geq\int_{\mathbb{R }^{n}_{+}}\frac{\|\nabla g\|_{2}}{\|\nabla T\|_{2}}d\nu^{n}, \tag{24}\]
and since \(\|\nabla T\|_{2}\neq 0\) and, by Lemma 5.3, it can be bounded above by \(\frac{1}{\|x\|_{1}}(1+\sqrt{n}\|T(x)\|_{2})\), we have
\[\int_{\mathbb{R}^{n}_{+}}\frac{\|\nabla g\|_{2}}{\|\nabla T\|_{2}}d\nu^{n}\geq \int_{\mathbb{R}^{n}_{+}}\frac{\|\nabla g\|_{2}\|x\|_{1}}{1+\sqrt{n}\|T(x)\|_ {2}}d\nu^{n} \tag{25}\]
But \(h_{1}\) is zero when \(\|x\|_{2}\geq\frac{2}{c_{1}\sqrt{n}}\) by Lemma 5.6. So the gradient modulus \(\|\nabla g\|_{2}\) is equal to zero when \(\|T(x)\|_{2}>\frac{2}{c_{1}\sqrt{n}}\). And because of this,
\[\int_{\mathbb{R}^{n}_{+}}\frac{\|\nabla g\|_{2}\|x\|_{1}}{1+\sqrt {n}\|T(x)\|_{2}}d\nu^{n}\geq\int_{\mathbb{R}^{n}_{+}}\frac{\|\nabla g\|_{2}\| x\|_{1}}{1+\sqrt{n}\frac{2}{c_{1}\sqrt{n}}}d\nu^{n}\\ =\frac{1}{1+\frac{2}{c_{1}}}\int_{\mathbb{R}^{n}_{+}}\|\nabla g \|_{2}\|x\|_{1}d\nu^{n} \tag{26}\]
Now to <<get rid>> of the region of \(\mathbb{R}^{n}_{+}\) where \(\|x\|_{1}\) is too small we will apply our cut-off function \(h_{2}\) and by Lemma 5.5 get
\[\int_{\mathbb{R}^{n}_{+}}\|\nabla g\|_{2}\|x\|_{1}d\nu^{n}\geq\int_{\mathbb{R} ^{n}_{+}}\|\nabla(gh_{2})\|_{2}\|x\|_{1}d\nu^{n}-\int_{\mathbb{R}^{n}_{+}}\| \nabla h_{2}\|_{2}\|x\|_{1}d\nu^{n} \tag{27}\]
By Lemma 5.6 we have the following upper bound on the error term
\[\int_{\mathbb{R}^{n}_{+}}\|\nabla h_{2}\|_{2}\|x\|_{1}d\nu^{n} \leq\frac{c_{2}}{\sqrt{n}}\frac{2n}{c_{2}}\nu^{n}\left\{\|x\|_{1}\leq\frac{2n} {c_{2}}\right\}\\ =2\sqrt{n}\nu^{n}\left\{\|x\|_{1}\leq\frac{2n}{c_{2}}\right\} \tag{28}\]
Cut-off function \(h_{2}\) is zero when \(\|x\|_{1}\leq\frac{n}{c_{2}}\), thus \(\|\nabla(gh_{2})\|_{2}\) is equal to zero when \(\|x\|_{1}<\frac{n}{c_{2}}\), from which it follows that
\[\int_{\mathbb{R}^{n}_{+}}\|\nabla(gh_{2})\|_{2}\|x\|_{1}d\nu^{n}\geq\frac{n}{c _{2}}\int_{\mathbb{R}^{n}_{+}}\|\nabla(gh_{2})\|_{2}d\nu^{n} \tag{29}\]
Now consider function \(gh_{2}\colon\mathbb{R}_{+}^{n}\to[0;1]\)
\[gh_{2}=((f\cdot h_{1})\circ T)\cdot h_{2}\]
Note that if \((f\circ T)(x)=0\) for \(x\in\mathbb{R}_{+}^{n}\), then \((gh_{2})(x)=0\) too. By our assumption \(\mu\{f=0\}\geq\frac{1}{2}\), which implies \(\nu^{n}\{gh_{2}=0\}\geq\frac{1}{2}\). Function \(gh_{2}\) equals to \(1\) at a point \(x\in\mathbb{R}_{+}^{n}\) if and only if
\[(f\circ T)(x)=1,\text{ and }(h_{1}\circ T)(x)=1,\text{ and }h_{2}(x)=1.\]
To estimate \(\nu^{n}\{gh_{2}=1\}\) we will subtract \(\nu^{n}\{(h_{1}\circ T)<1\}=\mu\{h_{1}<1\}\) and \(\nu^{n}\{h_{2}<1\}\) from \(\nu^{n}\{(f\circ T)=1\}=\mu\{f=1\}\), which by our assumption (20) is at least \(a\), and get
\[\nu^{n}\{gh_{2}=1\}\geq a-\mu\{h_{1}<1\}-\nu^{n}\{h_{2}<1\}\]
By Lemma 5.6
\[\mu\{h_{1}<1\}=\mu\left\{\|x\|_{2}>\frac{1}{c_{1}\sqrt{n}}\right\}\]
\[\nu^{n}\{h_{2}<1\}=\nu^{n}\left\{\|x\|_{1}<\frac{2n}{c_{2}}\right\}\]
Isoperimetric inequality (12) on \(\nu^{n}\) combined with Lemma 5.4 would give us
\[\int_{\mathbb{R}_{+}^{n}}\|\nabla(gh_{2})\|_{2}d\nu^{n}\geq\frac{ 1}{2\sqrt{6}}\bigg{(}a\\ -\mu\left\{\|x\|_{2}>\frac{1}{c_{1}\sqrt{n}}\right\}-\nu^{n} \left\{\|x\|_{1}<\frac{2n}{c_{2}}\right\}\bigg{)} \tag{30}\]
Putting inequalities (21), (22), (23), (24), (25), (26), (27), (28), (29) together, we arrive at
\[\int_{\Delta_{n}}\|\nabla f\|_{2}d\mu\geq\frac{1}{c_{2}}\frac{1}{ 1+\frac{2}{c_{1}}}n\int_{\mathbb{R}_{+}^{n}}\|\nabla(gh_{2})\|_{2}d\nu^{n}\\ -c_{1}\sqrt{n}\mu\left\{\|x\|_{2}\geq\frac{1}{c_{1}\sqrt{n}} \right\}-\frac{2}{1+\frac{2}{c_{1}}}\sqrt{n}\nu^{n}\left\{\|x\|_{1}\leq\frac{ 2n}{c_{2}}\right\}\]
We combine this with inequality (30) and get
\[\int_{\Delta_{n}}\|\nabla f\|_{2}d\mu\geq\frac{1}{2\sqrt{6}}\frac{1} {c_{2}}\frac{1}{1+\frac{2}{c_{1}}}na\\ -\left(\frac{1}{2\sqrt{6}}\frac{1}{c_{2}}\frac{1}{1+\frac{2}{c_{1} }}n+c_{1}\sqrt{n}\right)\mu\left\{\|x\|_{2}\geq\frac{1}{c_{1}\sqrt{n}}\right\} \\ -\left(\frac{1}{2\sqrt{6}}\frac{1}{c_{2}}\frac{1}{1+\frac{2}{c_{1} }}n+\frac{2}{1+\frac{2}{c_{1}}}\sqrt{n}\right)\nu^{n}\left\{\|x\|_{1}\leq\frac{ 2n}{c_{2}}\right\},\]
which could be rewritten as
\[\int_{\Delta_{n}}\|\nabla f\|_{2}d\mu\geq\frac{1}{2\sqrt{6}}\frac {1}{c_{2}}\frac{1}{1+\frac{2}{c_{1}}}n\Bigg{(}a\\ -\left(1+2\sqrt{6}c_{1}c_{2}\left(1+\frac{2}{c_{1}}\right)\frac{ 1}{\sqrt{n}}\right)\mu\left\{\|x\|_{2}\geq\frac{1}{c_{1}\sqrt{n}}\right\}\\ -\left(1+4\sqrt{6}c_{2}\frac{1}{\sqrt{n}}\right)\nu^{n}\left\{\|x \|_{1}\leq\frac{2n}{c_{2}}\right\}\Bigg{)}\]
If \(\frac{1}{c_{1}}>T\Leftrightarrow c_{1}<\frac{1}{T}\), then by Lemma 5.7 we should have
\[\int_{\Delta_{n}}\|\nabla f\|_{2}d\mu\geq\frac{1}{2\sqrt{6}}\frac {1}{c_{2}}\frac{1}{1+\frac{2}{c_{1}}}n\Bigg{(}a\\ -\left(1+2\sqrt{6}c_{2}\left(c_{1}+2\right)\frac{1}{\sqrt{n}} \right)e^{-\frac{c}{c_{1}}\sqrt{n}}\\ -\left(1+4\sqrt{6}c_{2}\frac{1}{\sqrt{n}}\right)\frac{1}{\sqrt{2 \pi n}}\left(\frac{2e}{c_{2}}\right)^{n}\Bigg{)}\]
Now we can choose appropriate values for constants \(c_{1}\) and \(c_{2}\). We choose \(c_{2}\) to be large enough so that
\[\left(1+4\sqrt{6}c_{2}\frac{1}{\sqrt{n}}\right)\frac{1}{\sqrt{2\pi n}}\left( \frac{2e}{c_{2}}\right)^{n}\leq\frac{1}{3}e^{-C\sqrt{n}}\]
holds for all natural \(n\). This is possible since for \(c_{2}>2e\) one can note that
\[\left(\frac{2e}{c_{2}}\right)^{n}=e^{-n\ln\frac{c_{2}}{2e}},\]
which decays exponentially in \(n\) and hence faster than \(e^{-C\sqrt{n}}\).
After that we choose \(c_{1}\) to be small enough so that
\[\left(1+2\sqrt{6}c_{2}\left(c_{1}+2\right)\frac{1}{\sqrt{n}}\right)e^{-\frac{c}{ c_{1}}\sqrt{n}}\leq\frac{1}{3}e^{-C\sqrt{n}}\]
holds for all natural \(n\).
Our \(a\) is at least \(e^{-C\sqrt{n}}\), which means
\[\int_{\Delta_{n}}\|\nabla f\|_{2}d\mu\geq\frac{1}{2\sqrt{6}}\frac{1}{c_{2}} \frac{1}{1+\frac{2}{c_{1}}}n\left(a-\frac{2}{3}e^{-C\sqrt{n}}\right)\geq\frac{ 1}{2\sqrt{6}}\frac{1}{c_{2}}\frac{1}{1+\frac{2}{c_{1}}}n\left(\frac{1}{3}a\right)\]
And since \(f\) here can be an arbitrary Lipschitz function \(f\colon\Delta_{n}\to[0;1]\) with
\[\mu\{f=0\}\geq\frac{1}{2}\text{ and }\mu\{f=1\}\geq a\]
we by Lemma 5.4 conclude
\[\mathcal{I}_{\mu}(a)\geq\frac{1}{6\sqrt{6}}\frac{1}{c_{2}}\frac{1}{1+\frac{2} {c_{1}}}na\]
Propositions 5.1 and 5.2 imply
**Theorem 5.4**.: _For the Lebesgue measure \(\lambda\) on the unit-volume simplex \(\omega_{n}\Delta_{n}\) the following isoperimetric inequality_
\[\mathcal{I}_{\lambda}(t)\geq c_{\lambda}t\]
_holds for all \(t\in(0;\frac{1}{2})\), where \(c_{\lambda}>0\) is a universal constant independent of the dimension \(n\)._
Proof.: By Proposition 5.1
\[\mu^{+}(A)\geq c_{s}n\mu(A)\]
for all \(A\subset\Delta_{n}\) with \(\mu(A)\in(0;e^{-C\sqrt{n}})\), and by Proposition 5.2
\[\mathcal{I}_{\mu}(t)\geq c_{b}nt\]
for all \(t\in[e^{-C\sqrt{n}};\frac{1}{2})\), which means that
\[\mathcal{I}_{\mu}(t)\geq\min(c_{s},c_{b})nt\]
for all \(t\in(0;\frac{1}{2})\).
Equation (11) relates \(\mathcal{I}_{\mu}\) and \(\mathcal{I}_{\lambda}\) to each other as
\[\mathcal{I}_{\lambda}=\frac{1}{\omega_{n}}\mathcal{I}_{\mu}\]
Thus for all \(t\in(0;\frac{1}{2})\) we must have
\[\mathcal{I}_{\lambda}(t)\geq\min(c_{s},c_{b})\frac{n}{\omega_{n}}t.\]
Here we could note that \(\frac{n}{\omega_{n}}\) is positive for all \(n\) and that by Stirling's approximation
\[\lim_{n\to\infty}\frac{n}{\omega_{n}}=e,\]
which must imply that
\[\inf_{n}\frac{n}{\omega_{n}}>0\]
So we can take
\[\min(c_{s},c_{b})\inf_{n}\frac{n}{\omega_{n}}\]
as our constant \(c_{\lambda}\).
From this isoperimetric inequality we conclude
**Theorem 5.5**.: _Inside a unit-volume simplex \(\omega_{n}\Delta_{n}\) two bodies \(A\) and \(B\) of volume \(\varepsilon\in(0;\frac{1}{2})\) are at a distance at most_
\[-c\ln\varepsilon\]
_for some universal constant \(c>0\) independent of the dimension \(n\) and volume \(\varepsilon\)._
Proof.: We are interested in the least values \(\delta_{A},\delta_{B}\) such that the \(\delta_{A}\)-enlargement of body \(A\) in \(\omega_{n}\Delta_{n}\) will be of volume \(\frac{1}{2}\) and the \(\delta_{B}\)-enlargement of body \(B\) will be of volume \(\frac{1}{2}\) too. For these enlargements we shall have
\[\operatorname{dist}(A_{\delta_{A}},B_{\delta_{B}})=0,\]
from which
\[\operatorname{dist}(A,B)\leq\delta_{A}+\delta_{B}\]
follows.
Isoperimetric inequality from Theorem 5.4 provides an estimate on the growth of \(\delta\)-enlargements of our bodies
\[\partial_{+}\lambda(A_{\delta})\geq c_{\lambda}\lambda(A_{\delta})\]
which holds as long as \(\lambda(A_{\delta})<\frac{1}{2}\).
And so to bound \(\delta_{A}\) above we would like to consider a function \(y(\delta)\) that behaves in accordance with our lower bound
\[y(0)=\lambda(A) \tag{31}\]
\[y^{\prime}=c_{\lambda}y \tag{32}\]
If by \(\delta_{M}\) we denote the moment when \(y\) reaches \(\frac{1}{2}\), i. e. \(y(\delta_{M})=\frac{1}{2}\), then \(\delta_{A}\leq\delta_{M}\). Indeed, otherwise \(\delta_{M}<\delta_{A}\), but functions \(\lambda(A_{\delta})\) and \(y(\delta)\) coincide at \(\delta=0\) and for all \(\delta\in[0;\delta_{M}]\) we should have
\[\lambda(A_{\delta})\geq y(\delta)\text{ and }\partial_{+}\lambda(A_{\delta})\geq c_{\lambda}\lambda(A_{\delta})\geq c_{\lambda}y(\delta)=\partial_{+}y(\delta)\]
And so we reach contradiction
\[\frac{1}{2}=\lambda(A_{\delta_{A}})>\lambda(A_{\delta_{M}})\geq y(\delta_{M}) =\frac{1}{2}\]
A solution to differential equation (32) should be of the form
\[Ce^{c_{\lambda}\delta}\]
and since at \(\delta=0\) by our initial condition (31) we should have \(y(0)=\lambda(A)\) we reach conclusion
\[y(\delta)=\lambda(A)e^{c_{\lambda}\delta}\]
So \(\delta_{M}\) will be a solution to equation
\[\lambda(A)e^{c_{\lambda}\delta_{M}}=\frac{1}{2},\]
which after taking logarithm on both sides turns into
\[\ln\lambda(A)+c_{\lambda}\delta_{M}=-\ln 2\]
\[\delta_{M}=-\frac{1}{c_{\lambda}}\left(\ln\lambda(A)+\ln 2\right)\]
By the same reasoning \(\delta_{B}\leq\delta_{M}\), and thus
\[\operatorname{dist}(A,B)\leq\delta_{A}+\delta_{B}\leq-\frac{2}{c_{\lambda}}( \ln\lambda(A)+\ln 2)\leq-\frac{2}{c_{\lambda}}\ln\lambda(A)\]
### \(\ell_{p}\)-balls.
By the \(\ell_{p}^{n}\) unit ball we mean
\[\ell_{p}^{n}=\{(x_{1},\ldots,x_{n})\in\mathbb{R}^{n}\mid|x_{1}|^{p}+\ldots+|x_{n} |^{p}\leq 1\}\]
Let \(\mu\) be a normalized Lebesgue measure on it. Note that \(\mu(\ell_{p}^{n})=1\). The volume of \(\ell_{p}^{n}\) is equal to
\[2^{n}\frac{\Gamma\left(1+\frac{1}{p}\right)^{n}}{\Gamma\left(1+\frac{n}{p} \right)}\]
by theorem 1 from [23]. So in order to get a unit-volume \(\ell_{p}^{n}\) ball we would need to stretch the \(\ell_{p}^{n}\) unit ball by a factor of
\[\omega_{n}=\frac{\Gamma\left(1+\frac{n}{p}\right)^{\frac{1}{n}}}{2\Gamma \left(1+\frac{1}{p}\right)}\sim\frac{n^{\frac{1}{p}}}{2\Gamma\left(1+\frac{1} {p}\right)\left(pe\right)^{\frac{1}{p}}}\]
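As a quick numerical check of this scaling factor against its asymptotic form (an illustration only, with ad hoc helper names):

```python
# omega_n for the unit-volume l_p ball vs. its asymptotic expression.
from math import lgamma, exp, gamma, e

def omega_lp(n: int, p: float) -> float:
    return exp(lgamma(1 + n / p) / n) / (2 * gamma(1 + 1 / p))

def omega_lp_asymptotic(n: int, p: float) -> float:
    return n ** (1 / p) / (2 * gamma(1 + 1 / p) * (p * e) ** (1 / p))

for p in (1.0, 1.5, 2.0):
    for n in (100, 10000):
        print(f"p={p}  n={n:6d}  exact={omega_lp(n, p):9.3f}  "
              f"asymptotic={omega_lp_asymptotic(n, p):9.3f}")
```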
By \(\lambda\) denote the Lebesgue measure on \(\omega_{n}\ell_{p}^{n}\). Yet again by (11) we should have proportionality of the isoperimetric functions
\[\mathcal{I}_{\mu}=\omega_{n}\mathcal{I}_{\lambda} \tag{33}\]
The following theorem was proven by Sasha Sodin in [19].
**Theorem 5.6** ([19, Theorem 1]).: _There exists a universal constant \(c>0\) such that for \(1\leq p\leq 2\), \(0<a<\frac{1}{2}\)_
\[\mathcal{I}_{\mu}(a)\geq cn^{\frac{1}{p}}a\log^{1-\frac{1}{p}}\frac{1}{a}\]
It follows that
**Theorem 5.7**.: _For every \(p\in[1;2]\) there exists a positive constant \(c_{p}>0\) such that_
\[\mathcal{I}_{\lambda}(a)>c_{p}a\log^{1-\frac{1}{p}}\frac{1}{a}\]
Proof.: By (33) we already know that
\[\mathcal{I}_{\lambda}(a)\geq c\frac{1}{\omega_{n}}n^{\frac{1}{p}}a\log^{1-\frac{1}{p}}\frac{1}{a}\]
The number \(\frac{c}{\omega_{n}}n^{\frac{1}{p}}\) is positive for all \(n\) and by Stirling's approximation
\[\lim_{n\to\infty}\frac{c}{\omega_{n}}n^{\frac{1}{p}}=2c\Gamma\left(1+\frac{1}{p }\right)(pe)^{\frac{1}{p}}>0\]
Thus
\[c_{p}=\inf_{n}c\frac{1}{\omega_{n}}n^{\frac{1}{p}}>0\]
and
\[\mathcal{I}_{\lambda}(a)\geq c_{p}a\log^{1-\frac{1}{p}}\frac{1}{a}\]
From this isoperimetric inequality we derive
**Theorem 5.8**.: _Inside a unit-volume \(\ell_{p}^{n}\) ball \(\omega_{n}\ell_{p}^{n}\) two bodies \(A\) and \(B\) of volume \(\varepsilon\in(0;\frac{1}{2})\) are at a distance at most_
\[C_{p}\log^{\frac{1}{p}}\frac{1}{\varepsilon}\]
_for some constant \(C_{p}>0\) independent of dimension \(n\) and volume \(\varepsilon\)._
Proof.: We are interested in the least values \(\delta_{A},\delta_{B}\) such that the \(\delta_{A}\)-enlargement of body \(A\) in \(\omega_{n}\ell_{p}^{n}\) will be of volume \(\frac{1}{2}\) and the \(\delta_{B}\)-enlargement of body \(B\) will be of volume \(\frac{1}{2}\) too. For these enlargements we shall have
\[\operatorname{dist}(A_{\delta_{A}},B_{\delta_{B}})=0,\]
from which
\[\operatorname{dist}(A,B)\leq\delta_{A}+\delta_{B}\]
follows.
The isoperimetric inequality from Theorem 5.7 allows us to estimate the growth of \(\lambda(A_{\delta})\) as
\[\partial_{+}\lambda(A_{\delta})\geq c_{p}\lambda(A_{\delta})\log^{1-\frac{1}{p}}\frac{1}{\lambda(A_{\delta})}\]
while \(\lambda(A_{\delta})<\frac{1}{2}\).
So we would like to consider a function \(y(\delta)\) that behaves in accordance with our lower bound
\[y(0)=\varepsilon \tag{34}\]
\[y^{\prime}=c_{p}y\log^{1-\frac{1}{p}}\frac{1}{y} \tag{35}\]
If by \(\delta_{M}\) we denote the moment when \(y\) reaches one half, i. e. \(y(\delta_{M})=\frac{1}{2}\), then \(\delta_{A}\leq\delta_{M}\). Indeed, otherwise \(\delta_{M}<\delta_{A}\), but functions \(\lambda(A_{\delta})\) and \(y(\delta)\) coincide at \(\delta=0\) and for all \(\delta\in[0;\delta_{M}]\) we should have
\[\lambda(A_{\delta})\geq y(\delta)\text{ and }\partial_{+}\lambda(A_{\delta})\geq c_{p}\lambda(A_{\delta})\log^{1-\frac{1}{p}}\frac{1}{\lambda(A_{\delta})}\\ \geq c_{p}y(\delta)\log^{1-\frac{1}{p}}\frac{1}{y(\delta)}=\partial_{+}y(\delta),\]
since \(x(-\log x)^{1-\frac{1}{p}}\) is increasing on \((0;\frac{1}{2}]\) (see Appendix B). And so we reach a contradiction
\[\frac{1}{2}=\lambda(A_{\delta_{A}})>\lambda(A_{\delta_{M}})\geq y(\delta_{M})=\frac{1}{2}\]
Differential equation (35) is separable
\[dy=c_{p}y(-\log y)^{1-\frac{1}{p}}d\delta\]
\[-(-\log y)^{\frac{1}{p}-1}\left(-\frac{1}{y}dy\right)=c_{p}d\delta\]
\[-\int(-\log y)^{\frac{1}{p}-1}d(-\log y)=\int c_{p}d\delta\]
\[-p(-\log y)^{\frac{1}{p}}=c_{p}\delta+C_{0}\]
Our initial condition (34) gives us
\[-p(-\log\varepsilon)^{\frac{1}{p}}=C_{0}\]
And for \(\delta=\delta_{M}\) we should have
\[-p(\log 2)^{\frac{1}{p}}=c_{p}\delta_{M}-p(-\log\varepsilon)^{\frac{1}{p}}\]
\[\delta_{M}=\frac{1}{c_{p}}\left(p(-\log\varepsilon)^{\frac{1}{p}}-p(\log 2)^{ \frac{1}{p}}\right)\leq\frac{p}{c_{p}}(-\log\varepsilon)^{\frac{1}{p}}\]
By the same reasoning \(\delta_{B}\leq\delta_{M}\), and we conclude
\[\operatorname{dist}(A,B)\leq\delta_{A}+\delta_{B}\leq 2\delta_{M}\leq\frac{2p}{c _{p}}(-\log\varepsilon)^{\frac{1}{p}}\]
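The comparison argument above can be checked numerically (an illustration only): integrating \(y^{\prime}=c_{p}y(-\log y)^{1-\frac{1}{p}}\) from \(y(0)=\varepsilon\) by a crude Euler scheme reproduces the closed-form value of \(\delta_{M}\).

```python
# Hitting time of 1/2 for y' = c_p * y * (-log y)^(1 - 1/p), y(0) = eps,
# compared with (p/c_p) * ((-log eps)^(1/p) - (log 2)^(1/p)).
from math import log

def hitting_time(eps: float, p: float, c_p: float, step: float = 1e-5) -> float:
    y, delta = eps, 0.0
    while y < 0.5:
        y += c_p * y * (-log(y)) ** (1 - 1 / p) * step
        delta += step
    return delta

eps, p, c_p = 1e-3, 1.5, 1.0
closed_form = (p / c_p) * ((-log(eps)) ** (1 / p) - log(2) ** (1 / p))
print(hitting_time(eps, p, c_p), closed_form)   # both approximately 4.27
```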
Lower bounds.
**Introduction.** Here we are going to be concerned with the lower bounds on the largest distance between two subsets of volume \(0<\varepsilon<\frac{1}{2}\). We will derive the lower bounds simply by considering certain hyperplane cuts of our convex bodies. For families of convex bodies such as the euclidean balls, cubes, hyperoctahedrons, simplexes and \(\ell_{p}\) balls specific lower bounds will be shown in Theorems 6.1, 6.3, 6.4, 6.5. It turns out that for euclidean balls our lower bounds coincide with the upper bounds (see Theorem 6.2). In Theorem 6.6 a general lower bound will be established, showing that in a way the family of euclidean balls is optimal in regard to our problem.
It was already shown that for unit-volume cube, ball, simplex and \(\ell_{p}\) balls with \(p\in[1;2]\) the largest distance is bounded above by some constant dependent on \(\varepsilon\) but not on the dimension \(n\). That is why it makes sense to consider the lower bounds on the supremum of all possible distances between two subsets of volume \(\varepsilon\) that take place as \(n\) tends to infinity.
For a family of convex bodies \(K_{n}\), we denote here by \(d_{n}(\varepsilon)\) the supremum of all possible distances between two subsets of volume \(\varepsilon\in(0;\frac{1}{2})\) in \(K_{n}\).
**Theorem 6.1**.: _When \(K_{n}\) are the unit-volume euclidean balls we have_
\[\liminf_{n\to\infty}d_{n}(\varepsilon)\geq-2\frac{1}{\sqrt{e}}\Phi^{-1}(\varepsilon)\]
_The function \(-2\frac{1}{\sqrt{e}}\Phi^{-1}(\varepsilon)\) is asymptotically equivalent to_
\[\frac{2}{\sqrt{\pi e}}\sqrt{-\ln\varepsilon}\]
_as \(\varepsilon\to 0\)._
Proof.: In the unit-volume \(n\)-ball \(\omega_{n}B^{n}\subset\mathbb{R}^{n}\), where the radius is
\[\omega_{n}=\frac{\Gamma\left(\frac{n}{2}+1\right)^{\frac{1}{n}}}{\sqrt{\pi}} \sim\sqrt{\frac{n}{2\pi e}},\]
consider the diameter from \((-\omega_{n},0,\ldots,0)\) to \((\omega_{n},0,\ldots,0)\), i. e. the diameter lying on the \(X_{1}\)-axis. We will be interested in the hyperplanes orthogonal to this diameter, i. e. hyperplanes defined by \(X_{1}=t\).
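Let us note in passing that the asymptotic for \(\omega_{n}\) quoted above is a quick consequence of Stirling's formula (a side computation recorded here for convenience):
\[\Gamma\left(\tfrac{n}{2}+1\right)\sim\sqrt{\pi n}\left(\tfrac{n}{2e}\right)^{\frac{n}{2}}\quad\Longrightarrow\quad\omega_{n}=\frac{\Gamma\left(\tfrac{n}{2}+1\right)^{\frac{1}{n}}}{\sqrt{\pi}}\sim\frac{(\pi n)^{\frac{1}{2n}}}{\sqrt{\pi}}\sqrt{\frac{n}{2e}}\sim\sqrt{\frac{n}{2\pi e}},\]
since \((\pi n)^{\frac{1}{2n}}\to 1\) as \(n\to\infty\).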
Pick a number \(a\) such that
\[\frac{1}{\sqrt{e}}\Phi^{-1}(\varepsilon)<-a<0\]
If we consider the uniform probability distribution on \(\omega_{n}B^{n}\), then we can think of \(X_{1}\) as a random variable. We would like to consider the part of our ball that corresponds to \(X_{1}\leq-a\). The volume would be equal to
\[\Pr(X_{1}\leq-a)=\Pr(\sqrt{n}\omega_{n}^{-1}X_{1}\leq-\sqrt{n}\omega_{n}^{-1}a)\]
By theorem 1 of [20] as \(n\) tends to infinity the distribution of \(n^{\frac{1}{2}}\omega_{n}^{-1}X_{1}\) converges in total variation to the standard normal distribution on \(\mathbb{R}\), whose probability density function is
\[\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^{2}}\]
Furthermore, note that
\[\lim_{n\to\infty}-\sqrt{n}\omega_{n}^{-1}a=\lim_{n\to\infty}-\sqrt{n}\frac{ \sqrt{2\pi e}}{\sqrt{n}}a=-\sqrt{2\pi e}a>\sqrt{2\pi}\Phi^{-1}(\varepsilon)\]
So for all sufficiently large \(n\) we shall have
\[\sqrt{2\pi}\Phi^{-1}(\varepsilon)+\delta<-\sqrt{n}\omega_{n}^{-1 }a\\ \Rightarrow\Pr(\sqrt{n}\omega_{n}^{-1}X_{1}\leq-\sqrt{n}\omega_ {n}^{-1}a)\geq\Pr(\sqrt{n}\omega_{n}^{-1}X_{1}\leq\sqrt{2\pi}\Phi^{-1}( \varepsilon)+\delta)\]
for some \(\delta>0\). Because distribution of \(n^{\frac{1}{2}}\omega_{n}^{-1}X_{1}\) converges in total variation to the standard normal distribution, for all sufficiently large \(n\) we have
\[\Pr(X_{1}\leq-a)\geq\Pr(\sqrt{n}\omega_{n}^{-1}X_{1}\leq\sqrt{2 \pi}\Phi^{-1}(\varepsilon)+\delta)\\ \geq\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\sqrt{2\pi}\Phi^{-1}( \varepsilon)}e^{-\frac{1}{2}x^{2}}dx=\int_{-\infty}^{\sqrt{2\pi}\Phi^{-1}( \varepsilon)}e^{-\pi\left(\frac{1}{\sqrt{2\pi}}x\right)^{2}}d\left(\frac{1}{ \sqrt{2\pi}}x\right)\\ =\int_{-\infty}^{\Phi^{-1}(\varepsilon)}e^{-\pi x^{2}}dx=\varepsilon\]
By symmetry we have a similar result for the part of our ball defined by \(X_{1}\geq a\). That means that for sufficiently large \(n\) we are going to have
two subsets of volume at least \(\varepsilon\) at a distance \(2a\). But \(a\) was chosen as an arbitrary number less than
\[-\frac{1}{\sqrt{e}}\Phi^{-1}(\varepsilon),\]
from which the statement of the theorem follows.
By combining this with Theorem 3.3 we get
**Theorem 6.2**.: _When \(K_{n}\) are the unit-volume euclidean balls_
\[\lim_{n\to\infty}d_{n}(\varepsilon)=-2\frac{1}{\sqrt{e}}\Phi^{-1}(\varepsilon)\]
**Theorem 6.3**.: _When \(K_{n}\) are the unit cubes we have_
\[\liminf_{n\to\infty}d_{n}(\varepsilon)\geq-2\sqrt{\frac{\pi}{6}}\Phi^{-1}(\varepsilon)\]
_The function \(-2\sqrt{\frac{\pi}{6}}\Phi^{-1}(\varepsilon)\) is asymptotically equivalent to_
\[\frac{2}{\sqrt{6}}\sqrt{-\ln\varepsilon}\]
_as \(\varepsilon\to 0\)._
Proof.: By the main diagonal of the cube \((0;1)^{n}\) we mean the segment from the origin to \((1,\ldots,1)\). We will be considering hyperplanes orthogonal to the main diagonal.
For each point on the main diagonal we could consider the area of the corresponding orthogonal hyperplane section of \((0;1)^{n}\). This gives rise to a probability distribution on the segment from \((0,\ldots,0)\) to \((1,\ldots,1)\), whose length is \(\sqrt{n}\). We will take the midpoint of this segment as the origin, i. e. we have a probability distribution on \([-\frac{\sqrt{n}}{2};+\frac{\sqrt{n}}{2}]\).
The variance of the uniform distribution on the segment \([0;1]\) is equal to
\[\sigma^{2}=\int_{0}^{1}\left(x-\frac{1}{2}\right)^{2}dx=2\int_{0}^{\frac{1}{2 }}x^{2}dx=2\frac{1}{3}\frac{1}{2^{3}}=\frac{1}{12}\]
And our distribution on \([-\frac{\sqrt{n}}{2};+\frac{\sqrt{n}}{2}]\) could be produced by \(n\) random variables \(X_{1},\ldots,X_{n}\) uniformly distributed on \([0;1]\) as
\[\sqrt{n}\left(\frac{X_{1}+\ldots+X_{n}}{n}-\frac{1}{2}\right)\]
So by central limit theorem as \(n\) goes to infinity our distribution converges to a normal distribution \(\mathcal{N}(0,\sigma^{2})\), whose probability density function would be
\[\frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2}\frac{x^{2}}{\sigma^{2}}}=\sqrt{\frac {6}{\pi}}e^{-6x^{2}}\]
Pick a number \(a\) such that
\[\sqrt{\frac{\pi}{6}}\Phi^{-1}(\varepsilon)<-a<0\]
And consider the part of the unit cube \((0;1)^{n}\) whose orthogonal projection on the main diagonal lies inside \([0;\frac{1}{2}\sqrt{n}-a]\), i. e. a certain hyperplane section. As \(n\) goes to infinity the volume of this region would converge to the value of the cumulative distribution function of \(\mathcal{N}(0,\sigma^{2})\) at \(-a\), which is equal to
\[\int_{-\infty}^{-a}\sqrt{\frac{6}{\pi}}e^{-6x^{2}}dx=\int_{- \infty}^{-a}e^{-\pi\left(\sqrt{\frac{6}{\pi}}x\right)^{2}}d\left(\sqrt{\frac{ 6}{\pi}}x\right)\\ =\int_{-\infty}^{-\sqrt{\frac{6}{\pi}}a}e^{-\pi x^{2}}dx=\Phi \left(-\sqrt{\frac{6}{\pi}}a\right)>\Phi\left(\sqrt{\frac{6}{\pi}}\sqrt{\frac {\pi}{6}}\Phi^{-1}(\varepsilon)\right)=\varepsilon\]
Thus for large enough \(n\) a part of volume at least \(\varepsilon\) is going to be cut off. By symmetry the same is true for an orthogonal hyperplane section of our cube corresponding to \([\frac{1}{2}\sqrt{n}+a;\sqrt{n}]\) on the main diagonal.
So for \(n\) large enough we get two subsets of volume at least \(\varepsilon\) at a distance \(2a\). But \(a\) was chosen as an arbitrary number smaller than
\[-\sqrt{\frac{\pi}{6}}\Phi^{-1}(\varepsilon),\]
from which the statement of the theorem follows.
**Theorem 6.4**.: _When \(K_{n}\) are the unit-volume simplexes we have_
\[\liminf_{n\to\infty}d_{n}(\varepsilon)\geq-\frac{\sqrt{2}}{e}\ln(2\varepsilon).\]
Proof.: The volume of the unit simplex \(\Delta_{n}\) defined by
\[\Delta_{n}=\{(x_{1},\ldots,x_{n})\in\mathbb{R}_{+}^{n}\mid x_{1}+\ldots+x_{n}=1\}\]
is equal to
\[\frac{n\sqrt{n}}{n!}\]
If we set
\[\omega_{n}=\left(\frac{n!}{n\sqrt{n}}\right)^{\frac{1}{n-1}}\sim\frac{n}{e},\]
then \(\omega_{n}\Delta_{n}\) will be a regular unit-volume simplex, whose side length is
\[\sqrt{2}\omega_{n}\]
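The asymptotic \(\omega_{n}\sim\frac{n}{e}\) stated above can likewise be checked with Stirling's formula (again a convenience computation, not part of the argument itself):
\[\omega_{n}=\left(\frac{n!}{n\sqrt{n}}\right)^{\frac{1}{n-1}}=\left(\frac{(n-1)!}{\sqrt{n}}\right)^{\frac{1}{n-1}}\sim\left(\sqrt{\frac{2\pi(n-1)}{n}}\right)^{\frac{1}{n-1}}\cdot\frac{n-1}{e}\sim\frac{n}{e},\]
since the first factor tends to \(1\) as \(n\to\infty\).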
Now consider any number \(\varepsilon^{\prime}\) such that
\[\varepsilon<\varepsilon^{\prime}<\frac{1}{2}\]
Let \(P\) and \(Q\) be two vertices of \(\omega_{n}\Delta_{n}\). The hyperplane passing through the midpoint of the side \(PQ\) and orthogonal to it divides our simplex into two parts of equal volume. We set
\[\alpha=(2\varepsilon^{\prime})^{\frac{1}{n-1}}.\]
Consider the image of the part containing \(P\) under a homothety with center at \(P\) and coefficient \(\alpha\); the resulting subset of our simplex will be of volume \(\frac{1}{2}\alpha^{n-1}=\varepsilon^{\prime}>\varepsilon\). Analogously, we construct a subset of volume \(\varepsilon^{\prime}\) corresponding to the vertex \(Q\).
Note that we have two homotheties with coefficient \(\alpha\) applied to the two halves of \(PQ\). The distance between our subsets will be equal to
\[\sqrt{2}\omega_{n}\left(1-\alpha\right)=\sqrt{2}\omega_{n}\left(1-(2 \varepsilon^{\prime})^{\frac{1}{n-1}}\right)\]
Now we take limit
\[\lim_{n\to\infty}\sqrt{2}\omega_{n}\left(1-(2\varepsilon^{\prime })^{\frac{1}{n-1}}\right)=\lim_{n\to\infty}\frac{\sqrt{2}}{e}n\left(1-(2 \varepsilon^{\prime})^{\frac{1}{n-1}}\right)\\ =\lim_{n\to\infty}-\frac{\sqrt{2}}{e}\frac{n}{n-1}\frac{(2 \varepsilon^{\prime})^{0}-(2\varepsilon^{\prime})^{\frac{1}{n-1}}}{0-\frac{1} {n-1}}=-\frac{\sqrt{2}}{e}\frac{d}{dt}(2\varepsilon^{\prime})^{t}\Big{|}_{t=0 }=-\frac{\sqrt{2}}{e}\ln(2\varepsilon^{\prime})\]
In other words, for each \(n\) we have two subsets of \(\omega_{n}\Delta_{n}\) of volume at least \(\varepsilon\), and the distance between them tends to
\[-\frac{\sqrt{2}}{e}\ln(2\varepsilon^{\prime})\]
as \(n\) goes to infinity. But \(\varepsilon^{\prime}\) was chosen as an arbitrary number from \((\varepsilon;\frac{1}{2})\), from which the statement of the theorem follows.
**Theorem 6.5**.: _When \(K_{n}\) are the unit-volume \(\ell_{p}\) balls for \(p\in[1;2]\) we have_
\[\liminf_{n\to\infty}d_{n}(\varepsilon)\geq-2\Psi_{p}^{-1}(\varepsilon),\]
_where the function \(-2\Psi_{p}^{-1}(\varepsilon)\) (see Appendix C) is asymptotically equivalent to_
\[\frac{1}{e^{\frac{1}{p}}\Gamma\left(1+\frac{1}{p}\right)}(-\ln\varepsilon)^{ \frac{1}{p}}\]
_as \(\varepsilon\to 0\)._
Proof.: In the unit-volume \(\ell_{p}\) ball \(\omega_{n}\ell_{p}^{n}\), where
\[\omega_{n}=\frac{\Gamma\left(1+\frac{n}{p}\right)^{\frac{1}{n}}}{2\Gamma \left(1+\frac{1}{p}\right)}\sim\frac{n^{\frac{1}{p}}}{2\Gamma\left(1+\frac{1} {p}\right)(pe)^{\frac{1}{p}}},\]
consider the segment from \((-\omega_{n},0,\ldots,0)\) to \((\omega_{n},0,\ldots,0)\). We will be considering the hyperplanes orthogonal to this segment, i. e. hyperplanes defined by \(X_{1}=t\). By \(V_{n}(t)\) denote the function that measures the volume of the part given by \(X_{1}\geq t\).
We use notation from Appendix C, where it was established that functions \(V_{n}(x)\) uniformly converge to \(\Psi_{p}(-x)\) on \((-\infty;+\infty)\). Pick an arbitrary number \(a\) such that
\[\Psi_{p}^{-1}(\varepsilon)<-a<0\]
Then for all sufficiently large \(n\) we shall have
\[V_{n}(a)>\varepsilon\]
Because of symmetry, for all sufficiently large \(n\) we would have two bodies in \(\omega_{n}\ell_{p}^{n}\) of volume at least \(\varepsilon\) at a distance at least \(2a\). And, since the choice of \(a\) above was arbitrary, we would have
\[\liminf_{n\to\infty}d_{n}(\varepsilon)\geq-2\Psi_{p}^{-1}(\varepsilon)\]
Assume that we have a family \(K_{n}\) of bounded unit-volume centrally symmetric bodies that are not necessarily convex, where the origin will be the center of symmetry for each \(K_{n}\). By \(\mu_{n}\) denote the uniform probability measure on \(K_{n}\).
Fix \(\varepsilon\in(0;\frac{1}{2})\). Since \(K_{n}\) is bounded, it has a finite diameter, thus for each \(K_{n}\) we may consider \(d_{n}(\varepsilon)\), the supremum of all possible distances between two subsets of \(K_{n}\) whose volume is \(\varepsilon\).
It turns out that a simple averaging argument gives us the following.
**Theorem 6.6**.: _For every \(\varepsilon\in(0;\frac{1}{2})\)_
\[\liminf_{n\to\infty}d_{n}(\varepsilon)\geq-2\frac{1}{\sqrt{e}}\Phi^{-1}( \varepsilon),\]
_where function \(-2\frac{1}{\sqrt{e}}\Phi^{-1}(\varepsilon)\) is asymptotically equivalent to_
\[\frac{2}{\sqrt{\pi e}}\sqrt{-\ln\varepsilon}\]
_as \(\varepsilon\to 0\)._
Proof.: Consider a number \(d\) such that
\[\frac{1}{\sqrt{e}}\Phi^{-1}(\varepsilon)<-d<0\]
We would like to show that for all sufficiently large \(n\) there is a direction defined by a unit vector \(u\) on the sphere \(S^{n-1}\) such that the part of our body \(K_{n}\) where \(\langle x,u\rangle\leq-d\) is of volume at least \(\varepsilon\). By central symmetry the part where \(\langle x,u\rangle\geq d\) is of the same volume. But then we will have two subsets of \(K_{n}\) of volume at least \(\varepsilon\) at a distance at least \(2d\), where \(d\) was chosen as an arbitrary number less than
\[-\frac{1}{\sqrt{e}}\Phi^{-1}(\varepsilon),\]
from which the statement of the theorem would follow.
We are looking for a direction \(u\in S^{n-1}\) with
\[\mu_{n}(\langle x,u\rangle\leq-d)\geq\varepsilon\]
If the average value of \(\mu_{n}(\langle x,u\rangle\leq-d)\) is at least \(\varepsilon\), then such a direction will exist.
By \(\vartheta_{n}\) denote the uniform probability measure on \(S^{n-1}\). We would like to have
\[\int_{S^{n-1}}\mu_{n}(\langle x,u\rangle\leq-d)d\vartheta_{n}\geq\varepsilon \tag{36}\]
The volume of the hyperplane cut defined by \(\langle x,u\rangle\leq-d\) is the integral of the indicator function \([\langle x,u\rangle\leq-d]\) over \(K_{n}\), which allows us to rewrite (36) as
\[\int_{S^{n-1}}\int_{K_{n}}[\langle x,u\rangle\leq-d]d\mu_{n}d\vartheta_{n}\geq\varepsilon\]
We may switch the order of integration
\[\int_{K_{n}}\left(\int_{S^{n-1}}[\langle x,u\rangle\leq-d]d\vartheta_{n} \right)d\mu_{n}\geq\varepsilon \tag{37}\]
We would like to consider the integrand of (37) for \(x\neq 0\). First, we rewrite \([\langle x,u\rangle\leq-d]\) as
\[\left[\sqrt{n}\left\langle\frac{x}{\|x\|_{2}},u\right\rangle\leq-\frac{\sqrt {n}d}{\|x\|_{2}}\right]\]
Now we could think of \(u\) as of a random vector on the unit sphere \(S^{n-1}\). Then
\[\left\langle\frac{x}{\|x\|_{2}},u\right\rangle\]
corresponds to the projection of \(u\) on the diameter from \(-\frac{x}{\|x\|_{2}}\) to \(+\frac{x}{\|x\|_{2}}\). And thus by theorem 1 from [20] as \(n\) goes to infinity the distribution of random variable
\[\sqrt{n}\left\langle\frac{x}{\|x\|_{2}},u\right\rangle\]
converges in total variation to the standard normal distribution.
Note that the value of
\[\int_{S^{n-1}}\left[\sqrt{n}\left\langle\frac{x}{\|x\|_{2}},u\right\rangle \leq-\frac{\sqrt{n}d}{\|x\|_{2}}\right]d\vartheta_{n}\]
only depends on \(-\frac{\sqrt{n}d}{\|x\|_{2}}\). So, fixing an arbitrary unit vector \(e\in S^{n-1}\) (by rotational invariance of \(\vartheta_{n}\) the choice does not matter), we define the function
\[\Psi_{n}(s)=\int_{S^{n-1}}\left[\sqrt{n}\left\langle e,u\right\rangle\leq s\right]d\vartheta_{n}\]
As was remarked above, for every \(x\)
\[\lim_{n\to\infty}\Psi_{n}(x)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-\frac{1}{2}t^{2}}dt\]
Since functions \(\Psi_{n}\) are monotonic, we should also have
\[\lim_{n\to\infty}\Psi_{n}(x_{n})=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x}e^{-\frac{1}{2}t^{2}}dt\]
for any sequence \(x_{n}\) that tends to \(x\) as \(n\to\infty\).
We rewrite (37) as
\[\int_{K_{n}\backslash\{0\}}\Psi_{n}\left(-\frac{\sqrt{n}d}{\|x\|_{2}}\right)d \mu_{n}\geq\varepsilon \tag{38}\]
The function \(\Psi_{n}\) is non-decreasing, thus
\[\|x\|_{2}\geq r>0\Rightarrow\Psi_{n}\left(-\frac{\sqrt{n}d}{\|x\|_{2}}\right) \geq\Psi_{n}\left(-\frac{\sqrt{n}d}{r}\right)\]
So for (38) condition
\[\int_{K_{n}\backslash B_{r}^{n}}\Psi_{n}\left(-\frac{\sqrt{n}d}{r}\right)d\mu _{n}\geq\varepsilon \tag{39}\]
would be sufficient; we may go further and require
\[(1-V(B_{r}^{n}))\Psi_{n}\left(-\frac{\sqrt{n}d}{r}\right)\geq\varepsilon, \tag{40}\]
where \(V(B_{r}^{n})\) is the volume of the \(n\)-ball with radius \(r\).
The volume of the unit \(n\)-ball \(B_{1}^{n}\) is
\[\frac{\sqrt{\pi}^{n}}{\Gamma\left(\frac{n}{2}+1\right)}.\]
The unit volume corresponds to the radius
\[\omega_{n}=\frac{\Gamma\left(\frac{n}{2}+1\right)^{\frac{1}{n}}}{\sqrt{\pi}} \sim\sqrt{\frac{n}{2\pi e}}\]
Pick a number \(\alpha\in(0;1)\) such that
\[\alpha\frac{1}{\sqrt{e}}\Phi^{-1}(\varepsilon)<-d \tag{41}\]
and set
\[r=\alpha\omega_{n}\]
Inequality (40) turns into
\[(1-\alpha^{n})\Psi_{n}\left(-\frac{\sqrt{n}d}{\alpha\omega_{n}}\right)\geq\varepsilon \tag{42}\]
Note that
\[\lim_{n\to\infty}\frac{\sqrt{n}d}{\alpha\omega_{n}}=\lim_{n\to\infty}\sqrt{ \frac{2\pi e}{n}}\frac{\sqrt{n}d}{\alpha}=\frac{\sqrt{2\pi e}d}{\alpha}\]
Now take limit of the left side of (42) and apply condition (41)
\[\lim_{n\to\infty}(1-\alpha^{n})\Psi_{n}\left(-\frac{\sqrt{n}d}{ \alpha\omega_{n}}\right)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{-\frac{\sqrt{2 \pi e}d}{\alpha}}e^{-\frac{1}{2}x^{2}}dx\\ =\int_{-\infty}^{-\frac{\sqrt{2\pi e}d}{\alpha}}e^{-\pi\left( \frac{1}{\sqrt{2\pi}}x\right)^{2}}d\left(\frac{1}{\sqrt{2\pi}}x\right)=\int_{ -\infty}^{-\frac{\sqrt{e}d}{\alpha}}e^{-\pi x^{2}}dx=\Phi\left(-\frac{\sqrt{e }d}{\alpha}\right)\\ >\Phi(\Phi^{-1}(\varepsilon))=\varepsilon\]
Thus for all sufficiently large \(n\)
\[(1-\alpha^{n})\Psi_{n}\left(-\frac{\sqrt{n}d}{\alpha\omega_{n}}\right)>\varepsilon\]
and, consequently,
\[\int_{S^{n-1}}\mu_{n}(\langle x,u\rangle\leq-d)d\vartheta_{n}\geq\varepsilon, \tag{43}\]
which means that for all sufficiently large \(n\) the desired direction \(u\in S^{n-1}\) with property
\[\mu_{n}(\langle x,u\rangle\leq-d)\geq\varepsilon\]
exists.
Theorem 6.1 is an immediate corollary of Theorem 6.6.
**Conclusions.** Roughly speaking, the results established in [12], [11] tell us that generally hyperplane sections of convex bodies across arbitrary directions lead to a gaussian distribution, suggesting that the asymptotic behavior of \(d_{n}(\varepsilon)\) different from \(\Phi^{-1}(\varepsilon)\) is probably caused by a few degenerate directions. For example, Theorems 6.4 and 5.5 tell us that the asymptotic behavior of \(d_{n}(\varepsilon)\) for simplexes corresponds to the function \(-\ln\varepsilon\), but one could also note that the simplex \(\Delta_{n}\) is unusually "stretched" in \(n\) directions corresponding to its corners, and that by Theorem 5.3 almost all of its volume is concentrated in a euclidean ball of diameter much smaller than the diameter of \(\Delta_{n}\).
Discrete isoperimetric problem.
**Introduction.** In this section, instead of considering the euclidean distance between two subsets of volume \(\varepsilon\), we will be considering the Manhattan distance. We can no longer say that for fixed \(\varepsilon\) this distance is bounded, but we will derive a result (Theorem 7.3) concerning its asymptotic behaviour in the case of the unit cube \([0;1]^{n}\). The key idea of the proof is to replace the unit cube \([0;1]^{n}\) with a lattice, for which the solution of our problem is already known (see Theorem 7.2).
We consider the Manhattan distance \(d\) in the unit cube \([0;1]^{n}\). We may ask a similar question: what is the largest Manhattan distance between two bodies of volume \(\varepsilon>0\) in the unit cube \([0;1]^{n}\)?
It turns out this problem can be dealt with by discretization. Consider a lattice \(L=\{\frac{0}{m},\frac{1}{m},\ldots,\frac{m}{m}\}^{n}\) inside \([0;1]^{n}\); two points \(x\) and \(y\) in \(L\) are adjacent whenever \(d(x,y)=\frac{1}{m}\). By the \(t\)-boundary \(A_{(t)}\) of a subset \(A\subseteq L\) we mean the set of all points of \(L\) that are at a distance at most \(\frac{t}{m}\) from \(A\). The latter concept is analogous to \(\delta\)-enlargements, and one could think of \(|A_{(1)}\setminus A|\) as an analogue of the surface area. This gives rise to the discrete isoperimetric problem: how small can the \(t\)-boundary of a set \(A\subseteq L\) of fixed size be? This question was answered in [4].
**Theorem 7.1** ([4, Corollary 9]).: _Let \(A\subset[k]^{n}\). For any \(t=0,1,\ldots,\) the \(t\)-boundary of \(A\) is at least as large as the \(t\)-boundary of the first \(|A|\) elements of \([k]^{n}\) in the simplicial order._
Here \([k]^{n}\) is the lattice \(\{0,\ldots,k-1\}^{n}\). Simplicial order on \([k]^{n}\) is defined by setting \(x<y\) if either \(\sum x_{i}<\sum y_{i}\), or \(\sum x_{i}=\sum y_{i}\) and for some \(j\) we have \(x_{j}>y_{j}\) and \(x_{i}=y_{i}\) for all \(i<j\). This discrete isoperimetric inequality leads to
**Theorem 7.2** ([4, Corollary 10]).: _There are sets \(A,B\subset[k]^{n}\) with \(|A|=r,|B|=s\), and \(d(A,B)\geq d\) iff the distance between the first \(r\) and the last \(s\) elements of the simplicial order on \([k]^{n}\) is at least \(d\)._
From this discrete version of the problem considered in the first paragraph we can derive
**Theorem 7.3**.: _If by \(d_{n}(\varepsilon)\) we denote the largest Manhattan distance between two bodies of volume \(\varepsilon\in(0;\frac{1}{2})\) in the unit cube \([0;1]^{n}\), then_
\[\lim_{n\to\infty}\frac{d_{n}(\varepsilon)}{\sqrt{n}}=-2\sqrt{\frac{\pi}{6}} \Phi^{-1}(\varepsilon)\]
_The function \(-2\sqrt{\frac{\pi}{6}}\Phi^{-1}(\varepsilon)\) is asymptotically equivalent to_
\[\frac{2}{\sqrt{6}}\sqrt{-\ln\varepsilon}\]
_as \(\varepsilon\to 0\)._
Proof.: Pick a number \(a\) such that
\[-a<\sqrt{\frac{\pi}{6}}\Phi^{-1}(\varepsilon)<0\]
If by \(V_{n}\) and \(W_{n}\) we denote the subsets of \([0;1]^{n}\) defined by the inequalities \(\sum x_{i}\leq\frac{n}{2}-a\sqrt{n}\) and \(\sum x_{i}\geq\frac{n}{2}+a\sqrt{n}\), respectively, then by applying the central limit theorem just as we did in Theorem 6.3 we shall get
\[\lim_{n\to\infty}\mu(V_{n})=\lim_{n\to\infty}\mu(W_{n})=\int_{- \infty}^{-a}\sqrt{\frac{6}{\pi}}e^{-6x^{2}}dx=\Phi\left(-\sqrt{\frac{6}{\pi}}a \right)\\ <\Phi\left(\sqrt{\frac{6}{\pi}}\sqrt{\frac{\pi}{6}}\Phi^{-1}( \varepsilon)\right)=\varepsilon\]
So there is a number \(N\) such that for all \(n>N\) the volumes of \(V_{n}\) and \(W_{n}\) are going to be less than \(\varepsilon\), and it can be noted that the Manhattan distance between them is equal to \(2a\sqrt{n}\). From now on we assume that \(n>N\).
Now consider two bodies \(A\) and \(B\) inside \([0;1]^{n}\) of volume \(\varepsilon\) and a lattice \(L_{m}=\{\frac{0}{m},\ldots,\frac{m}{m}\}^{n}\). The discretization is justified by the fact that the distance does not decrease when we restrict our attention to the lattice \(L_{m}\)
\[d(A,B)\leq d(A\cap L_{m},B\cap L_{m})\]
Since \(\mu(V_{n})<\mu(A)\) and \(\mu(W_{n})<\mu(B)\), for all \(m\) large enough we shall have
\[|V_{n}\cap L_{m}|\leq|A\cap L_{m}|,\quad|W_{n}\cap L_{m}|\leq|B\cap L_{m}|\]
And by Theorem 7.2 this implies
\[d(A\cap L_{m},B\cap L_{m})\leq d(V_{n}\cap L_{m},W_{n}\cap L_{m}),\]
because \(V_{n}\cap L_{m}\) and \(W_{n}\cap L_{m}\) are the first \(|V_{n}\cap L_{m}|\) and the last \(|W_{n}\cap L_{m}|\) elements of the simplicial order on \(L_{m}\), respectively.
We conclude that for all \(m\) large enough
\[d(A,B)\leq d(V_{n}\cap L_{m},W_{n}\cap L_{m})\]
Also
\[\lim_{m\to\infty}d(V_{n}\cap L_{m},W_{n}\cap L_{m})=d(V_{n},W_{n})=2a\sqrt{n}\]
Thus for all \(n>N\)
\[d(A,B)\leq 2a\sqrt{n},\]
but \(a\) was chosen here as an arbitrary number greater than \(-\sqrt{\frac{\pi}{6}}\Phi^{-1}(\varepsilon)\), from which
\[\limsup_{n\to\infty}\frac{d_{n}(\varepsilon)}{\sqrt{n}}\leq-2\sqrt{\frac{\pi }{6}}\Phi^{-1}(\varepsilon) \tag{44}\]
follows.
If we assume that
\[\sqrt{\frac{\pi}{6}}\Phi^{-1}(\varepsilon)<-a<0,\]
then as it was already shown in the proof of Theorem 6.3 for all \(n\) large enough
\[\mu(V_{n})=\mu(W_{n})>\varepsilon\]
But the Manhattan distance between \(V_{n}\) and \(W_{n}\) is \(2a\sqrt{n}\).
From this observation we can conclude
\[\liminf_{n\to\infty}\frac{d_{n}(\varepsilon)}{\sqrt{n}}\geq-2\sqrt{\frac{\pi} {6}}\Phi^{-1}(\varepsilon),\]
which together with inequality (44) gives us the statement of the theorem.
**Conclusions.** There are also other variations of the discrete isoperimetric problem. For example, Theorem 7.2 has an analogue for the non-negative orthant of the integer lattice \(\mathbb{Z}_{+}^{n}\) (see [22] and [4, Theorem 4]).
Conclusions.
Upper bounds on the distance between subsets in unit-volume \(\ell_{p}\) balls with \(p\in[1;2]\cup\{+\infty\}\) have been established. The case of \(p\in(2;+\infty)\) remains. Since asymptotically our estimates for unit-volume euclidean balls (\(p=2\)) and for unit cubes (\(p=+\infty\)) are the same, we expect a similar asymptotic behavior for all \(p\in[2;+\infty]\). As was remarked before, both cases \(p=2\) and \(p=+\infty\) can be approached by providing a Lipschitz map that transforms the Gaussian measure into the uniform measure on the corresponding \(\ell_{p}\) balls. Perhaps the same approach might work out for \(p\in(2;+\infty)\). On page 4 of [3] it was remarked that the uniform measure on \(\ell_{p}^{n}\) balls with \(p\in[2;+\infty]\) "can be obtained from the canonical Gaussian measure as Lipschitz transform". However, we are not sure about the exact meaning and implications of this statement.
In Theorem 6.6 we established a sort of general lower bound regarding our problem. But is it possible to find a general upper bound on the distance between two subsets of volume \(\varepsilon\in(0;\frac{1}{2})\) in a unit-volume convex body? Clearly, we are going to have to consider only certain "good" convex bodies. For example, in a convex body stretched far in a particular direction two subsets of fixed volume could be at an arbitrarily large distance. One such notion of a "good" convex body is related to the isotropic position. But this means that we are looking for general estimates on the isoperimetric problem in isotropic convex bodies, which appears to be a complicated open question (the Kannan-Lovasz-Simonovits conjecture, see [14]). Furthermore, even the relation between the volume and the isotropic constant of a convex body seems to be at the core of another open problem (the isotropic constant conjecture, see [5], [7]). In paper [6] a very good lower bound related to the KLS conjecture was proven. There is a lot of material on isotropic convex bodies, for example, [9].
Log-concave probability measures generalize uniform measures on convex bodies, so one might try to consider the problem in this more general setting. Gaussian measures are log-concave, which means that in the more general setting our estimates may be sharp for some specific distributions in the one-dimensional case. In paper [15] some general estimates on a specific version of the isoperimetric problem were proven (see, for example, Theorem 1.3). |
2310.00425 | Sharp endpoint $L^p-$estimates for Bilinear spherical maximal functions | In this article, we address endpoint issues for the bilinear spherical
maximal functions. We obtain borderline restricted weak type estimates for the
well studied bilinear spherical maximal function
$$\mathfrak{M}(f,g)(x):=\sup_{t>0}\left|\int_{\mathbb
S^{2d-1}}f(x-ty_1)g(x-ty_2)\;d\sigma(y_1,y_2)\right|,$$ in dimensions $d=1,2$
and as an application, we deduce sharp endpoint estimates for the multilinear
spherical maximal function. We also prove $L^p-$estimates for the local
spherical maximal function in all dimensions $d\geq 2$, thus improving the
boundedness left open in the work of Jeong and Lee
(https://doi.org/10.1016/j.jfa.2020.108629). We further study necessary
conditions for the bilinear maximal function, \[\mathcal M
(f,g)(x)=\sup_{t>0}\left|\int_{\mathbb
S^{1}}f(x-ty)g(x+ty)\;d\sigma(y)\right|\] to be bounded from $L^{p_1}(\mathbb
R^2)\times L^{p_2}(\mathbb R^2)$ to $L^p(\mathbb R^2)$ and prove sharp results
for a linearized version of $\mathcal M$. | Ankit Bhojak, Surjeet Singh Choudhary, Saurabh Shrivastava, Kalachand Shuin | 2023-09-30T16:10:01Z | http://arxiv.org/abs/2310.00425v3 | # Sharp endpoint \(L^{p}-\)estimates for bilinear spherical maximal functions
###### Abstract.
In this article, we address endpoint issues for the bilinear spherical maximal functions. We obtain borderline restricted weak type estimates for the well studied bilinear spherical maximal function
\[\mathfrak{M}(f,g)(x):=\sup_{t>0}\left|\int_{S^{2d-1}}f(x-ty_{1})g(x-ty_{2})\;d \sigma(y_{1},y_{2})\right|,\]
in dimensions \(d=1,2\) and as an application, we deduce sharp endpoint estimates for the multilinear spherical maximal function. We also prove sharp \(L^{p}-\)estimates for the local spherical maximal function in all dimensions \(d\geq 2\), thus resolving a question left open in the work of Jeong and Lee ([https://doi.org/10.1016/j.jfa.2020.108629](https://doi.org/10.1016/j.jfa.2020.108629)). We further study necessary conditions for the bilinear maximal function,
\[\mathcal{M}(f,g)(x)=\sup_{t>0}\left|\int_{S^{1}}f(x-ty)g(x+ty)\;d\sigma(y)\right|\]
to be bounded from \(L^{p_{1}}(\mathbb{R}^{2})\times L^{p_{2}}(\mathbb{R}^{2})\) to \(L^{p}(\mathbb{R}^{2})\) and prove sharp results for a linearized version of \(\mathcal{M}\).
2010 Mathematics Subject Classification: Primary 42B15, 42B25
###### Contents
* 1 Introduction and main results
* 2 Bilinear spherical maximal functions \(\mathfrak{M}\)
* 3 Linear intermediary spherical functions: Proof of Theorem 1.7
* 4 Proof of Theorems 1.12 and 1.13
* 5 Necessary conditions for \(\mathcal{M}\) in dimensions \(d\geq 2\)
## 1. Introduction and main results
Let \(\sigma\) be the surface measure on the \((d-1)\)-dimensional unit sphere \(\mathbb{S}^{d-1}\). The study of \(L^{p}-\)improving properties of the spherical averages defined as
\[A_{t}f(x)=\int_{\mathbb{S}^{d-1}}f(x-ty)\;d\sigma(y),\]
was initiated by Littman [14] and Strichartz [15]. Consider the linear spherical maximal function defined as
\[A_{*}f(x)=\sup_{t>0}\left|A_{t}f(x)\right|.\]
For \(p>\frac{d}{d-1}\), the \(L^{p}-\)boundedness of \(A_{*}\) was established by Stein [16] for dimension \(d\geq 3\) and Bourgain [1] in dimension two. At the endpoint \(p=\frac{d}{d-1}\), the restricted weak type inequality was proved by Bourgain for dimensions \(d\geq 3\). To study the case of dimension two, Oberlin [1] considered the following linearized version of the spherical maximal operator
\[\widetilde{A}f(x)=\int_{\mathbb{S}^{d-1}}f(x-|x|y)\;d\sigma(y),\]
and showed that \(\widetilde{A}\) maps \(L^{2,1}(\mathbb{R}^{2})\) to \(L^{2,\infty}(\mathbb{R}^{2})\) boundedly. However, using a modification of the Kakeya construction from [16], it was finally shown in [15] that the spherical maximal operator \(A_{*}\) does not map \(L^{2,1}(\mathbb{R}^{2})\) to \(L^{2,\infty}(\mathbb{R}^{2})\).
The local spherical maximal operator \(A_{loc}f(x)=\sup_{1\leq t\leq 2}|A_{t}f(x)|\) has also been widely studied. Schlag and Sogge [1, 17] showed that \(A_{loc}\) is bounded from \(L^{p}(\mathbb{R}^{d})\) to \(L^{q}(\mathbb{R}^{d})\) when \(d\geq 2\) and \(\left(\frac{1}{p},\frac{1}{q}\right)\) lies in the interior of the closed convex hull generated by the points \((0,0),\;\left(\frac{d-1}{d},\frac{d-1}{d}\right),\;\left(\frac{d-1}{d},\frac{1}{d}\right)\) and \(\left(\frac{d^{2}-d}{d^{2}+1},\frac{d-1}{d^{2}+1}\right)\), and unbounded if \(\left(\frac{1}{p},\frac{1}{q}\right)\) lies outside the closed convex hull. Moreover, boundedness on the boundary of the hull was resolved by Lee [11] for all dimensions \(d\geq 2\). The sparse bounds for the operator \(A_{loc}\) were obtained by Lacey [1]. We also refer to [1] for sparse domination of the corresponding \(r-\)variation operators related to the family of spherical averages.
### Part I
In this part we deal with the well-studied bilinear variant of the linear spherical maximal function defined as
\[\mathfrak{M}(f,g)(x):=\sup_{t>0}\left|\int_{\mathbb{S}^{2d-1}}f(x-ty_{1})g(x- ty_{2})\;d\sigma(y_{1},y_{2})\right|.\]
This operator first appeared in the work [1]. The optimal strong type bounds were obtained in [12] by a method of "slicing". The authors showed that for \(d\geq 2\), the operator \(\mathfrak{M}\) maps \(L^{p_{1}}(\mathbb{R}^{d})\times L^{p_{2}}(\mathbb{R}^{d})\) to \(L^{p}(\mathbb{R}^{d})\) if and only if \(p>\frac{d}{2d-1}\) and \(\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p}\) except the points \((1,\infty,1)\) and \((\infty,1,1)\). Moreover, the restricted weak type estimate \(\mathfrak{M}:L^{p_{1},1}(\mathbb{R}^{d})\times L^{p_{2},1}(\mathbb{R}^{d}) \to L^{\frac{d}{2d-1},\infty}(\mathbb{R}^{d})\) holds for dimensions \(d\geq 3\); however, the endpoint boundedness in dimension two remained open.
Now, we discuss the case of dimension one. The boundedness of \(\mathfrak{M}\), when \(d=1\), was studied in [1, 2]. They proved that \(\mathfrak{M}:L^{p_{1}}(\mathbb{R})\times L^{p_{2}}(\mathbb{R})\to L^{p}(\mathbb{R})\) for \(p_{1},p_{2}>2\) and \(\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p}\). Moreover, the weak type estimates fail on the lines \(p_{1}=2\) and \(p_{2}=2\).
The boundedness of the lacunary analogue of \(\mathfrak{M}\) was studied in [1, 10] in dimensions \(d\geq 2\) and [1] in dimension one. Also see [1] for boundedness of bilinear maximal functions defined with degenerate surfaces.
Our first main result addresses the restricted weak type bounds for \(\mathfrak{M}\) at the respective endpoints in dimensions one and two. We have the following,
**Theorem 1.1**.: _Let \(1\leq p_{1},p_{2}\leq\infty\) with \(\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p}\). The following is true,_
1. _For_ \(1\leq p\leq\infty\)_, and either_ \(p_{1}=2\) _or_ \(p_{2}=2\)_, we have_ \[\mathfrak{M}:L^{p_{1},1}(\mathbb{R})\times L^{p_{2},1}(\mathbb{R})\to L^{p, \infty}(\mathbb{R}).\]
2. _For_ \(1<p_{1},p_{2}<2\)_, we have_ \[\mathfrak{M}:L^{p_{1},1}(\mathbb{R}^{2})\times L^{p_{2},1}(\mathbb{R}^{2})\to L ^{\frac{2}{3},\infty}(\mathbb{R}^{2}).\]
We now discuss some applications to multilinear spherical averages. Let \(m\geq 2\) and \(f_{1},f_{2},\ldots,f_{m}\in\mathcal{S}(\mathbb{R})\). Consider the multilinear spherical maximal function defined by
\[\mathcal{S}^{m}(f_{1},f_{2},\ldots,f_{m})(x):=\sup_{t>0}\Big{|}\int_{\mathbb{S }^{m-1}}\prod_{i=1}^{m}f_{i}(x-ty_{i})\ d\sigma_{m-1}(\vec{y})\Big{|},\]
where \(d\sigma_{m-1}(\vec{y})\) is the normalized surface measure on the sphere \(\mathbb{S}^{m-1}\). The corresponding single scale averaging operator was studied by Oberlin [1] and Bak and Shim [1]. Later, Shrivastava and Shuin [1] proved a complete \(L^{p_{1}}(\mathbb{R})\times L^{p_{2}}(\mathbb{R})\times\cdots\times L^{p_{m}}(\mathbb{R})\to L^{p}(\mathbb{R})\) boundedness for Banach indices satisfying \(\frac{1}{p}\leq\sum_{i=1}^{m}\frac{1}{p_{i}}\), \(1\leq p_{i},p\leq\infty\).

Figure 1. The figure denotes the region of \(L^{p_{1},1}\times L^{p_{2},1}\to L^{p,\infty}\) boundedness of \(\mathcal{M}\) on the line segments \(U_{1}U_{2}\cup U_{2}U_{3}\) and \(V_{1}V_{2}\) in dimensions one and two respectively.

Very recently, Dosidis and Ramos investigated the boundedness of the maximal function \(\mathcal{S}^{m}\). They proved the following,
**Theorem 1.2** ([16]).: _Let \(m\geq 2\), \(1\leq p_{i}\leq\infty\) for \(i=1,2,\ldots,m\) and \(\frac{1}{p}=\sum_{i=1}^{m}\frac{1}{p_{i}}\). Then there exists \(C>0\) such that the following boundedness holds_
\[\|\mathcal{S}^{m}(f_{1},f_{2},\ldots,f_{m})\|_{L^{p}(\mathbb{R})}\leq C\prod_{ i=1}^{m}\|f_{i}\|_{L^{p_{i}}(\mathbb{R})}, \tag{1.1}\]
_if and only if_
_(1) \(\frac{1}{p}=\sum_{i=1}^{m}\frac{1}{p_{i}}<m-1\),_
_(2) for every \(i=1,2,\ldots,m\), \(\sum_{j=1,j\neq i}^{m}\frac{1}{p_{j}}<m-\frac{3}{2}\),_
_(3) \((\frac{1}{p_{1}},\frac{1}{p_{2}},\ldots,\frac{1}{p_{m}})\notin\{0,1\}^{m} \setminus\{(0,0,\ldots,0)\}\)._
_If \((\frac{1}{p_{1}},\frac{1}{p_{2}},\ldots,\frac{1}{p_{m}})\in\{0,1\}^{m} \setminus\{(0,0,\ldots,0)\}\), then weak type estimate holds, i.e._
\[\|\mathcal{S}^{m}(f_{1},f_{2},\ldots,f_{m})\|_{L^{p,\infty}(\mathbb{R})}\leq C \prod_{i=1}^{m}\|f_{i}\|_{L^{p_{i}}(\mathbb{R})}\]
_if and only if \((1)\) and \((2)\) both hold. Moreover, if for some \(i\in\{1,2,\ldots,m\}\), \(\sum_{j=1,j\neq i}^{m}\frac{1}{p_{j}}=m-\frac{3}{2}\), then the estimate \(\mathcal{S}^{m}:L^{p_{1}}(\mathbb{R})\times\cdots\times L^{p_{m}}(\mathbb{R}) \to L^{p,\infty}(\mathbb{R})\) does not hold._
Our next result concerns the restricted weak-type estimates for \(\mathcal{S}^{m}\) on the boundary points of the convex hull defined in Theorem 1.2. Namely, we have
**Corollary 1.3**.: _Let \(m\geq 2\), \(1\leq p_{i}\leq\infty\) for \(i=1,2,\ldots,m\) and \(\frac{1}{p}=\sum_{i=1}^{m}\frac{1}{p_{i}}\). Then there exists \(C>0\) such that_
\[\|\mathcal{S}^{m}(f_{1},f_{2},\ldots,f_{m})\|_{L^{p,\infty}(\mathbb{R})}\leq C \prod_{i=1}^{m}\|f_{i}\|_{L^{p_{i},1}(\mathbb{R})} \tag{1.2}\]
_holds true when \((\frac{1}{p_{1}},\frac{1}{p_{2}},\ldots,\frac{1}{p_{m}})\) belongs to the following closed line segments_
\[L_{k,j}=\left\{\left(\frac{1}{p_{1}},\frac{1}{p_{2}},\ldots,\frac{1}{p_{m}} \right):0\leq\frac{1}{p_{k}}\leq\frac{1}{2},\frac{1}{p_{j}}=\frac{1}{2},\frac {1}{p_{i}}=1,\ \forall\ i\neq k,j\right\},\ \forall\ j,k\in\{1,2,\ldots,m\}.\]
### Local spherical maximal function
In [10], Jeong and Lee also studied the improving estimates for the local bilinear maximal operator,
\[\mathfrak{M}_{loc}(f,g)(x):=\sup_{1<t<2}\left|\int_{\mathbb{S}^{2d-1}}f(x-ty _{1})g(x-ty_{2})\ d\sigma(y_{1},y_{2})\right|.\]
They proved the following,
**Theorem 1.4** ([10]).: _Let \(d\geq 2\), \(1\leq p_{1},p_{2}\leq\infty\), and \(\frac{1}{2}<p<\infty\). Then the estimate_
\[\|\mathfrak{M}_{loc}\|_{L^{p_{1}}(\mathbb{R}^{d})\times L^{p_{2}}(\mathbb{R}^ {d})\to L^{p}(\mathbb{R}^{d})}\lesssim 1, \tag{1.3}\]
_holds for \(\frac{1}{p}\leq\frac{1}{p_{1}}+\frac{1}{p_{2}}<\min\left\{\frac{2d-1}{d},1+\frac{d} {p},\frac{1}{p}+\frac{2(d-1)}{d}\right\}\). Conversely (1.3) holds only if \(\frac{1}{p}\leq\frac{1}{p_{1}}+\frac{1}{p_{2}}\leq\min\left\{\frac{2d-1}{d},1+ \frac{d}{p}\right\}\). Furthermore, for \(p=\infty\), (1.3) holds if and only if \(0\leq\frac{1}{p_{1}}+\frac{1}{p_{2}}\leq 1\)._
The above range is sharp for the strong type boundedness for \(0<p<d\) and \(\frac{2(d-1)}{d-2}<p<\infty\). Using a Knapp type example, the authors in [3] showed that the condition \(\frac{1}{p_{1}}+\frac{1}{p_{2}}\leq\frac{d-1}{(d+1)p}+\frac{2d}{d+1}\) is necessary for the strong \((p_{1},p_{2},p)-\)boundedness of \(\mathfrak{M}_{loc}\); however, the sufficiency of the condition was left open. Our next result provides sharp bounds for \(\mathfrak{M}_{loc}\), thus resolving the sufficiency issue in all dimensions \(d\geq 2\). To state our results, we require a few notations. We define the line segments \(\ell_{1}^{d},\ell_{2}^{d},\ell_{3}^{d}\) and the quadrilateral \(\mathscr{Q}^{d}\) as follows:
\[\ell_{1}^{d} =\left\{(x,y,z)\in[0,1]^{2}\times[0,2):z=x+y=\frac{2d-1}{d} \right\},\] \[\ell_{2}^{d} =\left\{(x,y,z)\in[0,1]^{2}\times[0,2):x+y=\frac{2d-1}{d}=\frac{d -1}{(d+1)}z+\frac{2d}{d+1}\right\},\] \[\ell_{3}^{d} =\left\{(x,y,z)\in[0,1]^{2}\times[0,2):x+y=\frac{d-1}{(d+1)}z+ \frac{2d}{d+1}=1+dz\right\},\] \[\mathscr{Q}^{d} =\left\{(x,y,z)\in[0,1]^{2}\times[0,2):z<x+y=\frac{2d-1}{d}<\frac {d-1}{(d+1)}z+\frac{2d}{d+1}\right\}.\]
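For orientation, here is the specialization of these conditions to \(d=2\) (an illustration only; it contains no information beyond the definitions above):
\[\ell_{1}^{2}:\;z=x+y=\tfrac{3}{2},\qquad\ell_{2}^{2}:\;x+y=\tfrac{3}{2}=\tfrac{z}{3}+\tfrac{4}{3},\qquad\ell_{3}^{2}:\;x+y=\tfrac{z}{3}+\tfrac{4}{3}=1+2z,\]
so, for instance, \(\ell_{2}^{2}\) is the set of points with \(\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{3}{2}\) and \(\frac{1}{p}=\frac{1}{2}\).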
**Theorem 1.5**.: _Let \(d\geq 2\), \(1\leq p_{1},p_{2}\leq\infty\), and \(\frac{1}{2}<p<\infty\). Then the following are true,_
1. _The estimate (_1.3_) holds for_ \(\frac{1}{p}\leq\frac{1}{p_{1}}+\frac{1}{p_{2}}\leq\min\left\{\frac{2d-1}{d},1+ \frac{d}{p},\frac{d-1}{(d+1)p}+\frac{2d}{d+1}\right\}\) _and_ \(\left(\frac{1}{p_{1}},\frac{1}{p_{2}},\frac{1}{p}\right)\notin\ell_{1}^{d} \cup\ell_{2}^{d}\cup\ell_{3}^{d}\cup\mathscr{Q}^{d}\)_._
2. _The restricted weak type inequality,_ (1.4) \[\|\mathfrak{M}_{loc}\|_{L^{p_{1},1}(\mathbb{R}^{d})\times L^{p_{2},1}(\mathbb{ R}^{d})\to L^{p,\infty}(\mathbb{R}^{d})}\lesssim 1,\] _holds when_ * \(d\geq 3\) _and_ \(\left(\frac{1}{p_{1}},\frac{1}{p_{2}},\frac{1}{p}\right)\in\ell_{1}^{d}\cup \ell_{2}^{d}\cup\ell_{3}^{d}\)_,_ * \(d=2\)_, and_ \(\left(\frac{1}{p_{1}},\frac{1}{p_{2}},\frac{1}{p}\right)\in\ell_{1}^{2}\cup \ell_{2}^{2}\cup\ell_{3}^{2}\setminus\left\{(1,\frac{1}{2},\frac{3}{2}),( \frac{1}{2},1,\frac{3}{2})\right\}\)_._
3. _The restricted strong type inequality,_ (1.5) \[\|\mathfrak{M}_{loc}\|_{L^{p_{1},1}(\mathbb{R}^{d})\times L^{p_{2},1}(\mathbb{ R}^{d})\to L^{p}(\mathbb{R}^{d})}\lesssim 1,\] _holds when,_ * \(d\geq 3\) _and_ \(\left(\frac{1}{p_{1}},\frac{1}{p_{2}},\frac{1}{p}\right)\in\mathscr{Q}^{d}\)_,_ * \(d=2\)_, and_ \(\left(\frac{1}{p_{1}},\frac{1}{p_{2}},\frac{1}{p}\right)\in\mathscr{Q}^{2}\)_,_ \(p_{1}\neq 1\) _and_ \(p_{2}\neq 1\)
**Remark 1.6**.: _In [1], the authors obtained sparse domination of bilinear spherical maximal function in the range covered in Theorem 1.4. We do not pursue a sparse domination of the bilinear spherical maximal function for the improved range obtained in Theorem 1.5 in this paper but aim to obtain it elsewhere._
To prove Theorems 1.1 (1) and 1.5, we will rely on a modification of the slicing argument in [10]. In contrast to the domination of \(\mathfrak{M}\) by the product of the Hardy-Littlewood and linear spherical maximal functions, we will dominate \(\mathfrak{M}\) by the intermediate averaging operators defined below,
\[\mathfrak{A}^{r}f(x)=\|A_{t}f(x)\|_{L^{r}([1,2],t^{d-1}dt)},\ 1\leq r\leq\infty.\]
Observe that \(\mathfrak{A}^{1}\) and \(\mathfrak{A}^{\infty}\) are the local Hardy-Littlewood and local spherical maximal functions respectively. We also define the maximal operator \(\mathfrak{A}^{r}_{*}\) as follows,
\[\mathfrak{A}^{r}_{*}f(x)=\sup_{k\in\mathbb{Z}}\|A_{2^{k}t}f(x)\|_{L^{r}([1,2],t^{d-1}dt)},\ 1\leq r\leq\infty.\]
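Let us record a simple observation (not needed verbatim later, but it underlies the role of the parameter \(r\)): by Holder's inequality in the \(t\)-variable,
\[\mathfrak{A}^{r}f(x)\leq\left(\mathfrak{A}^{1}f(x)\right)^{\frac{1}{r}}\left(\mathfrak{A}^{\infty}f(x)\right)^{\frac{1}{r^{\prime}}},\qquad\mathfrak{A}^{r}_{*}f(x)\leq\left(\mathfrak{A}^{1}_{*}f(x)\right)^{\frac{1}{r}}\left(\mathfrak{A}^{\infty}_{*}f(x)\right)^{\frac{1}{r^{\prime}}},\]
so the operators \(\mathfrak{A}^{r}\) and \(\mathfrak{A}^{r}_{*}\) interpolate pointwise between the corresponding Hardy-Littlewood type and spherical maximal functions.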
We set \(A=\left(\frac{1}{r},0\right),\ \ P=\left(\frac{d+1+r(d^{2}-d)}{r(d^{2}+1)},\frac{d-1}{r^{\prime}(d^{2}+1)}\right),\ \ Q=\left(\frac{rd-r+1}{rd},\frac{1}{r^{\prime}d}\right),\) and \(R=\left(\frac{rd-r+1}{rd},\frac{rd-r+1}{rd}\right)\). We denote by \(QR\) the open line segment joining the points \(Q\) and \(R\). We have the following boundedness for \(\mathfrak{A}^{r}\) and \(\mathfrak{A}^{r}_{*}\),
**Theorem 1.7**.: _Let \(d\geq 2\) and \(1\leq p,q\leq\infty\). The following holds true,_
1. _For_ \(1\leq r\leq\infty\) _and_ \(\frac{1}{q}\leq\frac{1}{p}\leq\min\{\frac{d}{q}+\frac{1}{r},\frac{d-1}{d}+\frac{ 1}{dr},\frac{d-1}{(d+1)q}+\frac{2}{(d+1)r}+\frac{d-1}{d+1}\}\) _and_ \(\left(\frac{1}{p},\frac{1}{q}\right)\notin\{P,Q,R\}\cup QR\)_, we have_ (1.6) \[\|\mathfrak{A}^{r}\|_{L^{p}(\mathbb{R}^{d})\to L^{q}(\mathbb{R}^{d})} \lesssim 1.\] _Moreover for_ \(\left(\frac{1}{p},\frac{1}{q}\right)\in\{P,Q,R\}\)_, we have the restricted weak type inequality,_ \[\|\mathfrak{A}^{r}\|_{L^{p,1}(\mathbb{R}^{d})\to L^{q,\infty}(\mathbb{R}^{d})} \lesssim 1,\] _and for_ \(\left(\frac{1}{p},\frac{1}{q}\right)\in QR\)_, we have the restricted strong type inequality_ \[\|\mathfrak{A}^{r}\|_{L^{p,1}(\mathbb{R}^{d})\to L^{q}(\mathbb{R}^{d})} \lesssim 1.\]
2. _Conversely, the estimate (_1.6_) holds only if_ \(\frac{1}{q}\leq\frac{1}{p}\leq\min\{\frac{d}{q}+\frac{1}{r},\frac{d-1}{d}+\frac {1}{dr},\frac{d-1}{(d+1)q}+\frac{2}{(d+1)r}+\frac{d-1}{d+1}\}\)_._
3. _For_ \(1\leq r\leq\infty\)_, the operator_ \(\mathfrak{A}_{*}^{r}\) _maps_ \(L^{p}(\mathbb{R}^{d})\) _to_ \(L^{p}(\mathbb{R}^{d})\) _for_ \(p>\frac{dr}{dr-r+1}\)_. Moreover_ \(\mathfrak{A}_{*}^{r}\) _is of restricted weak type_ \(\left(\frac{dr}{dr-r+1},\frac{dr}{dr-r+1}\right)\) _for_ \(1\leq r<\infty\)_._
**Remark 1.8**.: _It was pointed out in [BOR\({}^{+}\)22] that the local spherical maximal function does not map \(L^{p}(\mathbb{R}^{d})\) to \(L^{q}(\mathbb{R}^{d})\) when \(d\geq 3\) and \(\left(\frac{1}{p},\frac{1}{q}\right)\) lies in the line segment joining the points \(\left(\frac{d-1}{d},\frac{d-1}{d}\right)\) and \(\left(\frac{d-1}{d},\frac{1}{d}\right)\). An analogous result also holds for the averaging operators \(\mathfrak{A}^{r}\) when \(d\geq 2\), i.e. a strong type boundedness does not hold for \(\mathfrak{A}^{r}\) when \(\left(\frac{1}{p},\frac{1}{q}\right)\) lies in the line segment \(QR\); this is a corollary of Proposition 3.8._
We prove Theorem 1.1, Corollary 1.3 and Theorem 1.5 in Section 2. The proof of Theorem 1.7 is contained in Section 3.
### Part II
Recently, for \(0<\theta<2\pi\), Greenleaf et al [14] studied the bilinear spherical average
\[\mathcal{A}_{t}^{\theta}(f,g)(x)=\int_{\mathbb{S}^{1}}f(x-ty)g(x-t\Theta y)\;d \sigma(y),\;t>0,\]
where \(\Theta\) denotes the counter-clockwise rotation by an angle \(\theta\). They proved the following,
**Theorem 1.9** ([14]).: _Let \(0<\theta<2\pi\) and \(1\leq p_{1},p_{2},p\leq\infty\). The operator \(\mathcal{A}_{1}^{\theta}\) is bounded from \(L^{p_{1}}(\mathbb{R}^{2})\times L^{p_{2}}(\mathbb{R}^{2})\) to \(L^{p}(\mathbb{R}^{2})\) if_
1. \(\theta\neq\pi\) _and_ \(\left(\frac{1}{p_{1}},\frac{1}{p_{2}},\frac{1}{p}\right)\) _lies in the closed convex hull generated by the vertices_ \((0,0,0),\;\left(\frac{2}{3},\frac{2}{3},1\right),\;\left(0,\frac{2}{3},\frac{1 }{3}\right),\;\left(\frac{2}{3},0,\frac{1}{3}\right),\;(1,0,1),\;(0,1,1),\) _and_ \(\left(\frac{1}{2},\frac{1}{2},\frac{1}{2}\right)\)_, or_
2. \(\theta=\pi\) _and_ \(\left(\frac{1}{p_{1}},\frac{1}{p_{2}},\frac{1}{p}\right)\) _lies in the closed convex hull generated by the vertices_ \((0,0,0),\;\left(\frac{2}{3},\frac{2}{3},1\right),\;\left(0,\frac{2}{3},\frac{1 }{3}\right),\;\left(\frac{2}{3},0,\frac{1}{3}\right),\;(1,0,1),\) _and_ \((0,1,1)\)_._
The boundedness of \(\mathcal{A}_{1}^{\theta}\) is sharp in the Holder range of indices mentioned in the above theorem, i.e. when \(p\geq 1\). However, it is unknown if the boundedness holds outside the closed convex hull in the above theorem for \(p<1\).
**Remark 1.10**.: _In [14], the authors showed that_
\[\frac{3}{p_{1}}+\frac{3}{p_{2}}\leq 1+\frac{3}{p}, \tag{1.7}\]
_is a necessary condition for the operator \(\mathcal{A}_{1}^{\pi}\) to be bounded from \(L^{p_{1}}(\mathbb{R}^{2})\times L^{p_{2}}(\mathbb{R}^{2})\) to \(L^{p}(\mathbb{R}^{2})\). In Section 5, we will provide a different example based on functions of product type that works for dimensions \(d\geq 2\). In particular, we show that condition (1.7) is in fact necessary for boundedness of \(\mathcal{A}_{1}^{\pi}\) even when the functions are restricted to the space of functions of product type. We refer to [13] for analogous results for the Kakeya maximal function acting on functions of product type._
The lacunary maximal function \(\mathcal{M}_{lac}^{\theta}(f,g)(x)=\sup_{k\in\mathbb{Z}}|\mathcal{A}_{2^{k}}^{\theta}(f,g)(x)|\) was studied in [15], where they obtained the following boundedness for \(\mathcal{M}_{lac}^{\theta}\),
**Theorem 1.11** ([15]).: _Let \(0<\theta<2\pi\) and \(1<p_{1},p_{2}\leq\infty\), with \(\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p}\). Then the following estimate holds_
\[\|\mathcal{M}_{lac}^{\theta}(f,g)\|_{L^{p}(\mathbb{R}^{2})}\lesssim\|f\|_{L^{ p_{1}}(\mathbb{R}^{2})}\|g\|_{L^{p_{2}}(\mathbb{R}^{2})}\]
_for \(p>\frac{3}{4}\)._
In this article, we are concerned with the study of the full maximal function given by,
\[\mathcal{M}^{\theta}(f,g)(x)=\sup_{t>0}|\mathcal{A}_{t}^{\theta}(f,g)(x)|.\]
We note that \(\mathcal{M}^{\theta}\) is bounded from \(L^{p_{1}}(\mathbb{R}^{2})\times L^{p_{2}}(\mathbb{R}^{2})\) to \(L^{p}(\mathbb{R}^{2})\) for \(2<p\leq\infty\) with \(\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p}\). Indeed, the boundedness is a consequence of bilinear interpolation, the linear estimates \(A_{*}:L^{p}(\mathbb{R}^{2})\to L^{p}(\mathbb{R}^{2}),\;p>2\) and the pointwise inequality
\[\mathcal{M}^{\theta}(f,g)(x)\leq\min\{\|f\|_{\infty}A_{*}g(x),\|g\|_{\infty}A_ {*}f(x)\}.\]
Our first main result concerns the failure of restricted weak type inequalities for the maximal function \(\mathcal{M}^{\theta}\) at the endpoint boundaries \(p_{1}=2\) and \(p_{2}=2\) in dimension two. We have,
**Theorem 1.12**.: _Let \(1<p_{1},p_{2}\leq\infty,\ \frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p}\) and \(0<\theta<2\pi\). The operator \(\mathcal{M}^{\theta}\) does not map \(L^{p_{1},1}(\mathbb{R}^{2})\times L^{p_{2},1}(\mathbb{R}^{2})\) to \(L^{p,\infty}(\mathbb{R}^{2})\) if \(p_{1}\leq 2\) or \(p_{2}\leq 2\). In particular, \(\mathcal{M}^{\theta}\) is not of restricted weak type \((2,2,1)\)._
Currently, we do not have a positive result for the operator \(\mathcal{M}^{\theta}\) in the local \(L^{2}\) range: \(p_{1},p_{2}>2\) and \(1<p\leq 2\). However, we will prove sharp boundedness results for the linearized version of \(\mathcal{M}^{\theta}\) defined as
\[\widetilde{\mathcal{A}}^{\theta}(f,g)(x)=\int_{\mathbb{S}^{1}}f(x-|x|y)g(x-| x|\Theta y)\;d\sigma(y).\]
More precisely, we have
**Theorem 1.13**.: _Let \(2<p_{1},p_{2}\leq\infty\) with \(\frac{1}{p_{1}}+\frac{1}{p_{2}}=\frac{1}{p}\). Then_
\[\|\widetilde{\mathcal{A}}^{\theta}(f,g)\|_{L^{p}(\mathbb{R}^{2})}\lesssim\|f \|_{L^{p_{1}}(\mathbb{R}^{2})}\|g\|_{L^{p_{2}}(\mathbb{R}^{2})}.\]
_Moreover, for \(p_{1}=2\) or \(p_{2}=2\) we have the following restricted weak type estimates,_
\[\|\widetilde{\mathcal{A}}^{\theta}(f,g)\|_{L^{p,\infty}(\mathbb{R}^{2})} \lesssim\|f\|_{L^{p_{1},1}(\mathbb{R}^{2})}\|g\|_{L^{p_{2},1}(\mathbb{R}^{2})}.\]
The proof is based on an appropriate change of variable in the polar co-ordinates and a multilinear version of Bourgain's interpolation trick (Lemma 3.6).
**Remark 1.14**.: _We would like to remark that using our method of proof, one can recover the result of Oberlin [10] that the corresponding linear spherical operator \(\widetilde{A}\) is also of restricted weak type \((2,2)\). This simplifies the proof of Oberlin [10]._
The proofs of Theorem 1.12 and Theorem 1.13 are contained in Section 4. In Section 5, we also discuss some necessary conditions for \(L^{p}-\)boundedness of the higher dimensional version of \(\mathcal{M}^{\theta}\).
## 2. Bilinear spherical maximal functions \(\mathfrak{M}\): Proofs of Theorem 1.1, Corollary 1.3, and Theorem 1.5
### Proof of Theorem 1.1 (1):
It is enough to show \(\mathfrak{M}:L^{2,1}(\mathbb{R})\times L^{2,1}(\mathbb{R})\to L^{1,\infty}(\mathbb{R})\). Observe that due to symmetry, it is enough to deal with the integral over the arc from \(\theta=0\) to \(\theta=\frac{\pi}{4}\) instead of the integral over \(\mathbb{S}^{1}\). By a change of variable we have,
\[\mathfrak{M}(f,g)(x)\lesssim\sup_{t>0}\int_{y=0}^{\frac{1}{\sqrt{2}}}f(x-ty)g( x-t\sqrt{1-y^{2}})\;\frac{dy}{\sqrt{1-y^{2}}}+\text{ similar terms}\]
By decomposing the interval \([0,\frac{1}{\sqrt{2}}]\) into dyadic annuli, we have
\[\mathfrak{M}(f,g)(x)\lesssim\sum_{k=1}^{\infty}T_{k}(f,g)(x),\]
where the operator \(T_{k}\) is defined by
\[T_{k}(f,g)(x): =\sup_{t>0}\int_{2^{-k-1}}^{2^{-k}}|f(x-ty)||g(x-t\sqrt{1-y^{2}})|\;dy\]
We have the following pointwise inequality,
\[T_{k}(f,g)(x)\lesssim\min\;\left\{2^{\frac{k}{3}}M_{3}f(x)M_{\frac{3}{2}}g(x),2 ^{-\frac{k}{3}}M_{\frac{3}{2}}f(x)M_{3}g(x)\right\}. \tag{2.1}\]
Indeed, by Holder's inequality with exponents \(3\) and \(\frac{3}{2}\), we get
\[T_{k}(f,g)(x)\] \[\lesssim \sup_{t>0}\left(\int_{2^{-k-1}}^{2^{-k}}|f(x-ty)|^{3}\;dy\right)^{\frac{1}{3}}\sup_{t>0}\left(\int_{2^{-k-1}}^{2^{-k}}|g(x-t\sqrt{1-y^{2}})|^{\frac{3}{2}}\;dy\right)^{\frac{2}{3}}\] \[= 2^{-\frac{k}{3}}\sup_{t>0}\left(\frac{1}{2^{-k}}\int_{2^{-k-1}}^{2^{-k}}|f(x-ty)|^{3}\;dy\right)^{\frac{1}{3}}\sup_{t>0}\left(\int_{\sqrt{1-2^{-2k}}}^{\sqrt{1-2^{-2k-2}}}|g(x-tz)|^{\frac{3}{2}}\;\frac{zdz}{\sqrt{1-z^{2}}}\right)^{\frac{2}{3}}\] \[\lesssim 2^{\frac{k}{3}}M_{3}f(x)M_{\frac{3}{2}}g(x).\]
The other inequality in (2.1) follows similarly. Therefore, for a fixed \(N\in\mathbb{N}\), we have
\[\mathfrak{M}(f,g)(x) \lesssim\sum_{k=1}^{N}2^{\frac{k}{3}}M_{3}f(x)M_{\frac{3}{2}}g(x)+\sum_{k=N+1}^{\infty}2^{-\frac{k}{3}}M_{\frac{3}{2}}f(x)M_{3}g(x)\] \[\lesssim 2^{\frac{N}{3}}M_{3}f(x)M_{\frac{3}{2}}g(x)+2^{-\frac{N}{3}}M_{\frac{3}{2}}f(x)M_{3}g(x).\]
Hence using the weak type bounds \(M_{p}:L^{p}\to L^{p,\infty},\;p\geq 1\), we obtain
\[|\{x\in\mathbb{R}:\;\mathfrak{M}(\chi_{F},\chi_{G})(x)>\lambda\}| \lesssim\frac{1}{\lambda}\left(2^{\frac{N}{3}}|F|^{\frac{1}{3}}|G|^{\frac{2}{3}}+2^{-\frac{N}{3}}|F|^{\frac{2}{3}}|G|^{\frac{1}{3}}\right)\] \[\lesssim\frac{1}{\lambda}|F|^{\frac{1}{2}}|G|^{\frac{1}{2}},\]
where we have chosen \(N=3\log_{2}(|F|^{\frac{1}{6}}|G|^{-\frac{1}{6}})\).
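Indeed, with this choice of \(N\) (a routine check, taking the nearest integer only changes the constant) we have \(2^{\frac{N}{3}}=|F|^{\frac{1}{6}}|G|^{-\frac{1}{6}}\), so the two terms on the right-hand side coincide,
\[2^{\frac{N}{3}}|F|^{\frac{1}{3}}|G|^{\frac{2}{3}}=2^{-\frac{N}{3}}|F|^{\frac{2}{3}}|G|^{\frac{1}{3}}=|F|^{\frac{1}{2}}|G|^{\frac{1}{2}},\]
which yields the claimed restricted weak type \((2,2,1)\) bound.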
### Proof of Corollary 1.3:
In order to prove this theorem we invoke the slicing argument from [13]. Applying the slicing argument we get, for \(m\geq 3\),
\[\mathcal{S}^{m}(f_{1},f_{2},\ldots,f_{m})(x) = \sup_{t>0}\Big{|}\int_{B^{m-2}(0,1)}\prod_{i=1}^{m-2}f_{i}(x-ty_{i})\] \[\int_{r_{y}\mathbb{S}^{1}}f_{m-1}(x-ty_{m-1})f_{m}(x-ty_{m})d \sigma_{r_{y}}d\vec{y}\Big{|},\]
where \(r_{y}=\sqrt{1-|\tilde{y}|^{2}}\), \(\tilde{y}=(y_{1},y_{2},\ldots,y_{m-2})\) and \(d\vec{y}=\prod_{i=1}^{m-2}dy_{i}\). Now, applying a change of variable we get
\[\mathcal{S}^{m}(f_{1},f_{2},\ldots,f_{m})(x)\]
\[=\sup_{t>0}\Big{|}\int_{B^{m-2}(0,1)}\prod_{i=1}^{m-2}f_{i}(x-ty_{i}) \int_{\mathbb{S}^{1}}f_{m-1}(x-tr_{y}y_{m-1})f_{m}(x-tr_{y}y_{m})d\sigma d\vec{y} \Big{|}\] \[\lesssim\prod_{i=1}^{m-2}Mf_{i}(x)\mathfrak{M}(f_{m-1},f_{m})(x).\]
Here, \(M\) denotes the Hardy-Littlewood maximal function. Note that due to the symmetry of the sphere \(\mathbb{S}^{m-1}\) we can interchange the roles of the functions \(f_{i}\) for \(i=1,2,\ldots,m\) and deduce the following inequality
\[\mathcal{S}^{m}(f_{1},f_{2},\ldots,f_{m})(x)\lesssim\mathfrak{M}(f_{j},f_{k}) (x)\prod_{i=1,i\neq j,k}^{m}Mf_{i}(x), \tag{2.2}\]
for any \(j,k\in\{1,2,\ldots,m\}\). Therefore, using the estimates of Theorem 1.1 (1) we get the desired restricted weak-type estimates.
### Proof of Theorem 1.1 (2):
We claim that the following holds,
\[\mathfrak{M}(f,g)(x)\lesssim\mathfrak{A}_{*}^{r}(f)(x)\mathfrak{A}_{*}^{r^{ \prime}}(g)(x). \tag{2.3}\]
Theorem 1.1 (2) follows at once from the above claim along with Holder's inequality and Theorem 1.7 (3). We now prove the claim. Indeed, an application of the slicing argument implies that
\[\mathfrak{M}(f,g)(x)\] \[= \sup_{t>0}\left|\int_{0}^{1}\int_{\mathbb{S}^{d-1}}f(x-tsy_{1})\;d\sigma(y_{1})\int_{\mathbb{S}^{d-1}}g(x-t\sqrt{1-s^{2}}y_{2})\;d\sigma(y_{2})s^{d-1}(1-s^{2})^{\frac{d-2}{2}}\;ds\right|\] \[\leq \sup_{t>0}\left(\int_{0}^{1}\left|A_{ts}f(x)\right|^{r}s^{d-1}\;ds\right)^{\frac{1}{r}}\left(\int_{0}^{1}\left|A_{t\sqrt{1-s^{2}}}g(x)\right|^{r^{\prime}}s(1-s^{2})^{\frac{d-2}{2}}\;ds\right)^{\frac{1}{r^{\prime}}}.\]
Hence by a change of variable, it is enough to show that \(\sup_{t>0}\left(\int_{0}^{1}|A_{ts}f(x)|^{r}s^{d-1}ds\right)^{\frac{1}{r}}\lesssim\mathfrak{A}_{*}^{r}(f)(x)\). Now, we have
\[\sup_{t>0}\left(\int_{0}^{1}|A_{ts}f(x)|^{r}s^{d-1}ds\right)^{\frac{1}{r}} \leq\sup_{t>0}\left(\int_{0}^{\frac{1}{2}}|A_{ts}f(x)|^{r}s^{d-1}ds\right)^{\frac{1}{r}}+\sup_{t>0}\left(\int_{\frac{1}{2}}^{1}|A_{ts}f(x)|^{r}s^{d-1}ds\right)^{\frac{1}{r}}\] \[=\frac{1}{2^{\frac{d}{r}}}\sup_{t>0}\left(\int_{0}^{1}|A_{ts}f(x)|^{r}s^{d-1}ds\right)^{\frac{1}{r}}+\sup_{k\in\mathbb{Z}}\sup_{2^{k}\leq t\leq 2^{k+1}}\left(\int_{\frac{1}{2}}^{1}|A_{ts}f(x)|^{r}s^{d-1}ds\right)^{\frac{1}{r}},\]
and the claim follows.
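To spell out the last step (a brief elaboration; we assume the left-hand side is finite, which can be arranged by first truncating the supremum in \(t\)): absorbing the first term into the left-hand side gives
\[\sup_{t>0}\left(\int_{0}^{1}|A_{ts}f(x)|^{r}s^{d-1}ds\right)^{\frac{1}{r}}\leq\left(1-2^{-\frac{d}{r}}\right)^{-1}\sup_{k\in\mathbb{Z}}\sup_{2^{k}\leq t\leq 2^{k+1}}\left(\int_{\frac{1}{2}}^{1}|A_{ts}f(x)|^{r}s^{d-1}ds\right)^{\frac{1}{r}},\]
and for \(t\in[2^{k},2^{k+1}]\) the change of variable \(u=2^{-(k-1)}ts\in[1,4]\) bounds the inner integral by \(\int_{1}^{4}|A_{2^{k-1}u}f(x)|^{r}u^{d-1}du\), which, after splitting \([1,4]=[1,2]\cup[2,4]\) and rescaling the second piece, is controlled by a constant multiple of \(\mathfrak{A}_{*}^{r}f(x)^{r}\).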
### Proof of Theorem 1.5:
The proof follows directly from Theorem 1.7. Indeed, by arguing similarly to the proof of inequality (2.3), we have
\[\mathfrak{M}_{loc}(f,g)(x)\lesssim\mathfrak{A}^{r}(f)(x)\mathfrak{A}^{r^{\prime} }(g)(x), \tag{2.4}\]
and by using Holder's inequality with appropriate exponents along with the bounds for the operator \(\mathfrak{A}^{r}\) obtained in Theorem 1.7, we obtain Theorem 1.5.
## 3. Linear intermediary spherical functions: Proof of Theorem 1.7
### Proof of Theorem 1.7:
We employ a multiscale decomposition of the operator \(\mathfrak{A}^{r}_{*}\). Let \(\phi\in\mathcal{S}(\mathbb{R}^{d})\) be a function such that \(\widehat{\phi}\) is supported in \(B(0,2)\) and \(\widehat{\phi}(\xi)=1\) for \(\xi\in B(0,1)\). We define \(\widehat{\phi}_{t}(\xi)=\widehat{\phi}(t\xi)\) and \(\widehat{\psi}_{t}(\xi)=\widehat{\phi}(t\xi)-\widehat{\phi}(2t\xi)\). Then, we have the identity
\[\widehat{\phi}(\xi)+\sum_{j=1}^{\infty}\widehat{\psi}_{2^{-j}}(\xi)=1,\;\xi \neq 0. \tag{3.1}\]
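This is just the telescoping identity (a one-line check): for any fixed \(\xi\neq 0\),
\[\widehat{\phi}(\xi)+\sum_{j=1}^{J}\widehat{\psi}_{2^{-j}}(\xi)=\widehat{\phi}(\xi)+\sum_{j=1}^{J}\left(\widehat{\phi}(2^{-j}\xi)-\widehat{\phi}(2^{-j+1}\xi)\right)=\widehat{\phi}(2^{-J}\xi)\xrightarrow[J\to\infty]{}\widehat{\phi}(0)=1.\]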
Using this identity, we have the following pointwise inequalities,
\[\mathfrak{A}^{r}f(x)\leq A_{1}^{r,0}f(x)+\sum_{j\geq 1}A_{1}^{r,j}f(x),\;\; \text{and}\;\;\mathfrak{A}^{r}_{*}f(x)\leq A_{*}^{r,0}f(x)+\sum_{j\geq 1}A_{*}^{r, j}f(x),\]
where
\[A_{*}^{r,j}f(x)=\sup_{k\in\mathbb{Z}}A_{2^{k}}^{r,j}f(x)=\sup_{k \in\mathbb{Z}}\|A_{2^{k}t}(f*\psi_{2^{k-j}})(x)\|_{L^{r}([1,2],t^{d-1}dt)},\] \[A_{*}^{r,0}f(x)=\sup_{k\in\mathbb{Z}}A_{2^{k}}^{r,0}f(x)=\sup_{k \in\mathbb{Z}}\|A_{2^{k}t}(f*\phi_{2^{k}})(x)\|_{L^{r}([1,2],t^{d-1}dt)}.\]
The operator \(A_{1}^{r,j}\) has been studied extensively in [1] to obtain variation estimates for spherical averages. We will require some \(L^{p}\) estimates that are obtained by interpolating the bounds for the endpoint cases \(r=1,\infty\). Some of our bounds are already proved in [1]; however, we provide a proof for completeness.
**Lemma 3.1**.: _Let \(d\geq 2\) and \(j\in\mathbb{N}\). For \(1\leq r\leq\infty\), we have the following estimates,_
\[\|A_{1}^{r,j}\|_{L^{1}(\mathbb{R}^{d})\to L^{\infty}(\mathbb{R}^{d})} \lesssim 2^{\frac{j}{r^{\prime}}}, \tag{3.2}\] \[\|A_{1}^{r,j}\|_{L^{1}(\mathbb{R}^{d})\to L^{1}(\mathbb{R}^{d})} \lesssim 2^{\frac{j}{r^{\prime}}}, \tag{3.3}\] \[\|A_{1}^{r,j}\|_{L^{r}(\mathbb{R}^{d})\to L^{r}(\mathbb{R}^{d})} \lesssim 2^{-j\left(\frac{d-1}{r^{\prime}}\right)},\;\;1\leq r\leq 2, \tag{3.4}\] \[\|A_{1}^{r,j}\|_{L^{r}(\mathbb{R}^{d})\to L^{r^{\prime}}(\mathbb{R}^{d})} \lesssim 2^{-j\left(\frac{d-1}{r^{\prime}}\right)},\;\;1\leq r\leq 2, \tag{3.5}\] \[\|A_{1}^{r,j}\|_{L^{2}(\mathbb{R}^{d})\to L^{2}(\mathbb{R}^{d})} \lesssim 2^{-j\left(\frac{d-2}{2}+\frac{1}{r}\right)},\;\;2\leq r\leq\infty. \tag{3.6}\]
Proof.: It is easy to see that
\[\|A_{1}^{1,j}\|_{L^{1}(\mathbb{R}^{d})\to L^{1}(\mathbb{R}^{d})} \lesssim 1, \tag{3.7}\] \[\|A_{1}^{1,j}\|_{L^{1}(\mathbb{R}^{d})\to L^{\infty}(\mathbb{R}^{d})} \lesssim 1. \tag{3.8}\]
By the kernel estimate \(|\psi_{2^{-j}}*d\sigma_{t}(x)|\lesssim\frac{2^{j}}{(1+2^{j}\left||x|-t\right|)^{N}}\), for large \(N\), we get
\[\|A_{1}^{\infty,j}\|_{L^{1}(\mathbb{R}^{d})\to L^{1}(\mathbb{R}^{d})} \lesssim 2^{j}, \tag{3.10}\] \[\|A_{1}^{\infty,j}\|_{L^{1}(\mathbb{R}^{d})\to L^{\infty}(\mathbb{R}^{d })} \lesssim 2^{j}. \tag{3.9}\]
The estimates (3.2) and (3.3) follow by interpolating the bounds (3.8) with (3.10) and (3.7) with (3.9), respectively.
Using the Fourier decay of the spherical measure \(|\widehat{d\sigma}(\xi)|\lesssim(1+\left|\xi\right|)^{-\frac{d-1}{2}}\), we obtain
\[\|A_{1}^{2,j}\|_{L^{2}(\mathbb{R}^{d})\to L^{2}(\mathbb{R}^{d})}\lesssim 2^{-j \left(\frac{d-1}{2}\right)}. \tag{3.11}\]
For \(1\leq r\leq 2\), the estimate (3.4) follows directly from estimates (3.7) and (3.11). Similarly, the estimate (3.5) follows from the estimates (3.8) and (3.11).
Moreover, by Stein's proof of the boundedness of the spherical maximal function [10], we have
\[\|A_{1}^{\infty,j}\|_{L^{2}(\mathbb{R}^{d})\to L^{2}(\mathbb{R}^{d})}\lesssim 2 ^{-j\left(\frac{d-2}{2}\right)}. \tag{3.12}\]
For \(2\leq r\leq\infty\), the estimate (3.6) follows by interpolating (3.11) and (3.12).
We now state certain \(L^{p}-\)improving estimates for the single scale versions of the local spherical maximal operator, namely the operators \(A_{1}^{\infty,j}\). For \(d=2\), the estimates were obtained by Lee [11] by relying on local smoothing estimates, and the case \(d\geq 3\) follows from the well-known Strichartz estimates. We refer to [11] for details.
**Lemma 3.2** ([11]).: _Let \(1\leq r\leq\infty\). We have the following,_
1. _For_ \(d\geq 3\)_, the following is true,_ (3.13) \[\|A_{1}^{r,j}\|_{L^{\frac{2r}{r+1}}(\mathbb{R}^{d})\to L^{\frac{2r ^{\prime}(d+1)}{d-1}}(\mathbb{R}^{d})}\lesssim 2^{-j\left(\frac{d^{2}-2d-1}{2r^ {\prime}(d+1)}\right)}.\]
2. _For_ \(\frac{1}{p}+\frac{3}{q}=1\) _and_ \(q>\frac{14}{3}\)_, we have_ (3.14) \[\|A_{1}^{r,j}\|_{L^{\frac{pr}{p+r-1}}(\mathbb{R}^{2})\to L^{r^{ \prime}q}(\mathbb{R}^{2})}\lesssim 2^{\frac{j}{r^{\prime}}\left(1-\frac{5}{q} \right)}.\]
Proof.: The estimate (3.13) follows by interpolating (3.8) and the Strichartz estimate [11] below,
\[\|A_{1}^{\infty,j}\|_{L^{2}(\mathbb{R}^{d})\to L^{\frac{2(d+1)}{d-1}}( \mathbb{R}^{d})}\lesssim 2^{-j\left(\frac{d^{2}-2d-1}{2(d+1)}\right)}.\]
The estimate (3.14) follows by using the bound (3.8) and the bound below, which was obtained in [11] by local smoothing estimates,
\[\|A_{1}^{\infty,j}\|_{L^{p}(\mathbb{R}^{2})\to L^{q}(\mathbb{R}^{2})}\lesssim 2 ^{j\left(1-\frac{5}{q}\right)}.\]
We now provide \(L^{p}-\) bounds for the maximal operators \(A_{*}^{r,j}\). The first is a direct consequence of estimates for the case \(r=1,\infty\).
**Lemma 3.3**.: _Let \(d\geq 2\) and \(j\geq 0\). Then for \(f\in L^{1}_{loc}(\mathbb{R}^{d})\), we have_
\[A_{*}^{r,j}f(x)\lesssim 2^{\frac{j}{r^{\prime}}}M_{HL}f(x),\quad a.e.\ x\in \mathbb{R}^{d}.\]
Now, using the estimates for the single scale operators \(A_{1}^{r,j}\), we will prove some \(L^{p}-\)estimates for the maximal operators \(A_{*}^{r,j}\) with norm depending on \(j\). To do that, we rely on an interpolation scheme based on a vector-valued argument. We state Lemma 3.4; its proof can be obtained by arguments similar to those for Lemma 5.4 of [1].
**Lemma 3.4**.: _Let \(1\leq p_{1},p_{2}\leq 2\) be such that_
\[\|A_{1}^{r,j}\|_{L^{p_{1}}(\mathbb{R}^{d})\to L^{p_{1},\infty}( \mathbb{R}^{d})} \leq C_{1},\] \[\|A_{*}^{r,j}\|_{L^{p_{2}}(\mathbb{R}^{d})\to L^{p_{2},\infty}( \mathbb{R}^{d})} \leq C_{2}.\]
_Then, we have_
\[\|A_{*}^{r,j}\|_{L^{p}(\mathbb{R}^{d})\to L^{p}(\mathbb{R}^{d})}\lesssim C_{1} ^{\frac{p_{1}}{2}}C_{2}^{1-\frac{p_{1}}{2}},\ \text{for}\ p=\frac{2p_{2}}{2+p_{2}-p_{1}}.\]
**Lemma 3.5**.: _Let \(d\geq 2\). The following holds true._
* _Let_ \(1\leq r\leq 2\)_. Then_ (3.15) \[\|A_{*}^{r,j}f(x)\|_{L^{\frac{2}{3-r}}(\mathbb{R}^{d})}\lesssim 2^{-j\left( \frac{d(r-1)}{2}-\frac{1}{r^{\prime}}\right)}\|f\|_{L^{\frac{2}{3-r}}(\mathbb{ R}^{d})}.\]
* _Let_ \(2\leq r\leq\infty\)_. Then_ (3.16) \[\|A_{*}^{r,j}f(x)\|_{L^{2}(\mathbb{R}^{d})}\lesssim 2^{-j\left(\frac{d-2}{2}+ \frac{1}{r}\right)}\|f\|_{L^{2}(\mathbb{R}^{d})}.\]
Proof.: When \(r\geq 2\), the bound (3.16) follows from Littlewood-Paley theory and the estimate (3.6). For \(1\leq r\leq 2\), an application of Lemma 3.4 along with the estimates \(\|A_{*}^{r,j}\|_{L^{1}\to L^{1,\infty}}\lesssim 2^{\frac{j}{r^{\prime}}}\) (Lemma 3.3) and (3.4) implies the bound (3.15).
We will require the interpolation trick of Bourgain that provides a restricted weak type estimate from two strong type bounds with appropriate growth and decay; see [11] for details.
**Lemma 3.6**.: _[_11_]_ _Let \(\epsilon_{1},\epsilon_{2}>0\). Suppose that \(\{T_{j}\}\) is a sequence of \(n-\)linear (or sublinear) operators such that for some \(1\leq p_{1}^{i},p_{2}^{i}<\infty\), \(i=1,2,\ldots,n\) and \(1\leq q_{1},q_{2}<\infty\),_
\[\|T_{j}(f^{1},f^{2},\ldots,f^{n})\|_{L^{q_{1}}(\mathbb{R}^{d})} \leq M_{1}2^{\epsilon_{1}j}\prod_{i=1}^{n}\|f^{i}\|_{L^{p_{1}^{i}}( \mathbb{R}^{d})}, \tag{3.18}\] \[\|T_{j}(f^{1},f^{2},\ldots,f^{n})\|_{L^{q_{2}}(\mathbb{R}^{d})} \leq M_{2}2^{-\epsilon_{2}j}\prod_{i=1}^{n}\|f^{i}\|_{L^{p_{2}^{i}}( \mathbb{R}^{d})}. \tag{3.17}\]
_Then \(T=\sum_{j}T_{j}\) is bounded from \(L^{p^{1},1}(\mathbb{R}^{d})\times L^{p^{2},1}(\mathbb{R}^{d})\times\cdots \times L^{p^{n},1}(\mathbb{R}^{d})\) to \(L^{q,\infty}(\mathbb{R}^{d})\), i.e._
\[\|T(f^{1},f^{2},\cdots,f^{n})\|_{L^{q,\infty}(\mathbb{R}^{d})}\lesssim M_{1}^{ \theta}M_{2}^{1-\theta}\prod_{i=1}^{n}\|f^{i}\|_{L^{p^{i},1}(\mathbb{R}^{d})}, \tag{3.19}\]
_where \(\theta=\epsilon_{2}/(\epsilon_{1}+\epsilon_{2})\), \(1/q=\theta/q_{1}+(1-\theta)/q_{2}\), \(1/r=\theta/r_{1}+(1-\theta)/r_{2}\) and \(1/p^{i}=\theta/p_{1}^{i}+(1-\theta)/p_{2}^{i}\)._
**Remark 3.7**.: _We note that in the proof of Theorem 1.7, we use the above lemma for the case when \(q_{1}=\infty\). This can be justified as follows. We obtain an intermediate strong type estimate with growth in \(j\) using real interpolation with the estimates (3.17) and (3.18), and apply Lemma 3.6 for the intermediate estimate and the bound (3.18). This process results in the same restricted weak type estimate as (3.19)._
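The arithmetic behind Lemma 3.6 is elementary and is conveniently automated when several endpoint pairs have to be processed. The short Python sketch below is ours; the numbers in the example are purely illustrative and are not the entries of the table in Figure 4.

```python
from fractions import Fraction as F

def bourgain_exponents(eps1, eps2, q1, q2, p1, p2):
    """Arithmetic of Lemma 3.6: from ||T_j|| <~ M1 2^{eps1 j} at (p1, q1) and
    ||T_j|| <~ M2 2^{-eps2 j} at (p2, q2), return theta and the interpolated
    Lebesgue exponents (q, p^i) of the resulting restricted weak type bound."""
    theta = F(eps2) / (F(eps1) + F(eps2))
    harm = lambda a, b: 1 / (theta / F(a) + (1 - theta) / F(b))  # 1/x = theta/x1 + (1-theta)/x2
    return theta, harm(q1, q2), tuple(harm(a, b) for a, b in zip(p1, p2))

# illustrative endpoint data only
print(bourgain_exponents(1, 2, 4, 2, (2, 2), (4, 4)))
```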
We now complete the proof of Theorem 1.7.
**Proof of Theorem 1.7 (1):** It is clear that \(\|\mathfrak{A}^{r}\|_{L^{\infty}(\mathbb{R}^{d})\to L^{\infty}(\mathbb{R}^{d})} \lesssim 1\) and \(\|\mathfrak{A}^{r}\|_{L^{r}(\mathbb{R}^{d})\to L^{\infty}(\mathbb{R}^{d})} \lesssim 1\). Hence, by real interpolation, it is enough to prove the restricted weak type estimate for \(\mathfrak{A}^{r}\) at the points \(P=\left(\frac{r(d^{2}+1)}{d+1+r(d^{2}-d)},\frac{r^{\prime}(d^{2}+1)}{d-1} \right),\ Q=\left(\frac{rd-r+1}{rd},\frac{1}{r^{\prime}d}\right),\) and \(R=\left(\frac{rd-r+1}{rd},\frac{rd-r+1}{rd}\right)\) (see Figure 3). To achieve that, we will use the Lemma 3.6 along with the estimates,
\[\|A_{1}^{r,j}\|_{L^{p_{1}}(\mathbb{R}^{d})\to L^{q_{1}}(\mathbb{R}^{d})}\lesssim 2^{\epsilon_{1}j},\ \ \ \|A_{1}^{r,j}\|_{L^{p_{2}}(\mathbb{R}^{d})\to L^{q_{2}}(\mathbb{R}^{d})}\lesssim 2^{-\epsilon_{2}j},\]
with the necessary variables prescribed as in Table 4.
**Proof of Theorem 1.7 (3):** It is enough to prove the restricted weak type inequality for \(\mathfrak{A}_{*}^{r}\) at the point \(\left(\frac{rd}{rd-r+1},\frac{rd}{rd-r+1}\right)\); for \(p>\frac{rd}{rd-r+1}\), the \(L^{p}\) boundedness of \(\mathfrak{A}_{*}^{r}\) follows by interpolation.
For \(r\geq 2\), we rely on estimates (3.16) and \(\mathfrak{A}_{*}^{r}:L^{1}(\mathbb{R}^{d})\to L^{1,\infty}(\mathbb{R}^{d})\) (Lemma 3.3). However, the interpolation Lemma 3.6 is not applicable in this case. We can get around this by interpolating the weak \((1,1)\) estimate with the strong \((2,2)\) estimate to control the quantity \(\|\mathfrak{A}_{*}^{r}\|_{L^{p_{0}}(\mathbb{R}^{d})\to L^{p_{0}}(\mathbb{R}^{d})}\) for some \(1<p_{0}<\frac{rd}{rd-r+1}\). Now we can obtain the desired restricted weak type by using Lemma 3.6 for the strong \((p_{0},p_{0})\) and \((2,2)\) bounds. The case \(1\leq r\leq 2\) can be resolved similarly by using the estimate (3.15) instead of (3.16). We leave the details to the reader.
Figure 4. The table prescribes the values of the exponents used for the interpolation Lemma 3.6 to obtain restricted weak type bounds for \(\mathfrak{A}^{r}\).
**Proof of Theorem 1.7 (2):** Let \(\delta>0\) be a small number and \(c>1\) be a fixed constant. We define \(B(0,\delta)\) to be the ball with center \(0\in\mathbb{R}^{d}\) and radius \(\delta\), and \(S^{\delta}(0)\) to be the \(\delta-\)neighborhood of \(S^{d-1}\), i.e. \(\{x\in\mathbb{R}^{d}:||x|-1|<\delta\}\). We also define \(R_{1}\) to be the rectangle of dimension \([-c\sqrt{\delta},c\sqrt{\delta}]^{d-1}\times[-c\delta,c\delta]\), centered at the origin with its smaller side along the direction of \(e_{d}=(0,0,\ldots,0,1)\), and \(R_{2}\) to be the rectangle of dimension \([-\sqrt{\delta},\sqrt{\delta}]^{d-1}\times[1,2]\) centered at \((0,0,\ldots,0,\frac{3}{2})\), with its longer side along the direction of \(e_{d}\).
We define a function \(f\) with \(\|f\|_{L^{p}}\simeq\delta^{\alpha}\) and a test set \(E\) with \(|E|\simeq\delta^{\beta}\) satisfying
\[\|A_{t}f(x)\|_{L^{r}([1,2])}\gtrsim\delta^{\gamma},\ \ \text{for all}\ x\in E.\]
Since \(\delta>0\) is a small number, the boundedness of \(\mathfrak{A}^{r}\) from \(L^{p}(\mathbb{R}^{d})\) to \(L^{q}(\mathbb{R}^{d})\) then forces the necessary condition
\[\alpha\leq\frac{\beta}{q}+\gamma.\]
The functions and test sets are given in the Figure 5.
As mentioned in Remark 1.8, we provide an example showing that \(\mathfrak{A}^{r}\) does not map \(L^{p}(\mathbb{R}^{d})\) to \(L^{q}(\mathbb{R}^{d})\) when \(\left(\frac{1}{p},\frac{1}{q}\right)\) lies on the line segment \(QR\).
**Proposition 3.8**.: _The operator \(\mathfrak{A}^{r}\) does not map \(L^{\frac{dr}{dr-r+1},s}(\mathbb{R}^{d})\) to \(L^{q}(\mathbb{R}^{d})\), for any \(s>1\) and \(q\geq 1\)._
Figure 5. The table provides examples for the necessary conditions required for boundedness of the operator \(\mathfrak{A}^{r}\).
Proof.: Let \(p_{0}=\frac{dr}{dr-r+1}\). For a small number \(a>0\), we define
\[f(x)=\sum_{i=1}^{N}4^{\frac{d}{p_{0}}i}\chi_{B(0,a4^{-i})}(x).\]
For \(s>1\), we have the following bound on the Lorentz space norm of \(f\),
\[\|f\|_{L^{p_{0},s}}\lesssim N^{\frac{1}{s}}.\]
To see this, consider the sets \(F_{j}=\left(4^{\frac{d}{p_{0}}j}\sum\limits_{k=0}^{j-1}4^{-\frac{d}{p_{0}}k},4 ^{\frac{d}{p_{0}}(j+1)}\sum\limits_{k=0}^{j}4^{-\frac{d}{p_{0}}k}\right]\) for \(1\leq j\leq N-1\). If \(t\in F_{j}\), we have
\[\{x\in\mathbb{R}^{d}:\ |f(x)|>t\}=B(0,a4^{-(j+1)}).\]
Denote \(d_{f}(t)=|\{x\in\mathbb{R}^{d}:\ |f(x)|>t\}|\) and observe that,
\[td_{f}(t)^{\frac{1}{p_{0}}}=t(a4^{-(j+1)d})^{\frac{1}{p_{0}}}\lesssim 1.\]
Therefore, we have
\[\|f\|_{L^{p_{0},s}} =p_{0}^{\frac{1}{s}}\left(\int_{0}^{4^{\frac{d}{p_{0}}}}[d_{f}(t )t]^{s}\frac{dt}{t}+\sum_{j=0}^{N-1}\int_{F_{j}}[d_{f}(t)t]^{s}\frac{dt}{t} \right)^{\frac{1}{s}}\] \[\lesssim\left(\int_{0}^{4^{\frac{d}{p_{0}}}}4^{-\frac{d}{p_{0}}} t^{s-1}\ dt+\sum_{j=0}^{N-1}\int_{F_{j}}\frac{dt}{t}\right)^{\frac{1}{s}}\] \[\leq\left(4^{\frac{d}{p_{0}}(s-1)}+\sum_{j=0}^{N-1}1\right)^{ \frac{1}{s}}\lesssim N^{\frac{1}{s}}.\]
Now for \(x\in\{y:\ 1\leq|y|\leq 2\}\), we have
\[\left(\int_{1}^{2}|A_{t}f(x)|^{r}\,dt\right)^{\frac{1}{r}}\gtrsim\sum_{i=1}^{N}4^{\frac{d}{p_{0}}i}4^{-\left(d-1+\frac{1}{r}\right)i}=N.\]
If \(\mathfrak{A}^{r}\) maps \(L^{\frac{dr}{dr-r+1},s}(\mathbb{R}^{d})\) to \(L^{q}(\mathbb{R}^{d})\), then
\[N\lesssim\|\mathfrak{A}^{r}f\|_{L^{q}}\lesssim\|\mathfrak{A}^{r}\|_{L^{p_{0},s}\to L^{q}}\|f\|_{L^{p_{0},s}}\lesssim\|\mathfrak{A}^{r}\|_{L^{p_{0},s}\to L^{q}}N^{\frac{1}{s}},\]
which is a contradiction for \(s>1\) once \(N\) is chosen large enough.
## 4. Proof of Theorems 1.12 and 1.13
### Proof of Theorem 1.12:
We will in fact provide a counterexample for the local maximal operator defined by
\[\mathcal{M}^{\theta}_{loc}(f,g)(x)=\sup_{t\in[1,2]}|\mathcal{A}^{\theta}_{t}(f,g)(x)|.\]
The proof is based on the Kakeya construction used in [12] to show that the linear spherical maximal function does not map \(L^{2,1}(\mathbb{R}^{2})\to L^{2,\infty}(\mathbb{R}^{2})\). Let \(R_{l}\) be the collection of \(\delta^{-1}\) overlapping rectangles of dimension \(\delta\times\delta^{2}\) lying inside the cube \([-\delta,\delta]^{2}\) such that the longer side of \(R_{l}\) is parallel to the vector \(e^{i\delta l}\). We have \(|\cup_{l}R_{l}|\sim\frac{\delta^{2}}{\log\frac{1}{\delta}}\). Also let \([1,2]=\cup_{\nu}I_{\nu}\), where \(I_{\nu}\)'s are \(\delta^{-2}\) disjoint intervals of equal length. We denote \(R_{l,\nu}\) to be the rectangle obtained by translating the rectangle \(R_{l}\) by length \(I_{\nu}\) along its shorter side. Also let \(\widetilde{R}_{l,\nu}\) be the rectangle obtained by translating the rectangle \(R_{l}\) by length \(2I_{\nu}\) along its shorter side, followed by a counterclockwise rotation by the angle \(\theta\). We define
\[f(x)=\chi_{\bigcup\limits_{l}R_{l}}(x),\quad g(x)=\chi_{\bigcup\limits_{l,\nu }\widetilde{R}_{l,\nu}}(x).\]
We have \(\|f\|_{p_{1}}\sim(\frac{\delta^{2}}{\log\frac{1}{\delta}})^{\frac{1}{p_{1}}}\) and \(\|g\|_{p_{2}}\sim 1\). For \(x\in\bigcup\limits_{l,\nu}R_{l,\nu}\), it follows that \(\mathcal{M}^{\theta}_{loc}(f,g)(x)>c\delta\) for some absolute constant \(c>0\). Thus,
\[|\{\mathcal{M}^{\theta}_{loc}(f,g)(x)>c\delta\}|\gtrsim 1\gtrsim\frac{\delta^{-p(\frac{2}{p_{1}}-1)}\left(\log\frac{1}{\delta}\right)^{\frac{p}{p_{1}}}}{\delta^{p}}\|f\|_{p_{1}}^{p}\|g\|_{p_{2}}^{p}.\]
Therefore, we get that
\[\|\mathcal{M}^{\theta}_{loc}\|_{L^{p_{1},1}\times L^{p_{2},1}\to L^{p,\infty}} =\infty\text{ for }p_{1}\leq 2.\]
By symmetry, the same holds for \(p_{2}\leq 2\).
### Proof of Theorem 1.13:
We will prove the theorem for the case \(\theta=\pi\); the proof of the other cases is similar.
\[\langle\widetilde{\mathcal{A}}^{\pi}(f,g),h\rangle\] \[= \int\int f(x+|x|y)g(x-|x|y)h(x)\;d\sigma(y)dx\] \[= \int_{r=0}^{\infty}\int_{\theta=0}^{2\pi}\int_{t=0}^{2\pi}f(r(e^{i\theta}+e^{it}))g(r(e^{i\theta}-e^{it}))h(re^{i\theta})\;dt\,r\,dr\,d\theta\] \[= \int_{r=0}^{\infty}\int_{\theta=0}^{2\pi}\int_{t=\theta}^{2\pi+\theta}f\left(2r\cos\left(\frac{\theta-t}{2}\right)e^{i\frac{\theta+t}{2}}\right)g\left(2r\sin\left(\frac{\theta-t}{2}\right)e^{i(\frac{\theta+t}{2}+\frac{\pi}{2})}\right)h(re^{i\theta})\;dt\,r\,dr\,d\theta\]
By the change of variable \(u=\cos\left(\frac{\theta-t}{2}\right)\), the above term is equal to
\[= \int_{r=0}^{\infty}\int_{\theta=0}^{2\pi}\int_{u=-1}^{1}f(2rue^{i (\theta-\cos^{-1}u)})g(2r\sqrt{1-u^{2}}e^{i(\theta-\cos^{-1}u+\frac{\pi}{2})} )h(re^{i\theta})\;2\frac{du}{\sqrt{1-u^{2}}}rdrd\theta\] \[= \int_{u=-1}^{1}\int_{\mathbb{R}^{2}}f(2ux)g(2\sqrt{1-u^{2}} \mathfrak{R}_{-\frac{\pi}{2}}x)h(\mathfrak{R}_{\cos^{-1}u}x)\;dx\frac{2du}{ \sqrt{1-u^{2}}},\]
where \(\mathfrak{R}_{\phi}x\) is the point obtained by rotating \(x\) by angle \(\phi\) counterclockwise. Applying Hölder's inequality and scaling, we find that the above quantity is dominated by
\[\lesssim \int_{u=-1}^{1}\|f(2u\cdot)\|_{p_{1}}\|g(2\sqrt{1-u^{2}}\cdot)\|_{p _{2}}\|h\|_{p^{\prime}}\frac{du}{\sqrt{1-u^{2}}}\] \[\lesssim \|f\|_{p_{1}}\|g\|_{p_{2}}\|h\|_{p^{\prime}}\int_{u=0}^{1}u^{- \frac{2}{p_{1}}}(1-u^{2})^{-\frac{1}{p_{2}}-\frac{1}{2}}\;du\] \[\lesssim \|f\|_{p_{1}}\|g\|_{p_{2}}\|h\|_{p^{\prime}}\int_{t=0}^{1}t^{- \frac{1}{p_{1}}-\frac{1}{2}}(1-t)^{-\frac{1}{p_{2}}-\frac{1}{2}}\;dt,\]
where the beta integral in the above quantity is finite for \(p_{1},p_{2}>2\) and the proof concludes.
We now prove the restricted weak type estimates for \(\widetilde{\mathcal{A}}^{\pi}\). We observe that \(\widetilde{\mathcal{A}}^{\pi}:L^{\infty}(\mathbb{R}^{2})\times L^{2,1}( \mathbb{R}^{2})\to L^{2,\infty}(\mathbb{R}^{2})\) and \(\widetilde{\mathcal{A}}^{\pi}:L^{2,1}(\mathbb{R}^{2})\times L^{\infty}( \mathbb{R}^{2})\to L^{2,\infty}(\mathbb{R}^{2})\) follows from the inequality,
\[\widetilde{\mathcal{A}}^{\pi}(f,g)(x)\leq\min\{\|f\|_{\infty}\widetilde{A}(|g| )(x),\|g\|_{\infty}\widetilde{A}(|f|)(x)\}.\]
We prove the restricted weak type inequality at the endpoint \((2,2,1)\), the proof of the remaining endpoints are similar. We decompose the operator \(\widetilde{\mathcal{A}}^{\pi}\) as follows
\[\widetilde{\mathcal{A}}^{\pi}(f,g)(x) = \int_{|u|\leq 1/2}f(2uxe^{-\iota\cos^{-1}u})g(2x\sqrt{1-u^{2}}e^{- \iota\cos^{-1}u+\iota\pi/2})\ \frac{2du}{\sqrt{1-u^{2}}}\] \[+ \int_{1/2<|u|\leq 1}f(2uxe^{-\iota\cos^{-1}u})g(2x\sqrt{1-u^{2}}e^ {-\iota\cos^{-1}u+\iota\pi/2})\ \frac{2du}{\sqrt{1-u^{2}}}\] \[:= I_{0}(f,g)(x)+I_{1}(f,g)(x).\]
We further decompose each of the two operators into infinitely many pieces as follows
\[I_{0}(f,g)(x) = \sum_{j\geq 1}\int_{2^{-j-1}<|u|\leq 2^{-j}}f(2uxe^{-\iota\cos^{-1 }u})g(2x\sqrt{1-u^{2}}e^{-\iota\cos^{-1}u+\iota\pi/2})\ \frac{2du}{\sqrt{1-u^{2}}}\] \[:= \sum_{j\geq 1}I_{0,j}(f,g)(x).\]
Note that the denominator \(\sqrt{1-u^{2}}\) in the expression above behaves like a constant, since \(|u|\leq 1/2\). We have
\[\|I_{0,j}(f,g)\|_{L^{1}} \lesssim \int_{2^{-j-1}<|u|\leq 2^{-j}}\|f(2u\cdot)\|_{L^{4}}\|g(2\sqrt{1-u^{ 2}}\cdot)\|_{L^{4/3}}\ du \tag{4.2}\] \[\lesssim 2^{-j/2}\|f\|_{L^{4}}\|g\|_{L^{4/3}}. \tag{4.1}\]
On the other hand,
\[\|I_{0,j}(f,g)\|_{L^{1}} \lesssim \int_{2^{-j-1}<|u|\leq 2^{-j}}\|f(2u\cdot)\|_{L^{4/3}}\|g(2 \sqrt{1-u^{2}}\cdot)\|_{L^{4}}\ du \tag{4.4}\] \[\lesssim 2^{j/2}\|f\|_{L^{4/3}}\|g\|_{L^{4}}. \tag{4.3}\]
Applying Bourgain's interpolation trick (Lemma 3.6) we get that the operator \(I_{0}\) maps \(L^{2,1}\times L^{2,1}\) to \(L^{1,\infty}\).
Next, consider a similar decomposition of \(I_{1}(f,g)\) as follows
\[I_{1}(f,g)(x) = \sum_{j\geq 1}\int_{1-2^{-j}<|u|\leq 1-2^{-j-1}}f(2uxe^{-\iota \cos^{-1}u})g(2x\sqrt{1-u^{2}}e^{-\iota\cos^{-1}u+\iota\pi/2})\;\frac{2du}{ \sqrt{1-u^{2}}}\] \[:= \sum_{j\geq 1}I_{1,j}(f,g)(x).\]
Now computing the \(L^{1}-\)norm we get
\[\|I_{1,j}(f,g)\|_{L^{1}} \lesssim \int_{1-2^{-j}<|u|\leq 1-2^{-j-1}}\|f(2u\cdot)\|_{L^{4}}\|g(2 \sqrt{1-u^{2}}\cdot)\|_{L^{4/3}}\;\frac{du}{\sqrt{1-u^{2}}} \tag{4.6}\] \[\lesssim 2^{j/4}\|f\|_{L^{4}}\|g\|_{L^{4/3}}. \tag{4.5}\]
And
\[\|I_{1,j}(f,g)\|_{L^{1}} \lesssim \int_{1-2^{-j}<|u|\leq 1-2^{-j-1}}\|f(2u\cdot)\|_{L^{4/3}}\|g(2 \sqrt{1-u^{2}}\cdot)\|_{L^{4}}\;\frac{du}{\sqrt{1-u^{2}}} \tag{4.8}\] \[\lesssim 2^{-j/4}\|f\|_{L^{4/3}}\|g\|_{L^{4}}. \tag{4.7}\]
Applying the interpolation lemma we get that the operator \(I_{1}\) maps \(L^{2,1}\times L^{2,1}\) to \(L^{1,\infty}\). Finally, combining the estimates of both \(I_{0}\) and \(I_{1}\), we get the desired result.
## 5. Necessary conditions for \(\mathcal{M}\) in dimensions \(d\geq 2\)
In this section we provide some necessary conditions for the higher dimensional analogue \(\mathcal{T}\) of the operator \(\mathcal{A}_{1}^{\pi}\) defined as,
\[\mathcal{T}(f,g)(x)=\int_{\mathbb{S}^{d-1}}f(x-y)g(x+y)\;d\sigma(y).\]
The first result concerns a generalization of the necessary condition 1.7 for \(\mathcal{T}\). We note that the condition is obtained by considering functions of product type instead of examples generated by C. Fefferman boxes [10], as was the case in [1].
**Proposition 5.1**.: _Let \(d\geq 2\) and \(1\leq p_{1},p_{2}<\infty\). Suppose \(\mathcal{T}\) satisfies the following inequality,_
\[\|\mathcal{T}(f,g)\|_{L^{p}(\mathbb{R}^{d})}\lesssim\|f\|_{L^{p_{1}}(\mathbb{ R}^{d})}\|g\|_{L^{p_{2}}(\mathbb{R}^{d})},\]
_for functions \(f,g\) of the form \(f(x)=f_{1}(x_{1})f_{2}(x_{2})\) and \(g(x)=g_{1}(x_{1})g_{2}(x_{2})\) where we write \(x=(x_{1},x_{2})\) with \(x_{1}\in\mathbb{R}^{d_{1}}\), \(x_{2}\in\mathbb{R}^{d_{2}}\) and \(d=d_{1}+d_{2}\). Then we have,_
\[\frac{d+1}{p_{1}}+\frac{d+1}{p_{2}}\leq d-1+\frac{d+1}{p}.\]
Proof.: Consider the functions
\[f(x)=||x_{1}|-1|^{-\frac{d_{1}\alpha_{1}}{p_{1}}}|x_{2}|^{-\frac{d_{2}\alpha_ {2}}{p_{1}}}\chi_{[0,1]^{d}}(x)\;\;\mbox{and}\;\;g(x)=||x_{1}|-1|^{-\frac{d_{1 }\beta_{1}}{p_{2}}}|x_{2}|^{-\frac{d_{2}\beta_{2}}{p_{2}}}\chi_{[0,1]^{d}}(x).\]
Then \(f\in L^{p_{1}}(\mathbb{R}^{d})\) if \(\alpha_{1}<\frac{1}{d_{1}}\) and \(\alpha_{2}<1\). Similarly, \(g\in L^{p_{2}}(\mathbb{R}^{d})\) if \(\beta_{1}<\frac{1}{d_{1}}\) and \(\beta_{2}<1\). By the slicing argument and decomposing the interval \([0,1]\) into dyadic annuli, we get that
\[\mathcal{T}(f,g)(x)\] \[= \int_{B^{d_{1}}(0,1)}f(x_{1}-y_{1})g(x_{1}+y_{1})(1-|y_{1}|^{2})^ {\frac{d_{2}-2}{2}}\] \[\qquad\int_{\mathbb{S}^{d_{2-1}}}f(x_{2}-\sqrt{1-|y_{1}|^{2}}y_{2 })g(x_{2}+\sqrt{1-|y_{1}|^{2}}y_{2})\ d\sigma(y_{2})dy_{1}\] \[= \int_{0}^{1}(1-r^{2})^{\frac{d_{2}-2}{2}}r^{d_{1}-1}\left(\int_{ \mathbb{S}^{d_{1}-1}}f(x_{1}-ry_{1})g(x_{1}+ry_{1})\ d\sigma(y_{1})\right)\] \[\qquad\left(\int_{\mathbb{S}^{d_{2}-1}}f(x_{2}-\sqrt{1-r^{2}}y_{2 })g(x_{2}+\sqrt{1-r^{2}}y_{2})\ d\sigma(y_{2})\right)dr\] \[= \sum_{j\geq 1}\int_{1-2^{-j+1}}^{1-2^{-j}}(1-r^{2})^{\frac{d_{2}-2 }{2}}r^{d_{1}-1}\left(\int_{\mathbb{S}^{d_{1}-1}}||x_{1}-ry_{1}|-1|^{-\frac{d _{1}\alpha_{1}}{p_{1}}}||x_{1}+ry_{1}|-1|^{-\frac{d_{1}\beta_{1}}{p_{2}}}\ d \sigma(y_{1})\right)\] \[\qquad\left(\int_{\mathbb{S}^{d_{2}-1}}|x_{2}-\sqrt{1-r^{2}}y_{2 }|^{-\frac{d_{2}\alpha_{2}}{p_{1}}}|x_{2}+\sqrt{1-r^{2}}y_{2}|^{-\frac{d_{2} \beta_{2}}{p_{2}}}\ d\sigma(y_{2})\right)dr.\]
Let \(B_{k}=\{x=(x_{1},x_{2}):2^{-k}\leq|x_{1}|\leq 2^{-k+1},\ 2^{-\frac{k}{2}}\leq|x_{2}|\leq 2^{-\frac{k-1}{2}}\}\); then for large \(k\) and \(j>k\), we get that
\[||x_{1}\pm ry_{1}|-1|\sim 2^{-k}\ \text{and}\ |x_{2}\pm\sqrt{1-r^{2}}y_{2}|\sim 2 ^{-\frac{k}{2}}.\]
We can see that
\[\mathcal{T}(f,g)(x) \geq \sum_{j\geq k}\int_{1-2^{-j+1}}^{1-2^{-j}}(1-r^{2})^{\frac{d_{2}-2}{2}}r^{d_{1}-1}\left(\int_{\mathbb{S}^{d_{1}-1}}2^{k\frac{d_{1}\alpha_{1}}{p_{1}}}2^{k\frac{d_{1}\beta_{1}}{p_{2}}}\ d\sigma(y_{1})\right)\] \[\qquad\left(\int_{\mathbb{S}^{d_{2}-1}}2^{k\frac{d_{2}\alpha_{2}}{2p_{1}}}2^{k\frac{d_{2}\beta_{2}}{2p_{2}}}\ d\sigma(y_{2})\right)dr\] \[\gtrsim 2^{k(\frac{d_{1}\alpha_{1}}{p_{1}}+\frac{d_{1}\beta_{1}}{p_{2}}+\frac{d_{2}\alpha_{2}}{2p_{1}}+\frac{d_{2}\beta_{2}}{2p_{2}})}\sum_{j\geq k}2^{-j\frac{d_{2}}{2}}=2^{k(\frac{d_{1}\alpha_{1}}{p_{1}}+\frac{d_{1}\beta_{1}}{p_{2}}+\frac{d_{2}\alpha_{2}}{2p_{1}}+\frac{d_{2}\beta_{2}}{2p_{2}}-\frac{d_{2}}{2})}.\]
Hence, we get that
\[\|\mathcal{T}(f,g)\|_{p}^{p} \geq \sum_{k}\int_{B_{k}}|\mathcal{T}(f,g)(x)|^{p}\ dx\] \[\gtrsim \sum_{k}\int_{B_{k}}2^{kp(\frac{d_{1}\alpha_{1}}{p_{1}}+\frac{d_ {1}\beta_{1}}{p_{2}}+\frac{d_{2}\alpha_{2}}{2p_{1}}+\frac{d_{2}\beta_{2}}{2p_{2 }}-\frac{d_{2}}{2})}\ dx\] \[= \sum_{k}2^{kp(\frac{d_{1}\alpha_{1}}{p_{1}}+\frac{d_{1}\beta_{1} }{p_{2}}+\frac{d_{2}\alpha_{2}}{2p_{1}}+\frac{d_{2}\beta_{2}}{2p_{2}}-\frac{d_ {2}}{2})}2^{-k(d_{1}+\frac{d_{2}}{2})}.\]
The above sum is finite if \(\frac{d_{2}+2}{2p_{1}}+\frac{d_{2}+2}{2p_{2}}\leq\frac{d_{2}}{2}+\frac{2d_{1}+d_{2 }}{2p}\). This condition implies the necessary condition if we choose \(d_{1}=1\) and \(d_{2}=d-1\).
We also record some necessary conditions for \(L^{p}-\)improving bounds for \(\mathcal{T}\) in the following proposition. These conditions are higher dimensional analogues of results obtained in [1] by considering indicator functions of appropriate balls and annuli.
**Proposition 5.2**.: _Let \(d\geq 2\) and \(1\leq p_{1},p_{2}<\infty\). Suppose that \(\mathcal{T}\) maps \(L^{p_{1}}(\mathbb{R}^{d})\times L^{p_{2}}(\mathbb{R}^{d})\) to \(L^{p}(\mathbb{R}^{d})\) boundedly. Then we have the following:_
1. \(\frac{d}{p_{1}}+\frac{1}{p_{2}}\leq 1+\frac{1}{p}\)_._
2. \(\frac{1}{p_{1}}+\frac{d}{p_{2}}\leq 1+\frac{1}{p}\)_._
3. \(\frac{1}{p}\leq\frac{1}{p_{1}}+\frac{1}{p_{2}}\leq\frac{d}{p}\)_._
We refer to Section 3 of [1] for details.
## Acknowledgement
The authors thank Michael Lacey and Ben Krause for introducing the bilinear operator \(\mathcal{A}_{t}^{\theta}\) to them. Ankit Bhojak and Saurabh Shrivastava acknowledge the financial support from Science and Engineering Research Board, Department of Science and Technology, Govt. of India, under the scheme Core Research Grant, file no. CRG/2021/000230. Surjeet Singh Choudhary is supported by CSIR(NET), file no.09/1020(0182)/2019-EMR-I for his Ph.D. fellowship. Kalachand Shuin is supported by NRF grant no. 2022R1A4A1018904 and BK 21 Post doctoral fellowship. The authors acknowledge the support and hospitality provided by the International Centre for Theoretical Sciences, Bangalore (ICTS) for participating in the program - Modern trends in Harmonic Analysis (code: ICTS/Mtha2023/06).
|
2309.14255 | The resolution of the weak-exchange limit made rigorous, simple and
general in binuclear complexes | The correct interpretation of magnetic properties in the weak-exchange regime
has remained a challenging task for several decades. In this regime, the
effective exchange interaction between local spins is quite weak, of the same
order of magnitude or smaller than the various anisotropic terms, which
generates a complex set of levels characterized by spin spin mixing. Although
the model multispin Hamiltonian in the absence of local orbital momentum,
\hms{} = \js{} + \da{} +\db{} + \dab{}, is considered good enough to map the
experimental energies at zero field and in the strong-exchange limit,
theoretical works pointed out limitations of this simple model. This work
revives the use of \hms{} from a new theoretical perspective, detailing
point-by-point a strategy to correctly map the computational energies and wave
functions onto \hms{} , thus validating it regardless of the exchange limit. We
will distinguish two cases, based on experimentally characterized dicobalt(II)
complexes from the literature. If centrosymmetry imposes alignment of the
various rank-2 tensors constitutive of \hms{} in the first case, the absence of
any symmetry element prevents such alignment in the second case. In such a
context, the strategy provided herein becomes a powerful tool to rationalize
the experimental magnetic data, since it is capable of fully and rigorously
extracting the multispin model without any assumption on the orientation of its
constitutive tensors. Furthermore, the strategy allows to question the use of
the spin Hamiltonian approach by explicitly controlling the projection norms on
the model space, which is showcased in the second complex where local orbital
momentum could have occurred (distorted octahedra). Finally, previous
theoretical data related to a known dinickel(II) complex is reinterpreted,
clarifying initial wanderings regarding the weak exchange limit. | Dumitru-Claudiu Sergentu, Boris Le Guennic, Rémi Maurice | 2023-09-25T16:13:52Z | http://arxiv.org/abs/2309.14255v2 | # The resolution of the weak-exchange limit made rigorous, simple and general in binuclear complexes
###### Abstract
The correct interpretation of magnetic properties in the weak-exchange regime has remained a challenging task for several decades. In this regime, the effective exchange interaction between local spins is quite weak, of the same order of magnitude or smaller than the various anisotropic terms, which _in fine_ generates a complex set of levels characterized by spin intercalation if not significant spin mixing. Although the model multispin Hamiltonian, \(\hat{H}^{\rm MS}\) = \(J\hat{S}_{a}\hat{S}_{b}\) + \(\hat{S}_{a}\bar{D}_{a}\hat{S}_{a}\) +\(\hat{S}_{b}\bar{D}_{b}\hat{S}_{b}\) + \(\hat{S}_{a}\bar{D}_{ab}\hat{S}_{b}\), is considered good enough to map the experimental energies at zero field and in the strong-exchange limit, theoretical works pointed out limitations of this simple model. This work revives the use of \(\hat{H}^{\rm MS}\) from a new theoretical perspective, detailing point-by-point a strategy to correctly map the computational energies and wave functions onto \(\hat{H}^{\rm MS}\), thus validating it regardless of the exchange regime. We will distinguish two cases, based on
experimentally characterized dicobalt(II) complexes from the literature. If centrosymmetry imposes alignment of the various rank-2 tensors constitutive of \(\hat{H}^{\rm MS}\) in the first case, the absence of any symmetry element prevents such alignment in the second case. In such a context, the strategy provided herein becomes a powerful tool to rationalize the experimental magnetic data, since it is capable of fully and rigorously extracting the multispin model without any assumption on the orientation of its constitutive tensors. Finally, previous theoretical data related to a known dinickel(II) complex is reinterpreted, clarifying initial wanderings regarding the weak-exchange limit.
## 1 Introduction
Recent decades witnessed significant advancements in the engineering of single molecule magnets (SMMs) with compelling magnetic properties closer and closer to room temperature [1, 2, 3, 4]. At stake may be the future of storage devices and quantum information systems [5, 6, 7], but these SMMs also provide a playground to explore complex electronic structures, new quantum-mechanical phenomena [8, 9, 10], and develop novel strategies for evaluating them.
SMMs typically exhibit magnetic anisotropy through the intertwined effect of spin-orbit coupling (SOC) and anisotropic crystal-fields (CFs), and retain magnetic bistability with an energy barrier for magnetization reversal below a certain blocking temperature [6]. To go beyond the blocking temperatures observed in d-element polynuclear complexes, research shifted to the field of f-element single ion magnets after the discovery of the properties of the bis(phthalocyaninato)terbium anion [11]. Although f-element molecules make the current state-of-the-art SMM prototypes [3, 4, 12, 13, 14], with well-known recipes designed to improve their magnetic properties [15, 16], potential d- and mixed f/d-element SMMs are also investigated at a pace faster than before [17, 18, 19, 20].
In the laboratory, information on magnetic anisotropy is commonly evidenced through electron paramagnetic resonance (EPR) and magnetic susceptibility (\(\chi\)) studies. The outcome is interpreted through the language of model Hamiltonians [21, 22, 23], dressed with param
eters quantifying the physics of the electronic structure, _e.g._ magnetic exchange, zero-field splitting (ZFS), Zeeman interaction, etc. Optimal values for these parameters are obtained by fitting measured data [24, 25], such as the temperature variation of \(\chi T\), and concluding on a certain set of uniquely defined values from good grounds may require input from first principles calculations.
With simplistic formulations, model Hamiltonians are appealing in experimental contexts and serve as a bridge between experiment and the overly complicated theoretical approaches. In the case of model spin Hamiltonians, only spin operators are at play, with the assumption that the orbital part(s) of the wave function(s) is(are) factored out. In mononuclear complexes, the model Hamiltonian applies to the \(|S,M_{S}\rangle\) components of the ground spin state. In binuclear complexes and beyond, the model space may include more than one spin state. For instance, it may be constituted of the spin components of all the spin states that are triggered by the Heisenberg-Dirac-van Vleck (HDVV) Hamiltonian [26, 27, 28]. Yet, these types of models reproduce faithfully magnetic data at low temperatures as soon as the interaction space encodes the effective physics of the studied system.
In polynuclear systems, two main types of models apply. The giant-spin model, \(\hat{H}^{\rm GS}\), only aims at describing the ZFS of the ground spin state and it is used in a fashion similar to that of mononuclear systems. The multispin model, \(\hat{H}^{\rm MS}\), aims at describing the splitting and mixing of the spin components of the HDVV spectrum. With both model Hamiltonians, the theoretical extraction of relevant parameters is based on an explicit mapping onto an effective Hamiltonian that is built on top of the _ab initio_ calculations [29, 30, 31, 32, 33]. Alternatively, one may skip the explicit projection of the wave functions and work within the pseudospin framework [34]. Since \(\hat{H}^{\rm GS}\) targets only the ground spin state, it is quite clear that it is not fully relevant in the weak-exchange limit [29, 35]. \(\hat{H}^{\rm MS}\) is better suited in such cases; it is naturally built up in the \(|S_{i},M_{Si},\ldots\rangle\) basis (the _uncoupled_ basis, the \(i\)'s denoting the active magnetic centers) and may be further expressed in the \(|S,M_{S}\rangle\) basis (the _coupled_ one) [36].
Calculations based on the complete active space self-consistent field (CASSCF) ap
proach[37] are appealing in the context of magnetochemistry.[38] This approach can describe correctly d\({}^{n}\) and f\({}^{n}\) near-degenerate configurations, and allows for a systematic improvement of the electron correlation in post-CASSCF multireference treatments. Subsequently, the spin-orbit coupling (SOC) can be introduced by diagonalization of an electronic energy plus spin-orbit operator matrix, in the basis of the spin components of the previous spin-free CASSCF states, with the electronic energies potentially replaced by dynamically-correlated ones. This is the spirit of (dressed) spin-orbit configuration interaction (SOCI).[39]
Many seminal studies show the use of configuration interaction schemes to calculate magnetic properties.[40, 41, 42, 43, 44, 45, 46, 47] It is worth noting that density functional theory (DFT) is equally involved in rationalizing magnetic data,[48, 49, 50, 51, 52, 53, 54] as well as single-reference spin-flip approaches,[55, 56, 57] and more recently coupled-cluster methods.[58, 59, 60] In principle, the low-energy _ab initio_ spectrum offers sufficient detail for calculating anisotropy parameters and spin-orbit correction to the exchange interaction if a SOCI is performed,[22, 33] as well as EPR parameters and \(\chi T\) profiles provided that the Zeeman interaction is treated.[61, 62, 63, 64] This is the starting point of our work.
Extraction of magnetic parameters from \(\hat{H}^{\text{MS}}\) may be straightforward in centrosymmetric binuclear complexes. In such cases, all rank-2 tensors involved in \(\hat{H}^{\text{MS}}\) (_vide infra_) have the same principal axes, or principal axis frames (PAFs). Such a frame, which may be derived from \(\hat{H}^{\text{GS}}\) if not directly from symmetry arguments,[29, 65, 30] not only simplifies the model construction but also helps in shortcutting the parameter extraction through the effective Hamiltonian theory. In practice, cases were identified in which matrix elements are nil in the model but non-nil in \(\hat{H}^{\text{eff}}\).[65] Such inequalities were proposed to arise from the lack of a rank-4, biquadratic anisotropy exchange tensor in \(\hat{H}^{\text{MS}}\). The case of low-symmetry binuclear complexes is even more complicated. Although the molecular orientation may correspond to a molecular PAF, derived somehow from \(\hat{H}^{\text{GS}}\) or from the diagonalization of the EPR \(g\)-matrix in the case of pseudospin Hamiltonians,[34] this \(xyz\) frame may not correspond at all to the local or specific PAFs of all the rank-2 tensors of \(\hat{H}^{\text{MS}}\). This must
lead to nonzero off-diagonal elements of these tensors if expressed in the molecular frame, which is almost always neglected in experimental studies. Hence, the use of \(\hat{H}^{\rm MS}\) in current magnetochemistry applications still appears problematic.
This article aims to revive the utilization of \(\hat{H}^{\rm MS}\) for interpreting the low-temperature magnetic properties of binuclear complexes; it forwards a simple, rigorous, and versatile strategy to resolve the model, irrespective of the coordination symmetry or the regime (weak- or strong-exchange). The technique is showcased using dicobalt(II) complexes, a centrosymmetric one, \(\left[{\rm Co_{2}Cl_{6}}\right]^{2-}\) (**1**),[66, 67] and an unsymmetrical one, \(\left[{\rm Co_{2}(L)_{2}(acac)_{2}(H_{2}O)}\right]\)[1] (**2**).[68] The molecular structures are shown in Figure 1. Dicobalt(II) systems were chosen simply to show that, by using the proposed strategy, full extraction of magnetic parameters can be easily performed even in challenging cases (here, the HDVV matrix being 16\(\times\)16 and the local magnetic centers intrinsically displaying Kramers degeneracies). The article concludes that achieving a strong agreement between the matrix elements of \(\hat{H}^{\rm MS}\) and \(\hat{H}^{\rm eff}\), necessary for the validation and resolution of the former, is possible in any case, and, in practice, easily achievable by
* building a first effective Hamiltonian in the _coupled_ basis,
* determining the molecular PAF, which is identified by resolution of the anisotropic model Hamiltonian \(\hat{H}^{\rm mod}\)=\(\hat{S}\bar{\bar{D}}_{S=3}\hat{S}\) effectively at play only in the \(|S=3,M_{S}\rangle\) block of the full model space of \(\hat{H}^{\rm MS}\),
* recomputing the effective Hamiltonian in the molecular PAF and consistently revising the \(\pm\) signs of the cross-blocks matrix elements, coupling \(M_{S}\) components belonging to different \(S\)-blocks in \(\hat{H}^{\rm eff}\), and
* expressing all rank-2 tensors of \(\hat{H}^{\rm MS}\) in the molecular PAF to finally determine the missing quantities by minimizing the mismatch between \(\hat{H}^{\rm MS}\) and \(\hat{H}^{\rm eff}\).
Figure 1: Molecular geometries of the binuclear complexes **1** and **2**, and of the respective **a** and **b** models obtained by substituting one Co atom by a Zn one. H atoms are omitted for clarity. Red, green and blue correspond to O, Cl and N atoms, respectively.
A recent erratum to Reference 4 also raised the issue of conflicting signs mentioned at point (iii). In order to properly project \(\hat{H}^{\rm eff}\) onto \(\hat{H}^{\rm MS}\), it is essential to use correct prefactors. In fact, revision of such conflicting signs practically eliminates the need to introduce tensors of rank higher than 2 to achieve agreement between each and every matrix element of \(\hat{H}^{\rm MS}\) and \(\hat{H}^{\rm eff}\), regardless of the coordination symmetry. Thus, the present article concludes that the resolution of \(\hat{H}^{\rm MS}\) for \([{\rm Ni}_{2}({\rm en})_{4}{\rm Cl}_{2}]^{2+}\), en = ethylenediamine, was in fact straightforward, and already properly done in the seminal publication [65].
The subsequent sections commence with a brief overview of the standard \(\hat{H}^{\rm MS}\) and its validation by effective Hamiltonians. This is followed by an outline of the proposed strategy for utilizing these two concepts in generally deriving magnetic properties for binuclear complexes. The results and discussion extends over two major sections showcasing the resolution of \(\hat{H}^{\rm MS}\), and the subsequent calculation of the \(\chi T\) profile, for the centrosymmetric complex \({\bf 1}\) firstly, and for the unsymmetrical complex \({\bf 2}\), secondly. The article concludes that, when constructed appropriately, the standard multispin Hamiltonian correctly describes the effective magnetic interactions even in the weak-exchange limit, thus justifying its general relevance in the experimental design of SMMs based on d-elements. Finally, a series of take-home messages will be delivered to the attention of the experimental community involved in molecular magnetism, emphasizing once more the strong need for good, independent, computational
input to consistently interpret the data.
## 2 Theory and computational strategy
### The multispin model Hamiltonian for binuclear complexes
With \(a\) and \(b\) labeling the two magnetic centers, the model multispin Hamiltonian expression reads [33, 36, 69]:
\[\hat{H}^{\rm MS} = J\hat{S}_{a}\hat{S}_{b}\,+\,\hat{S}_{a}\bar{\bar{D}}_{a}\hat{S}_{a }\,+\,\hat{S}_{b}\bar{\bar{D}}_{b}\hat{S}_{b}\,+\,\hat{S}_{a}\bar{\bar{D}}_{ab} \hat{S}_{b}+\,\bar{d}\hat{S}_{a}\times\hat{S}_{b} \tag{1}\]
where \(\hat{S}_{a}\) and \(\hat{S}_{b}\) are spin operators, \(J\) is the Heisenberg exchange magnetic coupling, \(\bar{\bar{D}}_{a}\) and \(\bar{\bar{D}}_{b}\) are rank-2 tensors describing the local anisotropies, \(\bar{\bar{D}}_{ab}\) is a rank-2 tensor describing the symmetric anisotropic exchange, and \(\bar{d}\) is a pseudovector describing the antisymmetric component of the anisotropic exchange. In centrosymmetric complexes, the latter term of Equation 1, also referred to as the Dzyaloshinskii-Moriya interaction (DMI) [70, 71, 72, 73, 74, 75], vanishes, and otherwise, may be less important than the other terms unless specific situations are encountered (exotic coordination environment and/or orbital near-degeneracy). In fact, this work does not specifically focus on the DMI, which will be justified later by comparing \(\hat{H}^{\rm eff}\) and \(\hat{H}^{\rm MS}\).
In order to describe interaction with an applied magnetic field, \(\vec{B}\), Equation 1 is completed by the Zeeman Hamiltonian:
\[\hat{H}^{\rm Zee}=\mu_{B}\vec{B}\bar{\bar{g}}_{a}\hat{S}_{a}\,+\,\mu_{B}\vec{B }\bar{\bar{g}}_{b}\hat{S}_{b} \tag{2}\]
where \(\mu_{B}\) is the Bohr magneton and \(\bar{\bar{g}}_{a}\) and \(\bar{\bar{g}}_{b}\) are the local Zeeman splitting tensors. The rank-2 tensors of Equations 1 and 2 are represented as 3\(\times\)3 matrices consisting of up to nine non-zero components in arbitrary _xyz_ frames. The tensors are of course diagonal in their respective PAFs. Though the axial (\(D_{a}\), \(D_{b}\), \(D_{ab}\)) and rhombic (\(E_{a}\), \(E_{b}\), \(E_{ab}\)) local and exchange anisotropic parameters depend only on the diagonal elements of the respective
tensors (\(\bar{D}_{a}\), \(\bar{D}_{b}\), \(\bar{D}_{ab}\)), following \(D=\nicefrac{{3}}{{2}}D_{zz}\) and \(E=\nicefrac{{1}}{{2}}(D_{xx}-D_{yy})\), it should be noted that the PAFs of \(\bar{D}_{a}\), \(\bar{D}_{b}\) and \(\bar{D}_{ab}\) may not coincide with each other in the general case, nor with those of \(\bar{g}_{a}\) and \(\bar{g}_{b}\). Thus, symmetry is key to understanding how to practically deal with so many tensors.
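As an illustration of these conventions, the following minimal Python sketch (ours, not part of any published workflow) diagonalizes a symmetric rank-2 tensor and returns \(D\), \(E\) and the corresponding PAF; the axis-labeling convention \(|D_{zz}|\geq|D_{yy}|\geq|D_{xx}|\), which keeps \(|E/D|\leq\nicefrac{{1}}{{3}}\), is an assumption on our side.

```python
import numpy as np

def zfs_parameters(d_tensor):
    """Return (D, E, V) from a symmetric 3x3 anisotropy tensor.
    D = 3/2 D_zz and E = 1/2 (D_xx - D_yy) in the principal axis frame;
    V collects the principal axes (columns ordered as x, y, z)."""
    d = np.asarray(d_tensor, dtype=float)
    d = 0.5 * (d + d.T)                      # enforce symmetry
    d = d - np.trace(d) / 3.0 * np.eye(3)    # keep only the traceless part
    evals, evecs = np.linalg.eigh(d)
    order = np.argsort(np.abs(evals))        # assumed labelling: |D_xx| <= |D_yy| <= |D_zz|
    evals, evecs = evals[order], evecs[:, order]
    dxx, dyy, dzz = evals
    # sign conventions for E vary; swap the x and y labels if a positive E is preferred
    return 1.5 * dzz, 0.5 * (dxx - dyy), evecs
```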
If centrosymmetry is present, one may expect that all the tensors of Equations 1 and 2 are diagonal in the same coordinate frame, which effectively corresponds to the molecular PAF. This facilitates the straightforward construction of \(\hat{H}^{\rm MS}\) and the extraction of relevant axial and rhombic parameters using the effective Hamiltonian theory. However, if centrosymmetry is not present, two additional challenges arise. Firstly, one must determine the molecular PAF. Secondly, one must express all the rank-2 tensors in this coordinate frame. In this scenario, the rank-2 tensors may no longer be diagonal, although symmetry may still impose some of the off-diagonal elements to be zero in specific situations.
\(\hat{H}^{\rm MS}\) and \(\hat{H}^{\rm Zee}\) are both constructed in the uncoupled spin-basis, \(|S_{a},M_{Sa};S_{b},M_{Sb}\rangle\), or shortly \(|M_{Sa},M_{Sb}\rangle\). Since we deal with d\({}^{7}\) Co(II) centers, \(S_{a}=S_{b}=\nicefrac{{3}}{{2}}\) and \(M_{Sa},M_{Sb}\in\{\pm\nicefrac{{3}}{{2}},\pm\nicefrac{{1}}{{2}}\}\), resulting in a 16\(\times\)16 model space. Thus, the matrices of all terms in Equations 1 and 2 must be expressed in this same 16\(\times\)16 basis. \(\hat{H}^{\rm MS}\) is designed here to reproduce the energy levels generated by the coupling of the local spin-quartets, _i.e._, the septet (\(S=3\)), quintet (\(S=2\)), triplet (\(S=1\)) and singlet (\(S=0\)) coupled-spin states. Translation of \(\hat{H}^{\rm MS}\) from the uncoupled-spin basis, \(|M_{Sa},M_{Sb}\rangle\), to the coupled-spin basis, \(|S,M_{S}\rangle\) (\(S=0,1,2,3\), \(M_{S}=\pm S\)), is achieved via the transformation matrix U based on Clebsch-Gordan (CG) coefficients: [36]
\[\hat{H}^{\rm MS}\mbox{(coupled)}{=\mbox{ U}^{\rm T}.\ }\hat{H}^{\rm MS}\mbox{( uncoupled)}\ \mbox{\raisebox{-1.29pt}{\mbox{U}}} \tag{3}\]
where U\({}^{\rm T}\) is the transpose of U. The 16\(\times\)16 matrix U of CG coefficients used in this work is provided in Table S1 of the Supporting Information (SI) file.
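A minimal sketch of how the matrix U of Equation 3 can be generated and applied is given below (Python, using the Clebsch-Gordan routine of SymPy); the ordering of the uncoupled \(|M_{Sa},M_{Sb}\rangle\) kets and of the coupled \(|S,M_{S}\rangle\) kets is our choice, and any consistent ordering works as long as the same one is used for \(\hat{H}^{\rm MS}\) and \(\hat{H}^{\rm eff}\).

```python
import numpy as np
from sympy import S
from sympy.physics.quantum.cg import CG

s = S(3) / 2                                     # local spins S_a = S_b = 3/2
ms = [s - k for k in range(int(2 * s) + 1)]      # +3/2, +1/2, -1/2, -3/2
uncoupled = [(ma, mb) for ma in ms for mb in ms]  # |M_Sa, M_Sb>, M_Sa slowest index
coupled = [(Stot, Stot - k) for Stot in (3, 2, 1, 0) for k in range(2 * Stot + 1)]

# U[i, j] = <M_Sa, M_Sb | S, M_S>, a real orthogonal 16 x 16 matrix
U = np.array([[float(CG(s, ma, s, mb, S(Sv), S(Mv)).doit())
               for (Sv, Mv) in coupled]
              for (ma, mb) in uncoupled])

def to_coupled(h_uncoupled):
    """Basis change of Eq. 3: H(coupled) = U^T H(uncoupled) U."""
    return U.T @ h_uncoupled @ U
```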
### Construction of the effective Hamiltonian
\(\hat{H}^{\rm eff}\) spans the same model space as \(\hat{H}^{\rm MS}\) and it is initially constructed in the coupled-spin basis, in accord with the expressions of the underlying SOCI wavefunctions, by means of the des Cloizeaux formalism [76]:
\[\langle\phi_{i}|\hat{H}^{\rm eff}|\phi_{j}\rangle=\sum_{K}E_{K}\left\langle\phi_{i}|\psi_{K}^{L}\right\rangle\left\langle\psi_{K}^{L}|\phi_{j}\right\rangle \tag{4}\]
where \(\phi_{i}\), \(\phi_{j}\) are model-space \(|S,M_{S}\rangle\) spin functions, \(\psi_{K}^{L}\) are Lowdin-orthonormalized projections of the SOCI wavefunctions onto the model interaction space, and the \(E_{K}\)'s are the lowest SOCI energies (4 in the case of a mononuclear cobalt(II) complex and 16 in the case of a binuclear one). Note that Equation 3 may also be used to transform \(\hat{H}^{\rm eff}\) from the coupled-spin basis to the uncoupled-spin one.
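A numerical sketch of Equation 4 is given below (Python/NumPy; the assumption that the projections of the SOCI states onto the 16 model kets are supplied as a complex matrix is ours). The projected vectors are Löwdin orthonormalized and the spectral sum is then assembled, which yields a Hermitian effective Hamiltonian.

```python
import numpy as np

def des_cloizeaux(energies, projections):
    """Build H_eff = sum_K E_K |psi_K~><psi_K~| (Eq. 4).

    energies    : (n,) lowest SOCI energies (n = 16 for the binuclear case)
    projections : (n_model, n) matrix C with C[i, K] = <phi_i|Psi_K>, i.e. the
                  components of the K-th SOCI state on the model kets phi_i
    """
    C = np.asarray(projections, dtype=complex)
    # Loewdin orthonormalization of the projected vectors: C~ = C (C^dagger C)^(-1/2)
    overlap = C.conj().T @ C
    w, V = np.linalg.eigh(overlap)
    C_tilde = C @ (V @ np.diag(1.0 / np.sqrt(w)) @ V.conj().T)
    # Hermitian effective Hamiltonian expressed in the model basis
    return C_tilde @ np.diag(np.asarray(energies, dtype=float)) @ C_tilde.conj().T
```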
### Proposed step-by-step strategy for extracting the multispin Hamiltonian
**Step 1** The first step involves determining the molecular PAF. Note that, unlike the energies, the composition of the SOCI wavefunctions varies with the molecular orientation such that their interpretation may become needlessly complex in the event that the cartesian \(z\)-axis does not coincide with the molecular PAF. For both the studied complexes (**1** and **2**), the molecular PAF was determined from the resolution of \(\hat{H}^{\rm mod}\)=\(\hat{S}\bar{\bar{D}}_{S=3}\hat{S}\), describing only the
ground \(S=3\) state. A similar strategy was employed for determining the anisotropy axes of [Ni\({}_{2}\)(en)\({}_{4}\)Cl\({}_{2}\)]\({}^{2+}\).[65] The procedure consists in:
* perform a first reference SOCI calculation with an arbitrary _xyz_ frame, construct the 16\(\times\)16 matrix of \(\hat{H}^{\rm eff}\) as described in the previous subsection, and focus only on the \(S=3\) block. This procedure is in essence different from simply following the giant spin approach: in the case of spin intercalation and/or spin mixing, it ensures that we properly extract the actual PAF of the \(S=3\) block.
* derive the analytical matrix representation of \(\hat{H}^{\rm mod}\)=\(\hat{S}\bar{\bar{D}}_{S=3}\hat{S}\) in the \(|S=3,M_{S}\rangle\) space and extract \(\bar{\bar{D}}_{S=3}\) by best equating \(\hat{H}^{\rm eff}\) from point (i) to \(\hat{H}^{\rm mod}\). Since this matrix is uncommon in the transition metal literature (an \(S=3\) state is impossible to reach within a d\({}^{n}\) manifold), we provide it in Table 1.
* rotate the molecular coordinates from the arbitrary _xyz_ frame to the molecular PAF: \(xyz_{i}^{\rm MPAF}=V^{-1}\cdot xyz_{i}^{\rm arb}\), where \(xyz_{i}^{\rm arb}\) are the coordinates of all the \(i\) atoms in the arbitrary frame and \(V\) is the eigenvector matrix of \(\bar{\bar{D}}_{S=3}\) (convention may apply for the respective labeling of the \(x\), \(y\) and \(z\) axes).
With the molecular geometry rotated in the molecular PAF, the SOCI calculations of Step 1 are repeated and the 16\(\times\)16 representative matrix of \(\hat{H}^{\rm eff}\) is re-built. This is the \(\hat{H}^{\rm eff}\) that will be essentially used for the validation and resolution of \(\hat{H}^{\rm MS}\). However, note that the exact same conclusions can be reached by extracting it all in an arbitrary axis frame (in a more tedious way!).
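A sketch of points (i)-(iii) is given below (Python/NumPy, under the assumption that the \(7\times 7\) \(S=3\) block of \(\hat{H}^{\rm eff}\) is available as an array). Rather than hard-coding the entries of Table 1, \(\bar{\bar{D}}_{S=3}\) is obtained by a linear least-squares fit against the operator form \(\hat{S}\bar{\bar{D}}_{S=3}\hat{S}\), and its eigenvectors are then used to rotate the atomic coordinates into the molecular PAF.

```python
import numpy as np

def spin_matrices(S):
    """Return (Sx, Sy, Sz) in the |S, M_S> basis, with M_S = S, S-1, ..., -S."""
    m = np.arange(S, -S - 1, -1)
    sz = np.diag(m)
    sp = np.zeros((len(m), len(m)))
    for k in range(1, len(m)):            # S+ |S, m> = sqrt(S(S+1) - m(m+1)) |S, m+1>
        sp[k - 1, k] = np.sqrt(S * (S + 1) - m[k] * (m[k] + 1))
    sm = sp.T
    return 0.5 * (sp + sm), -0.5j * (sp - sm), sz

def fit_D_S3(h_block):
    """Least-squares fit of the symmetric tensor D in H = S.D.S to a 7x7 block."""
    Sx, Sy, Sz = spin_matrices(3)
    ops = {'xx': Sx @ Sx, 'yy': Sy @ Sy, 'zz': Sz @ Sz,
           'xy': Sx @ Sy + Sy @ Sx, 'xz': Sx @ Sz + Sz @ Sx, 'yz': Sy @ Sz + Sz @ Sy}
    keys = list(ops)
    A = np.array([ops[k].ravel() for k in keys]).T          # 49 x 6 design matrix
    b = np.asarray(h_block, dtype=complex).ravel()
    coef, *_ = np.linalg.lstsq(np.vstack([A.real, A.imag]),
                               np.concatenate([b.real, b.imag]), rcond=None)
    D = np.zeros((3, 3))
    idx = {'x': 0, 'y': 1, 'z': 2}
    for k, c in zip(keys, coef):
        i, j = idx[k[0]], idx[k[1]]
        D[i, j] = D[j, i] = c
    return D

def rotate_to_paf(xyz, D):
    """Express row-wise atomic coordinates in the PAF of D (columns of V)."""
    _, V = np.linalg.eigh(D)      # relabel/reorder the columns per the chosen convention
    return xyz @ V                # equivalent to (V^-1 xyz^T)^T for orthogonal V
```

Note that the isotropic part of the fitted tensor absorbs the average energy of the block; subtracting \(\mathrm{tr}(\bar{\bar{D}}_{S=3})/3\) recovers the traceless form from which \(D\) and \(E\) are usually quoted.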
**Step 2** SOCI calculations are conducted with two model monomeric structures defined by substitution of a magnetic center with a diamagnetic one. In the present cases, models **1a**, **1b** and **2a**, **2b** (Figure 1) are obtained by replacing one Co with one Zn in the dimeric structures **1** and **2**, respectively. The goal here is to extract independently the local anisotropy tensors, \(\bar{\bar{D}}_{a}\) and \(\bar{\bar{D}}_{b}\), for the magnetic centers, determine their respective PAFs, and calculate the corresponding axial and rhombic parameters \(D_{a}\), \(E_{a}\), \(D_{b}\), \(E_{b}\). For each magnetic center,
the eigenvectors of \(\bar{D}\), leading to \(\bar{D}^{\rm diag}\), provide the rotation matrix \(V\) that can be used to express the local tensors back in the molecular PAF, through:
\[\bar{\bar{D}}({\rm molecular\ PAF})=V\bar{\bar{D}}^{\rm diag}V^{-1} \tag{5}\]
**Step 3** The 16\(\times\)16 model analytical matrix of \(\hat{H}^{\rm MS}\) in the uncoupled-spin basis is derived with the leading terms of it, _i.e._, the Heisenberg and the local anisotropy terms, by applying simple spin-operator algebra. This matrix is subsequently transformed in the coupled-spin basis using Equation 3.
Afterwards, the quality of the model matrix can be evaluated by expressing it numerically and comparing it to \(\hat{H}^{\rm eff}\). The smaller the deviation, the better the model. To shortcut the extraction, one may directly use the spin-free \(J\) value and the local anisotropy tensors calculated independently at Step 2 (of course, expressed in the molecular PAF for consistency). In principle, this approach should already provide a good representation of \(\hat{H}^{\rm eff}\) since (i) the effect of the SOC on \(J\) is usually quite small [65, 30] and (ii) the symmetric anisotropy exchange tensor may be of lesser importance than the local ones. Note that we may also refine the \(J\) value and the components of the local anisotropy tensors, as well as introduce the symmetric exchange tensor in our model, to further improve it.
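Equivalently, the 16\(\times\)16 numerical matrix of the leading terms of \(\hat{H}^{\rm MS}\) can be generated without any hand derivation by writing the local spin operators as Kronecker products. The sketch below (Python/NumPy, reusing the spin_matrices helper of the previous sketch; \(J\), \(\bar{\bar{D}}_{a}\) and \(\bar{\bar{D}}_{b}\) are supplied as hypothetical 3\(\times\)3 numerical inputs) builds the model matrix in the uncoupled basis and measures its deviation from \(\hat{H}^{\rm eff}\).

```python
import numpy as np

def h_multispin(J, D_a, D_b, D_ab=None, s_local=1.5):
    """Leading terms of Eq. 1, J Sa.Sb + Sa.Da.Sa + Sb.Db.Sb (+ Sa.Dab.Sb),
    in the uncoupled |M_Sa, M_Sb> product basis (16x16 for two S = 3/2 ions);
    the product ordering matches the uncoupled list used to build U above."""
    sx, sy, sz = spin_matrices(s_local)
    one = np.eye(sx.shape[0])
    sa = [np.kron(op, one) for op in (sx, sy, sz)]   # operators of center a
    sb = [np.kron(one, op) for op in (sx, sy, sz)]   # operators of center b
    h = J * sum(sa[i] @ sb[i] for i in range(3))
    for i in range(3):
        for j in range(3):
            h = h + D_a[i, j] * sa[i] @ sa[j] + D_b[i, j] * sb[i] @ sb[j]
            if D_ab is not None:                     # optional symmetric exchange term
                h = h + D_ab[i, j] * sa[i] @ sb[j]
    return h

# mismatch against the ab initio effective Hamiltonian (same axis frame; U is the
# CG matrix of Eq. 3 if H_eff is expressed in the coupled basis):
# h_model = U.T @ h_multispin(J, D_a, D_b) @ U
# print(np.linalg.norm(h_model - h_eff))
```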
### Computational details
Electronic structure calculations were performed with the ORCA package, v5.0.3. [77, 78] Scalar relativistic effects were introduced by using the Douglas-Kroll-Hess (DKH) Hamiltonian. [79, 80, 81, 82] The Co, N, O and Cl electrons were treated with the all-electron, triple-zeta DKH-def2-TZVP basis sets whereas the C, F and H electrons were treated with the smaller, double-zeta DKH-def2-SVP ones. These bases were derived from the original def2 variants by contraction within the DKH framework. [83] To speed up the calculation of electron repulsion integrals, the RIJCOSX "chain-of-spheres" density-fitting technique [84] was applied together
with large, automatically-generated auxiliary basis sets [85]. The geometries of **1** and **2** were extracted from published crystal information data [66, 68]. Concerning **2**, the H coordinates were optimized within the DFT framework, using the Perdew-Burke-Ernzerhof generalized gradient approximation [86], and otherwise the same details as above hold. The _xyz_ coordinates in the initial, arbitrary, coordinate frames are provided in the SI.
The zero-order wavefunctions were converged within the state-averaged (SA) CASSCF framework, with the active space spanned by the Co(II) d\({}^{7}\) shell(s). _I.e._, CASSCF(14, 10) and CASSCF(7, 5) calculations were performed for the dimeric species **1** and **2** and for the monomeric species **1a**, **1b**, **2a** and **2b**, respectively. The production-level SA schemes included nine states per \(S=\) 3, 2, 1 and 0 block for **1**, three \(S=\nicefrac{{3}}{{2}}\) states for **1a** and **1b**, 49 states per \(S=\) 3, 2, 1 and 0 block for **2**, and seven \(S=\nicefrac{{3}}{{2}}\) states for **2a** and **2b**. Regarding complex **1** and its monomers, additional SA schemes were explored for validation purposes, which will be discussed in the results sections. The SA scheme employed for **2** was validated in Reference [67].
State energies including dynamic correlation effects were calculated with the strongly-contracted, n-electron valence state perturbation theory at second order (NEVPT2) [87, 88]. For **2** in particular, the relative correlated energies for the lowest \(S=\) 3, 2, 1 and 0 states were additionally recorrected by difference dedicated CI (DDCI2) and iterative DDCI2 calculations [89, 90], with an orbital-energy cutoff between \(-10\) and \(1000\) Hartree and _tsel_ and _tpre_ thresholds set to 1e-10 and 1e-6 respectively. In this way, a better \(J\) value was obtained, or in other words, a better separation of the spin blocks. In the iterative DDCI2 calculation, a limit of ten iterations was employed.
Finally, the production SOCI calculations were performed for all the dimeric and monomeric species. In these calculations, the spin-orbit operator matrix was constructed in the basis of the spin components of the CASSCF states. The SOCI matrix is constituted of off-diagonal elements, triggered by the spin-orbit coupling, and also of diagonal ones, the electronic spin-free energies. The diagonal was dressed with dynamically-correlated energies and the
resulting matrix was diagonalized to generate the spin-orbit coupled wavefunctions and energies. Hereafter, SOCI calculations based on NEVPT2 and DDCI2 correlated energies will be referred as SO-NEVPT2 and SO-DDCI2 respectively. We point out additionally that, in the SO-DDCI2 calculations, only the relative energies of the lowest \(S\) = 3, 2, 1, 0 CASSCF states were adjusted by the DDCI2 energy spacings, whereas the NEVPT2 relative energies were retained for the remaining 192 CASSCF states. The intention here was to characterize as best as possible the spin-orbit states that may be populated in the temperature range employed in experimental magnetic susceptibility studies. Since other excited states appear at much higher energies, above 2800 cm\({}^{-1}\), they do not contribute much to the observed properties.
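The dressing step described above amounts to a simple diagonal replacement before diagonalization; a minimal sketch is given below (ours, with the array layout as an assumption: one entry per spin component of each CASSCF state).

```python
import numpy as np

def dressed_soci(h_soci, e_spinfree, e_correlated):
    """Replace the spin-free diagonal of a SOCI matrix by dynamically-correlated
    energies (the 'dressed' SOCI scheme), then diagonalize."""
    h = np.array(h_soci, dtype=complex)
    idx = np.arange(h.shape[0])
    h[idx, idx] += np.asarray(e_correlated) - np.asarray(e_spinfree)
    energies, vectors = np.linalg.eigh(h)
    return energies - energies.min(), vectors
```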
Finally, powder-averaged _ab initio_\(\chi T\) curves were generated for complexes **1** and **2** directly from the outcomes of the SOCI calculations. Note that, in ORCA, \(\chi\) is calculated by finite differentiation of the partition function by using the (field-corrected) spin-orbit states [46]. Furthermore, by using the same approach, we have modeled the \(\chi T\) curves based on the pre-validated \(\hat{H}^{\text{MS}}\) + \(\hat{H}^{\text{Zee}}\) models, through the usual approximation,
\[\chi_{\alpha}\approx\frac{M}{B}=\frac{k_{B}T}{\mu_{B}}\frac{\partial\text{ln} Z}{\partial B_{\alpha}}\frac{1}{B_{\alpha}} \tag{6}\]
as well as by computing the full \(\chi\) tensor,
\[\chi_{\alpha,\beta}=\frac{N_{A}k_{B}T}{10}\frac{\partial^{2}\text{ln}Z}{ \partial B_{\alpha}\partial B_{\beta}} \tag{7}\]
where \(M\) is the magnetization, \(B=0.2\) T is the applied magnetic field, \(\alpha\) and \(\beta\) are two field directions, \(Z=\sum_{i}^{16}e^{-\frac{E_{i}}{k_{B}T}}\) is the partition function, and \(N_{A}\) and \(k_{B}\) are the Avogadro and Boltzmann constants, respectively. For the present dicobalt(II) cases, Equations 6 and 7 led to practically identical \(\chi T=f(T)\) model curves.
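For completeness, a sketch of how Equation 6 translates into code is given below (Python/NumPy, taking the model matrix and the Kronecker-product spin operators of the previous sketches as input); the isotropic local \(g\) factors, the three Cartesian field directions used for the powder average and the omission of the molar conversion factor are our simplifying assumptions.

```python
import numpy as np

MU_B = 0.46686   # Bohr magneton, approximately, in cm^-1 T^-1
K_B = 0.69504    # Boltzmann constant, approximately, in cm^-1 K^-1

def powder_chiT(h_model, sa_ops, sb_ops, g_a, g_b, temps, field=0.2):
    """chi*T per molecule (in mu_B K T^-1) following Eq. 6: chi ~ M/B with
    M = (k_B T / mu_B) dlnZ/dB, evaluated by a symmetric finite difference
    and averaged over the three Cartesian field directions."""
    temps = np.asarray(temps, dtype=float)
    curves = []
    for u in np.eye(3):
        def ln_z(b):
            zeeman = sum(MU_B * b * u[i] * (g_a * sa_ops[i] + g_b * sb_ops[i])
                         for i in range(3))
            e = np.linalg.eigvalsh(h_model + zeeman)
            e = e - e.min()
            return np.log(np.exp(-np.outer(1.0 / (K_B * temps), e)).sum(axis=1))
        dlnz_db = (ln_z(field + 1e-4) - ln_z(field - 1e-4)) / 2e-4
        m = (K_B * temps / MU_B) * dlnz_db       # magnetization in Bohr magnetons
        curves.append(m / field * temps)         # chi*T along this field direction
    return np.mean(curves, axis=0)
```

Multiplying the returned values by \(N_{A}\mu_{B}\), with the appropriate unit conversion, gives the molar \(\chi T\) in the customary cm\({}^{3}\) K mol\({}^{-1}\) units.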
## Results and discussion
### \([\text{Co}_{2}\text{Cl}_{6}]^{2-}\), a centrosymmetric complex
#### Introduction and spin-free spectrum
Magnetic properties in this edge-sharing, bitetrahedral ion were measured by Su et al. [66] Each Co(II) center adopts an orbitally non-degenerate ground state (GS), \({}^{4}A_{2}\), correlating with the \({}^{4}F\) ground term of the isolated ion. The \(\chi T\) profile was fitted by an anisotropic model with a relatively large antiferromagnetic Heisenberg exchange coupling, \(J\)=23.2 cm\({}^{-1}\) (\(J\hat{S}_{a}\hat{S}_{b}\) formalism), a large local anisotropy parameter, \(D\)=29 cm\({}^{-1}\), and an isotropic \(g\)-factor of 2.25. With first-principles calculations, de Graaf and Sousa noted that the anisotropy parameter is well approximated by that of a \([\text{CoCl}_{4}]^{2-}\) structural model [67]. However, the calculated \(J\) value, 34.3 cm\({}^{-1}\) with second-order CAS perturbation theory (CASPT2) [91], overshot the fitted \(J\). Furthermore, SOCI delivered a low-energy spin-orbit spectrum plagued by a large admixture of septet (\(S\)=3), quintet (\(S\)=2), triplet (\(S\)=1) and singlet (\(S\)=0) spin components. The authors concluded the impossibility of calculating \(J\) and \(D\) with \(\hat{H}^{\text{MS}}\) of Equation 1 and reiterated the difficulties of describing magnetic properties in the weak-exchange regime [21].
Our first principles calculations support the previous theoretical data. Namely, complex **1** adopts a spin-singlet GS with a Landé ordering for the triplet, quintet and septet spin-states with antiferromagnetic \(J\) of 7.5 (CASSCF) / 11.2 cm\({}^{-1}\) (NEVPT2). The \(J\) value matches
Figure 2: Arbitrarily-chosen coordinate frame and calculated PAFs for complex **1** (from \(\bar{\bar{D}}_{S=3}\)) and for the model structure **1a** (from \(\bar{\bar{D}}_{a}\)). Axes color code: \(z\)=red, \(x\)=blue, \(y\)=green.
that of Ref. 67 within 1 cm\({}^{-1}\) at the CASSCF level, but it is three times smaller when dynamic correlation is included. Although the discrepancy may originate from the choice of the PT2 flavor, here NEVPT2 vs. CASPT2 in Reference 67, both these methods deliver similar accuracy when evaluated against the fitted \(J\) value of 23.2 cm\({}^{-1}\); _i.e._ if CASPT2 overestimates it by 11.1 cm\({}^{-1}\), NEVPT2 underestimates it by 12 cm\({}^{-1}\). In this context, DDCI2 calculations led to \(J=21.3\) cm\({}^{-1}\), within \(\sim\)2 cm\({}^{-1}\) of the fitted value. Iterative DDCI2 calculations did not converge well on a particular value and delivered \(J\) within the \(\sim\)18-21.5 cm\({}^{-1}\) range in 10 iterations, with the mean \(J=20.9\) cm\({}^{-1}\) still in excellent agreement with the fitted \(J\).
#### Spin-orbit spectrum and determination of the molecular PAF
SOCI calculations were initially performed with complex **1** oriented in the arbitrarily-chosen coordinate frame shown in Figure 2, _i.e._ with \(z\) and \(y\) collinear with the Co\({}_{2}\) and \(\mu\)-Cl\({}_{2}\) internuclear axis, respectively, and \(x\) perpendicular to the Co\(-\)(\(\mu\)Cl\({}_{2}\))\(-\)Co plane. The SOC splits and intercalates the \(|S,M_{S}\rangle\) components of the spin-free states, resulting in 16 low-lying levels stretching up to 123 cm\({}^{-1}\) (see Table S2) and a continuum of excited levels above \(\sim\)2700 cm\({}^{-1}\). The low-energy spectrum originates almost entirely from SOC admixture of the 16 \(M_{S}\) components of the ground \(S=3\), 2, 1 and 0 spin-free states. The SO-NEVPT2
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \(\hat{H}^{\rm mod}\) & \(|3,3\rangle\) & \(|3,2\rangle\) & \(|3,1\rangle\) & \(|3,0\rangle\) & \(|3,-1\rangle\) & \(|3,-2\rangle\) & \(|3,-3\rangle\) \\ \hline \(\langle 3,3|\) & 79.0 & 1.0+2.0\(i\) & \(-\)15.0\(-\)6.0\(i\) & 0 & 0 & 0 & 0 \\ \(\langle 3,2|\) & 1.0\(-\)2.0\(i\) & 87.0 & 1.0+1.0\(i\) & \(-\)21.0 \(-\)9.0\(i\) & 0 & 0 & 0 \\ \(\langle 3,1|\) & \(-\)15.0+6.0\(i\) & 1.0\(-\)1.0\(i\) & 92.0 & 1.0\(i\) & \(-\)23.0\(-\)10.0\(i\) & 0 & 0 \\ \(\langle 3,0|\) & 0 & \(-\)21.0+9.0 \(i\) & \(-\)1.0\(i\) & 93.0 & \(-\)1.0\(i\) & \(-\)21.0\(-\)9.0\(i\) & 0 \\ \(\langle 3,-1|\) & 0 & 0 & \(-\)23.0+10.0\(i\) & 1.0\(i\) & 92.0 & \(-\)1.0\(-\)1.0\(i\) & \(-\)15.0\(-\)6.0\(i\) \\ \(\langle 3,-2|\) & 0 & 0 & 0 & \(-\)21.0+9.0\(i\) & \(-\)1.0+1.0\(i\) & 87.0 & \(-\)1.0\(-\)2.0\(i\) \\ \(\langle 3,-3|\) & 0 & 0 & 0 & 0 & \(-\)15.0+6.0\(i\) & \(-\)1.0+2.0\(i\) & 79.0 \\ \hline \end{tabular}
\end{table}
Table 2: Matrix elements, in cm\({}^{-1}\), of the \(S=3\) block within the 16\(\times\)16 matrix of \(\hat{H}^{\rm eff}\), constructed from SO-NEVPT2 calculations on complex **1** oriented in the arbitrary coordinate frame
wavefunctions listed in Table S3 support this statement and demonstrate that each of these states contains more than 90% summed contributions from such \(|S,M_{S}\rangle\) components. This indicates that the majority of the physics is captured by the low-energy _ab initio_ spectrum, which therefore is adequate for the construction of \(\hat{H}^{\rm eff}\) and validation of \(\hat{H}^{\rm MS}\).
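For readers unfamiliar with the construction, a minimal numerical sketch of an effective Hamiltonian built from SOCI outputs is given below. It assumes the des Cloizeaux formulation that is customary in this context (projection of the spin-orbit eigenvectors onto the model space followed by symmetric orthonormalization); variable names are illustrative, and the actual projections are those read from the SOCI wavefunction analysis.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power

def des_cloizeaux_heff(energies, V):
    """Effective Hamiltonian from n SOCI states spanning an n-dimensional model space.

    energies: (n,) spin-orbit energies (cm^-1)
    V:        (n, n) complex array; column k holds the projection of SOCI state k
              onto the model-space components (here the 16 |S, M_S> functions)."""
    S = V.conj().T @ V                                # overlaps of the projected vectors
    V_tilde = V @ fractional_matrix_power(S, -0.5)    # symmetric (Loewdin) orthonormalization
    return V_tilde @ np.diag(energies) @ V_tilde.conj().T
```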
Following the strategy presented above, the first step toward the resolution of \(\hat{H}^{\rm MS}\) involves identification of the molecular PAF. Here, the SOCI wavefunctions of Table S3 already show little spin-mixing due to misalignment of the \(z\)-axis with the principal magnetic anisotropy axis of the complex, meaning that the arbitrarily-chosen input frame is not very different from the molecular PAF. The numerical matrix elements of the \(|S=3,M_{S}\rangle\) block within \(\hat{H}^{\rm eff}\) are shown in Table 2. Comparison with the analytical \(\hat{H}^{\rm mod}\)=\(\hat{S}\bar{\bar{D}}_{S=3}\hat{S}\) matrix of Table 1 reveals a perfect one-to-one correspondence of the respective matrix elements, showing the validity of the model Hamiltonian. Moreover, it is clear that the matrix elements in \(\hat{H}^{\rm eff}\) that should vanish in the molecular PAF are very close to zero, _e.g._\(\langle 3,2|\,\hat{H}^{\rm mod}\,|3,3\rangle\), \(\langle 3,0|\,\hat{H}^{\rm mod}\,|3,1\rangle\), etc., meaning that the arbitrary axis frame is indeed not far from the molecular PAF. Indeed, extraction and diagonalization of \(\bar{\bar{D}}_{S=3}\) led to the molecular PAF shown in Figure 2, which essentially differs from the arbitrary frame by axis relabeling, _i.e._\(z\to x\), \(x\to y\) and \(y\to z\). SOCI performed with complex **1** oriented in the molecular PAF led to the wavefunctions printed in Table S4; these are slightly cleaner than the previous ones and show large spin-mixings between \(S\)=0 and \(S\)=2 spin-components, and between the \(S\)=1 and \(S\)=3 ones.
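A minimal sketch of this frame-determination step is given below: given a symmetric rank-2 tensor such as \(\bar{\bar{D}}_{S=3}\) (here an arbitrary placeholder), it returns the axial and rhombic parameters and the PAF axes, relabeled according to the usual \(E\geq 0\) and \(|D|\geq 3E\) conventions. Names and the example tensor are illustrative.

```python
import numpy as np

def principal_axis_frame(D):
    """Diagonalize a symmetric anisotropy tensor and label its axes by convention.

    Returns (D_axial, E_rhombic, R); the columns of R are the x, y, z PAF axes
    expressed in the input coordinate frame."""
    vals, vecs = np.linalg.eigh(D - np.trace(D) / 3.0 * np.eye(3))   # traceless part
    iz = int(np.argmax(np.abs(vals)))              # z axis: eigenvalue of largest magnitude
    ix, iy = [i for i in range(3) if i != iz]
    if vals[ix] < vals[iy]:                        # enforce E = (D_xx - D_yy)/2 >= 0
        ix, iy = iy, ix
    D_axial = 1.5 * vals[iz]
    E_rhombic = 0.5 * (vals[ix] - vals[iy])
    return D_axial, E_rhombic, vecs[:, [ix, iy, iz]]

# Illustrative use on a made-up tensor (cm^-1):
D_example = np.array([[4.0, 1.0, 0.0],
                      [1.0, -7.0, 2.0],
                      [0.0, 2.0, 3.0]])
D_ax, E_rh, R = principal_axis_frame(D_example)
```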
#### Extraction and validation of the multispin Hamiltonian
In order to assemble the model matrix of \(\hat{H}^{\rm MS}\) (Equation 1), we must first determine the local anisotropy tensors, \(\bar{\bar{D}}_{a}\) and \(\bar{\bar{D}}_{b}\), associated with the two Co centers, and from them the axial (\(D_{a}\) and \(D_{b}\)) and rhombic (\(E_{a}\) and \(E_{b}\)) local ZFS parameters (which are here respectively equal by symmetry). To this end, SO-NEVPT2 calculations were performed on the monomeric model structures **1a** and **1b** shown in Figure 1, both oriented in the molecular PAF determined above. Because of symmetry, one may here skip the computation of **1b**. However, doing it has two advantages: (i) it allows one to validate the extraction spreadsheets and (ii) it is generally applicable also in cases with no symmetry. Resolution of the anisotropic Hamiltonians, \(\hat{S}_{a}\bar{\bar{D}}_{a}\hat{S}_{a}\) and \(\hat{S}_{b}\bar{\bar{D}}_{b}\hat{S}_{b}\), was achieved following the effective Hamiltonian workflow for mononuclear complexes [45]. \(\bar{\bar{D}}_{a}\) and \(\bar{\bar{D}}_{b}\) are found to be identical here, as expected due to the centrosymmetry. Moreover, they are diagonal in the molecular PAF:
\[\bar{\bar{D}}_{a}=\bar{\bar{D}}_{b}=\begin{bmatrix}-2.1&0&0\\ 0&-9.5&0\\ 0&0&11.6\end{bmatrix} \tag{8}\]
which shows that \(\bar{\bar{D}}_{a}\), \(\bar{\bar{D}}_{b}\) and \(\bar{\bar{D}}_{S=3}\) share the same PAF. The ZFS parameters derived from matrix 8 are \(D_{a}\) = \(D_{b}\) = 17.4 cm\({}^{-1}\) and \(E_{a}\) = \(E_{b}\) = 3.7 cm\({}^{-1}\).
The 16\(\times\)16 analytical matrix of \(\hat{H}^{\rm MS}\) was initially derived in the uncoupled-spin basis, \(\left|M_{Sa},M_{Sb}\right\rangle\). Subsequently, the matrix was translated into the coupled-spin basis, \(\left|S,M_{S}\right\rangle\), and it is listed in Tables S5, S6, and S7. In the molecular PAF, the \(\hat{H}^{\rm MS}\) matrix simplifies greatly to that shown in Table 4, where all elements depend only on the axial and rhombic parameters of the local anisotropies, \(D_{a}\) and \(E_{a}\) (due to centrosymmetry, \(D_{b}\) and \(E_{b}\) can be replaced by \(D_{a}\) and \(E_{a}\) respectively), and of the symmetric exchange anisotropy, \(D_{ab}\) and \(E_{ab}\). The 16\(\times\)16 \(\hat{H}^{\rm eff}\) matrix derived from the SO-NEVPT2 calculation in the molecular PAF is given in Tables S8 and S9 for the coupled- and uncoupled-spin basis respectively.
In a first approximation, one may assume vanishing symmetric anisotropy, _i.e._ neglect the contribution of \(D_{ab}\) and \(E_{ab}\) to Table 4, and use the calculated \(J\)=11.2, \(D_{a}\)=17.4 and \(E_{a}\)=3.7 cm\({}^{-1}\) to evaluate \(\hat{H}^{\rm MS}\). The \(\hat{H}^{\rm MS}\) numerical matrix expressed in the coupled- and uncoupled-spin bases is shown in Tables S10 and S11, respectively. Concerning the coupled-spin basis, the correspondence between \(\hat{H}^{\rm MS}\) and \(\hat{H}^{\rm eff}\) matrices is outstanding at a first glance. A closer look reveals, however, that the \(\left\langle 0,0\right|\hat{H}^{\rm eff}\left|2,M_{S}\right\rangle\) and \(\left\langle 1,M_{S}\right|\hat{H}^{\rm eff}\left|3,M_{S}\right\rangle\) elements, as well as their complex-conjugates, have opposite sign compared to counterparts
in \(\hat{H}^{\rm MS}\). The sign discrepancy, reported and clarified in the erratum to Reference 4, occurs since the \(|S,M_{S}\rangle\) spin functions enter with arbitrary phases in the _ab initio_ spin-orbit eigenvectors, and may show up with random \(\pm 1\) prefactors between different _ab initio_ runs or runs on different computers. The sign arbitrariness becomes problematic when \(\hat{H}^{\rm eff}\) is translated in the uncoupled-spin basis using tabulated CG coefficients that follow a specific sign convention. The ones used in this work, shown in Table S1, follow the Condon and Shortley convention [92, 93]. Indeed, comparison between the \(\hat{H}^{\rm MS}\) and \(\hat{H}^{\rm eff}\) numerical matrices in the uncoupled-spin basis, S11 _vs._ S9, is rather poor, generating misleading conclusions. In order to obey sign conventions, one may express the spin-free wavefunctions in a basis of localized orbitals, pick a phase convention and derive analytically the expressions of the spin-orbit wavefunctions; finally, revise the CG coefficients according to the chosen phase convention. This path has been adopted in Reference 74. Pursuing such a scheme, although rigorous, is tedious and not appealing in general. Instead, one may follow the path taken in Reference 4 and revise the conflicting signs in \(\hat{H}^{\rm eff}\) such that projection onto \(\hat{H}^{\rm MS}\) using CG coefficients in the Condon-Shortley convention gives the smallest deviation. In this work, it is realized from the onset that, by construction, the model \(\hat{H}^{\rm MS}\) already yields the correct signs that must be adopted in \(\hat{H}^{\rm eff}\) itself. The sign-revised, coupled-spin basis \(\hat{H}^{\rm eff}\) is shown in Table S12 and the uncoupled-spin basis matrix derived thereof is shown in Table S13. Comparison with \(\hat{H}^{\rm MS}\) counterparts (Tables S10 and S11) reveals outstanding agreement regardless of basis, with maximum deviation not larger than 1 and 1.8 cm\({}^{-1}\) for the off-diagonal and diagonal elements respectively (see Tables S14 and S15). Three highly important conclusions are here drawn, (i) the local anisotropy parameters calculated independently using monomeric structures **1a** and **1b** are transferable to calculations on the dimeric complex **1**, (ii) the local anisotropies are the main contributors to \(\hat{H}^{\rm MS}\) and actually bring the model to very close agreement with \(\hat{H}^{\rm eff}\), and (iii) the approximate \(\hat{H}^{\rm MS}\), accounting only for the Heisenberg exchange and local anisotropies, is validated through the effective Hamiltonian theory and may be already used in modeling magnetic properties such as the \(\chi T\) curve.
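To make the basis-change step concrete, a minimal Python sketch is given below; it builds the 16\(\times\)16 transformation between the coupled \(|S,M_{S}\rangle\) and uncoupled \(|M_{Sa},M_{Sb}\rangle\) bases for two local spins \(S_{a}=S_{b}=3/2\) from Clebsch-Gordan coefficients in the Condon-Shortley convention (as provided by sympy). It illustrates the transformation itself, not the sign-revision step, which must be applied to \(\hat{H}^{\rm eff}\) beforehand.

```python
import numpy as np
from sympy import Rational
from sympy.physics.quantum.cg import CG

s = Rational(3, 2)                                     # local spins S_a = S_b = 3/2
uncoupled = [(ma, mb) for ma in [s - k for k in range(4)]
                      for mb in [s - k for k in range(4)]]
coupled = [(S, S - k) for S in (3, 2, 1, 0) for k in range(2 * S + 1)]

# U[i, j] = <S, M_S | M_Sa, M_Sb>, Condon-Shortley phases (real orthogonal matrix)
U = np.zeros((16, 16))
for i, (S, M) in enumerate(coupled):
    for j, (ma, mb) in enumerate(uncoupled):
        U[i, j] = float(CG(s, ma, s, mb, S, M).doit())
assert np.allclose(U @ U.T, np.eye(16))

def to_uncoupled(H_coupled):
    """Re-express a 16x16 matrix given in the coupled basis in the uncoupled basis."""
    return U.T @ H_coupled @ U
```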
Prior to modeling the \(\chi T\) curve, it may be noted that comparison between the analytical matrix of \(\hat{H}^{\rm MS}\) (Table 4) and the numerical matrix of \(\hat{H}^{\rm eff}\) (Table S12) provides enough equations for a full extraction of all the magnetic parameters at once. Thus, a more precise extraction can be performed on **1** in order to obtain \(J\) under the effect of SOC and both the local and the symmetric-exchange anisotropies. Such a full extraction has been performed by least-squares fitting of the \(\hat{H}^{\rm eff}\) matrix elements. The extracted parameters, listed in Table 3, lead to \(\hat{H}^{\rm MS}\) matrices (Tables S16 and S17 for the uncoupled- and coupled-spin basis respectively) that agree with \(\hat{H}^{\rm eff}\) within 1 cm\({}^{-1}\) concerning the off-diagonal and diagonal elements (see Tables S18 and S19). Furthermore, diagonalization of the model \(\hat{H}^{\rm MS}\) matrices leads to highly accurate spin-orbit energies, within 1.2 and 1.8 cm\({}^{-1}\) of the _ab initio_ energies (see Table S2), further supporting the validity of \(\hat{H}^{\rm MS}\) and of our extraction scheme.
Several aspects may be noted from Table 3: (i) since \(J\) only appears on the diagonal of \(\hat{H}^{\rm MS}\) in Table 4, switching from NEVPT2 to corrected DDCI2 relative energies affects the \(J\) values themselves but leaves mostly unchanged the remaining parameters; the SOC reduces \(J\) by 0.7 and 2 cm\({}^{-1}\) at the SO-NEVPT2 and SO-DDCI2 levels, respectively, (ii) the re-extracted local anisotropy parameters at the SO-NEVPT2 level, \(D_{a}\)=17.1 and \(E_{a}\)=3.7 cm\({}^{-1}\) are in complete agreement with the counterparts extracted independently with the **1a** and **1b** monomeric structures, \(D_{a}\)=17.4 and \(E_{a}\)=3.7 cm\({}^{-1}\), reaffirming the transferability of these
parameters from monomer to dimer calculations, and (iii) the symmetric exchange anisotropy is indeed minor compared to the local anisotropies, and thus omitting it from the construction of \(\hat{H}^{\rm MS}\) is not a bad approximation. It is worth noting for specialists that re-labeling the axes describing the PAF of \(\bar{\bar{D}}_{ab}\) is necessary in order to respect the conventions \(E_{ab}\geq 0\) and \(|D_{ab}|>3E_{ab}\). In particular, the easy axis of magnetization of the symmetric anisotropy, which is collinear with the Co\({}_{2}\) internuclear axis, falls perpendicular to the molecular easy axis of local anisotropy, which in turn is almost collinear with the \(\mu\)-Cl\({}_{2}\) internuclear axis.
Finally, the fully-extracted \(\hat{H}^{\rm MS}\) is used to model the \(\chi T\) curve of complex **1**. For this task, one needs to express in matrix form the Zeeman terms of the two magnetic centers, \(\mu_{\rm B}\vec{B}\bar{\bar{g}}_{a}\hat{S}_{a}\) and \(\mu_{\rm B}\vec{B}\bar{\bar{g}}_{b}\hat{S}_{b}\), and add them to \(\hat{H}^{\rm MS}\). Working in the uncoupled-spin basis, the 16\(\times\)16 analytical matrices of the Zeeman terms are derived in Tables S20 and S21. These matrices are evaluated with \(\bar{\bar{g}}_{a}\) and \(\bar{\bar{g}}_{b}\) tensors obtained _via_ the effective Hamiltonian theory from SO-NEVPT2 calculations performed on structures **1a** and **1b** in the molecular PAF:
\[\bar{\bar{g}}_{a}=\bar{\bar{g}}_{b}=\begin{bmatrix}2.40&0&0\\ 0&2.50&0\\ 0&0&2.23\end{bmatrix} \tag{9}\]
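As a minimal numerical sketch (not the analytical derivation of Tables S20 and S21), such Zeeman matrices can be assembled in the uncoupled-spin basis from spin-3/2 operator matrices and the \(\bar{\bar{g}}\) tensors; function names are illustrative and \(\mu_{B}\) is taken in cm\({}^{-1}\)T\({}^{-1}\).

```python
import numpy as np

muB = 0.467                                              # Bohr magneton, cm^-1 T^-1

def spin_matrices(s):
    """(Sx, Sy, Sz) for spin s, basis ordered |s, s>, |s, s-1>, ..., |s, -s>."""
    m = np.arange(s, -s - 1, -1)
    sp = np.diag(np.sqrt(s * (s + 1) - m[1:] * (m[1:] + 1)), 1)   # S+ matrix
    return np.array([(sp + sp.T) / 2, (sp - sp.T) / 2j, np.diag(m)])

def zeeman_matrix(B, g_a, g_b, s=1.5):
    """16x16 matrix of mu_B B.(g_a S_a) + mu_B B.(g_b S_b) in the |M_Sa, M_Sb> basis."""
    S = spin_matrices(s)
    dim = S.shape[-1]
    H = np.zeros((dim * dim, dim * dim), dtype=complex)
    for alpha in range(3):                               # loop over field components x, y, z
        Sa_eff = sum(g_a[alpha, beta] * S[beta] for beta in range(3))
        Sb_eff = sum(g_b[alpha, beta] * S[beta] for beta in range(3))
        H += muB * B[alpha] * (np.kron(Sa_eff, np.eye(dim)) + np.kron(np.eye(dim), Sb_eff))
    return H
```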
Figure 3: \(\chi T=f(T)\) experimental curve of complex **1**, digitized from Reference 66, and modeled with the spin Hamiltonian \(\hat{H}=\hat{H}^{\rm MS}+\hat{H}^{\rm Zee}\), dressed with parameters listed in Table 3. The best agreement with experiment is obtained with the SO-DDCI2 parameters and a re-adjusted isotropic \(g\)-factor of 2.28.
The diagonal form of Equation 9 shows, as expected, that the local PAFs of the \(\bar{\bar{g}}\) tensors also coincide with the molecular PAF. The isotropic \(g\)-factor of 2.38 is very close to the fitted value in Reference 66, \(g\)=2.25.
The modeled \(\chi T=f(T)\) curves with magnetic parameters from Table 3, shown in Figure 3, highlight primarily the role of the magnetic coupling \(J\) in achieving agreement with the experiment. Since the \(J\)=10.5 cm\({}^{-1}\) extracted from the SO-NEVPT2 calculation is too small, the \(\chi T\) curve increases steadily and diverges from the reference experimental curve. The revised \(J\)=19.3 cm\({}^{-1}\) extracted from the SO-DDCI2 calculation leads to \(\chi T\) in much closer agreement, especially in the low-temperature range, \(\sim\)0-50 K. Furthermore, with a slightly adjusted isotropic \(g\)-factor of 2.28, which is even closer to the fitted 2.25 value,[66] the modeled \(\chi T\) curve with the SO-DDCI2 parameters fits the experimental curve almost perfectly over the whole temperature range. The present article does not aim to delve into higher-level theoretical approaches that could potentially improve the description of \(g\) from the outset. Therefore, we conclude that the successful resolution and utilization of the standard multispin model Hamiltonian for calculating magnetic properties in the centrosymmetric complex **1** have been neatly demonstrated here.
### [Co\({}_{2}\)(L)\({}_{2}\)(acac)\({}_{2}\)(H\({}_{2}\)O)], an unsymmetrical complex
#### Introduction and local Co(II) electronic structures
The \(\mu\)-O\({}_{2}\)(phenoxo)-bridged binuclear complex **2** displays Co(II) centers coordinated by O and N atoms. The local coordination around both metals is near-\(O_{h}\), with bond angles in the 85-96\({}^{\circ}\) range and Co-O and Co-N mean distances of \(\sim\)2.07 and 2.08 Å, respectively. Recent work recorded a \(\chi T\) profile consistent with weak ferromagnetic behavior and an interesting maximum below 10 K.[68] Fitting of the \(\chi T\) curve resulted in \(J=-3.74\) cm\({}^{-1}\) (\(J\hat{S}_{a}\hat{S}_{b}\) convention). The study concluded that, overall, the spin alignment is favored by the spin-orbit coupling (SOC) and the local coordination geometry around the metals. To the best of our knowledge, the electronic structure of the compound has not been examined through first-principles calculations. Therefore, we begin by detailing the local many-electron states at the Co(II) centers, as provided by NEVPT2 calculations conducted with the **2a** and **2b** monomeric structures.
The ground term of the free-ion Co(II), denoted as \({}^{4}F\), splits in (weak) \(O_{h}\) ligand fields, resulting in a ground \({}^{4}T_{1}\) state and excited \({}^{4}T_{2}\) and \({}^{4}A_{2}\) states. The threefold degeneracy of the orbital-triplet states is then lifted by distortions of the local \(O_{h}\) coordination. According to Table 5, both Co(II) centers exhibit spin-free energy levels with a \({}^{4}F\) parentage that are well isolated within specific ranges: approximately 0-1100 cm\({}^{-1}\) for \({}^{4}T_{1}\), 8600-10500 cm\({}^{-1}\) for \({}^{4}T_{2}\), and above 18500 cm\({}^{-1}\) for \({}^{4}A_{2}\). The energy variation of these individual levels with the employed SA scheme is minimal. While the overall lifting of the \({}^{4}T_{1}\) degeneracy is similar for both Co centers (around 1100 cm\({}^{-1}\)), the splitting of \({}^{4}T_{2}\) is three times larger in **2a** (around 1500 cm\({}^{-1}\)) than in **2b** (around 400 cm\({}^{-1}\)). Taken together with the fact that the spectral width of the states listed in Table 5 is about 750 cm\({}^{-1}\) larger in **2a** compared to **2b**, these data reflect the slightly more distorted octahedron around the magnetic center of the former structure. Overall, based on energy gaps alone, one may conclude that the three levels of \({}^{4}T_{1}\) parentage are the primary contributors shaping the local spin-orbit electronic structures. Consequently, performing SOC calculations with a SA/SI scheme involving these three quartets should be sufficient to address the local ZFS parameters.
Figure 4: Arbitrarily-chosen coordinate frame and calculated PAFs for complex **2** (from \(\bar{D}_{S=3}\)) and for the model structures **2a** (from \(\bar{D}_{a}\)) and **2b** (from \(\bar{D}_{b}\)). Axes color code: \(x\)=blue, \(y\)=green, \(z\)=red.
Indeed, we found that the lowest-energy Kramers doublets (KDs) only marginally change when increasing the SA/SI scheme beyond the first three quartet roots. As listed in Table 5, each Co(II) center exhibits one KD below approximately 200 cm\({}^{-1}\) and the others above 700 cm\({}^{-1}\). Evidently, all these KDs fully arise as admixtures of \(|S=\nicefrac{{3}}{{2}},M_{S}\rangle\) components of the spin-free states correlating with \({}^{4}T_{1}\). Furthermore, the spin-free quartet GS predominantly contributes to the wavefunctions of KD\({}_{1}\) (\(\sim\)75%) and KD\({}_{2}\) (\(\sim\)90%) in both **2a** and **2b**. The next significant contribution to KD\({}_{1}\), approximately 20%, comes
\begin{table}
\begin{tabular}{l c c c c c c} \hline & \multicolumn{3}{c}{**2a**} & \multicolumn{3}{c}{**2b**} \\ \cline{2-7} SF & 10Q+40D & 7Q & 3Q & 10Q+40D & 7Q & 3Q \\ \hline \({}^{4}T_{1}\) & 0 & 0 & 0 & 0 & 0 & 0 \\ & 622 & 600 & 610 & 460 & 446 & 468 \\ & 1079 & 1044 & 1059 & 1175 & 1143 & 1154 \\ \({}^{4}T_{2}\) & 8952 & 8673 & n/a & 9189 & 8895 & n/a \\ & 9341 & 9050 & n/a & 9352 & 9059 & n/a \\ & 10555 & 10211 & n/a & 9583 & 9280 & n/a \\ \({}^{4}A_{2}\) & 19969 & 19344 & n/a & 19190 & 18584 & n/a \\ \hline SO\({}^{b}\) & & & & & & \\ KD\({}_{1}\) & 0 & 0 & 0 & 0 & 0 & 0 \\ KD\({}_{2}\) & 147 & 157 & 178 & 190 & 194 & 205 \\ KD\({}_{3}\) & 721 & 708 & 733 & 664 & 659 & 688 \\ \hline \multicolumn{7}{c}{Local ZFS parameters and isotropic \(g\)-factors} \\ & \(D_{a}\) & \(E_{a}\) & \(g_{a}\) & \(D_{b}\) & \(E_{b}\) & \(g_{b}\) \\ & 86.31 & 11.91 & 2.30 & 93.14 & 25.02 & 2.31 \\ \hline \end{tabular} \({}^{a}\)Relative energies and ZFS parameters in cm\({}^{-1}\); letters D and Q are used to denote spin-doublet and spin-quartet states; structures **2a** and **2b** are shown in Figure 1. \({}^{b}\)KD in the labeling of the SO states stands for Kramers Doublet.
\end{table}
Table 5: Spin-free (SF) and spin-orbit (SO) energies and local magnetic parameters for the Co(II) centers of complex **2**, obtained with NEVPT2 calculations and different SA/SI schemes\({}^{a}\)
from the first excited spin-quartet root. In any case, one can confidently extract local ZFS parameters from the usual anisotropic spin Hamiltonians \(\hat{S_{a}}\bar{\bar{D}}_{a}\hat{S_{a}}\) and \(\hat{S_{b}}\bar{\bar{D}}_{b}\hat{S_{b}}\), expressed in the \(|S=\nicefrac{{3}}{{2}},M_{S}\rangle\) basis of the ground quartet state (the model space thus consisting of the leading contributors to both KD\({}_{1}\) and KD\({}_{2}\)).
The extracted axial, \(D_{a}\) and \(D_{b}\), and rhombic, \(E_{a}\) and \(E_{b}\), ZFS parameters are listed at the bottom of Table 5. The PAFs of the local rank-2 tensors, \(\bar{\bar{D}}_{a}\) and \(\bar{\bar{D}}_{b}\), shown in Figure 4, are very distinct this time, with orientations that may not have been guessed without performing those calculations. In comparison with complex **1**, the axial parameters \(D_{a/b}\simeq 90\) cm\({}^{-1}\) are at least fivefold larger. Furthermore, \(E_{b}=25\) cm\({}^{-1}\) is twice as large as \(E_{a}=12\) cm\({}^{-1}\), and both are at least three times larger than the local rhombic parameter in complex **1**, 3.7 cm\({}^{-1}\). On the other hand, the Co centers share a similar isotropic \(g\)-factor, \(g_{a}\simeq g_{b}\simeq 2.30\), with that of the Co centers in complex **1**, \(g_{a}=g_{b}=2.35\). Here, however, the spans of \(\bar{\bar{g}}_{a}\) and \(\bar{\bar{g}}_{b}\), 1.86-2.66 and 1.81-2.84 respectively, are much larger than the \(\bar{\bar{g}}\)-tensor span in complex **1**, 2.23-2.40, also highlighting the much larger anisotropy in complex **2** in terms of the local \(\bar{\bar{g}}\)'s.
#### Spin-free and spin-orbit spectra and determination of the molecular PAF
Calculations for complex **2** were initially performed in an arbitrary coordinate system (see Figure 4; the Co\({}_{2}\) and \(\mu\)-O\({}_{2}\) internuclear axes correspond to the \(z\) and \(y\) directions,
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \(\hat{H}^{\rm mod}\) & \(|3,3\rangle\) & \(|3,2\rangle\) & \(|3,1\rangle\) & \(|3,0\rangle\) & \(|3,-1\rangle\) & \(|3,-2\rangle\) & \(|3,-3\rangle\) \\ \hline \(\langle 3,3|\) & \(134.0\) & \(18.0+44.0i\) & \(5.0+62.0i\) & \(0\) & \(0\) & \(0\) & \(0i\) \\ \(\langle 3,2|\) & \(18.0-44.0i\) & \(188.0\) & \(14.0+34.0i\) & \(8.0+88.0i\) & \(0\) & \(0\) & \(0\) \\ \(\langle 3,1|\) & \(5.0-62.0i\) & \(14.0-34.0i\) & \(221.0\) & \(5.0+13.0i\) & \(8.0+97.0i\) & \(0\) & \(0\) \\ \(\langle 3,0|\) & \(0\) & \(8.0-88.0i\) & \(5.0-13.0i\) & \(232.0\) & \(-5.0-13.0i\) & \(8.0+88.0i\) & \(0\) \\ \(\langle 3,-1|\) & \(0\) & \(0\) & \(8.0-97.0i\) & \(-5.0+13.0i\) & \(221.0\) & \(-14.0-34.0i\) & \(5.0+62.0i\) \\ \(\langle 3,-2|\) & \(0\) & \(0\) & \(0\) & \(8.0-88.0i\) & \(-14.0+34.0i\) & \(188.0\) & \(-18.0-44.0i\) \\ \(\langle 3,-3|\) & \(0\) & \(0\) & \(0\) & \(0\) & \(5.0-62.0i\) & \(-18.0+44.0i\) & \(134.0\) \\ \hline \end{tabular}
\end{table}
Table 6: Matrix elements, in cm\({}^{-1}\), of the \(S=3\) block within the 16\(\times\)16 matrix of \(\hat{H}^{\rm eff}\), constructed from SO-NEVPT2 calculations on complex **2** in the arbitrary coordinate frame
respectively, and the \(x\) axis is perpendicular to the Co-[\(\mu\)O\({}_{2}\)]-Co plane). Spin-free NEVPT2 calculations were conducted, revealing a Landé-type spectrum with an \(S=3\) GS followed by a first set of \(S=2\), 1 and 0 excited states, mapped with a ferromagnetic \(J\) (\(-2.85\) cm\({}^{-1}\)). This value is within 1 cm\({}^{-1}\) of the \(J=-3.74\) cm\({}^{-1}\) obtained from fitting the experimental \(\chi T\) curve.[68] When considering the SOC, there are 16 lowest-lying energy levels within 389 cm\({}^{-1}\) (see Tables 7 and S22). In comparison to complex **1**, not only is this energy range three times larger, but the continuum of excited levels begins at a much lower energy, 691 cm\({}^{-1}\). Importantly, the total contribution of the lowest-energy \(S\)=3, 2, 1, 0 spin-free states to the wavefunctions of those 16 spin-orbit levels gradually increases, from about 50% in \(\Psi_{1}\) to 81% in \(\Psi_{16}\). Although these weights are lower than what was observed in **1**, it should be noted that our reference SOCI wavefunctions are still dominated by the standard model space (the projection on the model space being more than 50%) and that, _in fine_, our approach will be shown to reproduce the experimental \(\chi T\) curve quite accurately.
A quick look at Table S23 reveals that the wavefunctions of the 16 lowest-lying spin-orbit levels are plagued by serious spin-mixing, which may be less important if the molecular PAF is used. We proceeded, therefore, as before to derive the molecular PAF. The matrix
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline SO state & \(\Delta E\) & \(\sum|S,M_{S}\rangle^{b}\) & SO state & \(\Delta E\) & \(\sum|S,M_{S}\rangle^{b}\) \\ \hline \(\Psi_{1}\) & 0 & 0.52 & \(\Psi_{9}\) & 206.4 & 0.66 \\ \(\Psi_{2}\) & 1.9 & 0.51 & \(\Psi_{10}\) & 206.9 & 0.67 \\ \(\Psi_{3}\) & 3.6 & 0.50 & \(\Psi_{11}\) & 209.6 & 0.65 \\ \(\Psi_{4}\) & 7.7 & 0.50 & \(\Psi_{12}\) & 212.2 & 0.66 \\ \(\Psi_{5}\) & 177.0 & 0.62 & \(\Psi_{13}\) & 379.7 & 0.80 \\ \(\Psi_{6}\) & 177.7 & 0.64 & \(\Psi_{14}\) & 380.0 & 0.80 \\ \(\Psi_{7}\) & 180.7 & 0.64 & \(\Psi_{15}\) & 388.4 & 0.69 \\ \(\Psi_{8}\) & 182.2 & 0.61 & \(\Psi_{16}\) & 389.0 & 0.81 \\ \hline \end{tabular} \({}^{a}\)SO-NEVPT2 calculations, relative energies in cm\({}^{-1}\); \({}^{b}\)Total contribution from the spin-components of the lowest-energy \(S=3\), 2, 1 and 0 spin-free states.
\end{table}
Table 7: Low-energy spin-orbit spectrum of complex \(\textbf{2}^{a}\)
elements of the \(S=3\) block within the 16\(\times\)16 \(\hat{H}^{\rm eff}\), shown in Table 6, perfectly match the \(\hat{H}^{\rm mod}\) analytical matrix elements listed in Table 1. Unlike in the previous case of complex **1**, matrix elements such as \(\langle 3,2|\,\hat{H}^{\rm eff}\,|3,3\rangle\) or \(\langle 3,1|\,\hat{H}^{\rm eff}\,|3,2\rangle\) deviate significantly from zero, whereas they should be zero if the input _xyz_ frame is the molecular PAF. Extraction and diagonalization of \(\bar{D}_{S=3}\) led to the molecular PAF depicted in Figure 4. This frame is not only distinct from the initial input frame but also distinct from the local PAFs (of \(\bar{D}_{a}\) and \(\bar{\bar{D}}_{b}\)). The SO-NEVPT2 calculation was repeated in the molecular PAF, leading to much cleaner wavefunctions for the 16 lowest-lying energy levels (see Table S24).
#### Extraction of the multispin Hamiltonian
Figure 4 clearly highlights that the local anisotropy tensors, \(\bar{\bar{D}}_{a}\) and \(\bar{\bar{D}}_{b}\), cannot be diagonal in the molecular PAF. Since our procedure does not require any assumption on the local PAFs, we have re-computed (rotated) them in the molecular PAF prior to assembling \(\hat{H}^{\rm MS}\) according to Equation 1:
\[\bar{\bar{D}}_{a}=\begin{bmatrix}-33.1&7.2&-15.9\\ 7.2&-19.4&14.3\\ -15.9&14.3&52.6\end{bmatrix};\bar{\bar{D}}_{b}=\begin{bmatrix}-1.8&-6.2&16.3 \\ -6.2&-53.8&-14.6\\ 16.3&-14.6&55.7\end{bmatrix} \tag{10}\]
It should be stressed that no specific relationship appears between the elements of these tensors, in accord with the C\({}_{1}\) symmetry point group of complex **2**.
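As a sketch of this rotation step (with a made-up tensor and a made-up orientation, not the actual **2a**/**2b** data), re-expressing a local tensor given in its own PAF in the molecular PAF is a similarity transformation:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Hypothetical local tensor, diagonal in its own PAF (cm^-1, traceless)
D_local = np.diag([-33.5, -24.2, 57.7])

# Hypothetical orientation: columns of R are the local PAF axes in molecular-PAF coordinates
R = Rotation.from_euler("zyz", [30.0, 45.0, 10.0], degrees=True).as_matrix()

# Components of the same tensor in the molecular PAF
D_molecular = R @ D_local @ R.T

# The transformation leaves the eigenvalues (hence D and E) unchanged; only the frame differs
assert np.allclose(np.sort(np.linalg.eigvalsh(D_molecular)), np.sort(np.diag(D_local)))
```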
We are now in the position to best equate the 16\(\times\)16 matrix of \(\hat{H}^{\rm MS}\) with the representative matrix of \(\hat{H}^{\rm eff}\). We first consider the expressions reported in Tables S5 and S6, meaning that we neglect the symmetric exchange tensor. By using the \(-2.85\) cm\({}^{-1}\) spin-free \(J\) value and the full \(\bar{\bar{D}}_{a}\) and \(\bar{\bar{D}}_{b}\) tensors (Equation 10), the estimated numerical matrix of \(\hat{H}^{\rm MS}\) is given in Tables S25 and S26 in the uncoupled \(|M_{Sa},M_{Sb}\rangle\) and coupled \(|S,M_{S}\rangle\) spin basis, respectively. These are brought in correspondence with the effective Hamiltonians shown in Tables S27 (coupled-spin basis) and S28 (uncoupled-spin basis). As was the case with complex **1**, outstanding agreement is obtained from the comparison regardless of the
spin basis. Indeed, the difference between \(\hat{H}^{\rm eff}\) and \(\hat{H}^{\rm MS}\) is already smaller than 1 and 2.7 cm\({}^{-1}\) concerning the off-diagonal and diagonal matrix elements, respectively. It is worth emphasizing once again that the independently calculated local anisotropy tensors, \(\bar{\bar{D}}_{a}\) and \(\bar{\bar{D}}_{b}\), within the framework of the **2a** and **2b** monomeric structures, can be safely transferred to the dimeric structure **2**.
As with complex **1**, we proceeded to extract \(J\) under the influence of SOC, as well as the symmetric anisotropy tensor \(\bar{\bar{D}}_{ab}\), without revising the local anisotropies. In other words, we constructed \(\hat{H}^{\rm MS}\) as follows:
\[\hat{H}^{\rm MS}=J\hat{S}_{a}\hat{S}_{b}+\left[\hat{S}_{a}\bar{\bar{D}}_{a}\hat {S}_{a}+\hat{S}_{b}\bar{\bar{D}}_{b}\hat{S}_{b}\right]_{\rm mono}+\hat{S}_{a} \bar{\bar{D}}_{ab}\hat{S}_{b} \tag{11}\]
where "mono" refers to the monomer calculations. The contribution of \(\hat{S}_{a}\bar{\bar{D}}_{ab}\hat{S}_{b}\) to \(\hat{H}^{\rm MS}\), expressed in an arbitrary axis frame and in the coupled-spin basis, is displayed in Table S7. The contribution of \(\hat{S}_{a}\bar{\bar{D}}_{ab}\hat{S}_{b}\) to \(\hat{H}^{\rm MS}\) in its own PAF is given in Table S31. A careful inspection of all the relevant tables revealed that in fact the previous deviations between the estimated \(\hat{H}^{\rm MS}\) and \(\hat{H}^{\rm eff}\) can essentially be explained by Table S31. In other words, the PAF of \(\hat{S}_{a}\bar{\bar{D}}_{ab}\hat{S}_{b}\) practically matches the molecular PAF, and thus we only need to fit the \(J\), \(D_{ab}\), and \(E_{ab}\) parameters to refine \(\hat{H}^{\rm MS}\).
Table 8 summarizes all the key magnetic parameters resulting from the resolution of \(\hat{H}^{\rm MS}\) for the case of complex **2** (we recall that the \(\bar{\bar{D}}_{a}\) and \(\bar{\bar{D}}_{b}\) tensors are not diagonal in the molecular PAF). In the coupled-spin basis, the numerical matrix of \(\hat{H}^{\rm MS}\) is shown in
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline \(J\) & \(D_{ab}\) & \(E_{ab}\) & \(D_{a}\) & \(E_{a}\) & \(D_{b}\) & \(E_{b}\) \\ \hline \(-2.28\) & \(-0.07^{a}\) / \(0.22^{b}\) & \(-0.13^{a}\) / \(0.03^{b}\) & 86.31 & 11.91 & 93.14 & 25.02 \\ \hline \hline \end{tabular} \({}^{a}\)Generated from least-squares fitting of the \(\hat{H}^{\rm eff}\) matrix elements; \({}^{b}\)Recalculated parameters after relabeling of the MPAF axes such that the convention \(E_{ab}\geq 0\) and \(|D_{ab}|>3E_{ab}\) is fulfilled: \(y\to z\), \(z\to x\), \(x\to y\).
\end{table}
Table 8: Magnetic parameters, in cm\({}^{-1}\), obtained from the complete resolution of \(\hat{H}^{\rm MS}\) for complex **2** oriented in the molecular PAF
Table S32. This model is of course in even better agreement with the effective Hamiltonian (Table S27). The difference matrix between \(\hat{H}^{\rm MS}\) and \(\hat{H}^{\rm eff}\), displayed in Table S33, now shows that the elements are generally smaller than 1 cm\({}^{-1}\). Furthermore, diagonalization of our model \(\hat{H}^{\rm MS}\) yields highly accurate spin-orbit energies, with an average deviation of only 1.2 cm\({}^{-1}\) and a maximum deviation of 2.6 cm\({}^{-1}\). Consequently, we have successfully applied our new procedure to properly extract all the rank-2 tensors of \(\hat{H}^{\rm MS}\) and, ultimately, validated the standard multispin Hamiltonian for any binuclear complex, since the key case of unsymmetrical dicobalt(II) complexes is now solved.
Finally, we take a step forward and attempt to model the powder-averaged \(\chi T\) profile of complex **2** using the validated \(\hat{H}^{\rm MS}\). For this purpose, we use again the analytical expressions
Figure 5: Top: Experimental \(\chi T=f(T)\) curve of complex **2**, digitized from Reference 68, and generated from _ab initio_ calculations and multispin Hamiltonian models. Bottom: \(\chi T=f(T)\) curves obtained with SO-NEVPT2 as a function of the angle \(\alpha^{\circ}=\angle\)Co–O–Co. In the crystal structure, \(\alpha=97^{\circ}\), and the black curve represents the best approximation of the experimental \(\chi T\).
of the Zeeman Hamiltonians, derived in Tables S20 and S21, using the \(\bar{g}_{a}\) and \(\bar{g}_{b}\) tensors obtained with the **2a** and **2b** structural models, based on calculations performed in the molecular PAF:
\[\bar{\bar{g}}_{a}=\begin{bmatrix}2.59&-0.10&0.13\\ -0.09&2.43&-0.11\\ 0.12&-0.08&1.89\end{bmatrix};\bar{\bar{g}}_{b}=\begin{bmatrix}2.23&0.06&-0.11\\ 0.06&2.83&0.13\\ -0.11&0.12&1.86\end{bmatrix} \tag{12}\]
These matrices are not diagonal in the molecular PAF, and in fact the corresponding local PAFs also do not strictly match one another (as was observed for \(\bar{\bar{D}}_{a}\) and \(\bar{\bar{D}}_{b}\)). Since the off-diagonal elements in Equation 12 are smaller than the diagonal ones, one can still consider that \(\bar{\bar{g}}_{a}\) and \(\bar{\bar{g}}_{b}\) are close to diagonal in the molecular PAF, even if a detailed analysis may reveal an interchange of the respective hard and intermediate axes of magnetization between \(\bar{\bar{g}}_{a}\) and \(\bar{\bar{g}}_{b}\).
Figure 5, top panel, demonstrates the excellent agreement between the reference experimental \(\chi T\) curve [68] and the one obtained directly from the SO-NEVPT2 calculation. Therefore, we can be confident in the quality of our _ab initio_ calculations and now aim at producing good-quality model curves, in view of further supporting the validity of both \(\hat{H}^{\rm MS}\) and of our procedure to extract the magnetic parameters. The model curve, obtained from \(\hat{H}^{\rm MS}\) + \(\hat{H}^{\rm Zee}\) dressed with quantities from Table 8 and Equation 12, agrees with the reference data in the low-temperature region. As the temperature increases, it deviates more and more, eventually reaching a plateau around 150 K that is noticeably lower than the experimental curve. The model improves significantly against the reference and the _ab initio_ data when an isotropic \(g=2.55\) factor is used to describe the Zeeman effect at both Co centers. Such a value is not arbitrary, as it may be justified based on the measured room-temperature \(\chi T\) value of 6.03 cm\({}^{3}\)mol\({}^{-1}\)K, or rather 3.015 cm\({}^{3}\)mol\({}^{-1}\)K per Co center, according to:
\[g^{2}=\frac{\chi_{M}\cdot T\cdot 3k_{B}}{N_{A}\cdot S(S+1)\cdot\mu_{B}^{2}} \tag{13}\]
This expression leads to \(g=2.54\) with the Boltzmann constant \(k_{B}=0.695\) cm\({}^{-1}\)K\({}^{-1}\), \(\mu_{B}=0.467\) cm\({}^{-1}\)T\({}^{-1}\), and \(N_{A}\cdot\mu_{B}=0.558\) cm\({}^{3}\)mol\({}^{-1}\)T. Note that since our main point here is to understand the zero-field behavior, we do not aim at delving deeper into explaining the reasons behind this revision of the SO-NEVPT2 \(g\) value.
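As a quick numerical check of Equation 13 with these constants (per Co(II) center, \(S=3/2\)):

```python
import numpy as np

kB, muB, NA_muB = 0.695, 0.467, 0.558      # cm^-1 K^-1, cm^-1 T^-1, cm^3 mol^-1 T
chiT_per_Co, S = 3.015, 1.5                # cm^3 mol^-1 K and spin per Co(II) center

g = np.sqrt(chiT_per_Co * 3 * kB / (NA_muB * muB * S * (S + 1)))
print(round(g, 2))                          # ~2.54, as quoted in the text
```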
Two additional models are displayed in Figure 5, top panel, evidencing the shape of \(\chi T\) if one assumes that the local anisotropy tensors are diagonal in the molecular frame while using the axial and rhombic parameters displayed in Table 8, or if \(J\) is artificially set to a weakly antiferromagnetic value. The first scenario artificially pushes up the model curve in the low-temperature range, meaning that the maximum is too high. While it is quite intuitive that the mismatch of the actual PAFs of the local anisotropy tensors should lower the model curve, we should stress that if one assumes that the tensors are diagonal in the molecular PAF and fits the local ZFS parameter values, as could be done to fit the experimental data, this would lead to too weak ZFS parameters. In other words, it is crucial to account for the mismatch of the local PAFs in the model; otherwise, meaningless parameters would be obtained. This is precisely why experimental extractions of \(\hat{H}^{\rm MS}\) parameters should not be performed independently of computational chemistry data.
We now aim at better explaining the occurrence of a local maximum of \(\chi T\) at low temperature. From the previous paragraph, we have learned that the mismatch between the local PAFs works towards enhancing this \(\chi T\) maximum. Such a maximum does not occur if the antiferromagnetic coupling scenario is retained (see Figure 5). Thus, one could naively think that \(J\) must be maximized to favor the occurrence of such a maximum. In practice, to obtain a local maximum of \(\chi T\), we believe that one requires the population of a state that is much less magnetic than the ones below and above it. A close inspection of Table S24 reveals that the fourth spin-orbit level is essentially composed of the \(M_{S}=0\) components of the spin-quintet and spin-singlet states, so this must be the state that is looked for. For this state to be populated before all the components of the \(S=3\) state, one needs to be in the weak-exchange limit (note that this also strengthens the spin-mixing with the singlet state, which further pushes down this state in the model). Additionally, this state must be well separated in energy from the higher-lying ones, otherwise a continuous enhancement of \(\chi T\) would be observed. According to Table 7, a gap of about 170 cm\({}^{-1}\) occurs in complex **2**, which corroborates our interpretation.
To strengthen the conclusion drawn earlier, the bottom panel of Figure 5 illustrates the variation of \(\chi T\), obtained directly from SO-NEVPT2 calculations, as a function of the Co-O-Co angle. At the crystal-structure value of 97\({}^{\circ}\), the calculated curve best approximates the reference data. Angles below 97\({}^{\circ}\) promote stronger ferromagnetic coupling (without leaving the weak-exchange regime). Consequently, as the "less magnetic" state shifts toward higher energies, the spike in \(\chi T\) is both enhanced and slightly shifted to higher temperatures. On the other hand, angles above 100\({}^{\circ}\) promote antiferromagnetism. As a result, the \(\chi T\) curves no longer exhibit local maxima. The overall structural change induced when the Co-O-Co angle is changed from 97 to, for instance, 101\({}^{\circ}\) is characterized by an RMSD of 0.08 Å and is thus negligible. Therefore, one can regard the behavior of \(\chi T\) in complex **2** as intermediate between the ferromagnetic and antiferromagnetic regimes, since even a tiny displacement of the molecular structure promptly switches \(\chi T\) between these two regimes.
### Reinterpretation of the [Ni\({}_{2}\)(en)\({}_{4}\)Cl\({}_{2}\)]\({}^{2+}\) case, a centrosymmetric dinickel(II) complex in the weak-exchange regime
An accurate determination of magnetic parameters in [Ni\({}_{2}\)(en)\({}_{4}\)Cl\({}_{2}\)]Cl\({}_{2}\) was reported more than a decade ago in the context of high-field EPR experiments [94]. The complex is weakly ferromagnetic, with \(J=-9.66\), \(D_{a}=-4.78\), and \(D_{ab}=-0.64\) cm\({}^{-1}\). Maurice et al. performed an extraction of these parameters from the anisotropic multispin Hamiltonian through
the effective Hamiltonian theory.[65] The generated parameters, \(J=-5.415\), \(D_{a}=-9.437\), \(E_{a}=2.042\), \(D_{ab}=0.367\) and \(E_{ab}=-0.052\) cm\({}^{-1}\), which are in reasonable agreement with the experimental counterparts, allow for the derivation of the \(\hat{H}^{\rm MS}\) numerical matrix listed in Table 9. Comparison with the numerical \(\hat{H}^{\rm eff}\) matrix, reproduced from Reference 65 in Table 10, shows outstanding agreement down to only \(\pm\) prefactors for the elements coupling the \(S=2\) and \(S=0\) blocks in \(\hat{H}^{\rm eff}\). Performing a basis-change to the uncoupled basis using
tabulated CG coefficients following the Condon-Shortley convention, these conflicting signs lead to unexpected matrix elements, such as \(\langle 1,-1|\hat{H}^{\text{eff}}|-1,1\rangle=8.6\text{ cm}^{-1}\), that should be null according to the standard \(\hat{H}^{\text{MS}}\).[65] As discussed in this article, in _ab initio_ calculations, the conflicting signs arise from arbitrary phases and must be adjusted based on the model matrix. This sign adjustment results in nearly identical model and effective matrices in both the coupled-spin and uncoupled-spin bases, thereby fully validating the standard multispin Hamiltonian for \([\text{Ni}_{2}\text{(en)}_{4}\text{Cl}_{2}]\text{Cl}_{2}\). Consequently, in this context, the introduction of rank-4, biquadratic exchange tensors in \(\hat{H}^{\text{MS}}\) is no longer required, unless high accuracy is sought. Therefore, it is important to stress that the weak-exchange limit was correctly solved in the previous experimental and theoretical works.[65, 94] However, the experimental extraction was based on the assumption that rhombicity is negligible, which is not supported by the calculations. Therefore, the experimental \(J=-9.66\), \(D_{a}=-4.78\), and \(D_{ab}=-0.64\text{ cm}^{-1}\) values may be subject to small uncertainties due to the neglect of rhombicity, though the important features (weak-exchange limit, local easy-axis anisotropies, \(|D_{ab}|<|D_{a}|\)) are now beyond doubt.
## Conclusion
By studying dicobalt(II) complexes, we have confirmed the validity of the standard \(\hat{H}^{\text{MS}}\) irrespective of the weak- or strong-exchange regime. Using _ab initio_ calculations, we have demonstrated that it is possible to extract the full tensors that make up the model, without making any assumptions about their principal axis frames (PAFs). Furthermore, our analysis, based on model \(\chi T\) curves, has revealed that assuming the local anisotropy tensors are diagonal in the molecular PAF would lead to erroneous local anisotropy parameters in the fitting process. Therefore, concerning unsymmetrical or low-symmetry binuclear complexes, a rigorous interpretation of low-temperature magnetic data should retain key inputs from quantum mechanical calculations similar to those used in this work, _i.e._ multiconfigurational and relativistic wave function methods. We hope that the present article will trigger new joint theory/experiment studies based on this renewed perspective of explicitly mapping _ab initio_ data onto \(\hat{H}^{\text{MS}}\).
## Supporting Information Available
Supporting Information (SI) available: Complementary tables reporting numerical and/or analytical matrix elements of model and effective Hamiltonians, wavefunctions and energies, and XYZ coordinates for the two dicobalt(II) complexes under study.
RM conceived the study. DCS, RM and BLG secured funding. DCS performed all the theoretical calculations and implemented all codes utilized in generating the data reported in this manuscript. All authors were equally involved in the data analysis. DCS wrote the first draft of the manuscript. RM and BLG shaped the final form of the manuscript.
DCS acknowledges support from the European Union's Horizon 2020 Research and Innovation Program under the Marie Sklodowska-Curie Grant Agreement No. 899546, through the BIENVENUE COFUND program. DCS also acknowledges the infrastructure support provided through the RECENT AIR grant agreement MySMIS no. 127324.
## References
* Goodwin et al. 2017 Goodwin, C. A.; Ortu, F.; Reta, D.; Chilton, N. F.; Mills, D. P. Molecular magnetic hysteresis at 60 kelvin in dysprosocenium. _Nature_**2017**, _548_, 439-442.
* Guo et al. 2017 Guo, F.-S.; Day, B. M.; Chen, Y.-C.; Tong, M.-L.; Mansikkamaki, A.; Layfield, R. A. A Dysprosium Metallocene Single-Molecule Magnet Functioning at the Axial Limit. _Angewandte Chemie International Edition_**2017**, _56_, 11445-11449.
* Guo et al. 2018 Guo, F.-S.; Day, B. M.; Chen, Y.-C.; Tong, M.-L.; Mansikkamaki, A.; Layfield, R. A. Magnetic hysteresis up to 80 kelvin in a dysprosium metallocene single-molecule magnet. _Science_**2018**, _362_, 1400-1403.
* Gould et al. 2022 Gould, C. A.; McClain, K. R.; Reta, D.; Kragskow, J. G. C.; Marchiori, D. A.; Lachman, E.; Choi, E.-S.; Analytis, J. G.; Britt, R. D.; Chilton, N. F.; Harvey, B. G.; Long, J. R. Ultrahard magnetism from mixed-valence dilanthanide complexes with metal-metal bonding. _Science_**2022**, _375_, 198-202.
* Bogani and Wernsdorfer 2008 Bogani, L.; Wernsdorfer, W. Molecular spintronics using single-molecule magnets. _Nat. Mater._**2008**, \(7\), 179-186.
* Coronado 2020 Coronado, E. Molecular magnetism: from chemical design to spin control in molecules, materials and devices. _Nat. Rev. Mater._**2020**, \(5\), 87-104.
* Moneo-Corcuera et al. 2023 Moneo-Corcuera, A.; Nieto-Castro, D.; Cirera, J.; Gomez, V.; Sanjose-Orduna, J.; Casadevall, C.; Molnar, G.; Bousseksou, A.; Parella, T.; Martinez-Agudo, J. M.; Lloret-Fillol, J.; Perez-Temprano, M. H.; Ruiz, E.; Galan-Mascaros, J. R. Molecular memory near room temperature in an iron polyanionic complex. _Chem_**2023**, \(9\), 377-393.
* Gatteschi and Sessoli 2003 Gatteschi, D.; Sessoli, R. Quantum Tunneling of Magnetization and Related Phenomena in Molecular Materials. _Angew. Chem. Int. Ed._**2003**, _42_, 268-297.
* Wernsdorfer et al. 2005 Wernsdorfer, W.; Chakov, N. E.; Christou, G. Quantum Phase Interference and Spin-Parity in Mn\({}_{12}\) Single-Molecule Magnets. _Phys. Rev. Lett._**2005**, _95_, 037203-037207.
* Schlegel et al. 2008 Schlegel, C.; van Slageren, J.; Manoli, M.; Brechin, E. K.; Dressel, M. Direct Observation of Quantum Coherence in Single-Molecule Magnets. _Phys. Rev. Lett._**2008**, _101_, 147203-147207.
* Ishikawa et al. 2005 Ishikawa, N.; Sugita, M.; Wernsdorfer, W. Quantum Tunneling of Magnetization in Lanthanide Single-Molecule Magnets: Bis(phthalocyaninato)terbium and Bis(phthalocyaninato)dysprosium Anions. _Angew. Chem. Int. Ed._**2005**, _44_, 2931-2935.
* Woodruff et al. 2013 Woodruff, D. N.; Winpenny, R. E. P.; Layfield, R. A. Lanthanide Single-Molecule Magnets. _Chem. Rev._**2013**, _113_, 5110-5148.
* Ishikawa et al. 2003 Ishikawa, N.; Sugita, M.; Ishikawa, T.; Koshihara, S.-y.; Kaizu, Y. Lanthanide Double-Decker Complexes Functioning as Magnets at the Single-Molecular Level. _J. Am. Chem. Soc._**2003**, _125_, 8694-8695.
* Zhu et al. 2019 Zhu, Z.; Guo, M.; Li, X.-L.; Tang, J. Molecular magnetism of lanthanide: Advances and perspectives. _Coord. Chem. Rev._**2019**, _378_, 350-364.
* Liddle and van Slageren 2015 Liddle, S. T.; van Slageren, J. Improving f-element single molecule magnets. _Chem. Soc. Rev._**2015**, _44_, 6655-6669.
* Chilton 2015 Chilton, N. F. Design Criteria for High-Temperature Single-Molecule Magnets. _Inorganic Chemistry_**2015**, _54_, 2097-2099.
* Wang et al. 2011 Wang, X.-Y.; Avendano, C.; Dunbar, K. R. Molecular magnetic materials based on 4d and 5d transition metals. _Chem. Soc. Rev._**2011**, _40_, 3213-3238.
* Swain et al. 2021 Swain, A.; Sen, A.; Rajaraman, G. Are lanthanide-transition metal direct bonds a route to achieving new generation 3d-4f SMMs? _Dalton Trans._**2021**, _50_, 16099-16109.
* Chandrasekhar and Lanthanide and transition metal complexes as molecular magnets. _Dalton Trans._**2022**, _51_, 4199-4201.
* Magott et al. 2022 Magott, M.; Brzozowska, M.; Baran, S.; Vieru, V.; Pinkowicz, D. An intermetallic molecular nanomagnet with the lanthanide coordinated only by transition metals. _Nat. Commun._**2022**, _13_, 2014.
* Boca 2004 Boca, R. Zero-field splitting in metal complexes. _Coord. Chem. Rev._**2004**, _248_, 757-815.
* Maurice et al. 2016 Maurice, R.; Broer, R.; Guihery, N.; de Graaf, C. In _Handbook of Relativistic Quantum Chemistry_; Liu, W., Ed.; Springer Berlin Heidelberg: Berlin, Heidelberg, 2016; pp 1-31.
* Moreira et al. 2002 Moreira, I. d. P. R.; Suaud, N.; Guihery, N.; Malrieu, J. P.; Caballol, R.; Bofill, J. M.; Illas, F. Derivation of spin Hamiltonians from the exact Hamiltonian: Application to systems with two unpaired electrons per magnetic site. _Phys. Rev. B_**2002**, _66_, 134430.
* Chilton et al. 2013 Chilton, N. F.; Anderson, R. P.; Turner, L. D.; Soncini, A.; Murray, K. S. PHI: A powerful new program for the analysis of anisotropic monomeric and exchange-coupled polynuclear d- and f-block complexes. _J. Comput. Chem._**2013**, _34_, 1164-1175.
* Borras-Almenar et al. 2001 Borras-Almenar, J. J.; Clemente-Juan, J. M.; Coronado, E.; Tsukerblat, B. S. MAGPACK A package to calculate the energy levels, bulk magnetic properties, and inelastic neutron scattering spectra of high nuclearity spin clusters. _J. Comput. Chem._**2001**, _22_, 985-991.
* Heisenberg 1926 Heisenberg, W. Z. _Z. Phys._**1926**, _38_, 411.
* Dirac 1929 Dirac, P. A. M. _Proc. R. Soc. London_**1929**, _123_, 714.
* van Vleck 1932 van Vleck, J. H. _The Theory of Electric and Magnetic Susceptibilities_; Oxford University Press, Oxford, 1932.
* Maurice et al. 2010 Maurice, R.; de Graaf, C.; Guihery, N. Magnetic anisotropy in binuclear complexes in the weak-exchange limit: From the multispin to the giant-spin Hamiltonian. _Phys. Rev. B_**2010**, _81_, 214427.
* Maurice et al. 2010 Maurice, R.; Pradipto, A. M.; Guihery, N.; Broer, R.; de Graaf, C. Antisymmetric Magnetic Interactions in Oxo-Bridged Copper(II) Bimetallic Systems. _J. Chem. Theory Comput._**2010**, \(6\), 3092-3101.
* Maurice et al. 2011 Maurice, R.; Sivalingam, K.; Ganyushin, D.; Guihery, N.; de Graaf, C.; Neese, F. Theoretical Determination of the Zero-Field Splitting in Copper Acetate Monohydrate. _Inorg. Chem._**2011**, _50_, 6229-6236.
* Gendron et al. 2019 Gendron, F.; Autschbach, J.; Malrieu, J.-P.; Bolvin, H. Magnetic coupling in the Ce(III) dimer Ce\({}_{2}\)(COT)\({}_{3}\). _Inorg. Chem._**2019**, _58_, 581-593.
* Maurice et al. 2023 Maurice, R.; Mallah, T.; Guihery, N. _Magnetism in Binuclear Compounds: Theoretical Insights_; Springer Berlin Heidelberg: Berlin, Heidelberg, 2023; pp 1-27.
* Chibotaru and Ungur 2012 Chibotaru, L. F.; Ungur, L. Ab initio calculation of anisotropic magnetic properties of complexes. I. Unique definition of pseudospin Hamiltonians and their derivation. _J. Chem. Phys._**2012**, _137_, 064112-22.
* Wilson et al. 2006 Wilson, A.; Lawrence, J.; Yang, E.-C.; Nakano, M.; Hendrickson, D. N.; Hill, S. Magnetization tunneling in high-symmetry single-molecule magnets: Limitations of the giant spin approximation. _Phys. Rev. B_**2006**, _74_, 140403.
* Boca 1999 Boca, R. _Theoretical Foundations of Molecular Magnetism_; Elsevier: Amsterdam, 1999; pp 642-680.
* Roos et al. 1980 Roos, B. O.; Taylor, P. R.; Siegbahn, P. E. M. A Complete Active Space SCF method (CASSCF) using a density matrix formulated super-CI approach. _Chem. Phys._**1980**, _48_, 157-173.
* Malrieu et al. 2014 Malrieu, J. P.; Caballol, R.; Calzado, C. J.; de Graaf, C.; Guihery, N. Magnetic Interactions in Molecules and Highly Correlated Materials: Physical Content, Analytical Derivation, and Rigorous Extraction of Magnetic Hamiltonians. _Chem. Rev._**2014**, _114_, 429-492.
* Malmqvist et al. 2002 Malmqvist, P.-A.; Roos, B. O.; Schimmelpfennig, B. The restricted active space (RAS) state interaction approach with spin-orbit coupling. _Chem. Phys. Lett._**2002**, _357_, 230-240.
* Calzado et al. 2002 Calzado, C. J.; Cabrero, J.; Malrieu, J. P.; Caballol, R. Analysis of the magnetic coupling in binuclear complexes. I. Physics of the coupling. _J. Chem. Phys._**2002**, _116_, 2728-2747.
* Calzado et al. 2002 Calzado, C. J.; Cabrero, J.; Malrieu, J. P.; Caballol, R. Analysis of the magnetic coupling in binuclear complexes. II. Derivation of valence effective Hamiltonians from ab initio CI and DFT calculations. _J. Chem. Phys._**2002**, _116_, 3985-4000.
* Broer et al. 2003 Broer, R.; Hozoi, L.; Nieuwpoort, W. C. Non-orthogonal approaches to the study of magnetic interactions. _Mol. Phys._**2003**, _101_, 233-240.
* Ganyushin and Neese 2006 Ganyushin, D.; Neese, F. First-principles calculations of zero-field splitting parameters. _J. Chem. Phys._**2006**, _125_, 024103.
* Calzado et al. 2009 Calzado, C. J.; Angeli, C.; Taratiel, D.; Caballol, R.; Malrieu, J.-P. Analysis of the magnetic coupling in binuclear systems. III. The role of the ligand to metal charge transfer excitations revisited. _J. Chem. Phys._**2009**, _131_, 044327.
* Maurice et al. 2009 Maurice, R.; Bastardis, R.; Graaf, C. d.; Suaud, N.; Mallah, T.; Guihery, N. Universal Theoretical Approach to Extract Anisotropic Spin Hamiltonians. _J. Chem. Theory Comput._**2009**, \(5\), 2977-2984.
* Atanasov et al. 2011 Atanasov, M.; Ganyushin, D.; Pantazis, D. A.; Sivalingam, K.; Neese, F. Detailed Ab Initio First-Principles Study of the Magnetic Anisotropy in a Family of Trigonal Pyramidal Iron(II) Pyrrolide Complexes. _Inorg. Chem._**2011**, _50_, 7460-7477.
* Neese and Pantazis 2011 Neese, F.; Pantazis, D. A. What is not required to make a single molecule magnet. _Faraday Discuss._**2011**, _148_, 229-238.
* Neese 2007 Neese, F. Calculation of the zero-field splitting tensor on the basis of hybrid density functional and Hartree-Fock theory. _J. Chem. Phys._**2007**, _127_, 164112.
* Schmitt et al. 2011 Schmitt, S.; Jost, P.; van Wullen, C. Zero-field splittings from density functional calculations: Analysis and improvement of known methods. _J. Chem. Phys._**2011**, _134_, 194113.
* Belkhiri et al. 2019 Belkhiri, L.; Le Guennic, B.; Boucekkine, A. DFT Investigations of the Magnetic Properties of Actinide Complexes. _Magnetochemistry_**2019**, \(5\), 15.
* Luo and Zheng 2021 Luo, Q.-C.; Zheng, Y.-Z. Methods and Models of Theoretical Calculation for Single-Molecule Magnets. _Magnetochemistry_**2021**, \(7\), 107.
* Duplaix-Rata et al. 2023 Duplaix-Rata, G.; Le Guennic, B.; David, G. Revisiting magnetic exchange couplings in heterodinuclear complexes through the decomposition method in KS-DFT. _Phys. Chem. Chem. Phys._**2023**, _25_, 14170-14178.
* David et al. 2023 David, G.; Ferre, N.; Le Guennic, B. Consistent Evaluation of Magnetic Exchange Couplings in Multicenter Compounds in KS-DFT: The Decomposition Method. _J. Chem. Theory Comput._**2023**, _19_, 157-173.
* Meskaldji et al. 2023 Meskaldji, S.; Belkhiri, L.; Maurice, R.; Costuas, K.; Le Guennic, B.; Boucekkine, A.; Ephritikhine, M. Electronic Structure and Magneto-Structural Correlations Study of Cu\({}_{2}\)UL Trinuclear Schiff Base Complexes: A 3d-5f-3d Case. _J. Phys. Chem. A_**2023**, _127_, 1475-1490.
* Mayhall and Head-Gordon 2014 Mayhall, N. J.; Head-Gordon, M. Computational quantum chemistry for single Heisenberg spin couplings made simple: Just one spin flip required. _J. Chem. Phys._**2014**, _141_, 134111.
* Mayhall and Head-Gordon 2015 Mayhall, N. J.; Head-Gordon, M. Computational quantum chemistry for multiple-site Heisenberg spin couplings made simple: Still only one spin-flip required. _J. Phys. Chem. Lett._**2015**, \(6\), 1982-1988.
* Orms and Krylov 2018 Orms, N.; Krylov, A. I. Singlet-triplet energy gaps and the degree of diradical character in binuclear copper molecular magnets characterized by spin-flip density functional theory. _Phys. Chem. Chem. Phys._**2018**, _20_, 13127-13144.
* Schurkus et al. 2020 Schurkus, H.; Chen, D.-T.; Cheng, H.-P.; Chan, G.; Stanton, J. Theoretical prediction of magnetic exchange coupling constants from broken-symmetry coupled cluster calculations. _J. Chem. Phys._**2020**, _152_, 234115.
* Pokhilko and Krylov 2020 Pokhilko, P.; Krylov, A. I. Effective Hamiltonians derived from equation-of-motion coupled-cluster wave functions: Theory and application to the Hubbard and Heisenberg Hamiltonians. _J. Chem. Phys._**2020**, _152_, 094108.
* Alessio and Krylov 2021 Alessio, M.; Krylov, A. I. Equation-of-Motion Coupled-Cluster Protocol for Calculating Magnetic Properties: Theory and Applications to Single-Molecule Magnets. _J. Chem. Theory Comput._**2021**, _17_, 4225-4241.
* Bolvin 2006 Bolvin, H. An Alternative Approach to the g-Matrix: Theory and Applications. _ChemPhysChem_**2006**, \(7\), 1575-1589.
* Vancoillie et al. 2009 Vancoillie, S.; Rulissek, L.; L.,; Pierloot, K. Theoretical Description of the Structure and Magnetic Properties of Nitroide-Cu(II)-Nitroxide Spin Triads by Means of Multiconfigurational Ab Initio Calculations. _J. Phys. Chem. A_**2009**, _113_, 6149-6157.
* Ungur and Chibotaru 2017 Ungur, L.; Chibotaru, L. F. Ab Initio Crystal Field for Lanthanides. _Chem. Eur. J._**2017**, _23_, 3708-3718.
* Dey et al. 2022 Dey, S.; Rajaraman, G.; Bolvin, H. Analysis of the Magnetic Coupling in a Mn(II)-U(V)-Mn(II) Single Molecule Magnet. _Chem. Eur. J._**2022**, _28_, e202201883.
* Maurice et al. 2010 Maurice, R.; Guihery, N.; Bastardis, R.; Graaf, C. d. Rigorous Extraction of the Anisotropic Multispin Hamiltonian in Bimetallic Complexes from the Exact Electronic Hamiltonian. _J. Chem. Theory Comput._**2010**, \(6\), 55-65.
* Sun et al. 1999 Sun, J.-S.; Zhao, H.; Ouyang, X.; Clerac, R.; Smith, J. A.; Clemente-Juan, J. M.; Gomez-Garcia, C.; Coronado, E.; Dunbar, K. R. Structures, Magnetic Properties, and Reactivity Studies of Salts Containing the Dinuclear Anion [M\({}_{2}\)Cl\({}_{6}\)]\({}^{2-}\) (M = Mn, Fe, Co). _Inorg. Chem._**1999**, _38_, 5841-5855.
* de Graaf and Sousa 2006 de Graaf, C.; Sousa, C. Assessing the zero-field splitting in magnetic molecules by wave function-based methods. _Int. J. Quantum Chem._**2006**, _106_, 2470-2478.
* Song and Xue 2020 Song, X.-j.; Xue, X.-m. Study on the Magneto-Structural Correlation of a New Dinuclear Cobalt(II) Complex with Double \(\mu\)-Phenoxo Bridges. _ACS Omega_**2020**, \(5\), 8347-8354.
* Kahn 1993 Kahn, O. _Molecular magnetism_; VCH: New York, 1993; pp 135-143.
* Dzyaloshinskii 1964 Dzyaloshinskii, I. Theory of helicoidal structures in antiferromagnets. I. Nonmetals. _Sov. Phys. JETP_**1964**, _19_, 960-971.
* Moriya 1960 Moriya, T. Anisotropic Superexchange Interaction and Weak Ferromagnetism. _Phys. Rev._**1960**, _120_, 91-98.
* Dmitrienko et al. 2014 Dmitrienko, V.; Ovchinnikova, E.; Collins, S.; Nisbet, G.; Beutier, G.; Kvashnin, Y.; Mazurenko, V.; Lichtenstein, A.; Katsnelson, M. Measuring the Dzyaloshinskii-Moriya interaction in a weak ferromagnet. _Nat. Phys._**2014**, _10_, 202-206.
* Bouammali et al. 2021 Bouammali, M.-A.; Suaud, N.; Martins, C.; Maurice, R.; Guihery, N. How to create giant Dzyaloshinskii-Moriya interactions? Analytical derivation and ab initio calculations on model dicopper(II) complexes. _J. Chem. Phys._**2021**, _154_, 134301.
* Bouammali et al. 2021 Bouammali, M.-A.; Suaud, N.; Maurice, R.; Guihery, N. Extraction of giant Dzyaloshinskii-Moriya interaction from ab initio calculations: First-order spin-orbit coupling model and methodological study. _J. Chem. Phys._**2021**, _155_, 164305, 164305.
* Bouammali et al. 2022 Bouammali, M.-A.; Suaud, N.; Guihery, N.; Maurice, R. Antisymmetric Exchange in a Real Copper Triangular Complex. _Inorg. Chem._**2022**, _61_, 12138-12148.
* des Cloizeaux 1960 des Cloizeaux, J. Extension d'une formule de Lagrange a des problemes de valeurs propres. _Nucl. Phys._**1960**, _20_, 321-346.
* Neese 2012 Neese, F. The ORCA program system. _WIREs Comput. Mol. Sci._**2012**, \(2\), 73-78.
* Neese 2022 Neese, F. Software update: The ORCA program system-Version 5.0. _WIREs Comput. Mol. Sci._**2022**, _12_, e1606.
* Douglas and Kroll 1974 Douglas, M.; Kroll, N. M. Quantum electrodynamical corrections to the fine structure of helium. _Ann. Phys._**1974**, _82_, 89-155.
* Hess 1985 Hess, B. A. Applicability of the no-pair equation with free-particle projection operators to atomic and molecular structure calculations. _Phys. Rev. A_**1985**, _32_, 756-763.
* Hess 1986 Hess, B. A. Relativistic electronic-structure calculations employing a two-component no-pair formalism with external-field projection operators. _Phys. Rev. A_**1986**, _33_, 3742-3748.
* Wolf et al. 2002 Wolf, A.; Reiher, M.; Hess, B. A. The generalized Douglas-Kroll transformation. _J. Chem. Phys._**2002**, _117_, 9215-9226.
* Weigend and Ahlrichs 2005 Weigend, F.; Ahlrichs, R. Balanced basis sets of split valence, triple zeta valence and quadruple zeta valence quality for H to Rn: Design and assessment of accuracy. _Phys. Chem. Chem. Phys._**2005**, \(7\), 3297-3305.
* Neese et al. 2009 Neese, F.; Wennmohs, F.; Hansen, A.; Becker, U. Efficient, approximate and parallel Hartree-Fock and hybrid DFT calculations. A 'chain-of-spheres' algorithm for the Hartree-Fock exchange. _Chem. Phys._**2009**, _356_, 98-109.
* Stoychev et al. 2017 Stoychev, G. L.; Auer, A. A.; Neese, F. Automatic Generation of Auxiliary Basis Sets. _J. Chem. Theory Comput._**2017**, _13_, 554-562.
* Perdew et al. 1996 Perdew, J. P.; Burke, K.; Ernzerhof, M. Generalized Gradient Approximation Made Simple. _Phys. Rev. Lett._**1996**, _77_, 3865-3868.
* Angeli et al. 2001 Angeli, C.; Cimiraglia, R.; Evangelisti, S.; Leininger, T.; Malrieu, J.-P. Introduction of n-electron valence states for multireference perturbation theory. _J. Chem. Phys._**2001**, _114_, 10252-10264.
* Angeli et al. 2001 Angeli, C.; Cimiraglia, R.; Malrieu, J.-P. N-electron valence state perturbation theory: a fast implementation of the strongly contracted variant. _Chem. Phys. Lett._**2001**, _350_, 297-305.
* Miralles et al. 1993 Miralles, J.; Castell, O.; Caballol, R.; Malrieu, J.-P. Specific CI calculation of energy differences: Transition energies and bond energies. _Chem. Phys._**1993**, _172_, 33-43.
* Garcia et al. 1995 Garcia, V.; Castell, O.; Caballol, R.; Malrieu, J. An iterative difference-dedicated configuration interaction. Proposal and test studies. _Chem. Phys. Lett._**1995**, _238_, 222-229.
* Andersson et al. 1992 Andersson, K.; Malmqvist, P.-A.; Roos, B. O. Second-order perturbation theory with a Complete Active Space Self-Consistent Field reference function. _J. Chem. Phys._**1992**, _96_, 1218-1226.
* Condon et al. 1951 Condon, E. U.; Condon, E. U.; Shortley, G. H. _The theory of atomic spectra_; Cambridge University Press, 1951.
* Alder and Winther 1971 Alder, K.; Winther, A. Phase conventions for angular momentum eigenfunctions. _Phys. Lett. B_**1971**, _34_, 357-358.
* Herchel et al. 2007 Herchel, R.; Boca, R.; Krzystek, J.; Ozarowski, A.; Duran, M.; van Slageren, J. Definitive Determination of Zero-Field Splitting and Exchange Interactions in a Ni(II) Dimer: Investigation of [Ni\({}_{2}\)(en)\({}_{4}\)Cl\({}_{2}\)]Cl\({}_{2}\)2 Using Magnetization and Tunable-Frequency High-Field Electron Paramagnetic Resonance. _J. Am. Chem. Soc._**2007**, _129_, 10306-10307.
Supporting Information for:
The resolution of the weak-exchange limit made rigorous, simple and general in binuclear complexes
Dumitru-Claudiu Sergentu,\({}^{a,b}\) Boris Le Guennic,\({}^{a}\) and Remi Maurice\({}^{*a}\)
\({}^{a}\) _Univ Rennes, CNRS ISCR (Institut des Sciences Chimiques de Rennes) - UMR 6226, Rennes, France._
\({}^{b}\) _Universitatea Alexandru Ioan Cuza din Iasi, Laboratorul RA-03 (RECENT AIR), Iasi, Romania._
###### Contents
* S1 Supporting tables
* S2 xyz coordinates
* S2.1 Complex 1: \([\text{Co}_{2}\text{Cl}_{6}]^{2-}\)
* S2.2 Complex 2: \([\text{Co}_{2}(\text{I})_{2}(\text{acac})_{2}(\text{H}_{2}\text{O})]\)
* S2.3
## S1 Supporting tables
Table S1 Matrix \(U\) used to translate \(\hat{H}^{\text{eff}}\) and \(\hat{H}^{\text{MS}}\) between the coupled-spin and uncoupled-spin bases.
(Rows of \(U\) are labeled by the uncoupled spin products \(\langle M_{S_{a}},M_{S_{b}}|\) and columns by the coupled states \(|S,M_{S}\rangle\) with \(S=3,2,1,0\); the numerical entries of the \(16\times 16\) matrix are not legible in the source.)
\begin{table}
\begin{tabular}{l c c c c c c c c c c c c c c c c} SOCI state & \(|3,3\rangle\) & \(|3,2\rangle\) & \(|3,1\rangle\) & \(|3,0\rangle\) & \(|3,-1\rangle\) & \(|3,-2\rangle\) & \(|3,-3\rangle\) & \(|2,2\rangle\) & \(|2,1\rangle\) & \(|2,0\rangle\) & \(|2,-1\rangle\) & \(|2,-2\rangle\) & \(|1,1\rangle\) & \(|1,0\rangle\) & \(|1,-1\rangle\) & \(|0,0\rangle\) \\ \hline \(\psi_{1}\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.02 & 0 & 0.23 & 0 & 0.02 & 0 & 0 & 0.65 \\ \(\psi_{2}\) & 0 & 0 & 0.03 & 0 & 0.03 & 0 & 0 & 0 & 0 & 0 & 0.43 & 0 & 0.43 & 0 \\ \(\psi_{3}\) & 0 & 0.07 & 0 & 0.07 & 0 & 0 & 0 & 0 & 0 & 0 & 0.39 & 0 & 0 & 0.39 & 0 \\ \(\psi_{4}\) & 0 & 0 & 0.5 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.41 & 0 & 0 \\ \(\psi_{5}\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.46 & 0 & 0 & 0.46 & 0 & 0 & 0 & 0 \\ \(\psi_{6}\) & 0 & 0 & 0 & 0 & 0 & 0 & 0.46 & 0 & 0 & 0.46 & 0 & 0 & 0 & 0 \\ \(\psi_{7}\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.46 & 0 & 0.11 & 0 & 0.4 & 0 & 0 & 0 \\ \(\psi_{8}\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0.46 & 0 & 0 & 0.46 & 0 & 0 & 0 & 0 \\ \(\psi_{9}\) & 0 & 0.01 & 0 & 0.39 & 0 & 0.39 & 0 & 0.01 & 0 & 0 & 0 & 0 & 0.07 & 0 & 0.07 & 0 \\ \(\psi_{10}\) & 0 & 0.17 & 0 & 0.23 & 0 & 0.17 & 0 & 0 & 0 & 0 & 0 & 0.36 & 0 & 0 \\ \(\psi_{11}\) & 0.01 & 0 & 0.42 & 0 & 0.42 & 0 & 0.01 & 0 & 0 & 0 & 0 & 0.03 & 0 & 0 \\ \(\psi_{12}\) & 0 & 0 & 0 & 0 & 0 & 0 & 0.04 & 0 & 0.57 & 0 & 0.04 & 0 & 0 & 0.28 \\ \(\psi_{13}\) & 0 & 0.46 & 0 & 0 & 0 & 0.46 & 0 & 0 & 0 & 0 & 0 & 0.16 & 0 & 0 \\ \(\psi_{15}\) & 0.46 & 0 & 0 & 0 & 0 & 0.46 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \(\psi_{16}\) & 0.45 & 0 & 0.01 & 0 & 0.01 & 0 & 0.45 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \end{tabular}
\end{table}
Table S3: Contribution of the \(|S,M_{S}\rangle\) components of the low-lying \(S=0,\,1,\,2,\,3\) states to the wavefunctions of the 16 low-energy spin-orbit states, obtained from a SO-NEVPT2 calculation on complex 1 in the arbitrary \(xyz\) frame.\({}^{a}\)
\begin{table}
\begin{tabular}{c c c c} \hline \hline State idx & SO-CASSCF & SO-NEVPT2 & model//SO-NEVPT2 \\ \hline \(\psi_{1}\) & 0.0 & 0.0 & 0.0 \\ \(\psi_{2}\) & 1.9 & 1.9 & 2.1 \\ \(\psi_{3}\) & 3.9 & 3.6 & 3.9 \\ \(\psi_{4}\) & 8.7 & 7.7 & 8.1 \\ \(\psi_{5}\) & 214.6 & 177.0 & 178.7 \\ \(\psi_{6}\) & 215.2 & 177.7 & 179.3 \\ \(\psi_{7}\) & 218.9 & 180.6 & 182.2 \\ \(\psi_{8}\) & 220.2 & 182.2 & 183.7 \\ \(\psi_{9}\) & 238.0 & 206.4 & 206.6 \\ \(\psi_{10}\) & 238.3 & 206.9 & 207.1 \\ \(\psi_{11}\) & 241.4 & 209.6 & 210.4 \\ \(\psi_{12}\) & 243.8 & 212.2 & 212.9 \\ \(\psi_{13}\) & 448.8 & 379.7 & 382.3 \\ \(\psi_{14}\) & 449.0 & 380.0 & 382.4 \\ \(\psi_{15}\) & 457.8 & 388.4 & 390.5 \\ \(\psi_{16}\) & 458.2 & 389.0 & 391.2 \\ \hline \hline \end{tabular}
\end{table}
Table S23: Contribution of the \(|S,M_{S}\rangle\) components of the low-lying \(S=3\), 2, 1, 0 states to the wavefunctions of the 16 low-energy spin-orbit states, obtained in SO-NEVPT2 calculations on complex 2 in an arbitrary \(xyz\) frame. |
2309.04046 | The mean-field Limit of sparse networks of integrate and fire neurons | We study the mean-field limit of a model of biological neuron networks based
on the so-called stochastic integrate-and-fire (IF) dynamics. Our approach
allows to derive a continuous limit for the macroscopic behavior of the system,
the 1-particle distribution, for a large number of neurons with no structural
assumptions on the connection map outside of a generalized mean-field scaling.
We propose a novel notion of observables that naturally extends the notion of
marginals to systems with non-identical or non-exchangeable agents. Our new
observables satisfy a complex approximate hierarchy, essentially a tree-indexed
extension of the classical BBGKY hierarchy. We are able to pass to the limit in
this hierarchy as the number of neurons increases through novel quantitative
stability estimates in some adapted weak norm. While we require non-vanishing
diffusion, this approach notably addresses the challenges of sparse interacting
graphs/matrices and singular interactions from Poisson jumps, and requires no
additional regularity on the initial distribution. | Pierre-Emmanuel Jabin, Datong Zhou | 2023-09-07T23:28:35Z | http://arxiv.org/abs/2309.04046v1 | # The mean-field limit of sparse networks of integrate and fire neurons
###### Abstract.
We study the mean-field limit of a model of biological neuron networks based on the so-called stochastic integrate-and-fire (IF) dynamics. Our approach allows to derive a continuous limit for the macroscopic behavior of the system, the 1-particle distribution, for a large number of neurons with no structural assumptions on the connection map outside of a generalized mean-field scaling. We propose a novel notion of observables that naturally extends the notion of marginals to systems with non-identical or non-exchangeable agents. Our new observables satisfy a complex approximate hierarchy, essentially a tree-indexed extension of the classical BBGKY hierarchy. We are able to pass to the limit in this hierarchy as the number of neurons increases through novel quantitative stability estimates in some adapted weak norm. While we require non-vanishing diffusion, this approach notably addresses the challenges of sparse interacting graphs/matrices and singular interactions from Poisson jumps, and requires no additional regularity on the initial distribution.
P-E Jabin and D. Zhou were partially supported by NSF DMS Grants 2205694, 2219297.
## 1. Introduction
This article derives a continuous limit for the large-scale behavior of networks of neurons following a type of dynamics known as integrate-and-fire (IF). It is a natural example of _multi-agent systems_, where each agent (neuron) could influence others and be influenced in return. However because each neuron has a priori different connections to other neurons, it is also an important example of non-exchangeable systems.
We focus on IF systems for a large number of agents or neurons, typically \(86\times 10^{9}\) in a human brain for example. This makes it quite challenging to study the original system, either numerically or analytically. Instead one can try to approach the large-scale behavior of such multi-agent systems through the concept of _mean-field limit_. In classical exchangeable systems, the mean-field limit consists in replacing the exact influence exerted on one particle by its expectation or mean. It is hence connected to the famous notion of propagation of chaos, which allows the use of the law of large numbers to rigorously justify this approximation. However, in non-exchangeable systems, the derivation of the mean-field limit also requires a way to capture the limit of the non-identical interactions between particles or agents.
This article introduces a novel strategy based on a new concept of observables that are well-chosen linear combinations of empirical laws of agents or neurons. This family solves a tree-indexed hierarchy, approximately for a finite number of neurons and exactly at the limit; a key feature of this hierarchy is that the connection weights between neurons do not appear explicitly anymore. As a consequence, the mean-field limit can be derived directly by passing to the limit in the hierarchy, bypassing a priori structural assumptions on the connection weights. In particular our result is entirely compatible with _sparse_ connection weights, as supported by experimental findings in neuroscience. However the IF-type dynamics involve jump processes in time, which inevitably introduce discontinuities. Therefore, at the technical level, a major contribution of this article is the development of well-adapted weak norms that provide quantitative stability estimates.
### 1.1. An IF neuron network with non-identical sparse connections
We focus in this article on a type of stochastic integrate-and-fire model. In this model, neurons interact through "spikes" that represent short electrical pulses in the membrane potential, typically lasting 1-2 ms. A broad range of IF models adopt the following theoretical simplification that dates back to the earliest mathematical model of a neuron [61] as well as [46, 63].
Spikes occur at distinct points in time, initiating what is typically referred to as a "fire" event. For a network of IF neurons, at the exact time when the \(i\)-th neuron fires,
\[\text{for all $j$ connected to $i$},\;X^{j}\text{ jumps by $w_{j,i}$},\]
where \(X^{j}\) describes the membrane potential of the \(j\)-th neuron and \(w_{j,i}\) represents the _synaptic connection_ from \(i\) to \(j\). The case of no synaptic connection is represented by \(w_{j,i}=0\).
There exists a large variety of models with various rules to determine when a neuron is firing and what is the evolution of the membrane potential between spikes. In the seminal work [61], the firing of neuron \(i\) is predicted at the time \(X^{i}\) reaches a certain hard threshold value \(X_{F}\). According to IF dynamics, at such a time point each \(X^{j}\) jumps by \(w_{j,i}\) and \(X^{i}\) is reset to zero. However we consider instead in the present paper a notion of soft threshold where the firing of each neuron follows independent Poisson process with a rate that depends on the membrane potential.
When there is no firing, the "pre-spike" dynamics of membrane potential is usually given by a simple ODE or SDE, which we may write in our case as
\[\operatorname{d}\!X^{i}(t)=\mu(X^{i}(t))\operatorname{d}\!t+\sigma(X^{i}(t))\operatorname{d}\!\mathbf{B}^{i}_{t}.\]
As mentioned earlier, there exists a large variety of IF models in terms of the equations for pre-spike dynamics and criteria for firing. From the point of view of the mathematical analysis developed in this paper, both the stochasticity in the SDE equation on \(X^{i}(t)\) and the soft threshold are needed.
The non-linearity of pre-spike dynamics has been observed in modern experimental studies such as [4], and stochasticity was noted in [34, 33, 55]. Although biophysical models such as Hodgkin-Huxley [48] and FitzHugh-Nagumo [30, 65] are available for more accurately representing the shape of each spike, IF-type dynamics are frequently preferred for their perceived precision when investigating multiple-neuron networks. Nevertheless, the present model is still a compromise between mathematical succinctness and biological plausibility. Some extended mathematical models that aim to capture more complex neuronal phenomena have also been studied, for example, in [11, 67, 69]. For a more extensive discussion of IF models in the context of neuroscience, we refer to [9, 35, 36] and the references therein. For a more thorough exploration of the biological considerations, we direct interested readers to references [36, 76].
To complete the definition at end points, it is conventional to define \(X^{i}(t)\) at a firing time as the value _after_ the jump or reset, making each \(X^{i}(t)\) right continuous with left limits (cadlag functions). This allows us to give a precise mathematical definition of the dynamics. Let \((X^{i}_{t})_{i=1}^{N}\) be the \(\mathbb{R}\)-valued cadlag processes representing the membrane potential changes of the \(N\) neurons and let \(w_{N}:=(w_{i,j;N})_{i,j=1}^{N}\) be the interaction matrix describing the synaptic connection between these neurons. The IF-type dynamics of neurons are characterized by the following SDE in integral form holding for all \(i\in\{1,\ldots,N\}\):
\[\begin{split} X^{i;N}_{t}=X^{i;N}_{0}&+\int_{0}^{t }\mu(X^{i;N}_{s^{-}})\;\mathrm{d}s+\int_{0}^{t}\sigma(X^{i;N}_{s^{-}})\; \mathrm{d}\mathbf{B}^{i}_{s}\\ &+\sum_{j\neq i}w_{i,j;N}\int_{0}^{t}\int_{0}^{\infty}\mathbbm{1} \{z\leq\nu(X^{j;N}_{s^{-}})\}\;\mathbf{N}^{j}(\mathrm{d}z,\mathrm{d}s)\\ &-\int_{0}^{t}\int_{0}^{\infty}X^{i;N}_{s^{-}}\mathbbm{1}\{z\leq \nu(X^{i;N}_{s^{-}})\}\mathbf{N}^{i}(\mathrm{d}z,\mathrm{d}s),\end{split} \tag{1.1}\]
where
\(\{\mathbf{N}^{i}\}_{i=1}^{N}\) are homogeneous spatial Poisson processes w.r.t. the Lebesgue measure, \(\{\mathbf{B}^{i}\}_{i=1}^{N}\) are standard Wiener processes, and the \(2N\) processes are independent.
For the target neuron \(i\), the term \(\mu(X^{i;N}_{s^{-}})\mathrm{d}s\) summarizes its pre-spike dynamics and \(\sigma(X^{i;N}_{s^{-}})\mathrm{d}\mathbf{B}^{i}_{s}\) adds a Brownian noise. It experiences a jump of \(w_{i,j;N}\) when another neuron \(j\) fires and is reset to zero when it fires itself. The firing of neuron \(i\) occurs at a rate depending on its membrane potential, which we denote by \(\nu(X^{i;N}_{s^{-}})\) and which is encoded through the Poisson processes \(\mathbf{N}^{i}(\mathrm{d}z,\mathrm{d}s)\).
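For intuition only, the following minimal sketch (not part of the analysis) shows how dynamics of the type (1.1) can be simulated with an Euler-Maruyama step for the drift-diffusion part and a Bernoulli thinning of the firing Poisson processes. The functions `mu`, `sigma`, `nu`, all sizes and the random seed are placeholders to be supplied by the user, and the diagonal of `w` is assumed to be zero (matching the sum over \(j\neq i\)).

```python
import numpy as np

def simulate_if_network(w, mu, sigma, nu, x0, t_end, dt, seed=0):
    """Crude Euler-Maruyama sketch of the IF dynamics (1.1).

    w   : (N, N) array, w[i, j] = jump of neuron i when neuron j fires
          (zero diagonal assumed).
    mu, sigma, nu : vectorized callables on the membrane potentials.
    x0  : (N,) initial membrane potentials.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    spikes = []                                   # recorded (time, neuron) events
    for k in range(int(t_end / dt)):
        # pre-spike drift-diffusion step
        x += mu(x) * dt + sigma(x) * np.sqrt(dt) * rng.standard_normal(x.size)
        # a neuron fires on [t, t + dt) with probability ~ nu(x) * dt (thinning)
        fired = rng.random(x.size) < np.clip(nu(x) * dt, 0.0, 1.0)
        if fired.any():
            x += w[:, fired].sum(axis=1)          # jumps received from firing neurons
            x[fired] = 0.0                        # firing neurons are reset to zero
            spikes.extend((k * dt, int(j)) for j in np.flatnonzero(fired))
    return x, spikes
```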
For the simplified case where the connections between neurons are all identical, i.e. \(w_{i,j;N}=1/N\), \(\forall i,j\in\{1,\ldots,N\}\), the mean-field limit of (1.1) or its variations can be expressed as a PDE for the (time-varying) density function \(f(t,x)\), where \(x\in\mathbb{R}\) represents the membrane potential. We mention [73] that employs a PDE-based approach, and [22, 24, 29] that each offer a distinct probabilistic perspective. Though significantly different from (1.1), Hawkes processes provide another popular class of models for biological neuron networks and their mean-field limit has also been studied, as in [14, 25]. We also cite [5] for the study of large biophysical models with Hodgkin-Huxley and FitzHugh-Nagumo equations for the neurons, together with [68] which derives an IF model from biophysical models in a mean-field setting. Even in the case of identical connections, we emphasize that some neuron models may contain singularities that lead to important mathematical challenges when deriving the mean-field limit.
While assuming identical connections is a significant simplification, the derived mean-field limits have nonetheless provided useful insights into our understanding of large biological neuron networks. For some limiting models, the mean-field equations can for example exhibit blow-up in finite time, which may represent some large-scale synchronization within the network, see for instance [10, 12, 13] from a PDE perspective, and [23] from a probability point of view. The issue of convergence to equilibrium in the mean-field limit is also an important question, for which we refer for example to [32] and [28]. Other studies, such as [20, 21, 19], have explored the spectral conditions sufficient for the existence of periodic solutions near the invariant measure through a Hopf bifurcation.
Systems with non-identical connections remain less understood, despite their relevance to applications in neuroscience, as noted for instance in [71]. This is also supported by recent progress in experimental biology that makes detailed connection graphs for large neuron networks available [49]. Mathematically, non-identical connections fundamentally alter the dynamics of coupled ODEs or SDEs like (1.1), rendering them _non-exchangeable_ and making many established tools for exchangeable systems lose their applicability.
Despite these challenges, there exists a wide range of results that are able to handle systems with certain types of non-identical connections, provided some structural assumptions are made. A first example assumes that connections follow the algebraic constraint \(\sum_{j}w_{i,j;N}\equiv 1\) and that the initial data \((X_{0}^{i})_{i=1}^{N}\) are i.i.d.; the same mean-field equation as in the exchangeable case is then obtained, see for instance [50]. Another well-known case is found when the connections smoothly depend on the physical location of each neuron: A typical assumption is that \(w_{i,j;N}=W(y_{i;N},y_{j;N})\), where \(y_{i;N}\in\mathbb{R}^{d}\) denotes the spatial location of the \(i\)-th neuron and \(W(\cdot,\cdot)\) is a smooth function. This case leads to some version of the well-known neural field equations, see [7, 42, 43, 44, 79, 1]. Under this type of assumption on the connections, the mean-field limit has also been investigated in [15] for a model based on Hawkes processes. Another well-known setting consists in taking random connections, typically corresponding to some classical random graph. This can of course be an attractive assumption when the connections remain mostly unknown. The mean-field limit has been rigorously derived with several types of random connections including the Erdos-Renyi type, as shown in [41]. We also mention [18, 64, 66] that obtain mean-field limits of other multi-agent systems, still with random connections.
It is also enlightening to draw a comparison with the wider spectrum of results on general non-exchangeable systems and not specifically IF models. Many approaches rely on graphon theory, such as [54] which derives the mean-field limit for the Kuramoto model (originally introduced in [57]) while subsequent explorations of the dynamics were performed in [16, 17]. Graphons are natural tools to try to describe the graph limit of connections \(w_{i,j;N}\) without a priori knowledge of additional regularity. Unfortunately, the use of graphons requires a dense scaling for the connections with typically \(\max_{i,j}|w_{i,j;N}|\sim O(1/N)\). There are still some results on sparse graph connections. We mention [59] based on some concept of weak convergence on graphs, or [37, 56, 38, 39] which are based on extensions of graphons such as graphops. While those results still require a priori knowledge of some additional convergence of \(w_{i,j;N}\), the case of sparse connections without a priori regularity has been recently studied in [51].
We keep in the present article the same general assumptions on connections as [51] namely,
* The \(w_{i,j;N}\) may be completely different for every pair of neurons.
* The \(w_{i,j;N}\) can be positive or negative with corresponding excitation or inhibition between neurons, and are not symmetric.
* The number of neurons is assumed to be very large \(N\gg 1\). We recall in particular that the human brain for example contains approximately \(8.6\times 10^{10}\) neurons.
* The \(w_{i,j;N}\) satisfy the following scaling: \[\max\Big{(}\max_{i}\sum_{j}|w_{i,j;N}|,\max_{j}\sum_{i}|w_{i,j;N}|\Big{)}\sim O (1),\qquad\max_{i,j}|w_{i,j;N}|\ll 1.\] This scaling allows each neuron \(i\) to be connected to a large population of neurons \(j\), while keeping the network sparsely connected. This again seems to fit with the average of \(7000\) synaptic connections per neuron in the human brain.
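As a purely illustrative example of a family of matrices compatible with this scaling (our own construction, not one used in the paper), one can let every neuron receive input from \(K\) randomly chosen neurons with synaptic weights of size \(1/K\): row sums of \(|w_{i,j;N}|\) are then exactly one, column sums concentrate around one, and the largest entry is \(1/K\ll 1\) as soon as \(1\ll K\ll N\).

```python
import numpy as np

def random_sparse_connectivity(n_neurons, k, inhibitory_fraction=0.2, seed=0):
    """Sketch of a sparse synaptic matrix satisfying the scaling above."""
    rng = np.random.default_rng(seed)
    w = np.zeros((n_neurons, n_neurons))
    for i in range(n_neurons):
        # pick k presynaptic partners, excluding neuron i itself
        candidates = np.concatenate([np.arange(i), np.arange(i + 1, n_neurons)])
        pre = rng.choice(candidates, size=k, replace=False)
        # excitatory (+1/k) or inhibitory (-1/k) weights
        signs = np.where(rng.random(k) < inhibitory_fraction, -1.0, 1.0)
        w[i, pre] = signs / k
    return w

# quick check of the three quantities entering the scaling assumption:
# w = random_sparse_connectivity(1000, 50)
# print(np.abs(w).sum(axis=1).max(), np.abs(w).sum(axis=0).max(), np.abs(w).max())
```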
However, as explained later on, we introduce several new key ideas with respect to [51], which allows for a broader set of assumptions on the initial data and also makes dealing with jump processes easier.
### 1.2. The marginal laws and BBGKY hierarchy for exchangeable systems
A classical way to address this mean-field limit of large SDE systems like (1.1) is to shift our focus from tracking trajectories to examining the joint law of various subsets of neurons.
For clarity, let us first mention some of the notations that we are using. We denote by \(\mathcal{M}(\mathbb{R}^{k})\) the space of signed Borel measures with bounded _total variation norm_ on \(\mathbb{R}^{k}\). \(\mathcal{M}_{+}(\mathbb{R}^{k})\)
stands for the subset of non-negative measures. \(\mathcal{P}(\mathbb{R}^{k})\) stands for the subset of probability measures. When choosing a topology on \(\mathcal{M}(\mathbb{R}^{k})\), we will mostly use the classical notion of _weak-*_ convergence. Note that we will also have bounds on some exponential moments, so that together with those estimates, weak-* convergence will typically imply tight convergence.
We now introduce the classical concept of marginals for exchangeable systems, where we emphasize the following steps to highlight the difference with non-exchangeable systems,
* For any distinct indices \(i_{1},\dots,i_{k}\in\{1,\dots,N\}\), denote the marginal law of the agents \(X_{t}^{i_{1};N},\dots,X_{t}^{i_{k};N}\) by \[f_{N}^{i_{1},\dots,i_{k}}(t,\cdot):=\,\mathrm{Law}(X_{t}^{i_{1};N},\dots,X_{t} ^{i_{k};N})\in\mathcal{P}(\mathbb{R}^{k}).\]
* Formally define \(f_{N}^{i_{1},\dots,i_{k}}\equiv 0\) if there are duplicated indices among \(i_{1},\dots,i_{k}\).
* For the full joint law, adopt the simplified notation that \[f_{N}(t,\cdot):=f_{N}^{1,\dots,N}(t,\cdot)=\mathrm{Law}(X_{t}^{1;N},\dots,X_{ t}^{N;N})\in\mathcal{P}(\mathbb{R}^{N}).\]
In the context of an exchangeable system (identical connections, \(w_{i,j;N}=w(N)\)), it is straightforward that, if \((X_{t}^{1;N},\ldots,X_{t}^{N;N})\) is a solution of the system, then any permutation \((X_{t}^{i_{1};N},\ldots,X_{t}^{i_{N};N})\) solves the same system as well. This implies that the full joint law equation is symmetric, so it follows that marginals of the same order are identical, namely,
\[f_{N}^{i_{1},\dots,i_{k}}(t,\cdot)=f_{N}^{j_{1},\dots,j_{k}}(t,\cdot)\in \mathcal{P}(\mathbb{R}^{k}),\]
if the indices \(1\leq i_{1},\dots,i_{k}\leq N\) are distinct and \(1\leq j_{1},\dots,j_{k}\leq N\) are also distinct.
Given this property, it is natural to define the unique \(k\)-marginal
\[f_{N,k}(t,\cdot):=f_{N}^{1,\dots,k}(t,\cdot)\in\mathcal{P}(\mathbb{R}^{k}).\]
The marginals are solutions to the famous BBGKY hierarchy of equations, in which the equation for each \(f_{N,k}\) depends on itself and the next marginal \(f_{N,k+1}\) recursively.
One of the key concepts to obtain the mean-field limit is the notion of (Kac's) chaos, which can be defined in various equivalent ways. One possible definition involves the marginals which is the one we use in this article: We have chaos iff the \(k\)-marginals of random variables \((X^{1;N},\dots,X^{N;N})\) converge weak-* to the tensorization of a certain one-particle distribution \(f\in\mathcal{P}(\mathbb{R})\) as \(N\to\infty\), namely,
\[f_{N,k}\stackrel{{*}}{{\rightharpoonup}}f^{\otimes k}\in \mathcal{P}(\mathbb{R}^{k}),\quad f^{\otimes k}(z_{1},\dots,z_{k}):=\prod_{m=1 }^{k}f(z_{m}),\quad\text{for all fixed $k\in\mathbb{N}$}.\]
At least for smooth enough dynamics, it is possible to show that chaos on the initial data implies chaos at every later time, which is the famous propagation of chaos. Among the various strategies for proving propagation of chaos and for obtaining the Vlasov equation as a mean-field limit, we highlight the following one given its similarities with the approach we will follow:
* Pass to the limit in the BBGKY hierarchy to the Vlasov hierarchy as \(N\to\infty\), which yields \(f_{N,k}(t,\cdot)\stackrel{{*}}{{\rightharpoonup}}f_{\infty,k}(t, \cdot)\in\mathcal{P}(\mathbb{R}^{k})\) where \(f_{\infty,k}(t,\cdot)\) represents a solution to the Vlasov hierarchy with initial data in tensorized form, namely \(f_{\infty,k}(0,\cdot)=f_{0}^{\otimes k}\).
* Notice that if the one-particle distribution \(f(t,\cdot)\) solves the Vlasov equation with initial data \(f_{0}\), then the \(k\)-marginals in tensorized form \(f^{\otimes k}(t,\cdot)\) are a solution to the Vlasov hierarchy with the same initial data \(f_{\infty,k}(0,\cdot)=f_{0}^{\otimes k}\).
* Prove the uniqueness of the solution of the Vlasov hierarchy, which allows one to conclude that at all time \(t\geq 0\), \(f_{\infty,k}(t,\cdot)=f^{\otimes k}(t,\cdot)\).
A variation of this argument involves directly obtaining stability estimates between the BBGKY hierarchy and the Vlasov hierarchy, which quantify the deviation of the \(N\)-particle SDE system from the Vlasov equation at the level of marginal laws. In general, deriving the mean-field limit can be challenging, especially when the interaction between particles is singular or when there is no diffusion in the dynamics. Not surprisingly, the above approach usually requires smoothness of the dynamics: from analytic in [75] to Lipschitz in [40]. However, recent results such as [58] and [8] have shown how to take advantage of non-vanishing diffusion to handle interaction kernels that merely belong, respectively, to \(L^{\infty}\) (more precisely to some exponential Orlicz space) and to \(L^{p}\) for \(p>1\).
We hope to implement a similar strategy for non-exchangeable systems, such as our (1.1). However, given a solution \((X_{t}^{1;N},\ldots,X_{t}^{N;N})\) of (1.1), a permutation \((X_{t}^{i_{1};N},\ldots,X_{t}^{i_{N};N})\) is not in general also a solution since \(w_{1,2;N}\) is in general not equal to \(w_{i_{1},i_{2};N}\) for example. Consequently, there is no longer a single well-defined notion of \(k\)-marginal and, instead, we have to consider the more complicated situation where, for a fixed \(k\), each marginal law \(f_{N}^{i_{1},\ldots,i_{k}}\) might differ.
This is, however, not even the most significant obstacle. The more intricate issue lies in the fact that any direct generalization of the BBGKY hierarchy would depend explicitly on the coefficients \(w_{i,j}\). Hence, passing to the limit in the hierarchy would require passing to the limit in some appropriate sense in the coefficients \(w_{i,j;N}\). If the \(w_{i,j;N}\) are of order \(O(1/N)\), one can potentially apply graphon theory [62] to achieve this, as has been done for the Kuramoto model in [54]. Unfortunately, we are considering potentially sparse networks without any a priori smoothness and we have no idea how to generalize graphon theory in that case.
### 1.3. The novel notion of observables for non-exchangeable systems
A main contribution of the paper is to introduce a novel concept of _observables_ in Definition 1.2, which not only builds on the marginal laws \(f_{N}^{i_{1},\ldots,i_{k}}\) but also takes into account the effect of the connectivity \(w_{N}=(w_{i,j;N})_{i,j=1}^{N}\) in (1.1).
Those observables satisfy an approximate hierarchy that extends in some sense the BBGKY hierarchy but which does not involve any explicit dependence on the connection weights. This new hierarchy hence offers a promising framework for obtaining the mean-field limit, as it will be enough to pass to the limit in a countable family of observables and equations.
Its structure however remains more complex. The main idea behind the definition of the new observables, is to track all possible interactions between any finite number of neurons. In the exchangeable case, it does not matter in which order these interactions take place, so that our observables would reduce to the marginals and only depend on the total number of neurons under consideration. But in the non-exchangeable case such as here, it is necessary to keep track of which neuron is interacting with which. To achieve this, we use tree graphs to index our observables, and establish a natural correspondence between adding a leaf on a node of the tree and interacting with a particular agent among the \(k\) selected ones.
**Definition 1.1**.: _Define \(\mathcal{T}\) as a set of directed labeled graphs (trees) constructed recursively in the following manner_
* _Denoting by_ \(|T|\) _the total number of vertices in_ \(T\)_, index the vertices in_ \(T\) _from_ \(1,\ldots,|T|\)_._
* _The graph of a single node (indexed by_ \(1\)_) belongs to_ \(\mathcal{T}\)_._
* _All other elements of_ \(\mathcal{T}\) _are constructed recursively: For any_ \(T\in\mathcal{T}\) _and any_ \(1\leq m\leq|T|\)_, the graph_ \(T+m\) _belongs to_ \(\mathcal{T}\)_, where_ \(T+m\) _is obtained by adding a leaf to vertex_ \(\#m\) _namely by adding a node indexed by_ \(|T|+1\) _and adding_ \((m,|T|+1)\) _as an edge to_ \(T\)_._
The family \(\mathcal{T}\) corresponds to all trees up to isomorphisms but it is equipped with a natural orientation. The root of the tree is always labeled \(1\), and \((l,m)\in\mathcal{E}(T)\) if there exists an edge connecting \(l\) and \(m\) and if \(l\) is closer to the root than \(m\). This family enables us to define our observables.
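To make the recursive construction concrete, here is a small sketch (an illustration only; the representation of a tree as a pair consisting of its number of vertices and its edge list is our own choice) that enumerates the elements of \(\mathcal{T}\) with a prescribed maximal number of vertices.

```python
def trees_up_to(max_size):
    """Enumerate the rooted labeled trees of Definition 1.1 with at most
    `max_size` vertices.  A tree T is stored as (n_vertices, edges), where
    `edges` is a tuple of directed pairs (l, m) with l closer to the root."""
    current = [(1, ())]                    # the single-node tree, root labeled 1
    all_trees = list(current)
    for _ in range(max_size - 1):
        next_level = []
        for n, edges in current:
            for m in range(1, n + 1):      # T + m: attach a new leaf n+1 to vertex #m
                next_level.append((n + 1, edges + ((m, n + 1),)))
        all_trees.extend(next_level)
        current = next_level
    return all_trees

# trees_up_to(3) yields the single node, the unique two-vertex tree, and the two
# three-vertex trees: the path with edges (1,2),(2,3) and the "cherry" (1,2),(1,3).
```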
**Definition 1.2**.: _Consider any connectivity matrix \(w_{N}=(w_{i,j})_{i,j=1}^{N}\) and a collection of random processes \((X_{t}^{1;N},\ldots,X_{t}^{N;N})\). We define the observable \(\tau_{N}(T,w_{N},f_{N})(t,\cdot)\in\mathcal{M}(\mathbb{R}^{|T|})\), \(T\in\mathcal{T}\) as the weighted sum of marginals_
\[\tau_{N}(T,w_{N},f_{N})(t,\mathrm{d}z):=\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|}= 1}^{N}w_{N,T}(i_{1},\ldots,i_{|T|})f_{N}^{i_{1},\ldots,i_{|T|}}(t,\mathrm{d}z_ {1},\ldots,\mathrm{d}z_{|T|}) \tag{1.2}\]
_where the weight of each marginal is given by_
\[w_{N,T}(i_{1},\ldots,i_{|T|}):=\prod_{(l,m)\in\mathcal{E}(T)}w_{i_{l},i_{m};N }\in\mathbb{R}.\]
_We also define the absolute observable \(|\tau_{N}|(T,w_{N},f_{N})(t,\cdot)\in\mathcal{M}_{+}(\mathbb{R}^{|T|})\), \(T\in\mathcal{T}\), as_
\[|\tau_{N}|(T,w_{N},f_{N})(t,\mathrm{d}z):=\frac{1}{N}\sum_{i_{1}, \ldots,i_{|T|}=1}^{N}\big{|}w_{N,T}(i_{1},\ldots,i_{|T|})\big{|}f_{N}^{i_{1}, \ldots,i_{|T|}}(t,\mathrm{d}z_{1},\ldots,\mathrm{d}z_{|T|}).\]
As we can see, if \(T_{1},T_{2}\in\mathcal{T}\) are isomorphic as tree graphs, the corresponding observables are also identical up to permutation. In this sense, we can say our observables are indexed by trees. It will be apparent later that the weights are chosen in a natural way so that, in the evolution of observable \(T\), the observable \(T+m\) accounts for the interaction with the \(m\)-th agent among the \(|T|\) selected ones.
There does not appear to be an immediate interpretation for most observables, with the obvious exception of the first one. If we take as \(T=T_{1}\) the trivial tree with only one vertex, then the observable is the \(1\)-particle distribution, which is just the average of all marginals of order \(1\),
\[\tau_{N}(T_{1},w_{N},f_{N})(t,\mathrm{d}z_{1})=\frac{1}{N}\,\sum_ {i=1}^{N}f_{N}^{i}(t,\mathrm{d}z_{1}).\]
Hence obtaining the limit of the observables directly provides the limit of the \(1\)-particle distribution.
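To illustrate the definition further (with the shorthand \(T_{2}\) for the two-vertex tree with edge \((1,2)\), used only in this example), one has
\[\tau_{N}(T_{2},w_{N},f_{N})(t,\mathrm{d}z_{1},\mathrm{d}z_{2})=\frac{1}{N}\sum_{i_{1},i_{2}=1}^{N}w_{i_{1},i_{2};N}\,f_{N}^{i_{1},i_{2}}(t,\mathrm{d}z_{1},\mathrm{d}z_{2}),\]
while the two three-vertex trees \(T_{2}+1\) (edges \((1,2),(1,3)\)) and \(T_{2}+2\) (edges \((1,2),(2,3)\)) give
\[\tau_{N}(T_{2}+1,w_{N},f_{N})(t,\mathrm{d}z)=\frac{1}{N}\sum_{i_{1},i_{2},i_{3}=1}^{N}w_{i_{1},i_{2};N}\,w_{i_{1},i_{3};N}\,f_{N}^{i_{1},i_{2},i_{3}}(t,\mathrm{d}z_{1},\mathrm{d}z_{2},\mathrm{d}z_{3}),\]
\[\tau_{N}(T_{2}+2,w_{N},f_{N})(t,\mathrm{d}z)=\frac{1}{N}\sum_{i_{1},i_{2},i_{3}=1}^{N}w_{i_{1},i_{2};N}\,w_{i_{2},i_{3};N}\,f_{N}^{i_{1},i_{2},i_{3}}(t,\mathrm{d}z_{1},\mathrm{d}z_{2},\mathrm{d}z_{3}),\]
where, by the convention \(f_{N}^{i_{1},\ldots,i_{k}}\equiv 0\) for duplicated indices, only tuples of pairwise distinct indices contribute to the sums.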
We also emphasize that, in contrast to the marginals, our observables are not probability measures. They are neither necessarily normalized to a total mass of \(1\), nor guaranteed to be non-negative. But the scaling of \(w_{N}\) still ensures the total variation of any observable is at most \(O(1)\),
**Lemma 1.3**.: _For any \(T\in\mathcal{T}\), we have that_
\[\big{\|}|\tau_{N}|(T,w_{N},f_{N})(t,\cdot)\big{\|}_{\mathcal{M}( \mathbb{R}^{|T|})}\leq\big{(}\mathrm{max}_{i}\sum_{j=1}^{N}|w_{i,j;N}|\big{)} ^{|T|-1}.\]
Proof.: Recall that any marginal law has total mass \(1\) by definition, thus,
\[\big{\|}|\tau_{N}|(T,w_{N},f_{N})(t,\cdot)\big{\|}_{\mathcal{M}( \mathbb{R}^{|T|})}\leq\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|}=1}^{N}\big{|}w_{N, T}(i_{1},\ldots,i_{|T|})\big{|}.\]
If \(|T|=1\), the right-hand side trivially equals \(1\), which settles this case.
When \(|T|\geq 2\), we can assume \(T=T^{\prime}+m\) and argue recursively
\[\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|}=1}^{N}\big{|}w_{N,T}(i_{1},\ldots,i_{|T|})\big{|}=\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|}=1}^{N}\big{|}w_ {N,T^{\prime}}(i_{1},\ldots,i_{|T|-1})\big{|}|w_{i_{m},i_{|T|};N}|\] \[\leq\bigg{(}\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|-1}=1}^{N}\big{|} w_{N,T^{\prime}}(i_{1},\ldots,i_{|T|-1})\big{|}\bigg{)}\mathrm{max}_{i}\sum_{j=1}^ {N}|w_{i,j;N}|.\]
**Remark 1.4**.: _While in Definition 1.2 the laws and observables are only assumed to be measures, and hence are denoted by \(f(\mathrm{d}z)\), we may adopt the abuse of notation \(f(z)\) in later discussions. Many of the forthcoming equations, such as the Vlasov equation (1.3), are indeed classically written in terms of densities._
Given the non-exchangeability of the system (1.1), the limiting behavior as \(N\to\infty\) cannot be approximated by just a function \(f(t,x)\), with \(x\in\mathbb{R}\). Following the idea in [54] and [51], we introduce the so-called extended density \(f(t,\xi,x)\) instead, where the additional variable \(\xi\in[0,1]\) accounts for the non-exchangeable indices \(i\in\{1,\ldots,N\}\) in the mean-field limit. The non-identical interactions in the limit are described by a kernel that we denote by \(w(\xi,\zeta)\), \((\xi,\zeta)\in[0,1]^{2}\)
and the Vlasov equation corresponding to (1.1) is given by
\[\begin{split}\partial_{t}f(t,\xi,x)+\partial_{x}\Big{(}\mu^{*}_{f}( t,\xi,x)f(t,\xi,x)\Big{)}-\frac{\sigma^{2}}{2}\partial_{xx}f(t,\xi,x)\\ +\nu(x)f(t,\xi,x)-\delta_{0}(x)J_{f}(t,\xi)=0,\end{split} \tag{1.3}\]
where the mean firing rate and the mean-field drift are defined as
\[J_{f}(t,\xi):=\int_{\mathbb{R}}\nu(x)f(t,\xi,x)\;\mathrm{d}x,\qquad\mu^{*}_{f}( t,\xi,x):=\mu(x)+\int_{0}^{1}w(\xi,\zeta)J_{f}(t,\zeta)\;\mathrm{d}\zeta. \tag{1.4}\]
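For a discretized illustration only (a sketch under the simplifying assumption, unlike the general setting below, that \(w\) is a bounded kernel sampled on a grid and that \(f\) has a density tabulated on a grid in \((\xi,x)\) at a fixed time; all names and grids are placeholders), the two quantities in (1.4) reduce to plain quadratures:

```python
import numpy as np

def mean_firing_rate_and_drift(f, w, mu, nu, xi_grid, x_grid):
    """Evaluate J_f(xi) and mu*_f(xi, x) of (1.4) by the trapezoidal rule.

    f  : array of shape (n_xi, n_x), density f(xi, x) at a fixed time
    w  : array of shape (n_xi, n_xi), kernel w(xi, zeta) sampled on xi_grid
    mu, nu : vectorized callables evaluated on x_grid
    """
    # J_f(xi) = int nu(x) f(xi, x) dx
    J = np.trapz(nu(x_grid)[None, :] * f, x_grid, axis=1)        # shape (n_xi,)
    # coupling term: int w(xi, zeta) J_f(zeta) dzeta
    coupling = np.trapz(w * J[None, :], xi_grid, axis=1)         # shape (n_xi,)
    # mu*_f(xi, x) = mu(x) + coupling(xi)
    drift = mu(x_grid)[None, :] + coupling[:, None]              # shape (n_xi, n_x)
    return J, drift
```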
In our context, \(w(\xi,\zeta)\) should be the limit object of the sparsely connected \(w_{N}:=(w_{i,j;N})_{i,j=1}^{N}\) that we have described in Section 1.1. As a consequence, we are forced to consider singular kernels \(w(\xi,\zeta)\) and the only property we can inherit from \(w_{N}\) is the \(O(1)\) scaling of
\[\max\Big{(}\max_{i}\sum_{j}|w_{i,j;N}|,\max_{j}\sum_{i}|w_{i,j;N}|\Big{)}=\max \big{(}\|w_{N}\|_{\ell^{\infty}\to\ell^{\infty}},\|w_{N}\|_{\ell^{1}\to\ell^{1} }\big{)}.\]
To extend this norm for \(N\times N\) connectivity matrices to the kernel on \((\xi,\zeta)\in[0,1]^{2}\), we define the Banach space \(L^{\infty}_{\xi}([0,1],\mathcal{M}_{\zeta}[0,1])\) as the topological dual of the (strong) Bochner space \(L^{1}_{\xi}([0,1],C_{\zeta}[0,1])\). Since \(\mathcal{M}_{\xi,\zeta}([0,1]^{2})\) is the topological dual of \(C_{\xi,\zeta}([0,1]^{2})\) and the canonical embedding
\[C_{\xi,\zeta}([0,1]^{2})\to L^{1}_{\xi}([0,1],C_{\zeta}[0,1])\]
is continuous with dense image, one can consider
\[L^{\infty}_{\xi}([0,1],\mathcal{M}_{\zeta}[0,1])\subset\mathcal{M}_{\xi, \zeta}([0,1]^{2}).\]
This leads to the main Banach space \(\mathcal{W}\) for the kernels
\[\mathcal{W}:=\{w\in\mathcal{M}([0,1]^{2}):w(\xi,\mathrm{d}\zeta)\in L^{\infty }_{\xi}([0,1],\mathcal{M}_{\zeta}[0,1]),\;w(\mathrm{d}\xi,\zeta)\in L^{\infty }_{\zeta}([0,1],\mathcal{M}_{\xi}[0,1])\}.\]
We note that we deal later in the article with a priori estimates of \(f(t,\xi,x)\) and we use for those the usual strong Bochner spaces \(L^{\infty}([0,t_{*}]\times[0,1];\mathcal{M}(\mathbb{R}))\).
The proper definition of the kernel space \(\mathcal{W}\) allows us to correctly define the conjectured limiting observables from the extended density.
**Definition 1.5**.: _Consider a connectivity kernel \(w(\xi,\zeta)\in\mathcal{W}\), \((\xi,\zeta)\in[0,1]^{2}\) and some extended density \(f\in L^{\infty}([0,t_{*}]\times[0,1];\mathcal{M}_{+}(\mathbb{R}))\). Define the observables \(\tau_{\infty}(T,w,f)(t,\cdot)\in\mathcal{M}(\mathbb{R}^{|T|})\), \(T\in\mathcal{T}\), as_
\[\tau_{\infty}(T,w,f)(t,z):=\int_{[0,1]^{|T|}}w_{T}(\xi_{1},\ldots,\xi_{|T|}) \prod_{m=1}^{|T|}f(t,\xi_{m},z_{m})\;\mathrm{d}\xi_{1},\ldots,\mathrm{d}\xi_ {|T|}, \tag{1.5}\]
_where_
\[w_{T}(\xi_{1},\ldots,\xi_{|T|}):=\prod_{(l,m)\in\mathcal{E}(T)}w(\xi_{l},\xi_{ m}).\]
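For instance, for the two-vertex tree \(T_{2}\) considered above, (1.5) reads
\[\tau_{\infty}(T_{2},w,f)(t,z_{1},z_{2})=\int_{[0,1]^{2}}w(\xi_{1},\xi_{2})\,f(t,\xi_{1},z_{1})\,f(t,\xi_{2},z_{2})\;\mathrm{d}\xi_{1}\,\mathrm{d}\xi_{2},\]
and integrating this quantity against \(\nu(z_{2})\) in \(z_{2}\) yields \(\int_{0}^{1}\big{(}\int_{0}^{1}w(\xi_{1},\xi_{2})J_{f}(t,\xi_{2})\,\mathrm{d}\xi_{2}\big{)}f(t,\xi_{1},z_{1})\,\mathrm{d}\xi_{1}\), that is, the coupling term of the drift in (1.4) averaged against \(f\) in \(\xi_{1}\); this suggests how the hierarchy can encode the mean-field interaction without referring to \(w\) explicitly.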
It is easy to check the validity of the integrals in (1.4) and (1.5) if the kernel \(w(\xi,\zeta)\) is smooth or when \(w\in L^{\infty}\). At present, it may not be clear yet why the integrations with respect to \(\xi\in[0,1]\) involved in (1.4) and (1.5) make sense when we only have \(w\in\mathcal{W}\). We prove in Section 4 that it is possible to extend the bounds of Lemma 1.3 through a density argument. We note that a definition akin to \(\tau_{\infty}\), along with a similar argument on integrability, has been addressed in [51].
### 1.4. Main result
Our main result states that the large-scale dynamics of (1.1), described in terms of the observables \(\tau_{N}(T,w_{N},f_{N})\), can indeed be approximated by the mean-field limit, provided the initial observables \(\tau_{N}(t=0)\) are approximated by the initial \(\tau_{\infty}(t=0)\).
**Theorem 1.6**.: _Assume that \(\mu,\nu\in W^{1,\infty}\) and \(\sigma>0\). For a sequence of \(N\to\infty\), let \((X_{t}^{i;N})_{i=1}^{N}\) be solutions of the non-exchangeable SDE system (1.1) with connectivity matrices \(w_{N}:=(w_{i,j;N})_{i,j=1}^{N}\). In addition, let \(f\in L^{\infty}([0,t_{*}]\times[0,1];\mathcal{M}_{+}(\mathbb{R}))\) be a solution of the Vlasov equation (1.3)-(1.4) with connectivity kernel \(w\in\mathcal{W}\). Assume that the following holds:_
* _The connectivity matrices are uniformly bounded: For some_ \(C_{\mathcal{W}}>0\)_,_ (1.6) \[\sup_{N}\ \max\Big{(}\max_{i}\sum_{j}|w_{i,j;N}|,\max_{j}\sum_{i}|w_{i,j;N}|\Big{)} \leq C_{\mathcal{W}}.\]
* _The interaction of each pair of agents vanishes:_ (1.7) \[\max_{1\leq i,j\leq N}|w_{i,j;N}|\to 0\ \ \text{as}\ \ N\to\infty.\]
* _The hierarchy of observables and the extended density are initially bounded by an exponential scale: There exists some_ \(a>0\)_,_ \(M_{a}>0\)_, such that,_ (1.8) \[\sup_{N}\ \int_{\mathbb{R}^{|T|}}\exp\big{(}a\sum_{m=1}^{|T|}|z_{m} |\big{)}|\tau_{N}|(T,w_{N},f_{N})(0,z)\ \mathrm{d}z \leq M_{a}^{|T|},\quad\forall T\in\mathcal{T},\] \[\operatorname*{ess\,sup}_{\xi\in[0,1]}\int_{\mathbb{R}}\exp\big{(}a|x |\big{)}f(0,\xi,x)\ \mathrm{d}x \leq M_{a}.\]
* _The hierarchy of observables initially converges in weak-* topology:_ (1.9) \[\tau_{N}(T,w_{N},f_{N})(0,\cdot)\stackrel{{*}}{{\rightharpoonup}} \tau_{\infty}(T,w,f)(0,\cdot)\in\mathcal{M}(\mathbb{R}^{|T|})\ \ \text{as}\ \ N\to\infty,\quad\forall T\in \mathcal{T}\]
_Then, the hierarchy of observables converges at any time, in weak-* topology:_
\[\tau_{N}(T,w_{N},f_{N})(t,\cdot)\stackrel{{*}}{{\rightharpoonup}} \tau_{\infty}(T,w,f)(t,\cdot)\in\mathcal{M}(\mathbb{R}^{|T|})\ \ \text{as}\ \ N\to\infty,\quad\forall t\in[0,t_{*}],\ T\in \mathcal{T}. \tag{1.10}\]
While we state Theorem 1.6 in terms of the observables \(\tau_{N}\) from non-exchangeable systems converging to the limiting observables \(\tau_{\infty}\) in weak-* topology, our approach is inherently quantitative. We state, in the next section, a precise and quantitative version of Theorem 1.6, namely Theorem 2.6.
We recall that the first observable immediately corresponds to the \(1\)-particle distribution, so that Theorem 1.6 provides the limit of this \(1\)-particle distribution. It would in fact be possible to derive the limit of other well-known statistical objects, the \(2\)-particle distribution and correlations for example. To do that, we would build another family of new observables starting from the \(2\)-particle distribution in addition to the \(1\)-particle distribution. This would also require stronger assumptions, with initial convergence of both families instead of only (1.9). However, we did not want to further complicate our approach or our statements and we confine ourselves to the limit of the \(1\)-particle distribution.
The only non-straightforward assumption in Theorem 1.6 is (1.9), namely that the limits \(\tau_{\infty}(T)(0,\cdot)\), \(\forall T\in\mathcal{T}\), come from a pair of an extended density \(f(0,x,\xi)\) and a kernel \(w\in\mathcal{W}\) as in Definition 1.5. It would be possible to formulate a version of Theorem 1.6 without this assumption. The sequence of initial data \(\tau_{N}(T,w_{N},f_{N})(0,\cdot)\) is obviously precompact as \(N\to\infty\), so that we could extract a converging sub-sequence. The proof of Theorem 1.6 would then imply that the limiting \(\tau_{\infty}\) are exact solutions to a limiting, tree-indexed hierarchy. However, without (1.9), we cannot identify the limiting \(\tau_{\infty}\) as being obtained through some solution \(f(t,x,\xi)\) to the limiting Vlasov equation.
It is fortunately straightforward to show that (1.9) directly follows when the initial \(X_{0}^{i;N}=X^{i;N}(t=0)\) are independent. When the initial data \((X_{0}^{1;N},\ldots,X_{0}^{N;N})\) are independent random variables with \(f_{N,0}^{i}=\text{Law}(X_{0}^{i;N})\) for all \(1\leq i\leq N\), the marginal laws are of the form \(f_{N,0}^{i_{1},\ldots,i_{k}}=\prod_{m=1}^{k}f_{N,0}^{i_{m}}\) for distinct \(1\leq i_{1},\ldots,i_{k}\leq N\). We can then define a graphon-like kernel and the extended density as
\[\begin{split}\tilde{w}_{N}(\xi,\zeta)&=\ \sum_{i,j=1}^{N}Nw_{i,j;N}\mathbbm{1}_{[\frac{i-1}{N},\frac{i}{N})}(\xi) \mathbbm{1}_{[\frac{j-1}{N},\frac{j}{N})}(\zeta),\\ \tilde{f}_{N}(x,\xi)&=\ \sum_{i=1}^{N}f_{N}^{i}(x) \mathbbm{1}_{[\frac{i-1}{N},\frac{i}{N})}(\xi).\end{split} \tag{1.11}\]
It becomes straightforward to show that the initial observables \(\tau_{N}(T,w_{N},f_{N},t=0)\) are approximated by \(\tau_{\infty}(T,\tilde{w}_{N},\tilde{f}_{N},t=0)\) up to an error of \(O(\max_{1\leq i,j\leq N}|w_{i,j;N}|)\). We in particular state the following proposition, whose proof is postponed to Section 4.
**Proposition 1.7**.: _For a sequence of \(N\to\infty\), consider \((X^{1;N},\ldots,X^{N;N})\) as independent random variables and \(w_{N}=(w_{i,j})_{i,j=1}^{N}\in\mathbb{R}^{N\times N}\). Denote the marginal laws as \(f_{N}^{i}=\operatorname{Law}(X^{i;N})\) for each \(N\) and \(1\leq i\leq N\). Further, let \(\tilde{w}_{N}\), \(\tilde{f}_{N}\) be the kernel and extended density as defined in (1.11). Assume that the following holds:_
* _The connectivity matrices are uniformly bounded: For some_ \(C_{\mathcal{W}}>0\)_,_ (1.12) \[\sup_{N}\ \max\Big{(}\max_{i}\sum_{j}|w_{i,j;N}|,\max_{j}\sum_{i}|w_{i,j;N}| \Big{)}\leq C_{\mathcal{W}}.\]
* _The interaction of each pair of agents vanishes:_ (1.13) \[\bar{w}_{N}:=\max_{1\leq i,j\leq N}|w_{i,j;N}|\to 0,\ \ \text{as}\ \ N\to\infty.\]
* _The laws are bounded by an exponential scale: There exists some_ \(a>0\)_,_ \(M_{a}>0\)_, such that,_ (1.14) \[\sup_{N}\ \max_{1\leq i\leq N}\int_{\mathbb{R}}\exp(a|z|)f_{N}^{i}(z)\ \mathrm{d}z\leq M_{a}.\]
_Then the difference between observables \(\tau_{N}(T,w_{N},f_{N})\) and their approximations \(\tau_{\infty}(T,\tilde{w}_{N},\tilde{f}_{N})\), as formulated by (1.5) and (1.11), is quantified by_
\[\begin{split}&\int_{\mathbb{R}^{|T|}}\exp\big{(}a\sum_{m=1}^{|T |}|z_{m}|\big{)}|\tau_{\infty}(T,\tilde{w}_{N},\tilde{f}_{N})(z)-\tau_{N}(T,w_ {N},f_{N})(z)|\ \mathrm{d}z\\ \leq&\max_{1\leq i,j\leq N}|w_{i,j;N}|\max\Big{(} \max_{i}\sum_{j}|w_{i,j;N}|,\max_{j}\sum_{i}|w_{i,j;N}|\Big{)}^{|T|-2}|T|^{2}M _{a}^{|T|}.\end{split} \tag{1.15}\]
_Moreover, by extracting a subsequence (which we still index by \(N\) for simplicity), there exists a pair of kernel \(w\in\mathcal{W}\) and extended density \(f\in L^{\infty}([0,1];\mathcal{M}_{+}(\mathbb{R}))\), such that the hierarchy of approximate observables \(\tau_{\infty}(T,\tilde{w}_{N},\tilde{f}_{N})\) converges weak-* to the limit hierarchy \(\tau_{\infty}(T,w,f)\):_
\[\tau_{\infty}(T,\tilde{w}_{N},\tilde{f}_{N})\stackrel{{*}}{{\to} }\tau_{\infty}(T,w,f)\in\mathcal{M}(\mathbb{R}^{|T|})\ \ \text{as}\ \ N\to\infty,\quad\forall T\in \mathcal{T}. \tag{1.16}\]
_In addition, such extended density \(f\) satisfies the bound_
\[\operatorname*{ess\,sup}_{\xi\in[0,1]}\int_{\mathbb{R}}\exp\big{(}a|x|\big{)} f(\xi,x)\ \mathrm{d}x\leq M_{a}.\]
When combined with Proposition 1.7, Theorem 1.6 yields the mean-field limit for independent initial \(X_{0}^{i;N}\) with only some appropriate moment bounds and no other structural assumptions on the \(w_{i,j;N}\). However, we do emphasize that for non-exchangeable systems, the convergence of observables can in general be much less demanding than independence. This is a very different situation from exchangeable systems, where chaos (or approximate independence) is essentially equivalent to the asymptotic tensorization of the marginals.
But for our present models, counterexamples are easy to construct. We can for instance separate the indices \(i=1,\ldots,N\) into two distinct subsets \(I_{1}\) and \(I_{2}\). We then take \(w_{i,j;N}=0\) if \(i\in I_{1}\) and \(j\in I_{2}\), or \(j\in I_{1}\) and \(i\in I_{2}\). In that case there are no interactions between neurons in \(I_{1}\) and neurons in \(I_{2}\). We can then easily satisfy Assumption (1.9) by having the \(X_{0}^{i;N}\) independent within each subset \(I_{1}\) and \(I_{2}\) but with as much correlation as desired between the subsets. This example can obviously be generalized to any arbitrary fixed number of subsets and it is possible to construct even more intricate examples. But this already shows that the optimal assumptions on the initial \(X_{0}^{i;N}\) have to depend intrinsically on the structure of the connections in non-exchangeable cases. In that regard, we conjecture that Assumption (1.9) is both necessary and sufficient to have the convergence of the \(1\)-particle distribution.
Theorem 1.6 is the first rigorous result obtaining the mean-field limit for networks of neurons interacting through integrate-and-fire models. The approach through an extended hierarchy
solved by observables has very few counterparts in the literature, having only been used previously in [51]. Compared with [51], however, we put forward several new key ideas, notably
* We introduce the observables directly at the level of the marginals. By contrast, the notion of observables in [51] was only valid for almost independent variables, which first required the propagation of independence. Our new definition hence has several advantages: first, as per the discussion above about independence, and second, by providing a much more immediate notion of the statistical distribution of the system.
* We develop a new approach for the quantitative estimates on the hierarchy, based on weak norms. This is again in contrast to [51], which used strong \(L^{2}\) norms. This is a critical point because the jumps in integrate-and-fire models lead to discontinuities, so that convergence in the hierarchy cannot hold for our system in any strong norm. On the other hand, the use of weak norms forces a different method of analysis, as propagating weak norms necessarily creates intricate commutator estimates. An important technical contribution of the present paper is to introduce the "right" weak norms and a novel approach to handle those commutators.
There remain, however, many open questions. First of all, the statistical approach followed here does not seem to allow us to obtain the limit of any individual trajectory. This is again in contrast with classical exchangeable systems, where obtaining the limit of the 1-particle distribution yields the limit of typical (in some sense) trajectories. Another important question is whether it is possible to connect the additional variable \(\xi\) to some properties of individual neurons, which could lead to classifying neurons in terms of their role in the dynamics. As a final example of an open problem, we mention the issue of including learning in the models. In the setting of (1.1), learning can simply be incorporated by considering time-dependent synaptic weights \(w_{i,j;N}(t)\) together with some equation prescribing the evolution of those weights. This has been recognized as a critical mechanism as early as the famous Hebb rule in [45]. But it is unclear how to model this kind of learning appropriately while keeping sparse connections and a mean-field scaling, or whether the present approach would remain valid for such models. The mean-field limit has been derived in [70, 78] for neuron networks incorporating learning mechanisms, and also in [3] for an opinion dynamics model. But those results impose the strong algebraic constraint that \(w_{i_{1},j;N}=w_{i_{2},j;N}\), \(\forall i_{1},i_{2}\neq j\).
The rest of the paper is structured as follows. In Section 2, we present our approach for directly obtaining stability estimates, starting from the extended BBGKY hierarchy derived from the non-exchangeable system (1.1), the corresponding Vlasov hierarchy, and their a priori estimates. The main stability result, a quantitative version of (1.10), is stated as Theorem 2.6.
The subsequent sections are devoted to rigorously proving the results of Section 2. We discuss in Section 3 the properties of the weak norms, denoted \(H_{\eta}^{-1\otimes k}\), that we use throughout the quantitative estimates. In Section 4, we revisit the limiting observables \(\tau_{\infty}(T,w,f)\), \(T\in\mathcal{T}\), to show that they are well-defined. Finally, with the preliminaries of Sections 3 and 4 in hand, Section 5 is devoted to the proofs of the main results of Section 2, including Theorem 2.6.
## 2. Quantitative stability estimates
### A tensorized negative Sobolev norm
This subsection is dedicated to the introduction of the \(H_{\eta}^{-1\otimes k}\)-norm along with its basic properties. While the definition is straightforward, the specific choice of this norm plays a key role in our later estimates as it leads to good commutator estimates. Introducing the mollification kernel
\[K(x):=\frac{1}{\pi}\int_{0}^{\infty}\exp(-|x|\cosh(\xi))\mathrm{d}\xi,\]
we may define the \(H_{\eta}^{-1\otimes k}\)-norm as follows.
**Definition 2.1**.: _For any function \(F\) defined on \(\mathbb{R}\), denote its tensorization to \(\mathbb{R}^{k}\) by_
\[F^{\otimes k}(z_{1},\ldots,z_{k}):=\prod_{m=1}^{k}F(z_{m}),\quad\forall(z_{1}, \ldots,z_{k})\in\mathbb{R}^{k}.\]
_We then define_
\[\|g\|_{H^{-1\otimes k}}:=\|K^{\otimes k}\star g\|_{L^{2}(\mathbb{R}^{k})},\]
_and for any weight function \(\eta\) on \(\mathbb{R}\),_
\[\|g\|_{H^{-1\otimes k}_{\eta}}:=\|K^{\otimes k}\star(g\eta^{\otimes k})\|_{L^{ 2}(\mathbb{R}^{k})}.\]
The introduction of the weight \(\eta\) is motivated by the need for some control on the decay of the solutions at infinity since we work on the whole \(\mathbb{R}\). We simply choose some \(\alpha>0\) and define
\[\eta(x)=\eta_{\alpha}(x):=C_{\alpha}\exp\Big{(}\sqrt{1+\alpha^{2}x^{2}}\Big{)},\quad C_{\alpha}=\int_{\mathbb{R}}\exp\Big{(}-\sqrt{1+\alpha^{2}x^{2}}\Big{)} \;\mathrm{d}x.\]
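For later reference, we record the elementary bounds on \(\eta_{\alpha}\) that follow directly from \(\alpha|x|\leq\sqrt{1+\alpha^{2}x^{2}}\leq 1+\alpha|x|\) and the definition of \(C_{\alpha}\); they are used implicitly several times below:

\[C_{\alpha}\exp\big{(}\alpha|x|\big{)}\leq\eta_{\alpha}(x)\leq C_{\alpha}\exp\big{(}1+\alpha|x|\big{)},\qquad\int_{\mathbb{R}}\frac{\mathrm{d}x}{\eta_{\alpha}(x)}=1.\]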
On sets of measures with uniformly bounded exponential moments, our definition of \(H^{-1\otimes k}_{\eta}\) leads to a topology that is equivalent to the classical weak-* topology of \(\mathcal{M}(\mathbb{R}^{k})\).
**Lemma 2.2**.: _Consider any \(a>0\), \(C>0\), \(0<\alpha<a\) (which determines \(\eta=\eta_{\alpha}\)) and any sequence_
\[\{g_{n}\}_{n=1}^{\infty}\subset\bigg{\{}g\in\mathcal{M}(\mathbb{R}^{k}):\int_ {\mathbb{R}^{k}}\exp\big{(}a\sum_{m=1}^{k}|z_{m}|\big{)}|g|(z)\;\mathrm{d}z \leq C\bigg{\}}.\]
_Then the following are equivalent_
* \(g_{n}\stackrel{{*}}{{\rightharpoonup}}g_{\infty}\) _under the weak-* topology of_ \(\mathcal{M}(\mathbb{R}^{k})\)_._
* \(\|g_{n}-g_{\infty}\|_{H^{-1\otimes k}_{\eta}}\to 0\)_._
The proof of Lemma 2.2 is postponed to Section 3, where we also conduct a deeper examination of the relationship between the \(H^{-1\otimes k}_{\eta}\) norm and classical negative Sobolev norms. The use of weak distances such as Wasserstein distances is classical in the derivation of the mean-field limit, in particular when working with empirical measures.
However, our observables are bounded functions at any \(t>0\), for which we can even prove quantitative bounds, and a main motivation for the use of weak norms stems instead from the singularity introduced by the Poisson jump processes. The usefulness of negative Sobolev norms in that context has been highlighted in works such as [73]. We also mention [31], which considers a somewhat relaxed IF model with connections depending on the spatial structure of the neurons. However, instead of studying the 1-particle distribution, we use tensorized \(H^{-1\otimes k}_{\eta}\)-norms to investigate the joint laws \(f^{i_{1},\ldots,i_{k}}_{N}\) and the observables, which appears to be a novel approach in this context.
### From the original SDE system to the extended BBGKY hierarchy
We show in this subsection that the observables, as defined in Definition 1.2, satisfy an extended BBGKY hierarchy.
We first recall the Liouville or forward Kolmogorov equation that is satisfied by the full joint law \(f_{N}\) of solutions to the SDE (1.1),
\[\begin{split}\partial_{t}f_{N}(t,x)+&\sum_{i=1}^{N} \bigg{[}\partial_{x_{i}}(\mu(x_{i})f_{N}(t,x))-\frac{\sigma^{2}}{2}\partial_{ x_{i}}^{2}f_{N}(t,x)\\ &+\nu(x_{i})f_{N}(t,x)-\delta_{0}(x_{i})\bigg{(}\int_{\mathbb{R} }\nu(y_{i})f_{N}(t,y-(w_{N})_{\cdot,i}^{\top})\;\mathrm{d}y_{i}\bigg{)}\bigg{|} _{\forall j\neq i,\,y_{j}=x_{j}}\bigg{]}=0,\\ (w_{N})_{\cdot,i}^{\top}&=\big{(}w_{1,i;N},\ldots w _{N,i;N}\big{)}\in\mathbb{R}^{N},\quad\forall 1\leq i\leq N,\end{split} \tag{2.1}\]
where \(\delta_{0}\) is the Dirac delta function at the origin. The "spike vector" \((w_{N})_{\cdot,i}^{\top}\) corresponds to the \(i\)-th column of the connectivity matrix \(w_{N}\) and accounts for the jumps when the \(i\)-th neuron fires.
From the Kolmogorov equation, we may derive equations on each observable.
**Proposition 2.3**.: _Assume that \(\mu,\nu\in W^{1,\infty}\) and \(\sigma>0\). Let \(w_{N}:=(w_{i,j;N})_{i,j=1}^{N}\) be the connectivity matrix and \((X_{0}^{1;N},\ldots,X_{0}^{N;N})\) be the initial data with \(g_{N}=\operatorname{Law}(X_{0}^{1;N},\ldots,X_{0}^{N;N})\)._
_Then, there exists a unique solution \((X_{t}^{1;N},\ldots,X_{t}^{N;N})\) solving SDE (1.1) for all \(t\geq 0\), whose law_
\[f_{N}(t,\cdot)=\operatorname{Law}(X_{t}^{1;N},\ldots,X_{t}^{N;N})\]
_is the unique distributional solution of Liouville equation (2.1) with initial data \(g_{N}\). In addition, the observables_
\[\tau_{N}(T)=\tau_{N}(T,w_{N},f_{N}),\quad\forall T\in\mathcal{T}\]
_solve the extended version of BBGKY hierarchy with remainder terms: For all \(T\in\mathcal{T}\),_
\[\begin{split}&\partial_{t}\tau_{N}(T)(t,z)\\ &\quad=\sum_{m=1}^{|T|}\Bigg\{\bigg[-\partial_{z_{m}}(\mu(z_{m})\tau_{N}(T)(t,z))+\frac{\sigma^{2}}{2}\partial_{z_{m}}^{2}\tau_{N}(T)(t,z)\\ &\quad-\nu(z_{m})\tau_{N}(T)(t,z)+\delta_{0}(z_{m})\bigg(\int_{\mathbb{R}}\nu(u_{m})\Big(\tau_{N}(T)(t,u)+\mathscr{R}_{N,T,m}(t,u)\Big)\;\mathrm{d}u_{m}\bigg)\bigg|_{\forall n\neq m,\,u_{n}=z_{n}}\bigg]\\ &\quad-\partial_{z_{m}}\bigg[\int_{\mathbb{R}}\nu(z_{|T|+1})\Big(\tau_{N}(T+m)(t,z)+\tilde{\mathscr{R}}_{N,T+m,|T|+1}(t,z)\Big)\;\mathrm{d}z_{|T|+1}\bigg]\Bigg\},\end{split} \tag{2.2}\]
_where the remainder terms are given by_
\[\mathscr{R}_{N,T,m}(t,z) :=\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|}=1}^{N}w_{N,T}(i_{1}, \ldots,i_{|T|})\Big{(}f_{N}^{i_{1},\ldots,i_{|T|}}(t,z-w_{N;i_{m}}^{i_{1}, \ldots,i_{|T|}})-f_{N}^{i_{1},\ldots,i_{|T|}}(t,z)\Big{)},\] \[\tilde{\mathscr{R}}_{N,T,m}(t,z) :=\int_{0}^{1}\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|}=1}^{N}w_{N,T} (i_{1},\ldots,i_{|T|})\Big{(}f_{N}^{i_{1},\ldots,i_{|T|}}(t,z-rw_{N;i_{m}}^{i_ {1},\ldots,i_{|T|}})-f_{N}^{i_{1},\ldots,i_{|T|}}(t,z)\Big{)}\;\mathrm{d}r, \tag{2.3}\]
_and the \(w_{N;j}^{i_{1},\ldots,i_{k}}\) are defined as the restriction of the "spike vector" \((w_{N})_{\cdot,j}^{\top}\) to the marginal space, namely_
\[w_{N;j}^{i_{1},\ldots,i_{k}}:=\big{(}w_{i_{n},j;N}\big{)}_{n=1}^{k}=\big{(}w_{ i_{1},j;N},\ldots w_{i_{k},j;N}\big{)}\in\mathbb{R}^{k}.\]
The proof of the proposition is given in Section 5.1. Unlike the standard BBGKY hierarchy, which usually gives a closed equation involving \(f_{N,k}\) and the next marginal \(f_{N,k+1}\), the hierarchy of equations derived here is only approximate, as the remainder terms do not depend only on our observables. Thus, an essential part of our approach is to prove that, as the strength of pairwise interaction \(\max_{1\leq i,j\leq N}|w_{i,j;N}|\) goes to \(0\) (which is assumption (1.7) in Theorem 1.6), those remainder terms \(\mathscr{R}\) and \(\tilde{\mathscr{R}}\) vanish in the \(H_{\eta}^{-1\otimes k}\) sense. As mentioned earlier, this is a main motivation for choosing \(H_{\eta}^{-1\otimes k}\) in its specific form. This result is precisely formulated in Proposition 3.6 in Section 3.
We also note that the presence of the remainder terms \(\mathscr{R}\) and \(\tilde{\mathscr{R}}\) is not only a consequence of the Poisson jump process. Consider the more classical first-order dynamics
\[X_{t}^{i;N}=X_{0}^{i;N} +\int_{0}^{t}\mu(X_{s}^{i;N})\;\mathrm{d}s+\int_{0}^{t}\sigma(X_{ s}^{i;N})\;\mathrm{d}\mathbf{B}_{s}^{i}\] \[+\sum_{j\neq i}w_{i,j;N}\int_{0}^{t}\nu(X_{s}^{i;N},X_{s}^{j;N})\; \mathrm{d}s.\]
Depending on the specific form of \(\nu(\cdot,\cdot)\), the term \(\tilde{\mathscr{R}}_{N,T+m,|T|+1}\) may vanish, but the term \(\mathscr{R}_{N,T,m}\) is always present. More than the specific form of the dynamics, the remainders reflect the more essential difficulty that the interaction between the first \(k\) neurons \(i_{1},\ldots,i_{k}\) cannot be fully described by the observables as defined in Definition 1.2.
This is also one of the crucial distinctions separating the method in this article from [51]. The observables in [51] are similar to the limiting observables \(\tau_{\infty}\) in this article, but are constructed from the solutions of the McKean-Vlasov SDE, where the interaction felt by one agent \(X^{i;N}\) is determined not by the exact \(X^{j;N}\), but by \(\operatorname{Law}(X^{j;N})\). This leads to a simplified hierarchy without remainders. On the other hand, in this article all the observables are constructed directly from the solution of (1.1); hence the extended, approximate BBGKY hierarchy (2.2) reflects the dynamics of the original non-exchangeable system.
We conclude the subsection with a priori estimates on the absolute observables \(|\tau_{N}|\), whose proof is also postponed to Section 5.1.
**Proposition 2.4**.: _Let \(N\geq 1\), \(t_{*}>0\) and \(\alpha>0\) (which determines \(\eta=\eta_{\alpha}\)). Assume that the connectivity matrix \(w_{N}:=(w_{i,j;N})_{i,j=1}^{N}\) and joint law \(f_{N}\in L^{\infty}([0,t_{*}];\mathcal{M}_{+}(\mathbb{R}^{N}))\) solves the Kolmogorov equation (2.1) in the sense of distributions. For any \(T\in\mathcal{T}\), assume that at \(t=0\),_
\[\||\tau_{N}|(T)(0,\cdot)\eta^{\otimes|T|}\|_{\mathcal{M}(\mathbb{R}^{|T|})} \leq C_{\eta}(T)<\infty.\]
_Then there exists \(A_{\eta}>0\) only depending on \(\alpha\), \(\|\mu\|_{W^{1,\infty}}\), \(\|\nu\|_{W^{1,\infty}}\), \(\sigma\) and_
\[\max\left(\max_{i}\sum_{j}|w_{i,j;N}|,\ \max_{j}\sum_{i}|w_{i,j;N}|\right),\]
_such that,_
\[\||\tau_{N}|(T)(t,\cdot)\eta^{\otimes|T|}\|_{\mathcal{M}(\mathbb{R}^{|T|})} \leq C_{\eta}(T)\big{(}\exp(A_{\eta}t_{*})\big{)}^{|T|},\quad\forall t\in[0,t_ {*}],\]
_and_
\[\||\tau_{N}|(T)(t,\cdot)\|_{H_{\eta}^{-1\otimes|T|}}\leq C_{\eta}(T)\big{(}\| K\|_{L^{2}(\mathbb{R})}\exp(A_{\eta}t_{*})\big{)}^{|T|},\quad\forall t\in[0,t_ {*}]. \tag{2.4}\]
Let us emphasize again that this proposition concerns the absolute observables \(|\tau_{N}|\), which are non-negative measures obtained by linear combinations of the laws \(f_{N}^{i_{1},\ldots,i_{k}}\), \(1\leq i_{1},\ldots,i_{k}\leq N\). We do not expect a straightforward extension to the \(\tau_{N}\), as the potential cancellations of positive and negative terms in the dynamics make the problem much less tractable.
### From the limiting Vlasov equation to the limiting hierarchy
The following proposition states that the limiting observables \(\tau_{\infty}\) defined from the limiting Vlasov equation (1.3)-(1.4) satisfy the limiting hierarchy (2.6), which is similar to the BBGKY hierarchy (2.2) in Proposition 2.3 but without the remainder terms \(\mathscr{R}\) and \(\tilde{\mathscr{R}}\). In that sense the limiting hierarchy provides closed recursive relations for the family \(\tau_{\infty}(T)\), \(\forall T\in\mathcal{T}\). In particular, the quantitative estimates proved later imply the uniqueness of solutions to the hierarchy for a given choice of initial data.
**Proposition 2.5**.: _Assume that \(\mu,\nu\in W^{1,\infty}\) and \(\sigma>0\). Then for any \(t_{*}>0\), \(\alpha>0\) (which determines \(\eta=\eta_{\alpha}\)), any connectivity kernel \(w\in\mathcal{W}\) and any initial extended density \(g\in L^{\infty}([0,1];H_{\eta}^{-1}\cap\mathcal{M}_{+}(\mathbb{R}))\), there exists a unique_
\[f\in L^{\infty}([0,t_{*}]\times[0,1];H_{\eta}^{-1}\cap\mathcal{M}_{+}(\mathbb{ R}))\]
_solving Vlasov equation (1.3)-(1.4) in the sense of distributions. Furthermore, the observables \(\tau_{\infty}(T)=\tau_{\infty}(T,w,f)\), \(\forall T\in\mathcal{T}\) are bounded by_
\[\big{\|}\tau_{\infty}(T,w,f)(t,\cdot)\big{\|}_{H_{\eta}^{-1\otimes|T|}}\leq\| w\|_{\mathcal{W}}^{|T|-1}\|f\|_{L^{\infty}_{t,\xi}(H_{\eta}^{-1})_{x}}^{|T|},\quad \forall t\in[0,t_{*}],\ T\in\mathcal{T}, \tag{2.5}\]
_and solve the following non-exchangeable extended version of the Vlasov hierarchy: For all \(T\in\mathcal{T}\),_
\[\begin{split}&\partial_{t}\tau_{\infty}(T)(t,z)\\ &\quad=\sum_{m=1}^{|T|}\Bigg{\{}\bigg{[}-\partial_{z_{m}}(\mu(z_{ m})\tau_{\infty}(T)(t,z))+\frac{\sigma^{2}}{2}\partial_{z_{m}}^{2}\tau_{\infty}(T)(t,z) \\ &\quad-\nu(z_{m})\tau_{\infty}(T)(t,z)+\delta_{0}(z_{m})\bigg{(} \int_{\mathbb{R}}\nu(u_{m})\tau_{\infty}(T)(t,u)\ \mathrm{d}u_{m}\bigg{)}\bigg{|}_{\forall n\neq m,\,u_{n}=z_{n}}\bigg{]}\\ &\quad-\partial_{z_{m}}\bigg{[}\int_{\mathbb{R}}\nu(z_{|T|+1}) \tau_{\infty}(T+m)(t,z)\ \mathrm{d}z_{|T|+1}\bigg{]}\Bigg{\}}.\end{split} \tag{2.6}\]
The proof of the proposition is again done in Section 5.1.
### Quantitative stability estimates between the hierarchies
We are now ready to state the main quantitative result in this paper, which compares the observables \(\tau_{N}(T,w_{N},f_{N})\) satisfying the approximate hierarchy (2.2)-(2.3) to \(\tau_{\infty}(T)\) satisfying the limiting hierarchy (2.6). The proof of the theorem and the exact derivation of constants \(C_{1},\ C_{2}\) in the estimate are performed in Section 5.2.
**Theorem 2.6**.: _Assume that \(\mu,\nu\in W^{1,\infty}\), \(\sigma>0\) and \(N\geq 1\). Let \(w_{N}:=(w_{i,j;N})_{i,j=1}^{N}\in\mathbb{R}^{N\times N}\) be a connectivity matrix and \(f_{N}^{i_{1},\ldots,i_{k}}\), \(\forall\{i_{1},\ldots,i_{k}\}\subset\{1\ldots N\}\) be marginal laws, from which the hierarchy of observables \(\tau_{N}(T,w_{N},f_{N})\) and the absolute observables \(|\tau_{N}|(T,w_{N},f_{N})\) are defined and satisfy (2.2)-(2.3) in distributional sense. Denote the strength of pairwise interaction as_
\[\bar{w}_{N}:=\max_{1\leq i,j\leq N}|w_{i,j;N}|.\]
_In addition, let \(\tau_{\infty}(T)\in L^{\infty}([0,t_{*}];\mathcal{M}(\mathbb{R}^{|T|}))\), \(\forall T\in\mathcal{T}\) satisfy (2.6) in distributional sense._
_For some choice of \(\lambda>0\) and \(\alpha>0\) (which determines \(\eta=\eta_{\alpha}\)), assume that there exists \(n\in\mathbb{N}\) s.t._
\[\bar{\varepsilon}:=C_{1}\big{[}\exp\big{(}(2+2\alpha)n\bar{w}_{N}\big{)}-1 \big{]}+(1/4)^{n}<1,\]
_where \(C_{1}\) is a constant depending only on \(\|\mu\|_{W^{1,\infty}},\|\nu\|_{W^{1,\infty}},\sigma\) and the scaling factor \(\lambda>0\). Then the following estimate holds: for any tree \(T_{*}\in\mathcal{T}\),_
\[\begin{split}&\sup_{t\leq t_{*}}\,(\lambda/8)^{|T_{*}|}\|\tau_{N} (T_{*},w_{N},f_{N})(t,\cdot)-\tau_{\infty}(T_{*})(t,\cdot)\|_{H_{\eta}^{-1 \otimes|T_{*}|}}^{2}\\ &\quad\leq C_{2}\,C_{\lambda;\eta}^{2}\,\left\{\max\left(\bar{ \varepsilon},\ \max_{|T|\leq\max(n,\ |T_{*}|)}\frac{(\lambda/8)^{|T|}}{C_{\lambda;\eta}^{2}}\,\| \tau_{N}(T,w_{N},f_{N})(0,\cdot)-\tau_{\infty}(T)(0,\cdot)\|_{H_{\eta}^{-1 \otimes|T|}}^{2}\right)\right\}^{1/C_{2}},\end{split} \tag{2.7}\]
_where \(C_{2}\) depends only on \(t_{*}\), \(\|\mu\|_{W^{1,\infty}},\|\nu\|_{W^{1,\infty}},\sigma\) and \(\lambda>0\), and where \(C_{\lambda;\eta}\) depends on the following a priori estimate_
\[\sup_{t\leq t_{*}}\,\max_{|T|\leq\max(n,\ |T_{*}|)}\lambda^{\frac{|T|}{2}} \left(\|\tau_{N}|(T,w_{N},f_{N})(t,\cdot)\|_{H_{\eta}^{-1\otimes|T|}}+\|\tau_{ \infty}(T)(t,\cdot)\|_{H_{\eta}^{-1\otimes|T|}}\right)\leq C_{\lambda;\eta}. \tag{2.8}\]
Remark 1. The values of \(\lambda\), \(\alpha\), and \(n\) must be chosen carefully for this result to be useful. The scaling factors \(\lambda\) and \(\alpha\) need to be selected so that the various norms in the theorem are finite, in accordance with the existing a priori estimates. We also need to choose \(n\) such that \(\bar{\varepsilon}\) is small enough, which would typically lead to taking \(n\sim\frac{|\log\bar{w}_{N}|}{\bar{w}_{N}}\). However, \(n\) also enters the definition of \(C_{\lambda;\eta}\) in an implicit way, as a larger value of \(n\) forces taking the \(\max\) over more trees \(T\). Hence the actual optimal value of \(n\) is not so easy to determine, unless (2.8) is given a priori with the maximum replaced by the supremum over all trees \(T\in\mathcal{T}\).
Stability and uniqueness estimates on the kind of generalized hierarchy that we are dealing with here are notoriously difficult, with only limited results available. As we mentioned before, there are obvious similarities between our approach and the hierarchy derived in [51], or the strong estimates on the classical BBGKY hierarchy in [8] (leading for example to the mean-field
limit for the Vlasov-Fokker-Planck-Poisson equation). We also mention results around the wave kinetic equation in [26, 27].
A major difference in Theorem 2.6 is that the observables \(\tau_{N}\) do not solve an exact hierarchy and the remainder terms only vanish in some weak norms. As we briefly explained earlier, this forces the use of the \(H_{\eta}^{-1\otimes|T|}\) norm to both control the remainders and to have appropriate commutator estimates, which is the main technical innovation in the paper.
We also emphasize that the general method used to derive stability estimates relies on recursive inequalities, which often lead to a blow-up in finite time. Such a blow-up does not occur here because we can derive a priori estimates, namely (2.8) from Proposition 2.4 and Proposition 2.5, that are strong enough with respect to the weak norms that we are using.
### Proving Theorem 1.6 from our quantitative estimates
We conclude this subsection by explaining how Theorem 1.6 follows from all the estimates presented here.
Proof of Theorem 1.6.: The first step is to make sure that we can apply Theorem 2.6 under the assumptions (1.6)-(1.9) of Theorem 1.6. More precisely, we show that (2.8) in Theorem 2.6 holds for some well-chosen \(\lambda>0\) and \(C_{\lambda;\eta}>0\), and that the maximum over \(|T|\leq\max(n,\ |T_{*}|)\) can actually be replaced by the supremum over all trees \(T\in\mathcal{T}\).
Recall that for any \(k\geq 1\),
\[\eta_{a}^{\otimes k}(z_{1},\ldots,z_{k})=C_{a}^{k}\exp\Big{(}{\sum_{m=1}^{k} \sqrt{1+a^{2}z_{m}^{2}}}\Big{)},\]
hence
\[\exp\Big{(}{\sum_{m=1}^{k}}a|z_{m}|\Big{)}\leq\eta_{a}^{\otimes k}(z_{1}, \ldots,z_{k})\leq(C_{a}\exp(1))^{k}\exp\Big{(}{\sum_{m=1}^{k}}a|z_{m}| \Big{)}.\]
Thus, from assumption (1.8) in Theorem 1.6, the following two inequalities about the initial data can immediately be derived,
\[\sup_{N}\ \||\tau_{N}|(T)(0,\cdot)\eta_{a}^{\otimes|T|}\|_{ \mathcal{M}(\mathbb{R}^{|T|})} \leq\Big{(}M_{a}C_{a}\exp(1)\Big{)}^{|T|},\quad\forall T\in \mathcal{T},\] \[\|f(0,\cdot,\cdot)\|_{L^{\infty}_{\mathcal{H}_{\eta a}}(H_{\eta a }^{-1})_{x}} \leq\|K\|_{L^{2}(\mathbb{R})}M_{a}C_{a}\exp(1).\]
Now, applying Proposition 2.4 and Proposition 2.5 to the two initial bounds, we obtain the exponential moment bound
\[\sup_{N}\ \int_{\mathbb{R}^{|T|}}\exp\big{(}a\sum_{m=1}^{|T|}|z_{m} |\big{)}|\tau_{N}|(T)(t,\mathrm{d}z)\] \[\leq\||\tau_{N}|(T)(t,\cdot)\eta_{a}^{\otimes|T|}\|_{\mathcal{M} (\mathbb{R}^{|T|})}\leq\Big{(}M_{a}C_{a}\exp(1)\exp(A_{\eta}t_{*})\Big{)}^{|T |},\quad\forall t\in[0,t_{*}],\ T\in\mathcal{T},\]
and a priori energy bounds
\[\||\tau_{N}|(T)(t,\cdot)\|_{H_{\eta a}^{-1\otimes|T|}}\leq\Big{(} \|K\|_{L^{2}(\mathbb{R})}M_{a}C_{a}\exp(1)\exp(A_{\eta}t_{*})\Big{)}^{|T|}, \quad\forall t\in[0,t_{*}],\ T\in\mathcal{T},\] \[\big{\|}\tau_{\infty}(T,w,f)(t,\cdot)\big{\|}_{H_{\eta a}^{-1 \otimes|T|}}\leq\|w\|_{\mathcal{W}}^{|T|-1}\|f\|_{L^{\infty}_{t,\xi}(H_{\eta a }^{-1})_{x}}^{|T|},\quad\forall t\in[0,t_{*}],\ T\in\mathcal{T},\]
where the coefficient \(A_{\eta}\) inside the exponent now only depend on \(a\), \(\|\mu\|_{W^{1,\infty}}\), \(\|\nu\|_{W^{1,\infty}}\), \(\sigma\) and
\[\max\Big{(}\max_{i}\sum_{j}|w_{i,j;N}|,\ \max_{j}\sum_{i}|w_{i,j;N}|\Big{)}\,.\]
This guarantees (2.8) where the maximum over \(|T|\leq\max(n,\ |T_{*}|)\) is replaced by the supremum over all trees \(T\in\mathcal{T}\), with \(\lambda\), \(C_{\lambda;\eta}\) chosen as
\[\lambda=\min\bigg{(}\Big{(}\|K\|_{L^{2}(\mathbb{R})}M_{a}C_{a}\exp(1)\exp(A_{ \eta}t_{*})\Big{)}^{-2},\Big{(}\max\big{(}\|w\|_{\mathcal{W}},1\big{)}\|f\|_{L ^{\infty}_{t,\xi}(H_{\eta}^{-1})_{x}}\Big{)}^{-2}\bigg{)},\qquad C_{\lambda; \eta}=1.\]
Hence the assumptions of Theorem 2.6 are satisfied and we apply it along the following points.
* Using (1.6) and (1.8), we choose the coefficients \(\alpha\in(0,a)\), \(\lambda>0\), \(C_{\lambda;\eta}>0\) in (2.8) independent of \(N\), and the supremum in (2.8) is taken over all possible \(T\in\mathcal{T}\). This implies in particular a uniform bound on exponential moments with coefficient \(a>0\) so that Lemma 2.2 applies.
* Fix \(T_{*}\in\mathcal{T}\). For any \(\varepsilon>0\), choose sufficiently large \(n\), such that \[(\lambda/8)^{-\frac{|T_{*}|}{2}}\sqrt{C_{2}}\,C_{\lambda;\eta}\left[2(1/4)^{n} \right]^{1/2C_{2}}\leq\varepsilon.\]
* By (1.7), we choose sufficiently large \(N_{1}\), such that for all \(N\geq N_{1}\) the corresponding \[\bar{w}_{N}:=\max_{1\leq i,j\leq N}|w_{i,j;N}|\] is sufficiently small such that \[C_{1}\big{[}\exp\big{(}(2+2\alpha)n\bar{w}_{N}\big{)}-1\big{]}\leq(1/4)^{n}.\]
* Notice that there are only finitely many \(T\in\mathcal{T}\) satisfying \(|T|\leq\max(n,\ |T_{*}|)\). By (1.9) on the weak-* convergence of initial data, and by Lemma 2.2, choose a sufficiently large \(N_{2}\geq 1\) such that for all \(N\geq N_{2}\), \[\max_{|T|\leq\max(n,\ |T_{*}|)}(\lambda/8)^{|T|}\|\tau_{N}(T,w_{N},f_{N})(0, \cdot)-\tau_{\infty}(T)(0,\cdot)\|_{H^{-1\otimes|T|}_{\eta}}^{2}/(4C_{\lambda; \eta}^{2})\leq 2(1/4)^{n}.\]
* In summary, for any \(T_{*}\in\mathcal{T}\) and any \(\varepsilon>0\), by taking \(N\geq\max(N_{1},\ N_{2})\) according to our previous discussion and applying Theorem 2.6, we obtain that (2.9) \[\sup_{t\in[0,\ t_{*}]}\|\tau_{N}(T_{*},w_{N},f_{N})(t,\cdot)-\tau_{\infty}(T_{ *})(t,\cdot)\|_{H^{-1\otimes|T_{*}|}_{\eta}}\leq\varepsilon.\]
* Invoking again Lemma 2.2, we finally deduce that \[\lim_{N\to\infty}\tau_{N}(T,w_{N},f_{N})(t,\cdot)=\tau_{\infty}(T)(t,\cdot), \quad\forall T\in\mathcal{T}\] in the weak-* topology of \(L^{\infty}([0,\ t_{*}],\ \mathcal{M})\).
## 3. The weak norm and the exponential moments
### Basic properties
We first revisit our definition of the kernel \(K\), and introduce another kernel, denoted as \(\Lambda\), as follows,
\[K(x):=\frac{1}{\pi}\int_{0}^{\infty}\exp(-|x|\cosh(\xi))\mathrm{d}\xi,\quad \Lambda(x):=\frac{1}{2}\exp(-|x|),\quad\forall x\in\mathbb{R}.\]
For \(x>0\), the kernel \(K\) is in fact the zeroth-order modified Bessel function of the second kind. From the known properties of Bessel functions, \(K\) is a non-negative, radially-decreasing \(L^{2}\) function, and satisfies
\[K\star K=\Lambda,\quad\widehat{K}(\xi)=\int_{\mathbb{R}}K(x)\exp(-2\pi ix\xi) \ \mathrm{d}x=\frac{1}{\sqrt{1+4\pi^{2}\xi^{2}}}.\]
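As a quick sanity check with the Fourier convention above, the identity \(K\star K=\Lambda\) can be verified on the Fourier side:

\[\widehat{\Lambda}(\xi)=\int_{\mathbb{R}}\frac{1}{2}e^{-|x|}e^{-2\pi ix\xi}\;\mathrm{d}x=\int_{0}^{\infty}e^{-x}\cos(2\pi x\xi)\;\mathrm{d}x=\frac{1}{1+4\pi^{2}\xi^{2}}=\widehat{K}(\xi)^{2}=\widehat{K\star K}(\xi).\]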
It is easy to extend the identity \(K\star K=\Lambda\) to the tensorized kernels \(K^{\otimes k}\star K^{\otimes k}=\Lambda^{\otimes k}\), which yields the following equivalent formulation of \(H^{-1\otimes k}\) by Fourier analysis:
\[\|f\|_{H^{-1\otimes k}}^{2} = \int_{z\in\mathbb{R}^{k}}\big{[}K^{\otimes k}\star f(z)\big{]}^{ 2}\ \mathrm{d}z=\int_{z\in\mathbb{R}^{k}}f(z)\big{[}\Lambda^{\otimes k}\star f(z) \big{]}\ \mathrm{d}z\] \[= \int_{\xi\in\mathbb{R}^{k}}\bigg{(}\prod_{m=1}^{k}\frac{1}{1+4\pi ^{2}\xi_{m}^{2}}\bigg{)}\ \hat{f}(\xi)\hat{f}(\xi)\ \mathrm{d}\xi.\]
In one dimension, it is straightforward that our notion of \(H^{-1\otimes k}\)-norm for \(k=1\) is equivalent to the negative Sobolev norm of \(H^{-1}(\mathbb{R})\), i.e.
\[\|f\|_{H^{-1\otimes 1}}=\|f\|_{H^{-1}(\mathbb{R})},\]
provided we define \(H^{s}(\mathbb{R})\) as
\[\|g\|_{H^{s}(\mathbb{R})}^{2}:=\int_{\mathbb{R}}\big{(}1+4\pi^{2}\xi^{2}\big{)} ^{s}\big{|}\hat{g}(\xi)\big{|}^{2}\ \mathrm{d}\xi,\]
for any \(s\in\mathbb{R}\).
This also gives us the duality formula
\[\|f\|_{H^{-s}(\mathbb{R})}=\sup_{\|g\|_{H^{s}(\mathbb{R})}\leq 1}\bigg{|}\int_{ \mathbb{R}}f(x)g(x)\ \mathrm{d}x\bigg{|},\]
and the inequality from Leibniz rule for \(s=1\),
\[\|\nu f\|_{H^{-1}(\mathbb{R})}=\sup_{\|g\|_{H^{1}}\leq 1}\bigg{|}\int_{\mathbb{R}}g (x)\nu(x)f(x)\ \mathrm{d}x\bigg{|}\leq\sup_{\|g\|_{H^{1}}\leq 1}\|g\nu\|_{H^{1}} \|f\|_{H^{-1}}\leq 2\|\nu\|_{W^{1,\infty}}\|f\|_{H^{-1}(\mathbb{R})}.\]
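For completeness, here is a short sketch of the factor \(2\) in the last bound, using only the definition of the \(H^{1}\)-norm above (derivatives are understood in the weak sense):

\[\|g\nu\|_{H^{1}}^{2}=\|g\nu\|_{L^{2}}^{2}+\|(g\nu)^{\prime}\|_{L^{2}}^{2}\leq\|\nu\|_{L^{\infty}}^{2}\|g\|_{L^{2}}^{2}+2\|\nu\|_{L^{\infty}}^{2}\|g^{\prime}\|_{L^{2}}^{2}+2\|\nu^{\prime}\|_{L^{\infty}}^{2}\|g\|_{L^{2}}^{2}\leq 4\|\nu\|_{W^{1,\infty}}^{2}\|g\|_{H^{1}}^{2},\]

since, with the convention chosen here, \(\|g\|_{H^{1}}^{2}=\|g\|_{L^{2}}^{2}+\|g^{\prime}\|_{L^{2}}^{2}\).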
### Tensorization properties
In higher dimensions, our notion of \(H^{-1\otimes k}\)-norm is the tensorization of \(H^{-1}(\mathbb{R})\)-norm to \(\mathbb{R}^{k}\):
**Lemma 3.1**.: _For any weight function \(\eta:\mathbb{R}\to\mathbb{R}_{+}\), one has_
\[\|f^{\otimes k}\|_{H^{-1\otimes k}}=\big{(}\|f\|_{H^{-1}(\mathbb{R})}\big{)}^{ k},\quad\|f^{\otimes k}\|_{H^{-1\otimes k}_{\eta}}=\big{(}\|f\|_{H^{-1}_{\eta}( \mathbb{R})}\big{)}^{k}.\]
Proof.: One has that
\[\|f^{\otimes k}\|_{H^{-1\otimes k}_{\eta}}^{2} =\int_{z\in\mathbb{R}^{k}}\big{[}K^{\otimes k}\star(f^{\otimes k }\eta^{\otimes k})(z)\big{]}^{2}\ \mathrm{d}z=\prod_{m=1}^{k}\int_{z_{m}\in\mathbb{R}}\big{[}K\star(f\eta)(z_{ m})\big{]}^{2}\ \mathrm{d}z_{m}\] \[=\big{(}\|f\|_{H^{-1}_{\eta}(\mathbb{R})}\big{)}^{2k}.\]
The unweighted case of \(H^{-1\otimes k}\) is naturally included by choosing \(\eta\equiv 1\).
It is important to emphasize, however, that the tensorized \(H^{-1\otimes k}\)-norm is weaker than the standard \(H^{-1}(\mathbb{R}^{k})\)-norm since in Fourier variables
\[\prod_{m=1}^{k}\frac{1}{1+4\pi^{2}\xi_{m}^{2}}\ll\frac{1}{1+4\pi^{2}\sum_{m=1 }^{k}\xi_{m}^{2}}.\]
This shows that the energy distributed along the diagonals of the Fourier domain makes a much smaller contribution to the tensorized \(H^{-1\otimes k}\)-norm than to the \(H^{-1}(\mathbb{R}^{k})\)-norm.
Similarly, while it is possible to include \(\mathcal{M}(\mathbb{R}^{k})\) into the standard \(H^{-s}(\mathbb{R}^{k})\), the order \(s>0\) in such Sobolev inequalities depends on the dimension \(k\), namely \(s>k/2\). On the other hand, the following lemma holds for our notion of \(H^{-1\otimes k}\)-norm,
**Lemma 3.2**.: _Consider \(g\in\mathcal{M}(\mathbb{R}^{k})\) and any weight function \(\eta\in L^{1}(\mathbb{R},\mathbb{R}_{+})\) such that \(\eta^{\otimes k}\) is integrable against \(g\), then_
\[\begin{split}\|g\|_{H^{-1\otimes k}}:=\|K^{\otimes k}\star g\|_{ L^{2}(\mathbb{R}^{k})}&\leq\|K\|_{L^{2}(\mathbb{R})}^{k}\|g\|_{ \mathcal{M}(\mathbb{R}^{k})},\\ \|g\|_{H^{-1\otimes k}_{\eta}}:=\|K^{\otimes k}\star(g\eta^{ \otimes k})\|_{L^{2}(\mathbb{R}^{k})}&\leq\|K\|_{L^{2}(\mathbb{R})} ^{k}\|g\eta^{\otimes k}\|_{\mathcal{M}(\mathbb{R}^{k})}.\end{split} \tag{3.1}\]
Proof.: The proof is a direct application of Young's convolution inequality.
Hence \(\mathcal{M}(\mathbb{R}^{k})\) is naturally included in \(H^{-1\otimes k}\), and can also be included into \(H^{-1\otimes k}_{\eta}\), provided that the measure has the right moment bound.
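A simple example, not needed in the sequel, illustrates the point: for the Dirac mass at the origin of \(\mathbb{R}^{k}\),

\[\|\delta_{0}\|_{H^{-1\otimes k}}=\|K^{\otimes k}\|_{L^{2}(\mathbb{R}^{k})}=\|K\|_{L^{2}(\mathbb{R})}^{k}=2^{-k/2},\qquad\text{since}\qquad\|K\|_{L^{2}(\mathbb{R})}^{2}=\int_{\mathbb{R}}\frac{\mathrm{d}\xi}{1+4\pi^{2}\xi^{2}}=\frac{1}{2},\]

whereas \(\delta_{0}\notin H^{-1}(\mathbb{R}^{k})\) as soon as \(k\geq 2\), since \(\int_{\mathbb{R}^{k}}(1+4\pi^{2}|\xi|^{2})^{-1}\,\mathrm{d}\xi=\infty\) in that case.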
The next lemma extends the inequality from Leibniz rule to any dimension.
**Lemma 3.3**.: _Consider \(\nu_{m}\) of form_
\[\nu_{m}=1\otimes\cdots\otimes\nu\otimes\cdots\otimes 1,\]
_where \(\nu\in W^{1,\infty}(\mathbb{R})\) appears in the \(m\)-th coordinate, i.e. \(\nu_{m}(z)=\nu(z_{m})\). Then for any \(f\in\mathcal{M}(\mathbb{R}^{k})\cap H^{-1\otimes k}\), the following inequality holds_
\[\|\nu_{m}f\|_{H^{-1\otimes k}}\leq 2\|\nu\|_{W^{1,\infty}(\mathbb{R})}\|f\|_{H^{-1 \otimes k}},\]
_while for \(f\in\mathcal{M}(\mathbb{R}^{k})\cap H^{-1\otimes k}_{\eta}\), we have the corresponding_
\[\|\nu_{m}f\|_{H^{-1\otimes k}_{\eta}}\leq 2\|\nu\|_{W^{1,\infty}(\mathbb{R})}\|f\| _{H^{-1\otimes k}_{\eta}}.\]
Proof.: Let us first discuss the unweighted inequality and, without loss of generality, take \(m=k\), i.e. consider \(\nu_{k}\) acting on the \(k\)-th coordinate. Let us introduce the Fourier transform on the first \(k-1\) dimensions
\[\mathcal{F}^{\otimes k-1}\otimes I:\mathbb{R}^{k-1}\times\mathbb{R}\to \mathbb{R}^{k-1}\times\mathbb{R}.\]
It is easy to verify that
\[\big{(}\mathcal{F}^{\otimes k-1}\otimes I\big{)}\big{(}K^{\otimes k }\star(\nu_{k}f)\big{)}(\xi_{1},\ldots,\xi_{k-1},z_{k})\] \[= \bigg{(}\prod_{m=1}^{k-1}\frac{1}{\sqrt{1+4\pi^{2}\xi_{m}^{2}}} \bigg{)}\ \big{(}K\star_{k}(\nu_{k}\mathcal{F}^{\otimes k-1}f)\big{)}(\xi_{1}, \ldots,\xi_{k-1},z_{k}).\]
By Plancherel identity,
\[\|\nu_{k}f\|_{H^{-1\otimes k}}^{2} = \int\bigg{|}\bigg{(}\prod_{m=1}^{k-1}\frac{1}{\sqrt{1+4\pi^{2} \xi_{m}^{2}}}\bigg{)}\ \big{(}K\star_{k}(\nu_{k}\mathcal{F}^{\otimes k-1}f)\big{)}(\xi_{1}, \ldots,\xi_{k-1},z_{k})\bigg{|}^{2}\ \mathrm{d}\xi_{1},\ldots,\xi_{k-1}\mathrm{d}z_{k}\] \[= \int\bigg{(}\prod_{m=1}^{k-1}\frac{1}{1+4\pi^{2}\xi_{m}^{2}} \bigg{)}\bigg{\|}\big{(}\nu_{k}\mathcal{F}^{\otimes k-1}f\big{)}(\xi_{1}, \ldots,\xi_{k-1},\cdot)\bigg{\|}_{H^{-1}(\mathbb{R})}^{2}\mathrm{d}\xi_{1}, \ldots,\xi_{k-1}.\]
Since \(\nu\in W^{1,\infty}(\mathbb{R})\),
\[\bigg{\|}\big{(}\nu_{k}\mathcal{F}^{\otimes k-1}f\big{)}(\xi_{1},\ldots,\xi_{ k-1},\cdot)\bigg{\|}_{H^{-1}(\mathbb{R})}\leq 2\|\nu\|_{W^{1,\infty}(\mathbb{R})} \bigg{\|}\mathcal{F}^{\otimes k-1}f(\xi_{1},\ldots,\xi_{k-1},\cdot)\bigg{\|}_ {H^{-1}(\mathbb{R})}.\]
Hence
\[\|\nu_{k}f\|_{H^{-1\otimes k}}^{2} \leq 4\|\nu\|_{W^{1,\infty}(\mathbb{R})}^{2}\int\bigg{(}\prod_{m= 1}^{k-1}\frac{1}{1+4\pi^{2}\xi_{m}^{2}}\bigg{)}\bigg{\|}\mathcal{F}^{\otimes k -1}f(\xi_{1},\ldots,\xi_{k-1},\cdot)\bigg{\|}_{H^{-1}(\mathbb{R})}^{2}\mathrm{ d}\xi_{1},\ldots,\xi_{k-1}\] \[=4\|\nu\|_{W^{1,\infty}(\mathbb{R})}^{2}\|f\|_{H^{-1\otimes k}}^{2},\]
which completes the proof of the unweighted inequality. Finally, for the weighted inequality, we can apply the unweighted one to obtain
\[\|\nu_{m}f\|_{H^{-1\otimes k}_{\eta}}=\|\nu_{m}f\eta^{\otimes k}\|_{H^{-1 \otimes k}}\leq 2\|\nu\|_{W^{1,\infty}}\|f\eta^{\otimes k}\|_{H^{-1 \otimes k}}=2\|\nu\|_{W^{1,\infty}}\|f\|_{H^{-1\otimes k}_{\eta}}.\]
### The weak-* topology on measures
Now, we proceed to the proof of Lemma 2.2, restated here.
**Lemma 3.4**.: _Consider any \(a>0\), \(C_{a}>0\), \(0<\alpha<a\) (which determines \(\eta=\eta_{\alpha}\)) and any sequence_
\[\{g_{n}\}_{n=1}^{\infty}\subset\bigg{\{}g\in\mathcal{M}(\mathbb{R}^{k}):\int_ {\mathbb{R}^{k}}\exp\big{(}a\sum_{m=1}^{k}|z_{m}|\big{)}|g|(\mathrm{d}z)\leq C_ {a}\bigg{\}}. \tag{3.2}\]
_Then the following are equivalent:_
* \(g_{n}\stackrel{{*}}{{\rightharpoonup}}g_{\infty}\) _under the weak-* topology of_ \(\mathcal{M}(\mathbb{R}^{k})\)_._
* \(\|g_{n}-g_{\infty}\|_{H^{-1\otimes k}_{\eta}}\to 0\)_._
Proof of Lemma 2.2.: A sequence \(\{g_{n}\}_{n=1}^{\infty}\) satisfying (3.2) is uniformly tight and bounded in total variation norm. By Prokhorov's theorem, \(\{g_{n}\}_{n=1}^{\infty}\) is sequentially precompact in the weak-* topology. Assume first that \(\|g_{n}-g_{\infty}\|_{H^{-1\otimes k}_{\eta}}\to 0\). The definition of \(H^{-1\otimes k}_{\eta}\) directly implies that \((g_{n}-g_{\infty})\,\eta^{\otimes k}\) converges to \(0\) in the sense of distributions. Since \(\eta=\eta_{\alpha}\) is smooth and bounded from below and above on any compact set, this further yields that \(g_{n}\) converges to \(g_{\infty}\), still in the sense of distributions. Hence we immediately have that \(g_{n}\stackrel{{*}}{{\rightharpoonup}}g_{\infty}\) under the weak-* topology of \(\mathcal{M}(\mathbb{R}^{k})\).
Conversely, assume now only that \(g_{n}\stackrel{{*}}{{\rightharpoonup}}g_{\infty}\) under the weak-* topology of \(\mathcal{M}(\mathbb{R}^{k})\). First recall that
\[\eta^{\otimes k}(z_{1},\ldots,z_{k})=C_{\alpha}^{k}\exp\left(\sum_{m=1}^{k} \sqrt{1+\alpha^{2}z_{m}^{2}}\right)\leq(C_{\alpha}\exp(1))^{k}\exp\left(\sum_{m= 1}^{k}\alpha|z_{m}|\right).\]
The kernel \(\Lambda^{\otimes k}\) is Lipschitz. Hence the convolution \(\Lambda^{\otimes k}\star(g_{n}\eta^{\otimes k})\) is also Lipschitz, by
\[\|\Lambda^{\otimes k}\star(g_{n}\eta^{\otimes k})\|_{W^{1,\infty}}\leq\|\Lambda ^{\otimes k}\|_{W^{1,\infty}}\;\|g_{n}\eta^{\otimes k}\|_{\mathcal{M}}.\]
By the exponential moment bound (3.2), we have
\[\|g_{n}\eta^{\otimes k}\|_{\mathcal{M}} \leq(C_{\alpha}\exp(1))^{k}\int_{\mathbb{R}^{k}}\exp\big{(}-(a-\alpha )\sum_{m=1}^{k}|z_{m}|\big{)}\mathrm{exp}\,\big{(}a\sum_{m=1}^{k}|z_{m}|\big{)} |g_{n}|(\mathrm{d}z)\] \[\leq(C_{\alpha}\exp(1))^{k}C_{a}.\]
This implies that \(g_{n}\,\eta^{\otimes k}\) is precompact and hence converges to \(g_{\infty}\,\eta^{\otimes k}\), so that
\[\Lambda^{\otimes k}\star(g_{n}\,\eta^{\otimes k})\to\phi=\Lambda^{\otimes k} \star(g_{\infty}\,\eta^{\otimes k})\in C(\mathbb{R}^{k})\text{ uniformly on all compact subset of }\mathbb{R}^{k}.\]
Let \(\rho\in C_{c}(\mathbb{R})\) such that \(0\leq\rho\leq 1\), \(\rho([-1,1])\equiv 1\), \(\mathrm{supp}\,\rho\subset[-2,2]\) and denote \(\rho_{R}(x)=\rho(x/R)\). Then
\[\|g_{n}\|_{H^{-1\otimes k}_{\eta}}^{2} =\,\int_{z\in\mathbb{R}^{k}}(g_{n}\eta^{\otimes k})(z)\big{[} \Lambda^{\otimes k}\star(g_{n}\eta^{\otimes k})(z)\big{]}\;\mathrm{d}z\] \[\leq\,\int_{z\in\mathbb{R}^{k}}(g_{n}\eta^{\otimes k})(z)(\phi \rho_{R}^{\otimes k})(z)\;\mathrm{d}z\] \[\quad+\int_{z\in\mathbb{R}^{k}}(g_{n}\,\eta^{\otimes k})(z)(( \Lambda^{\otimes k}\star(g_{n}\eta^{\otimes k})-\phi)\rho_{R}^{\otimes k})(z) \;\mathrm{d}z\] \[\quad+\int_{z\in\mathbb{R}^{k}}(g_{n}\eta^{\otimes k})(z)(( \Lambda^{\otimes k}\star(g_{n}\eta^{\otimes k}))(1-\rho_{R}^{\otimes k}))(z) \;\mathrm{d}z\] \[=:L_{1}+L_{2}+L_{3}.\]
We note that \(\phi\,\rho_{R}^{\otimes k}\) is continuous and compactly supported so that, for a fixed \(R\), \(L_{1}\) converges to \(0\) from the weak-* convergence of \(g_{n}\). \(L_{2}\) also directly converges to \(0\) for a fixed \(R\) from the uniform convergence of \(\Lambda^{\otimes k}\star(g_{n}\eta^{\otimes k})\) to \(\phi\) on compact sets.
Finally, for any \(\varepsilon>0\), choose sufficiently large \(R>0\) such that
\[\big{[}C_{\alpha}\mathrm{exp}\,\big{(}1-(a-\alpha)R\big{)}\big{]}^{k}\leq\frac {\varepsilon/6}{\|\Lambda^{\otimes k}\|_{L^{\infty}}(C_{\alpha}\exp(1))^{k}C_{ a}^{2}}.\]
Then
\[L_{3} \leq\|\Lambda^{\otimes k}\|_{L^{\infty}}(C_{\alpha}\exp(1))^{k}C_{ a}\int_{z\in\mathbb{R}^{k}}|g_{n}\eta^{\otimes k}|(z)(1-\rho_{R}^{\otimes k})(z) \;\mathrm{d}z\] \[\leq\|\Lambda^{\otimes k}\|_{L^{\infty}}\,(C_{\alpha}\exp(1))^{k} C_{a}\,\big{[}C_{\alpha}\exp\big{(}1-(a-\alpha)R\big{)}\big{]}^{k}\,C_{a}\, \leq\varepsilon/6.\]
This shows that \(L_{3}\) converges to \(0\) as \(R\to\infty\) uniformly in \(n\), which concludes.
### Bounding the remainder terms
As a first application of our weak norms, we derive a quantified weak convergence of the remainder terms \(\mathscr{R}\) and \(\tilde{\mathscr{R}}\) in (2.3). \(L^{p}\) norms are too sensitive to the pointwise density of the distribution, which makes it difficult to quantify vanishing translations. The following lemma shows how such translations are smoothed out when mollified by \(\Lambda^{\otimes k}\), making the behavior of \(\mathscr{R}\) and \(\tilde{\mathscr{R}}\) milder in the \(H^{-1\otimes k}\) sense and laying the ground for our future commutator estimates.
**Lemma 3.5**.: _For any non-negative measure \(f\in\mathcal{M}_{+}(\mathbb{R}^{k})\) and vector \(w\in\mathbb{R}^{k}\), the following pointwise estimate holds_
\[\big{|}(\Lambda^{\otimes k}\star f)(z-w)-(\Lambda^{\otimes k}\star f)(z)\big{|} \leq\big{[}\exp\big{(}\|w\|_{\ell^{1}}\big{)}-1\big{]}(\Lambda^{\otimes k} \star f)(z),\quad\forall z\in\mathbb{R}^{k}.\]
Proof.: It is straightforward that
\[\Big{|}(\Lambda^{\otimes k}\star f)(z-w)-(\Lambda^{\otimes k}\star f)(z)\Big{|} \leq\int_{\mathbb{R}^{k}}\Big{|}\Lambda^{\otimes k}(z-w-y)-\Lambda^{\otimes k }(z-y)\Big{|}f(y)\;\mathrm{d}y.\]
From the formula,
\[\Lambda^{\otimes k}(z)=\frac{1}{2}\exp\Big{(}-\sum_{m=1}^{k}|z_{m}|\Big{)},\]
together with the triangle inequality \(\big{|}|z_{m}-w_{m}-y_{m}|-|z_{m}-y_{m}|\big{|}\leq|w_{m}|\), which shows that the ratio \(\Lambda^{\otimes k}(z-w-y)/\Lambda^{\otimes k}(z-y)\) lies in \(\big{[}\exp(-\|w\|_{\ell^{1}}),\exp(\|w\|_{\ell^{1}})\big{]}\), we have that
\[\Big{|}\Lambda^{\otimes k}(z-w-y)-\Lambda^{\otimes k}(z-y)\Big{|}\leq\big{[}\exp \big{(}\|w\|_{\ell^{1}}\big{)}-1\big{]}\Lambda^{\otimes k}(z-y).\]
We conclude the lemma by multiplying both sides by \(f(y)\) and integrating in \(y\).
The following proposition summarizes the estimates of \(\mathscr{R}\) and \(\tilde{\mathscr{R}}\) terms.
**Proposition 3.6**.: _Consider any \(\alpha>0\) (which determines \(\eta=\eta_{\alpha}\)), any connectivity matrix \(w_{N}\in\mathbb{R}^{N\times N}\) and any joint law \(f_{N}\in\mathcal{M}_{+}(\mathbb{R}^{N})\). Let \(\mathscr{R}_{N,T,m}\) and \(\tilde{\mathscr{R}}_{N,T,m}\) be the remainder terms as in (2.3) and let \(|\tau_{N}|(T)=|\tau_{N}|(T,w_{N},f_{N})\) as in Definition 1.2 (where the variable \(t\) shall be neglected). Then the following estimate holds:_
\[\max\left(\|\mathscr{R}_{N,T,m}\|^{2}_{H^{-1\otimes|T|}_{\eta}},\ \|\tilde{\mathscr{R}}_{N,T,m}\|^{2}_{H^{-1\otimes|T|}_{\eta}}\right)\leq\big{[}\exp\big{(}(2+2\alpha)c(w_{N},|T|)\big{)}-1\big{]}\||\tau_{N}|(T)\|^{2}_{H^{-1\otimes|T|}_{\eta}},\]
_where_
\[c(w_{N},|T|):=\min\left(|T|\big{(}\max_{i,j}|w_{i,j;N}|\big{)},\ \max\Big{(}\max_{j}\sum_{i}|w_{i,j;N}|,\max_{i}\sum_{j}|w_{i,j;N}|\Big{)} \right).\]
Notice that the right-hand side of the inequality involves the "absolute" observables \(|\tau_{N}|\) instead of \(\tau_{N}\), as non-negativity plays a role in the proof. The constant \(\alpha>0\) takes the effect of the weight \(\eta=\eta_{\alpha}\) into account.
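To get a feeling for the size of this bound, consider again the illustrative all-to-all scaling \(w_{i,j;N}=1/N\) (this example is ours and not part of the proposition). Then

\[c(w_{N},|T|)=\min\Big{(}\frac{|T|}{N},\,1\Big{)},\qquad\exp\big{(}(2+2\alpha)c(w_{N},|T|)\big{)}-1\leq(2+2\alpha)\,\frac{|T|}{N}\,\exp\Big{(}(2+2\alpha)\frac{|T|}{N}\Big{)}\quad\text{for }N\geq|T|,\]

so that, for a fixed tree \(T\), the remainder terms are of order \(N^{-1/2}\) in \(H_{\eta}^{-1\otimes|T|}\) relative to \(\||\tau_{N}|(T)\|_{H_{\eta}^{-1\otimes|T|}}\).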
Proof of Proposition 3.6.: Once we obtain the bound on \(\mathscr{R}_{N,T,m}\), the same bound for \(\tilde{\mathscr{R}}_{N,T,m}\) follows by the Minkowski inequality. Hence, let us only consider \(\mathscr{R}_{N,T,m}\). For simplicity, we also omit the \(t\) variable in the proof and write \(n=|T|\).
By definition,
\[\|\mathscr{R}_{N,T,m}\|^{2}_{H^{-1\otimes|T|}_{\eta}} =\,\int_{\mathbb{R}^{|T|}}\big{[}\big{(}\mathscr{R}_{N,T,m}\eta^ {\otimes n}\big{)}(z)\big{]}\big{[}\Lambda^{\otimes n}\star\big{(}\mathscr{R}_ {N,T,m}\eta^{\otimes n}\big{)}(z)\big{]}\ \mathrm{d}z\] \[\leq\,\int_{\mathbb{R}^{|T|}}\big{|}\big{(}\mathscr{R}_{N,T,m} \eta^{\otimes n}\big{)}(z)\big{|}\big{|}\Lambda^{\otimes n}\star\big{(}\mathscr{ R}_{N,T,m}\eta^{\otimes n}\big{)}(z)\big{|}\ \mathrm{d}z.\]
We recall the notation
\[w^{i_{1},\ldots,i_{|T|}}_{N;j}=(w_{i_{l},j:N})_{l=1}^{|T|},\]
so that
\[\|w^{i_{1},\ldots,i_{|T|}}_{N;j}\|_{\ell^{1}} \leq\min\left(|T|\big{(}\max_{i,j}|w_{i,j;N}|\big{)},\ \max\Big{(}\max_{j}\sum_{i}|w_{i,j;N}|,\max_{i}\sum_{j}|w_{i,j;N}|\Big{)}\right)\] \[=c(w,|T|).\]
By Lemma 3.5, since the marginals are non-negative,
\[\big{|}\Lambda^{\otimes n}\star\big{(}\mathscr{R}_{N,T,m}\eta^{ \otimes n}\big{)}(z)\big{|}\] \[\leq\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|}=1}^{N}\big{|}w_{N,T}(i_ {1},\ldots,i_{|T|})\big{|}\Lambda^{\otimes n}\star\Big{(}\big{(}f_{N}^{i_{1}, \ldots,i_{|T|}}(.-w^{i_{1},\ldots,i_{|T|}}_{N;i_{m}})-f_{N}^{i_{1},\ldots,i_{|T| }}(.)\big{)}\eta^{\otimes n}\Big{)}(z)\Big{|}\] \[\leq\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|}=1}^{N}\big{|}w_{N,T}(i_ {1},\ldots,i_{|T|})\big{|}\big{[}\exp\big{(}(1+\alpha)c(w,|T|)\big{)}-1\big{]} \big{[}\Lambda^{\otimes n}\star\big{(}f_{N}^{i_{1},\ldots,i_{|T|}}\eta^{ \otimes n}\big{)}(z)\big{]}\] \[=\big{[}\exp\big{(}(1+\alpha)c(w,|T|)\big{)}-1\big{]}\big{[} \Lambda^{\otimes n}\star\big{(}|\tau_{N}|(T)\eta^{\otimes n}\big{)}(z)\big{]}.\]
Then
\[\|\mathscr{R}_{N,T,m}\|^{2}_{H^{-1\otimes|T|}_{\eta}} \leq\big{[}\exp\big{(}(1+\alpha)c(w,|T|)\big{)}-1\big{]}\int_{ \mathbb{R}^{|T|}}\big{|}\big{(}\mathscr{R}_{N,T,m}\eta^{\otimes n}\big{)}(z) \big{|}\big{[}\Lambda^{\otimes n}\star\big{(}|\tau_{N}|(T)\eta^{\otimes n} \big{)}(z)\big{]}\ \mathrm{d}z\] \[\leq\big{[}\exp\big{(}(1+\alpha)c(w,|T|)\big{)}-1\big{]}\int_{ \mathbb{R}^{|T|}}\big{[}\Lambda^{\otimes n}\star|\mathscr{R}_{N,T,m}\eta^{ \otimes n}|(z)\big{]}\big{[}\big{(}|\tau_{N}|(T)\eta^{\otimes n}\big{)}(z) \big{]}\ \mathrm{d}z.\]
We hence also need to bound \(\Lambda^{\otimes n}\star\left|\mathscr{R}_{N,T,m}\eta^{\otimes n}\right|\), with the absolute value inside. But
\[\quad\big{(}\Lambda^{\otimes n}\star\left|\mathscr{R}_{N,T,m}\eta^{ \otimes n}\right|\big{)}(z)\] \[\leq\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|}=1}^{N}\big{|}w_{N,T}(i_{1},\ldots,i_{|T|})\big{|}\bigg{[}\Lambda^{\otimes n}\star\bigg{(}\big{(}f_{N}^{i _{1},\ldots,i_{|T|}}(\cdot-w_{N;i_{m}}^{i_{1},\ldots,i_{|T|}})+f_{N}^{i_{1}, \ldots,i_{|T|}}(\cdot)\big{)}\eta^{\otimes n}\bigg{)}(z)\bigg{]}\] \[=2\,\Lambda^{\otimes n}\star\big{(}|\tau_{N}|(T)\eta^{\otimes n} \big{)}(z)\] \[\quad+\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|}=1}^{N}\big{|}w_{N,T}( i_{1},\ldots,i_{|T|})\big{|}\bigg{[}\Lambda^{\otimes n}\star\bigg{(}\big{(}f_{N}^{ i_{1},\ldots,i_{|T|}}(\cdot-w_{N;i_{m}}^{i_{1},\ldots,i_{|T|}})-f_{N}^{i_{1}, \ldots,i_{|T|}}(\cdot)\big{)}\eta^{\otimes n}\bigg{)}(z)\bigg{]}.\]
Hence again by Lemma 3.5,
\[\begin{split}\big{(}\Lambda^{\otimes n}\star\left|\mathscr{R}_{N,T,m}\eta^{\otimes n}\right|\big{)}(z)&\leq 2\,\Lambda^{\otimes n}\star\big{(}|\tau_{N}|(T)\eta^{\otimes n}\big{)}(z)+\big{[}\exp\big{(}(1+\alpha)c(w,|T|)\big{)}-1\big{]}\,\Lambda^{\otimes n}\star\big{(}|\tau_{N}|(T)\eta^{\otimes n}\big{)}(z)\\ &=\big{[}\exp\big{(}(1+\alpha)c(w,|T|)\big{)}+1\big{]}\big{[}\Lambda^{\otimes n}\star\big{(}|\tau_{N}|(T)\eta^{\otimes n}\big{)}(z)\big{]}.\end{split}\]
In conclusion
\[\left\|\mathscr{R}_{N,T,m}\right\|_{H_{\eta}^{-1\otimes|T|}}^{2} \leq\big{[}\exp\big{(}(1+\alpha)c(w,|T|)\big{)}^{2}-1\big{]} \int_{\mathbb{R}^{|T|}}\big{[}\Lambda^{\otimes n}\star\big{(}|\tau_{N}|(T)\eta^{ \otimes n}\big{)}(z)\big{]}\,\big{[}\big{(}|\tau_{N}|(T)\eta^{\otimes n}\big{)} (z)\big{]}\,\mathrm{d}z\] \[=\big{[}\exp\big{(}(2+2\alpha)c(w,|T|)\big{)}-1\big{]}\big{\|}|\tau_{N}|(T)\big{\|}_{H_{\eta}^{-1\otimes|T|}}^{2}.\]
### Bounding the firing rate through exponential moments
We present here another set of technical results which show how to handle the weight function in our subsequent commutator estimates.
**Lemma 3.7**.: _Consider weight function \(\eta=\eta_{\alpha}\) and any signed measure \(f\in\mathcal{M}(\mathbb{R})\). The following estimate holds:_
\[\bigg{|}\int_{\mathbb{R}}K\star(\nu f)\,\mathrm{d}x\bigg{|}=\bigg{|}\int_{ \mathbb{R}}\nu f\,\mathrm{d}x\bigg{|}\leq C(\alpha)\|\nu\|_{W^{1,\infty}}\|f\|_ {H_{\eta}^{-1}},\]
_where \(C(\alpha)\) only depends on \(\alpha>0\)._
Proof.: Only the inequality in the statement is not trivial. Choose a non-negative, smooth function \(\varphi\) with compact support \(\operatorname{supp}\varphi\subset[-1,1]\), such that the translates \(\varphi_{i}=\varphi(\cdot-i)\), \(i\in\mathbb{Z}\), form a partition of unity of \(\mathbb{R}\) in the usual sense that
\[\sum_{i=-\infty}^{\infty}\varphi(x-i)\equiv 1,\quad\forall x\in\mathbb{R}.\]
It is easy to verify that
\[\int_{\mathbb{R}}\varphi\,\mathrm{d}x=1.\]
Then
\[\bigg{|}\int_{\mathbb{R}}\nu f\,\mathrm{d}x\bigg{|} =\bigg{|}\int_{\mathbb{R}}(\nu/\eta)f\eta\,\mathrm{d}x\bigg{|} \leq\sum_{i=-\infty}^{\infty}\bigg{|}\int_{\mathbb{R}}(\nu/\eta)f\eta\,\varphi_ {i}\,\mathrm{d}x\bigg{|}=\sum_{i=-\infty}^{\infty}\bigg{|}\int_{\mathbb{R}} \varphi\star\Big{(}(\nu/\eta)f\eta\,\varphi_{i}\Big{)}\,\mathrm{d}x\bigg{|}\] \[\leq C\sum_{i=-\infty}^{\infty}\bigg{(}\int_{\mathbb{R}}\left| \varphi\star\Big{(}(\nu/\eta)f\eta\,\varphi_{i}\Big{)}\right|^{2}\,\mathrm{d}x \bigg{)}^{\frac{1}{2}},\]
where in the last line we use that each integrand is supported in \([-2+i,2+i]\).
From the smoothness of \(\varphi\), its Fourier transform can be bounded by
\[|\hat{\varphi}(\xi)|\leq\frac{C}{\sqrt{1+4\pi^{2}\xi^{2}}}=C\hat{K}(\xi).\]
Hence, by Plancherel's identity and the Leibniz-type inequality of Lemma 3.3 (applied with \(k=1\)), we further have
\[\bigg{|}\int_{\mathbb{R}}\nu f\;\mathrm{d}x\bigg{|} \leq C\sum_{i=-\infty}^{\infty}\bigg{(}\int_{\mathbb{R}}\left|K \star\big{(}(\nu/\eta)f\eta\,\varphi_{i}\big{)}\right|^{2}\,\mathrm{d}x\bigg{)} ^{\frac{1}{2}},\] \[\leq C\sum_{i=-\infty}^{\infty}\|(\nu/\eta)\,\varphi_{i}\|_{W^{1,\infty}}\bigg{(}\int_{\mathbb{R}}\left|K\star(f\eta)\right|^{2}\,\mathrm{d}x \bigg{)}^{\frac{1}{2}}\] \[\leq C\bigg{(}\sum_{i=-\infty}^{\infty}\|\varphi_{i}/\eta\|_{W^ {1,\infty}}\bigg{)}\|\nu\|_{W^{1,\infty}}\|f\eta\|_{H^{-1}},\]
where the constant \(C\) is some universal constant which may change line by line.
Since each \(\varphi_{i}\) is a translation of \(\varphi\) and has support in \([-1+i,1+i]\), it is easy to check the uniform bound
\[\sum_{i=-\infty}^{\infty}\|\varphi_{i}/\eta\|_{W^{1,\infty}}\leq C(1+\alpha) \sum_{i=-\infty}^{\infty}\exp(-\alpha|i|)<\infty,\]
where the constant only depends on the particular choice of \(\varphi\), which concludes the proof.
This lemma also admits the following tensorization.
**Lemma 3.8**.: _For \(f\in\mathcal{M}(\mathbb{R}^{k})\cap H_{\eta}^{-1\otimes k}\),_
\[\int_{\mathbb{R}^{k-1}}\bigg{(}\int_{\mathbb{R}}K^{\otimes k} \star\big{(}(\nu_{m}/\eta_{m})f\eta^{\otimes k}\big{)}(t,z)\;\mathrm{d}z_{m} \bigg{)}^{2}\prod_{n\neq m}\;\mathrm{d}z_{n}\] \[\leq C(\alpha)^{2}\|\nu\|_{W^{1,\infty}}^{2}\|f\|_{H_{\eta}^{-1 \otimes k}}^{2},\]
_where we recall the notations \(\nu_{m}=\nu(z_{m})\) and \(\eta_{m}=\eta(z_{m})\)._
Proof.: Without any loss of generality, we may assume \(m=k\) and define
\[g(z) =\Big{[}K^{\otimes(k-1)}\star_{1,\dots,(k-1)}(f\eta^{\otimes(k-1) })\Big{]}(z)\] \[=\,\int_{\mathbb{R}^{k-1}}\prod_{n=1}^{k-1}K(u_{n}-z_{n})f(u_{1},\dots,u_{k-1},z_{k}){\prod_{n=1}^{k-1}}\eta(u_{n})\;\mathrm{d}u_{n}.\]
Then, from the previous Lemma,
\[\int_{\mathbb{R}^{k-1}}\bigg{(}\int_{\mathbb{R}}K^{\otimes k} \star\big{(}(\nu_{k}/\eta_{k})f\eta^{\otimes k}\big{)}(t,z)\;\mathrm{d}z_{k} \bigg{)}^{2}\prod_{n=1}^{k-1}\;\mathrm{d}z_{n}\] \[=\,\int_{\mathbb{R}^{k-1}}\bigg{(}\int_{\mathbb{R}}\nu(z_{k})\,g (t,z)\,\mathrm{d}z_{k}\bigg{)}^{2}\prod_{n=1}^{k-1}\;\mathrm{d}z_{n}\] \[\leq\,\int_{\mathbb{R}^{k-1}}C(\alpha)^{2}\|\nu\|_{W^{1,\infty}} ^{2}\int_{\mathbb{R}}\bigg{(}\big{[}K\star_{k}(g\eta_{k})\big{]}(t,z_{1},\dots,z_{k})\bigg{)}^{2}\;\mathrm{d}z_{k}\prod_{n=1}^{k-1}\;\mathrm{d}z_{n}\] \[=C(\alpha)^{2}\|\nu\|_{W^{1,\infty}}^{2}\int_{\mathbb{R}^{k}} \bigg{(}\big{[}K^{\otimes k}\star(f\eta^{\otimes k})\big{]}(t,z_{1},\dots,z_{k })\bigg{)}^{2}\;\prod_{n=1}^{k}\;\mathrm{d}z_{n},\]
which concludes.
## 4. The limiting observables from Vlasov equation
This section is centered on the limiting observables \(\tau_{\infty}(T,w,f)\), \(T\in\mathcal{T}\). We first show that Definition 1.5 still makes sense when the kernel and extended density are merely \(w\in\mathcal{W}\) and \(f\in L^{\infty}([0,t_{*}]\times[0,1];\mathcal{M}_{+}(\mathbb{R}))\). We also prove Proposition 1.7, which shows that compactness can be obtained not only at the level of the weak-* topology of each limiting observable \(\tau_{\infty}(T)\), \(T\in\mathcal{T}\), but also directly at the level of \(w\) and \(f\).
Contrary to the rest of the paper, this section owes much to the technical framework developed in [51], which it extends to our setting.
### Revisiting the definition of limiting observables
A motivation for introducing the Banach space \(\mathcal{W}\) in its current form is its ability to act as an \(L^{p}\to L^{p}\) mapping.
**Lemma 4.1**.: _Consider the following bounded linear operator_
\[\mathcal{W}\times C([0,1];B) \to L^{\infty}([0,1];B)\] \[(w,\phi) \mapsto\,\int_{[0,1]}\phi(\cdot,\zeta)w(\cdot,\mathrm{d}\zeta)\]
_where \(B\) stands for any Banach space such as \(L^{p}(\mathbb{R})\). Then this operator can be uniquely extended to \(\mathcal{W}\times L^{p}([0,1];B)\to L^{p}([0,1];B)\) for any \(1\leq p\leq\infty\), with_
\[\left\|\int_{[0,1]}\phi(\cdot,\zeta)w(\cdot,\mathrm{d}\zeta)\right\|_{L^{p}([ 0,1];B)}\leq\|w\|_{\mathcal{W}}\|\phi\|_{L^{p}([0,1];B)}.\]
Proof.: The cases \(p=1\) and \(p=\infty\) can be checked through a careful but straightforward density argument, for which we refer to Lemma 3.8 in [51]. Extending the result to \(1<p<\infty\) is an application of a textbook result on interpolation between Banach spaces, which can be found in [6] for instance.
The integrals appearing in Definition 1.5 can then be made rigorous by sequentially considering the integrations as \(L^{p}\to L^{p}\) operations. To support this argument, we again follow [51] and introduce the following countable algebra which, as we will see later, contains all the necessary information to reproduce the limiting observables \(\tau_{\infty}(T,w,f)\), \(T\in\mathcal{T}\).
**Definition 4.2** (A countable algebra).: _We denote by \(\mathscr{T}\) the countable algebra of transforms over spaces of arbitrarily large dimensions which is built as follows: For each transform \(F\in\mathscr{T}\) there exists \(k\in\mathbb{N}\) (called the rank of \(F\)) so that \(F\) maps each couple \((w,f)\) into a signed measure \(F(w,f)\in L^{\infty}([0,1];\mathcal{M}(\mathbb{R}^{k}))\). The full algebra \(\mathscr{T}\) is obtained in a recursive way according to the following three rules:_
* _(Seed). The elementary_ \(1\)_-rank transform_ \(F_{0}:(w,f)\mapsto f\) _belongs to the algebra_ \(\mathscr{T}\)_._
* _(Graft). Let_ \(F_{1}\in\mathscr{T}\) _and_ \(F_{2}\in\mathscr{T}\) _be_ \(k_{1}\) _rank and_ \(k_{2}\) _rank transforms respectively. Then, the following_ \((k_{1}+k_{2})\)_-rank transform_ \((F_{1}\otimes F_{2})\) _also belongs to_ \(\mathscr{T}\)_:_ \[(F_{1}\otimes F_{2})(w,f):\] \[(\xi,z_{1},\ldots,z_{k_{1}+k_{2}})\mapsto F_{1}(w,f)(\xi,z_{1}, \ldots,z_{k_{1}})F_{2}(w,f)(\xi,z_{k_{1}+1},\ldots,z_{k_{1}+k_{2}}).\]
* _(Grow). Let_ \(F\in\mathscr{T}\) _be a_ \(k\)_-rank transform. Then, the following_ \(k\)_-rank transform_ \(F^{*}\) _also belongs to_ \(\mathscr{T}\)_:_ \[F^{*}(w,f):\] \[(\xi,z_{1},\ldots,z_{k})\mapsto\int_{[0,1]}F(w,f)(\zeta,z_{1}, \ldots,z_{k})w(\xi,\mathrm{d}\zeta).\]
The following lemma shows that the transform of the countable algebra \(\mathscr{T}\) are well-defined on \(\mathcal{W}\).
**Lemma 4.3**.: _Consider any kernel \(w\in\mathcal{W}\) and extended density \(f\in L^{\infty}([0,1];H_{\eta}^{-1}\cap\mathcal{M}_{+}(\mathbb{R}))\). Then for each \(F\in\mathscr{T}\), the signed measure \(F(w,f)\) is well-defined and belongs to \(L^{\infty}([0,1];H_{\eta}^{-1\otimes k}\cap\mathcal{M}_{+}(\mathbb{R}^{k}))\) for some \(k\in\mathbb{N}\). Moreover, as \(n\to\infty\),_
\[F(w^{(n)},f^{(n)})\to F(w,f)\quad\text{ in }\quad L^{2}([0,1];H_{\eta}^{-1 \otimes k})\]
_for any fixed \(F\in\mathscr{T}\), any sequence \(\{f^{(n)}\}_{n=1}^{\infty}\) uniformly bounded in \(L^{\infty}([0,1];H_{\eta}^{-1}\cap\mathcal{M}_{+}(\mathbb{R}))\), and any sequence \(\{w^{(n)}\}_{n=1}^{\infty}\) uniformly bounded in \(\mathcal{W}\), satisfying_
\[f^{(n)}\to f \text{ in }\quad L^{\infty}([0,1];H_{\eta}^{-1}(\mathbb{R})),\] \[w^{(n)}(\xi,\zeta)\to w(\xi,\zeta) \text{ in }\quad L^{2}_{\xi}H_{\zeta}^{-1}\cap L^{2}_{\zeta}H_{\xi}^{-1}.\]
We note that since \(\zeta\in[0,\ 1]\), the embedding \(\mathcal{M}_{\zeta}\subset H_{\zeta}^{-1}\) is compact, so that in particular \(L^{\infty}_{\xi}\mathcal{M}_{\zeta}\subset L^{2}_{\xi}H_{\zeta}^{-1}\).
Proof.: We use an induction argument based on the recursive rules in Definition 4.2.
1. The seed element \(F_{0}(w,f)=f\) is well-defined and belongs to \(L^{\infty}([0,1];H_{\eta}^{-1}\cap\mathcal{M}_{+}(\mathbb{R}))\).
2. Consider two elements \(F_{1}(w,f)\) and \(F_{2}(w,f)\) that are well-defined and satisfying \[F_{i}(w,f)\in L^{\infty}([0,1];H_{\eta}^{-1\otimes k_{i}}\cap\mathcal{M}( \mathbb{R}^{k_{i}})),\quad i=1,2.\] Because both norms are stable under tensorization, for the combined element we have \[\|(F_{1}\otimes F_{2})(w,f)\|_{L^{\infty}([0,1];B_{1}\otimes B_{2})}\leq\|F_ {1}(w,f)\|_{L^{\infty}([0,1];B_{1})}\|F_{2}(w,f)\|_{L^{\infty}([0,1];B_{2})},\] where we may choose either \(B_{1}=H_{\eta}^{-1\otimes k_{1}}\), \(B_{2}=H_{\eta}^{-1\otimes k_{2}}\), \(B_{1}\otimes B_{2}=H_{\eta}^{-1\otimes(k_{1}+k_{2})}\) or \(B_{1}=\mathcal{M}(\mathbb{R}^{k_{1}})\), \(B_{2}=\mathcal{M}(\mathbb{R}^{k_{2}})\), \(B_{1}\otimes B_{2}=\mathcal{M}(\mathbb{R}^{k_{1}+k_{2}})\). Hence, \[(F_{1}\otimes F_{2})(w,f)\in L^{\infty}([0,1];H_{\eta}^{-1\otimes(k_{1}+k_{2} )}\cap\mathcal{M}(\mathbb{R}^{k_{1}+k_{2}})).\]
3. Consider an element \(F(w,f)\) that is well-defined and satisfies \[F(w,f)\in L^{\infty}([0,1];H_{\eta}^{-1\otimes k}\cap\mathcal{M}(\mathbb{R}^{k })).\] Applying Lemma 4.1 with \(p=\infty\) with either \(B=H_{\eta}^{-1\otimes k}\) or \(B=\mathcal{M}(\mathbb{R}^{k})\), for the grow element we have \[\|F^{*}(w,f)\|_{L^{\infty}([0,1];B)}\leq\|w\|_{\mathcal{W}}\|F(w,f)\|_{L^{ \infty}([0,1];B)}.\] Hence, \[F^{*}(w,f)\in L^{\infty}([0,1];H_{\eta}^{-1\otimes k}\cap\mathcal{M}_{+}( \mathbb{R}^{k})).\]
Since \(\mathscr{T}\) is generated by the three rules in Definition 4.2, the above argument shows that any \(F(w,f)\), \(F\in\mathscr{T}\) is well-defined.
We can use a similar argument to prove the convergence \(F(w^{(n)},f^{(n)})\to F(w,f)\) for any fixed \(F\in\mathscr{T}\).
1. For the seed sequence, \(F_{0}(w^{(n)},f^{(n)})=f^{(n)}\to f\) in \(L^{\infty}([0,1];H_{\eta}^{-1}(\mathbb{R}))\), hence the convergence also holds in \(L^{2}([0,1];H_{\eta}^{-1}(\mathbb{R}))\).
2. Consider the two sequences \(F_{1}(w^{(n)},f^{(n)})\) and \(F_{2}(w^{(n)},f^{(n)})\) satisfying \[F_{i}(w^{(n)},f^{(n)})\to F_{i}(w,f)\quad\text{ in }\quad L^{2}([0,1];H_{\eta}^{-1\otimes k_{i}}),\quad i=1,2.\] Then by introducing the intermediary element \(F_{1}(w^{(n)},f^{(n)})\otimes_{z}F_{2}(w,f)\) and by applying the triangular inequality, we have that \[\|(F_{1}\otimes F_{2})(w^{(n)},f^{(n)})-(F_{1}\otimes F_{2})(w,f)\|_ {L^{2}([0,1];H_{\eta}^{-1\otimes(k_{1}+k_{2})})}\] \[\quad\leq\|F_{1}(w^{(n)},f^{(n)})\|_{L^{\infty}([0,1];H_{\eta}^{-1 \otimes k_{1}})}\|F_{2}(w^{(n)},f^{(n)})-F_{2}(w,f)\|_{L^{2}([0,1];H_{\eta}^{-1 \otimes k_{2}})}\] \[\quad\quad\quad\quad+\|F_{1}(w^{(n)},f^{(n)})-F_{1}(w,f)\|_{L^{2}([0,1 ];H_{\eta}^{-1\otimes k_{1}})}\|F_{2}(w,f)\|_{L^{\infty}([0,1];H_{\eta}^{-1 \otimes k_{2}})}.\] As \(n\to\infty\), we immediately have that \[(F_{1}\otimes F_{2})(w^{(n)},f^{(n)})\to(F_{1}\otimes F_{2})(w,f)\quad\text{ in }\quad L^{2}([0,1];H_{\eta}^{-1\otimes(k_{1}+k_{2})}).\]
3. (Grow). Consider a sequence \(F(w^{(n)},f^{(n)})\) satisfying \[F(w^{(n)},f^{(n)})\to F(w,f)\quad\text{ in }\quad L^{2}([0,1];H_{\eta}^{-1\otimes k}).\] The difference between the grow sequences is given by \[\|F^{*}(w^{(n)},f^{(n)})-F^{*}(w,f)\|_{L^{2}([0,1];H_{\eta}^{-1 \otimes k})}\] \[=\bigg{\|}\int_{[0,1]}F(w^{(n)},f^{(n)})(\zeta,\cdot)w^{(n)}(\xi, \mathrm{d}\zeta)-\int_{[0,1]}F(w,f)(\zeta,\cdot)w(\xi,\mathrm{d}\zeta)\bigg{\|} _{L^{2}([0,1];H_{\eta}^{-1\otimes k})}.\] Introduce any \(\phi_{\varepsilon}\in H^{1}([0,1];H_{\eta}^{-1\otimes k})\) approximating \(F(w,f)\) in \(L^{2}([0,1];H_{\eta}^{-1\otimes k})\) and the intermediary elements \[\int_{[0,1]}\phi_{\varepsilon}(\zeta,\cdot)w^{(n)}(\xi,\mathrm{d}\zeta),\quad \int_{[0,1]}\phi_{\varepsilon}(\zeta,\cdot)w(\xi,\mathrm{d}\zeta),\] then apply the triangular inequality and Lemma 4.1 with \(p=2\), \(B=H_{\eta}^{-1\otimes k}\), we have \[\|F^{*}(w^{(n)},f^{(n)})-F^{*}(w,f)\|_{L^{2}([0,1];H_{\eta}^{-1 \otimes k})}\] \[\leq\|w^{(n)}\|_{\mathcal{W}}\|F(w^{(n)},f^{(n)})-\phi_{ \varepsilon}\|_{L^{2}([0,1];H_{\eta}^{-1\otimes k})}+\|w\|_{\mathcal{W}}\|F( w,f)-\phi_{\varepsilon}\|_{L^{2}([0,1];H_{\eta}^{-1\otimes k})}\] \[\quad+\|w^{(n)}-w\|_{L^{2}H^{-1}}\|\phi_{\varepsilon}\|_{H^{1}([0,1];H_{\eta}^{-1\otimes k})}.\] Letting \(n\to\infty\) and \(\varepsilon\to 0\), we conclude that \[F^{*}(w^{(n)},f^{(n)})\to F^{*}(w,f)\quad\text{ in }\quad L^{2}([0,1];H_{\eta}^{-1 \otimes k}).\]
The following lemma shows that it is possible to recover the limiting observables \(\tau_{\infty}(T,w,f)\), \(T\in\mathcal{T}\) from \(F(w,f)\), \(F\in\mathscr{T}\).
**Lemma 4.4**.: _For any tree \(T\in\mathcal{T}\), there exists a transform \(F\in\mathscr{T}\) such that_
\[F(w,f)(\zeta,z_{1},\dots,z_{|T|})=\bigg{(}\int_{[0,1]^{|T|-1}}w_{T}(\xi_{1}, \dots,\xi_{|T|}){\prod_{m=1}^{|T|}f(z_{m},\xi_{m})\;\mathrm{d}\xi_{2}\dots \mathrm{d}\xi_{|T|}}\bigg{)}\bigg{|}_{\xi_{1}=\zeta},\]
_where the variable \(\xi_{1}\) corresponding to the root of \(T\) is not integrated. As a consequence_
\[\tau_{\infty}(T,w,f)=\int_{[0,1]}F(w,f)(\zeta,z_{1},\dots,z_{|T|})\;\mathrm{d}\zeta.\]
Proof of Lemma 4.4.: For the tree \(T_{1}\in\mathcal{T}\) with only one node, the corresponding transform in \(\mathscr{T}\) is the seed element \(F_{0}\). It is easy to verify that
\[F_{0}(w,f)(\zeta,z_{1})=f(z_{1},\zeta),\quad\tau_{\infty}(T_{1},w,f)(z_{1})=\int_{[0,1]}f(z_{1},\zeta)\;\mathrm{d}\zeta.\]
Now consider any tree \(T\in\mathcal{T}\) with more than one node. Let \(i_{1},\dots,i_{k}\in\{2,\dots,|T|\}\) be all the nodes that are directly connected to the root \(1\), and let \(T_{1},\dots,T_{k}\) be the subtrees of \(T\) taking \(i_{1},\dots,i_{k}\) as their roots. Suppose by induction that we have found corresponding transforms \(F_{1},\dots,F_{k}\) for \(T_{1},\dots,T_{k}\); then
\[\int_{[0,1]^{|T|-1}}w_{T}(\xi_{1},\dots,\xi_{|T|}){\prod_{m=1}^{|T|}f(z_{m}, \xi_{m})\;\mathrm{d}\xi_{2},\dots,\mathrm{d}\xi_{|T|}}\] \[\quad=f(z_{1},\xi_{1})\prod_{l=1}^{k}\bigg{(}\int_{[0,1]^{|T_{l}|} }w(\xi_{1},\xi_{i_{l}})\prod_{(j,j^{\prime})\in\mathcal{E}(T_{l})}w(\xi_{j}, \xi_{j^{\prime}})\prod_{m\in T_{l}}f(z_{m},\xi_{m})\mathrm{d}\xi_{m}\bigg{)}\] \[\quad=f(z_{1},\xi_{1})\prod_{l=1}^{k}\int_{[0,1]}F_{l}(w,f)(\xi_{i _{l}},z_{i_{l}},\dots)w(\xi_{1},\mathrm{d}\xi_{i_{l}})=F(w,f)(\xi_{1},z_{1}, \dots,z_{|T|}).\]
Up to an index permutation (so that if \(i\in T_{l},i^{\prime}\in T_{l^{\prime}}\), \(l<l^{\prime}\), then \(i<i^{\prime}\)), it can be reformulated into the more straightforward form
\[F(w,f)=\bigg{[}F_{0}\otimes\bigotimes_{l=1}^{k}(F_{l})^{*}\bigg{]}(w,f),\]
showing \(F\) is obtained by making each \(F_{l}\) grow (by rule (iii)) with depth \(1\), then grafting (by rule (ii)) together all of them with another seed element \(F_{0}\) (by rule (i)).
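To make the recursive construction of Lemma 4.4 concrete, here is a minimal numerical sketch of its discrete analogue: measures are tested against \(\varphi\equiv 1\) so that only scalar masses remain, the kernel becomes a matrix, and the seed/graft/grow recursion is compared with the direct sum over index tuples. All names (`tau_bruteforce`, the toy matrix `W`, the masses `m`) are illustrative and not part of the paper.

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)

def edges_and_size(tree, root_id=0):
    """Edges (parent, child) of a tree given as nested lists, with nodes
    labelled in depth-first order starting from the root."""
    edges, count = [], 1
    for child in tree:
        child_id = root_id + count
        edges.append((root_id, child_id))
        sub_edges, sub_size = edges_and_size(child, child_id)
        edges += sub_edges
        count += sub_size
    return edges, count

def tau_bruteforce(tree, W, m):
    """(1/N) * sum over all index tuples of prod_edges W * prod_nodes m,
    i.e. the direct evaluation of the observable tested against 1."""
    edges, size = edges_and_size(tree)
    N = len(m)
    total = 0.0
    for idx in itertools.product(range(N), repeat=size):
        val = float(np.prod([m[i] for i in idx]))
        for (l, lp) in edges:
            val *= W[idx[l], idx[lp]]
        total += val
    return total / N

def F_recursive(tree, W, m):
    """Seed/graft/grow recursion of Lemma 4.4 on the grid:
    F_T(i) = m_i * prod over children of sum_j W[i, j] * F_child(j)."""
    val = m.copy()                                  # seed element F_0
    for child in tree:
        val = val * (W @ F_recursive(child, W, m))  # grow the child, then graft
    return val

N = 6
W = rng.random((N, N)) / N      # toy weights w_{i,j;N}
m = rng.random(N)               # toy masses of f_N^i
tree = [[], [[]]]               # root with a leaf and a path of length two

print(tau_bruteforce(tree, W, m))       # direct sum over index tuples
print(F_recursive(tree, W, m).mean())   # integrate the root variable; same value
```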
### Compactness of the limiting observables
We now turn to the proof of Proposition 1.7.
Proof of Proposition 1.7.: To prove (1.15), let us define \(S^{m}(N):=\{1,\ldots,N\}^{m}\) and
\[S^{m}_{\mathrm{diag}}(N):=\big{\{}(i_{1},\ldots,i_{m})\in\{1,\ldots,N\}^{m}: \exists j\neq k\text{ s.t. }i_{j}=i_{k}\big{\}}.\]
Recall that
\[\tilde{w}_{N}(\xi,\zeta) =\sum_{i,j=1}^{N}Nw_{i,j;N}\mathbbm{1}_{[\frac{i-1}{N},\frac{i}{N})}(\xi)\mathbbm{1}_{[\frac{j-1}{N},\frac{j}{N})}(\zeta),\] \[\tilde{f}_{N}(x,\xi) =\sum_{i=1}^{N}f_{N}^{i}(x)\mathbbm{1}_{[\frac{i-1}{N},\frac{i}{N})}(\xi).\]
Since we have independence, it is straightforward that
\[\tau_{\infty}(T,\tilde{w}_{N},\tilde{f}_{N})(\mathrm{d}z)=\int_{[0,1]^{|T|}} \prod_{(l,l^{\prime})\in\mathcal{E}(T)}\tilde{w}_{N}(\xi_{l},\xi_{l^{ \prime}})\prod_{m=1}^{|T|}\tilde{f}_{N}(t,\mathrm{d}z_{m},\xi_{m})\;\mathrm{d} \xi_{1},\ldots,\mathrm{d}\xi_{|T|}\] \[=\frac{1}{N}\sum_{(i_{1},\ldots,i_{|T|})\in S^{|T|}(N)}\prod_{(l, l^{\prime})\in\mathcal{E}(T)}w_{i_{l},i_{l^{\prime}};N}\prod_{m=1}^{|T|}f_{N}^{i_{ m}}(\mathrm{d}z_{m}).\]
On the other hand, again from independence,
\[\tau_{N}(T,w_{N},f_{N})(\mathrm{d}z)=\frac{1}{N}\sum_{(i_{1},\ldots,i_{|T|}) \in S^{|T|}(N)\setminus S^{|T|}_{\mathrm{diag}}(N)}\prod_{(l,l^{\prime})\in \mathcal{E}(T)}w_{i_{l},i_{l^{\prime}};N}\prod_{m=1}^{|T|}f_{N}^{i_{m}}( \mathrm{d}z_{m}),\]
where the terms involving repeated indices are excluded from the summation, contrary to the case of \(\tau_{\infty}\).
Therefore the difference is controlled by
\[\tau_{\infty}(T,\tilde{w}_{N},\tilde{f}_{N})(\mathrm{d}z)-\tau_{ N}(T,w_{N},f_{N})(\mathrm{d}z)\] \[=\frac{1}{N}\sum_{(i_{1},\ldots,i_{|T|})\in S^{|T|}_{\mathrm{diag} }(N)}\prod_{(l,l^{\prime})\in\mathcal{E}(T)}w_{i_{l},i_{l^{\prime}};N}\prod_{ m=1}^{|T|}f_{N}^{i_{m}}(\mathrm{d}z_{m}),\]
whose (weighted) total variation norm is bounded by
\[\int_{\mathbb{R}^{|T|}}\exp\big{(}a\sum_{m=1}^{|T|}|z_{m}|\big{)} |\tau_{\infty}(T,\tilde{w}_{N},\tilde{f}_{N})(\mathrm{d}z)-\tau_{N}(T,w_{N},f _{N})(\mathrm{d}z)|\] \[=\int_{\mathbb{R}^{|T|}}\exp\big{(}a\sum_{m=1}^{|T|}|z_{m}|\big{)} \bigg{|}\frac{1}{N}\sum_{(i_{1},\ldots,i_{|T|})\in S^{|T|}_{\mathrm{diag}}(N)} \prod_{(l,l^{\prime})\in\mathcal{E}(T)}w_{i_{l},i_{l^{\prime}};N}\prod_{m=1}^{ |T|}f_{N}^{i_{m}}(\mathrm{d}z_{m})\bigg{|}\] \[\leq\frac{1}{N}\sum_{(i_{1},\ldots,i_{|T|})\in S^{|T|}_{\mathrm{ diag}}(N)}\prod_{(l,l^{\prime})\in\mathcal{E}(T)}|w_{i_{l},i_{l^{\prime}};N}|\int_{ \mathbb{R}^{|T|}}\prod_{m=1}^{|T|}\exp(a|z_{m}|)f_{N}^{i_{m}}(\mathrm{d}z_{m}).\]
When \(|T|=1\), this term is zero as \(S^{|T|}_{\mathrm{diag}}(N)=\varnothing\), while for \(|T|\geq 2\) we use the following lemma.
**Lemma 4.5**.: _The following bound holds_
\[\frac{1}{N}\sum_{(i_{1},\ldots,i_{|T|})\in S^{|T|}_{\mathrm{diag}}(N )}\prod_{(l,l^{\prime})\in\mathcal{E}(T)}|w_{i_{l},i_{l^{\prime}};N}|\] \[\leq \max_{1\leq i,j\leq N}|w_{i,j;N}|\max\Big{(}\max_{i}\sum_{j}|w_{i, j;N}|,\max_{j}\sum_{i}|w_{i,j;N}|\Big{)}^{|T|-2}|T|^{2}.\]
Once we prove Lemma 4.5, we immediately obtain (1.15),
\[\int_{\mathbb{R}|T|}\exp\big{(}a\sum_{m=1}^{|T|}|z_{m}|\big{)}| \tau_{\infty}(T,\tilde{w}_{N},\tilde{f}_{N})(\mathrm{d}z)-\tau_{N}(T,w_{N},f_{ N})(\mathrm{d}z)|\] \[\leq \max_{1\leq i,j\leq N}|w_{i,j;N}|\,\max\Big{(}\max_{i}\sum_{j}|w_ {i,j;N}|,\,\max_{j}\sum_{i}|w_{i,j;N}|\Big{)}^{|T|-2}\,|T|^{2}\,M_{a}^{|T|}.\]
Proof of Lemma 4.5.: Let us consider
\[\sum_{(i_{1},\ldots,i_{|T|})\in S^{|T|}(N)}\mathbbm{1}_{\{i_{m}=i_{m^{\prime}}=i\}}\prod_{(l,l^{\prime})\in\mathcal{E}(T)}|w_{i_{l},i_{l^{\prime}};N}|\]
for any \(1\leq m,m^{\prime}\leq|T|\) and \(1\leq i\leq N\). We introduce the path \(P\) which is the set of indices \(n\) on the unique path connecting \(m\) and \(m^{\prime}\). We can immediately remove from the sum the indices not in \(P\) as before,
\[\sum_{(i_{1},\ldots,i_{|T|})\in S^{|T|}(N)}\mathbbm{1}_{\{i_{m}= i_{m^{\prime}}=i\}}\prod_{(l,l^{\prime})\in\mathcal{E}(T)}|w_{i_{l},i_{l^{ \prime}};N}|\] \[\leq\max\Big{(}\max_{i}\sum_{j}|w_{i,j;N}|,\max_{j}\sum_{i}|w_{i, j;N}|\Big{)}^{|T|-|P|}\] \[\sum_{(i_{n_{1}},\ldots,i_{|P|})\in S^{|P|}(N)}\mathbbm{1}_{\{i_{ m}=i_{m^{\prime}}=i\}}\prod_{(l,l^{\prime})\in\mathcal{E}(P)}|w_{i_{l},i_{l^{ \prime}};N}|,\]
where we denote \(P=\{n_{1},\ldots,n_{|P|}\}\) with \(n_{1}=m\) and \(n_{|P|}=m^{\prime}\).
The path \(P\) connecting \(m\) and \(m^{\prime}\) naturally goes up in the tree first (to reach the parent vertex that is shared by \(m\) and \(m^{\prime}\)) and then down. Denote by \(k\) the number of indices for which the path goes up (with possibly \(k=1\) if \(m\) is a parent of \(m^{\prime}\)) and write
\[\sum_{(i_{n_{1}},\ldots,i_{n_{|P|}})\in S^{|P|}(N)}\mathbbm{1}_{ \{i_{m}=i_{m^{\prime}}=i\}}\prod_{(l,l^{\prime})\in\mathcal{E}(P)}|w_{i_{l},i _{l^{\prime}};N}|\] \[=\sum_{1\leq j_{1},\ldots,j_{|P|}\leq N}\mathbbm{1}_{\{j_{1}=j_{ |P|}=i\}}\prod_{n=1}^{k-1}|w_{j_{n+1},j_{n};N}|\,\prod_{n=k}^{|P|-1}|w_{j_{n},j _{n+1};N}|\] \[=\max_{1\leq i,j\leq N}|w_{i,j;N}|\max\Big{(}\max_{i}\sum_{j}|w_{i,j;N}|,\max_{j}\sum_{i}|w_{i,j;N}|\Big{)}^{|P|-2}.\]
Therefore
\[\sum_{(i_{1},\ldots,i_{|T|})\in S^{|T|}(N)}\mathbbm{1}_{\{i_{m}=i_{m^{\prime}}=i\}}\prod_{(l,l^{\prime})\in\mathcal{E}(T)}|w_{i_{l},i_{l^{\prime}};N}|\] \[\leq\max_{1\leq i,j\leq N}|w_{i,j;N}|\max\Big{(}\max_{i}\sum_{j}|w_{i,j;N}|,\max_{j}\sum_{i}|w_{i,j;N}|\Big{)}^{|T|-2}.\]
As a consequence,
\[\frac{1}{N}\sum_{(i_{1},\ldots,i_{|T|})\in S^{|T|}_{\text{diag}}(N)}\prod_{(l,l^{\prime})\in\mathcal{E}(T)}|w_{i_{l},i_{l^{\prime}};N}|\] \[\leq\frac{1}{N}\sum_{i=1}^{N}\sum_{1\leq m,m^{\prime}\leq|T|}\sum_{(i_{1},\ldots,i_{|T|})\in S^{|T|}(N)}\mathds{1}_{\{i_{m}=i_{m^{\prime}}=i\}}\prod_{(l,l^{\prime})\in\mathcal{E}(T)}|w_{i_{l},i_{l^{\prime}};N}|\] \[\leq\max_{1\leq i,j\leq N}|w_{i,j;N}|\max\Big{(}\max_{i}\sum_{j}|w_{i,j;N}|,\max_{j}\sum_{i}|w_{i,j;N}|\Big{)}^{|T|-2}|T|^{2},\]
which concludes the proof.
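As a numerical illustration of the diagonal correction just estimated, the sketch below (with purely illustrative names and toy data) compares the full sum over index tuples, corresponding to \(\tau_{\infty}(T,\tilde{w}_{N},\tilde{f}_{N})\) tested against \(1\), with the sum over pairwise-distinct indices, corresponding to \(\tau_{N}(T,w_{N},f_{N})\). When \(\max_{i,j}|w_{i,j;N}|\) is of order \(1/N\), the difference is small, in line with Lemma 4.5.

```python
import itertools

import numpy as np

rng = np.random.default_rng(1)

def tree_edges(tree, root=0):
    """Edges (parent, child) of a nested-list tree, DFS node labels."""
    edges, count = [], 1
    for child in tree:
        cid = root + count
        edges.append((root, cid))
        sub, size = tree_edges(child, cid)
        edges += sub
        count += size
    return edges, count

def weighted_sum(tree, W, m, skip_repeats):
    """(1/N) sum over index tuples of prod_edges W * prod_nodes m; with
    skip_repeats=True the tuples with a repeated index are excluded."""
    edges, size = tree_edges(tree)
    N = len(m)
    total = 0.0
    for idx in itertools.product(range(N), repeat=size):
        if skip_repeats and len(set(idx)) < size:
            continue
        val = float(np.prod([m[i] for i in idx]))
        for (l, lp) in edges:
            val *= W[idx[l], idx[lp]]
        total += val
    return total / N

N = 8
W = rng.random((N, N)) / N      # max_{i,j} |w_{i,j;N}| of order 1/N
m = np.ones(N)                  # unit masses for f_N^i
tree = [[], []]                 # root with two leaves, |T| = 3

tau_inf = weighted_sum(tree, W, m, skip_repeats=False)
tau_N = weighted_sum(tree, W, m, skip_repeats=True)
print(tau_inf, tau_N, abs(tau_inf - tau_N))   # small diagonal contribution
```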
It remains to prove (1.16), for which we first invoke Corollary 4.9 in [51].
**Lemma 4.6** (Corollary 4.9 in [51]).: _Consider any sequence \(g_{n}\) in \(L^{\infty}([0,1])\). Then, there exists \(\Phi:[0,1]\to[0,1]\), a.e. injective, measure preserving, such that the following estimate is verified_
\[\int_{[0,1]}|(g_{n}\circ\Phi)(\xi)-(g_{n}\circ\Phi)(\xi+h)|\;\mathrm{d}\xi\leq 2 ^{n}\|g_{n}\|_{L^{\infty}}2^{-C\sqrt{\log\frac{1}{|h|}}}\]
_for any \(n\in\mathbb{N}\), \(0<|h|<1\) and some universal constant \(C\)._
This lemma tells us that, at the cost of a measure-preserving re-arrangement, a minimum regularity of \(L^{\infty}\) functions on \([0,1]\) can be obtained. In order to apply Lemma 4.6, we need to first check the stability of the algebra \(F(w,f)\), \(F\in\mathscr{T}\) under measure preserving re-arrangements.
**Lemma 4.7**.: _Consider any \(w\in\mathcal{W}\) and \(f\in L^{\infty}([0,1];\mathcal{M}_{+}(\mathbb{R}))\) and any a.e. injective, measure-preserving \(\Phi:[0,1]\to[0,1]\). Define the push forward kernel and measure_
\[w_{\#}(\xi,\mathrm{d}\zeta):=\Phi_{\#}^{-1}w(\Phi(\xi),\cdot)(\mathrm{d}\zeta),\quad f_{\#}(\xi,\mathrm{d}z):=f(\Phi(\xi),\mathrm{d}z),\]
_where \(\Phi^{-1}\) is any a.e. defined left inverse of \(\Phi\). Then the algebra \(F(w,f)\), \(F\in\mathscr{T}\) is stable under \(\Phi\) in the sense that_
\[F(w_{\#},f_{\#})(\xi,\mathrm{d}z_{1},\ldots,\mathrm{d}z_{k})=F(w,f)(\Phi(\xi),\mathrm{d}z_{1},\ldots,\mathrm{d}z_{k})\]
_for any transform \(F\in\mathscr{T}\) and for a.e. \(\xi\in[0,1]\). Moreover, \(\tau_{\infty}(T,w_{\#},f_{\#})=\tau_{\infty}(T,w,f)\) for any \(T\in\mathcal{T}\)._
Proof.: The proof is again done by an induction argument based on the recursive rules defining \(F(w,f)\), \(F\in\mathscr{T}\).
1. For the seed element \(F_{0}(w,f)\) the property is obvious.
2. Consider two elements \(F_{1}(w,f)\) and \(F_{2}(w,f)\) stable under \(\Phi\). Then the grafted element satisfies \[(F_{1}\otimes F_{2})(w_{\#},f_{\#})(\xi,\mathrm{d}z_{1},\ldots, \mathrm{d}z_{k_{1}+k_{2}})\] \[=F_{1}(w_{\#},f_{\#})(\xi,\mathrm{d}z_{1},\ldots,\mathrm{d}z_{k_ {1}})F_{2}(w_{\#},f_{\#})(\xi,\mathrm{d}z_{k_{1}+1},\ldots,\mathrm{d}z_{k_{1}+ k_{2}})\] \[=F_{1}(w,f)(\Phi(\xi),\mathrm{d}z_{1},\ldots,\mathrm{d}z_{k_{1}}) F_{2}(w,f)(\Phi(\xi),\mathrm{d}z_{k_{1}+1},\ldots,\mathrm{d}z_{k_{1}+k_{2}})\] \[=F(w,f)(\Phi(\xi),\mathrm{d}z_{1},\ldots,\mathrm{d}z_{k_{1}+k_{2} }),\] which is the stated stability under \(\Phi\).
3. Consider an element \(F(w,f)\) stable under \(\Phi\). Then the grow element satisfies the stability property \[F^{*}(w_{\#},f_{\#})(\xi,\mathrm{d}z_{1},\ldots,\mathrm{d}z_{k}) =\int_{\zeta\in[0,1]}F(w_{\#},f_{\#})(\zeta,\mathrm{d}z_{1},\ldots,\mathrm{d}z_{k})w_{\#}(\xi,\mathrm{d}\zeta)\] \[=\int_{\zeta\in[0,1]}F(w,f)(\Phi(\zeta),\mathrm{d}z_{1},\ldots, \mathrm{d}z_{k})w_{\#}(\xi,\mathrm{d}\zeta)\] \[=\int_{\zeta\in[0,1]}F(w,f)(\zeta,\mathrm{d}z_{1},\ldots,\mathrm{d }z_{k})w(\Phi(\xi),\mathrm{d}\zeta)\] \[=F^{*}(w,f)(\Phi(\xi),\mathrm{d}z_{1},\ldots,\mathrm{d}z_{k}).\]
Finally, for any \(T\in\mathcal{T}\), take \(F\in\mathscr{T}\) as claimed in Lemma 4.4. Then
\[\tau_{\infty}(T,w_{\#},f_{\#})(\mathrm{d}z_{1},\ldots,\mathrm{d}z_{ |T|}) =\,\int_{\xi\in[0,1]}F(w_{\#},f_{\#})(\xi,\mathrm{d}z_{1},\ldots, \mathrm{d}z_{|T|})\;\mathrm{d}\xi\] \[=\,\int_{\xi\in[0,1]}F(w,f)(\xi,\mathrm{d}z_{1},\ldots,\mathrm{d }z_{|T|})\;\mathrm{d}\xi\] \[=\tau_{\infty}(T,w,f)(\mathrm{d}z_{1},\ldots,\mathrm{d}z_{|T|}),\]
which finishes the proof.
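In the discrete setting the measure-preserving map \(\Phi\) simply becomes a permutation of the blocks, and the invariance \(\tau_{\infty}(T,w_{\#},f_{\#})=\tau_{\infty}(T,w,f)\) can be checked directly. The short sketch below does so for the three-node path tree; all names and data are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 20
W = rng.random((N, N)) / N      # toy kernel values
m = rng.random(N)               # toy masses

def tau_path3(W, m):
    """Observable of the path tree on three nodes, tested against 1, evaluated
    by the grow/graft recursion and then averaging over the root block."""
    return float(np.mean(m * (W @ (m * (W @ m)))))

perm = rng.permutation(N)                 # discrete measure-preserving map
W_push = W[np.ix_(perm, perm)]            # push-forward kernel
m_push = m[perm]                          # push-forward measure
print(tau_path3(W, m), tau_path3(W_push, m_push))   # equal up to rounding
```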
The next step is to derive the compactness of the algebra \(F(w,f)\), \(F\in\mathscr{T}\), and to identify the limit, which we summarize here.
**Lemma 4.8**.: _Under the assumptions of Proposition 1.7, there exists measure-preserving maps \(\Phi_{N}:[0,1]\to[0,1]\) for the sequence of \(N\to\infty\) and \(w\in\mathcal{W}\), \(f\in L^{\infty}([0,1];\mathcal{M}_{+}(\mathbb{R}))\), such that convergence in the following strong-weak-* sense holds: For all \(F\in\mathscr{T}\) and all \(\varphi\in C_{c}(\mathbb{R}^{k})\), where \(k\) is the rank of \(F\),_
\[\begin{split}\lim_{N\to\infty}&\,\int_{z\in \mathbb{R}^{k}}\varphi(z_{1},\ldots,z_{k})F(\tilde{w}_{N},\tilde{f}_{N})(\Phi_ {N}(\xi),\mathrm{d}z_{1},\ldots,\mathrm{d}z_{k})\\ &=\,\int_{z\in\mathbb{R}^{k}}\varphi(z_{1},\ldots,z_{k})F(w,f)( \xi,\mathrm{d}z_{1},\ldots,\mathrm{d}z_{k})\end{split} \tag{4.1}\]
_in any \(L^{p}_{\xi}([0,1])\), \(1\leq p<\infty\)._
Proof.: Since the algebra is countable, we may index the elements as \(\mathscr{T}=\{F_{m}:m\in\mathbb{N}\}\). For each \(m\in\mathbb{N}\), let \(k_{m}\) be the rank of \(F_{m}\) and let \(\{\varphi_{m,l}\}_{l\in\mathbb{N}}\) be any countable dense set of \(C_{c}(\mathbb{R}^{k_{m}})\). Define the functions
\[g^{N}_{m,l}(\xi):=\int_{z\in\mathbb{R}^{k}}\varphi_{m,l}(z_{1},\ldots,z_{k_{m} })F_{m}(\tilde{w}_{N},\tilde{f}_{N})(\xi,\mathrm{d}z_{1},\ldots,\mathrm{d}z_{ k_{m}}),\quad\forall m,l,N.\]
It is straightforward that \(\sup_{N}\|g^{N}_{m,l}\|_{L^{\infty}([0,1])}<\infty\) from the bounds on \(F_{m}(\tilde{w}_{N},\tilde{f}_{N})\) in the space \(L^{\infty}([0,1];\mathcal{M}(\mathbb{R}^{k_{m}}))\) that follow from Lemma 1.3 and the identification provided by Lemma 4.4.
Thus, by Lemma 4.6, there exists \(\Phi_{N}:[0,1]\to[0,1]\) for the sequence \(N\to\infty\), so that the re-arrangements
\[\tilde{g}^{N}_{m,l}(\xi)=(g^{N}_{m,l}\circ\Phi_{N})(\xi)=\int_{z\in\mathbb{R} ^{k}}\varphi_{m,l}(z_{1},\ldots,z_{k_{m}})F_{m}(\tilde{w}_{N},\tilde{f}_{N})( \Phi_{N}(\xi),\mathrm{d}z_{1},\ldots,\mathrm{d}z_{k_{m}})\]
fulfill the estimates
\[\int_{[0,1]}|\tilde{g}^{N}_{m,l}(\xi)-\tilde{g}^{N}_{m,l}(\xi+h)|\;\mathrm{d} \xi\leq C_{m,l}2^{-C\sqrt{\log\frac{1}{|h|}}},\quad\forall 0<|h|<1\]
for some universal constant \(C>0\) and \(C_{m,l}>0\) depending on the two indexes only.
By the Frechet-Kolmogorov theorem and using a diagonal extraction there exists some subsequence of \(N\) (which we still denote \(N\) for simplicity) and for all \(m,l\in\mathbb{N}\), there exists \(\tilde{g}_{m,l}\in L^{\infty}([0,1])\) such that as \(N\to\infty\),
\[\tilde{g}^{N}_{m,l}\to\tilde{g}_{m,l}\;\text{ in any }L^{p}([0,1]),\;1\leq p<\infty.\]
Let us define, for any \(N\) in the subsequence, any \(F\in\mathscr{T}\) and \(\varphi\in C_{c}(\mathbb{R}^{k})\), where \(k\) is the rank of \(F\),
\[\tilde{g}^{N}_{F,\varphi} :=\,\int_{z\in\mathbb{R}^{k}}\varphi(z_{1},\ldots,z_{k})F(\tilde{ w}_{N},\tilde{f}_{N})(\Phi_{N}(\xi),\mathrm{d}z_{1},\ldots,\mathrm{d}z_{k})\] \[=\,\int_{z\in\mathbb{R}^{k}}\varphi(z_{1},\ldots,z_{k})F(\tilde{ w}_{N;\#},\tilde{f}_{N;\#})(\xi,\mathrm{d}z_{1},\ldots,\mathrm{d}z_{k}),\]
where we again apply the following notation for the re-arrangement
\[\tilde{w}_{N;\#}(\xi,\mathrm{d}\zeta):=\Phi_{\#}^{-1}\tilde{w}_{N}(\Phi(\xi), \cdot)(\mathrm{d}\zeta),\quad\tilde{f}_{N;\#}(\xi,\mathrm{d}z):=\tilde{f}_{N} (\Phi(\xi),\mathrm{d}z).\]
By a density argument of \(C_{c}(\mathbb{R}^{k})\), we conclude that for any \(F\in\mathscr{T}\) and \(\varphi\in C_{c}(\mathbb{R}^{k})\), there exists \(\tilde{g}_{F,\varphi}\in L^{\infty}([0,1])\) such that as \(N\to\infty\),
\[\tilde{g}_{F,\varphi}^{N}\to\tilde{g}_{F,\varphi}\ \ \text{in any}\ L^{p}([0,1]),\ 1 \leq p<\infty. \tag{4.2}\]
It remains to identify \(w\in\mathcal{W}\) and \(f\in L^{\infty}([0,1];\mathcal{M}_{+}(\mathbb{R}))\) for the limit. Recall that we have defined the kernel space \(\mathcal{W}\) as
\[\mathcal{W}:=\{w\in\mathcal{M}([0,1]^{2}):w(\xi,\mathrm{d}\zeta)\in L^{\infty} _{\xi}([0,1],\mathcal{M}_{\zeta}[0,1]),\ w(\mathrm{d}\xi,\zeta)\in L^{\infty} _{\zeta}([0,1],\mathcal{M}_{\xi}[0,1])\},\]
where \(L^{\infty}_{\xi}([0,1],\mathcal{M}_{\zeta}[0,1])\) denotes the topological dual of \(L^{1}_{\xi}([0,1],C_{\zeta}[0,1])\).
Hence there exists a subsequence (which we still index by \(N\)) and \(w\in\mathcal{W}\), \(f\in L^{\infty}([0,1];\mathcal{M}_{+}(\mathbb{R}))\), such that
\[\tilde{w}_{N;\#}\xrightarrow{*}w,\quad\tilde{f}_{N;\#}\xrightarrow{*}f.\]
By passing to the limit we can immediately obtain the exponential moment bound
\[\operatorname*{ess\,sup}_{\xi\in[0,1]}\int_{\mathbb{R}}\exp\big{(}a|x|\big{)} f(\xi,\mathrm{d}x)\leq M_{a}.\]
Let us define, for any \(F\in\mathscr{T}\) and \(\varphi\in C_{c}(\mathbb{R}^{k})\),
\[g_{F,\varphi}(\xi):=\int_{z\in\mathbb{R}^{k}}\varphi(z_{1},\dots,z_{k})F(w,f) (\xi,\mathrm{d}z_{1},\dots,\mathrm{d}z_{k}).\]
It is straightforward that \(g_{F,\varphi}\in L^{\infty}([0,1])\) and (4.1) can be simply restated as
\[\tilde{g}_{F,\varphi}=g_{F,\varphi}. \tag{4.3}\]
We apply another induction argument based on the recursive rules.
1. For the seed element \(F_{0}(w,f)=f\), it is straightforward that for any \(\psi\in C([0,1])\), \(\phi\in C_{c}(\mathbb{R})\), \[\int_{[0,1]}\psi(\xi)g_{F_{0},\varphi}(\xi)\ \mathrm{d}\xi = \int_{[0,1]}\psi(\xi)\int_{z\in\mathbb{R}}\varphi(z)f(\xi, \mathrm{d}z)\ \mathrm{d}\xi\] \[\stackrel{{*}}{{=}} \lim_{N\to\infty}\int_{[0,1]}\psi(\xi)\int_{z\in\mathbb{R}} \varphi(z)\tilde{f}_{N;\#}(\xi,\mathrm{d}z)\ \mathrm{d}\xi\] \[= \int_{[0,1]}\psi(\xi)\tilde{g}_{F_{0},\varphi}(\xi)\ \mathrm{d}\xi,\] where the equality \(\stackrel{{*}}{{=}}\) is due to the weak-* convergence \(\tilde{f}_{N;\#}\xrightarrow{*}f\). Hence, the identity (4.3) holds for \(F_{0}\).
2. Consider two elements \(F_{1},F_{2}\in\mathscr{T}\) satisfying (4.3). Then for any \(\phi_{1}\in C_{c}(\mathbb{R}^{k_{1}})\), \(\phi_{2}\in C_{c}(\mathbb{R}^{k_{2}})\), \(g_{(F_{1}\otimes F_{2}),(\varphi_{1}\otimes\varphi_{2})}(\xi)\) \[=\int_{z\in\mathbb{R}^{k_{1}+k_{2}}}(\varphi_{1}\otimes\varphi_{2})(z_{1},\dots, z_{k_{1}+k_{2}})(F_{1}\otimes F_{2})(w,f)(\xi,\mathrm{d}z_{1},\dots, \mathrm{d}z_{k_{1}+k_{2}})\] \[=g_{F_{1},\varphi_{1}}(\xi)g_{F_{2},\varphi_{2}}(\xi),\] hence \(g_{(F_{1}\otimes F_{2}),(\varphi_{1}\otimes\varphi_{2})}=g_{F_{1},\varphi_{1 }}g_{F_{2},\varphi_{2}}\). By a similar argument, \(\tilde{g}_{(F_{1}\otimes F_{2}),(\varphi_{1}\otimes\varphi_{2})}^{N}=\tilde{g }_{F_{1},\varphi_{1}}^{N}\tilde{g}_{F_{2},\varphi_{2}}^{N}\) for all \(N\). Passing to the limit (in any \(L^{p}\), \(1\leq p<\infty\)) as \(N\to\infty\) we obtain \(\tilde{g}_{(F_{1}\otimes F_{2}),(\varphi_{1}\otimes\varphi_{2})}=\tilde{g}_{F_{ 1},\varphi_{1}}\tilde{g}_{F_{2},\varphi_{2}}\). Therefore, one can conclude \[g_{(F_{1}\otimes F_{2}),(\varphi_{1}\otimes\varphi_{2})}=\tilde{g}_{(F_{1} \otimes F_{2}),(\varphi_{1}\otimes\varphi_{2})},\] which is (4.3) for \(F=(F_{1}\otimes F_{2})\) when \(\varphi\in C_{c}(\mathbb{R}^{k_{1}+k_{2}})\) is in the tensorized form \(\varphi=\varphi_{1}\otimes\varphi_{2}\). Finally, any \(\varphi\in C_{c}(\mathbb{R}^{k_{1}+k_{2}})\) can be approximated by a sum of tensorized functions so that we derive (4.3) for \(F=(F_{1}\otimes F_{2})\) with any arbitrary \(\varphi\in C_{c}(\mathbb{R}^{k_{1}+k_{2}})\).
* Consider an element \(F\in\mathscr{T}\) satisfying (4.3). Then for any \(\psi\in C([0,1])\), \(\phi\in C_{c}(\mathbb{R}^{k})\), \[\int_{[0,1]}\psi(\xi)g_{F^{*},\varphi}(\xi)\;\mathrm{d}\xi\] \[\quad=\int_{[0,1]}\psi(\xi)\int_{z\in\mathbb{R}^{k}}\varphi(z_{1}, \ldots,z_{k})F^{*}(w,f)(\xi,\mathrm{d}z_{1},\ldots,\mathrm{d}z_{k})\;\mathrm{d}\xi\] \[\quad=\int_{\xi\in[0,1]}\psi(\xi)\int_{z\in\mathbb{R}^{k}}\varphi( z_{1},\ldots,z_{k})\int_{\zeta\in[0,1]}F(w,f)(\zeta,\mathrm{d}z_{1},\ldots, \mathrm{d}z_{k})w(\mathrm{d}\xi,\zeta)\;\mathrm{d}\zeta\] \[\quad=\int_{\xi\in[0,1]}\psi(\xi)g_{F,\varphi}(\zeta)w(\mathrm{d} \xi,\zeta)\;\mathrm{d}\zeta.\] By a similar argument, for all \(N\). \[\int_{[0,1]}\psi(\xi)\tilde{g}^{N}_{F^{*},\varphi}(\xi)\;\mathrm{d}\xi=\int_{ \xi\in[0,1]}\psi(\xi)\tilde{g}^{N}_{F,\varphi}(\zeta)\tilde{w}_{N;\#}(\mathrm{ d}\xi,\zeta)\;\mathrm{d}\zeta.\] Next, by the convergence \[\psi(\xi)\tilde{g}^{N}_{F,\varphi}(\zeta) \to\psi(\xi)g_{F,\varphi}(\zeta)\;\;\text{in}\;\;L^{1}_{\zeta}([0,1],C_{ \xi}[0,1]),\] \[\tilde{w}_{N;\#}(\mathrm{d}\xi,\zeta)\;\mathrm{d}\zeta\overset{*}{ \rightharpoonup}w(\mathrm{d}\xi,\zeta)\;\mathrm{d}\zeta\;\;\text{in}\;\;L^{ \infty}_{\zeta}([0,1],\mathcal{M}_{\xi}[0,1]),\] we obtain that \[\lim_{N\to\infty}\int_{\xi\in[0,1]}\psi(\xi)\tilde{g}^{N}_{F,\varphi}(\zeta) \tilde{w}_{N;\#}(\mathrm{d}\xi,\zeta)\;\mathrm{d}\zeta=\int_{\xi\in[0,1]}\psi( \xi)g_{F,\varphi}(\zeta)w(\mathrm{d}\xi,\zeta)\;\mathrm{d}\zeta.\] Hence \(g_{F^{*},\varphi}=\tilde{g}_{F^{*},\varphi}\), which is (4.3) for \(F^{*}\).
We may now conclude the proof of Proposition 1.7. For any \(T\in\mathcal{T}\), there exists \(F\in\mathscr{T}\) such that
\[\tau_{\infty}(T,\tilde{w}_{N;\#},\tilde{f}_{N;\#}) =\int_{[0,1]}F(\tilde{w}_{N},\tilde{f}_{N})(\Phi_{N}(\xi), \mathrm{d}z_{1},\ldots,\mathrm{d}z_{|T|})\;\mathrm{d}\xi,\] \[\tau_{\infty}(T,w,f) =\int_{[0,1]}F(w,f)(\xi,\mathrm{d}z_{1},\ldots,\mathrm{d}z_{|T|}) \;\mathrm{d}\xi.\]
For any \(\varphi\in C_{c}(\mathbb{R}^{|T|})\), by Lemma 4.8,
\[\lim_{N\to\infty}\int_{z\in\mathbb{R}^{|T|}}\varphi(z_{1},\ldots,z_{|T|})\,\tau_{\infty}(T,\tilde{w}_{N;\#},\tilde{f}_{N;\#})(\mathrm{d}z_{1},\ldots,\mathrm{d}z_{|T|})\] \[= \lim_{N\to\infty}\int_{[0,1]}\int_{z\in\mathbb{R}^{|T|}}\varphi(z_{1},\ldots,z_{|T|})F(\tilde{w}_{N},\tilde{f}_{N})(\Phi_{N}(\xi),\mathrm{d}z_{1},\ldots,\mathrm{d}z_{|T|})\;\mathrm{d}\xi\] \[= \int_{[0,1]}\int_{z\in\mathbb{R}^{|T|}}\varphi(z_{1},\ldots,z_{|T|})F(w,f)(\xi,\mathrm{d}z_{1},\ldots,\mathrm{d}z_{|T|})\;\mathrm{d}\xi\] \[= \int_{z\in\mathbb{R}^{|T|}}\varphi(z_{1},\ldots,z_{|T|})\,\tau_{\infty}(T,w,f)(\mathrm{d}z_{1},\ldots,\mathrm{d}z_{|T|}).\]
Since \(\varphi\in C_{c}(\mathbb{R}^{|T|})\) is arbitrary we conclude (1.16), restated here:
\[\tau_{\infty}(T,\tilde{w}_{N},\tilde{f}_{N})\overset{*}{\rightharpoonup}\tau_ {\infty}(T,w,f)\in\mathcal{M}(\mathbb{R}^{|T|}),\quad\forall T\in\mathcal{T}.\]
## 5. Proofs of the quantitative results
### The hierarchy of equations
This subsection provides the proofs of Propositions 2.3, 2.4 and 2.5, which derive the hierarchy of equations from the Liouville equation (2.1) and the Vlasov equation (1.3)-(1.4).
We begin with the proof of Proposition 2.3, showing that the observables corresponding to the laws of \((X^{1;N}_{0},\ldots,X^{N;N}_{0})\) solving (1.1) satisfy the extended BBGKY hierarchy (2.2)-(2.3).
Proof of Proposition 2.3.: Since the coefficients are bounded Lipschitz, the well-posedness of the SDE system (1.1) and the Liouville-type equation (2.1) are classical results. For simplicity of the presentation, we avoid using weak formulations but only present a formal calculation.
Consider any distinct indexes \(i_{1},\ldots,i_{k}\in\{1,\ldots,N\}\). It is easy to verify the following identity deriving the marginal laws from the full joint law,
\[f_{N}^{i_{1},\ldots,i_{k}}(t,z_{1},\ldots,z_{k}) :=\operatorname{Law}(X_{t}^{i_{1};N},\ldots,X_{t}^{i_{k};N})\] \[=\bigg{(}\int_{\mathbb{R}^{N-k}}f(t,x_{1},\ldots,x_{N})\prod_{i\neq i_{1},\ldots,i_{k}}\mathrm{d}x_{i}\bigg{)}\bigg{|}_{\forall l=1,\ldots,k,\;x_{i_{l}}=z_{l}}.\]
By integrating the Liouville equation (2.1) along the spatial directions \(i\notin\{i_{1},\ldots,i_{k}\}\) and computing the summations over \(i\in\{i_{1},\ldots,i_{k}\}\) and \(i\notin\{i_{1},\ldots,i_{k}\}\) separately, we obtain the equations for the marginals,
\[\partial_{t}f_{N}^{i_{1},\ldots,i_{k}}(t,z_{1},\ldots,z_{k})\] \[\quad=\sum_{m=1}^{k}\bigg{\{}\bigg{[}-\partial_{z_{m}}(\mu(z_{m})f_{N}^{i_{1},\ldots,i_{k}}(t,z))+\frac{\sigma^{2}}{2}\partial_{z_{m}}^{2}f_{N}^{i_{1},\ldots,i_{k}}(t,z)\] \[\quad\quad\quad-\nu(z_{m})f_{N}^{i_{1},\ldots,i_{k}}(t,z)+\delta_{0}(z_{m})\bigg{(}\int_{\mathbb{R}}\nu(u_{m})f_{N}^{i_{1},\ldots,i_{k}}(t,u-w_{N;i_{m}}^{i_{1},\ldots,i_{k}})\;\mathrm{d}u_{m}\bigg{)}\bigg{|}_{\forall n\neq m,\;u_{n}=z_{n}}\bigg{]}\bigg{\}}\] \[\quad\quad+\sum_{i\neq i_{1},\ldots,i_{k}}\int_{\mathbb{R}}\nu(z_{k+1})\bigg{(}f_{N}^{i_{1},\ldots,i_{k},i}(t,z-w_{N;i}^{i_{1},\ldots,i_{k},i})-f_{N}^{i_{1},\ldots,i_{k},i}(t,z)\bigg{)}\;\mathrm{d}z_{k+1}. \tag{5.1}\]
We can reformulate the last line as
\[\sum_{i\neq i_{1},\ldots,i_{k}}\int_{\mathbb{R}}\nu(z_{k+1})\bigg{(}f_{N}^{i_{ 1},\ldots,i_{k},i}(t,z-w_{N;i}^{i_{1},\ldots,i_{k},i})-f_{N}^{i_{1},\ldots,i_ {k},i}(t,z)\bigg{)}\;\mathrm{d}z_{k+1}\] \[\quad=\sum_{i\neq i_{1},\ldots,i_{k}}\int_{\mathbb{R}}\nu(z_{k+1 })\bigg{(}\int_{0}^{1}\sum_{m=1}^{k}-w_{i_{m},i}\partial_{z_{m}}f_{N}^{i_{1}, \ldots,i_{k},i}(t,z-rw_{N;i}^{i_{1},\ldots,i_{k},i})\;\mathrm{d}r\bigg{)}\; \mathrm{d}z_{k+1}\] \[\quad=\sum_{m=1}^{k}-\partial_{z_{m}}\bigg{[}\sum_{i\neq i_{1}, \ldots,i_{k}}w_{i_{m},i;N}\int_{\mathbb{R}}\nu(z_{k+1})\bigg{(}\int_{0}^{1}f _{N}^{i_{1},\ldots,i_{k},i}(t,z-rw_{N;i}^{i_{1},\ldots,i_{k},i})\;\mathrm{d}r \bigg{)}\;\mathrm{d}z_{k+1}\bigg{]},\]
changing it into an additional advection term \(\partial_{z_{m}}[\ldots]\) to the equation.
Introduce the simple identity
\[f_{N}^{i_{1},\ldots,i_{k}}(u-w_{N;i_{m}}^{i_{1},\ldots,i_{k}})=f_{N}^{i_{1}, \ldots,i_{k}}(u)-\big{\{}f_{N}^{i_{1},\ldots,i_{k}}(u)-f_{N}^{i_{1},\ldots,i_ {k}}(u-w_{N;i_{m}}^{i_{1},\ldots,i_{k}})\big{\}},\]
and proceed to do the same for \(f_{N}^{i_{1},\ldots,i_{k},i}(z-rw_{N;i}^{i_{1},\ldots,i_{k},i})\), so that the marginal equations (5.1) now read
\[\partial_{t}f_{N}^{i_{1},\ldots,i_{k}}(z_{1},\ldots,z_{k})\] \[\quad=\sum_{m=1}^{k}\bigg{\{}\bigg{[}-\partial_{z_{m}}(\mu(z_{m} )f_{N}^{i_{1},\ldots,i_{k}}(z))+\frac{\sigma^{2}}{2}\partial_{z_{m}}^{2}f_{N }^{i_{1},\ldots,i_{k}}(z)-\nu(z_{m})f_{N}^{i_{1},\ldots,i_{k}}(z)\] \[\quad\quad\quad+\delta_{0}(z_{m})\bigg{(}\int_{\mathbb{R}}\nu(u _{m})\Big{(}f_{N}^{i_{1},\ldots,i_{k}}(u)-\big{\{}f_{N}^{i_{1},\ldots,i_{k}}(u )-f_{N}^{i_{1},\ldots,i_{k}}(u-w_{N;i_{m}}^{i_{1},\ldots,i_{k}})\big{\}} \bigg{)}\;\mathrm{d}u_{m}\bigg{)}\bigg{|}_{\forall n\neq m,\;u_{n}=z_{n}} \bigg{]}\] \[\quad\quad\quad\quad-\partial_{z_{m}}\bigg{[}\sum_{i\neq i_{1}, \ldots,i_{k}}w_{i_{m},i;N}\int_{\mathbb{R}}\nu(z_{k+1})\bigg{(}\int_{0}^{1}f _{N}^{i_{1},\ldots,i_{k},i}(z)\] \[\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad\quad-\big{\{}f_{ N}^{i_{1},\ldots,i_{k},i}(z)-f_{N}^{i_{1},\ldots,i_{k},i}(z-rw_{N;i}^{i_{1}, \ldots,i_{k},i})\big{\}}\;\mathrm{d}r\bigg{)}\;\mathrm{d}z_{k+1}\bigg{]}\bigg{\}}, \tag{5.2}\]
where we omit variable \(t\) for simplicity.
By taking the time derivative to the definition of observables (1.2), restated here
\[\tau_{N}(T,w_{N},f_{N})(t,z):=\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|}=1}^{N}w_{N, T}(i_{1},\ldots,i_{|T|})f_{N}^{i_{1},\ldots,i_{|T|}}(t,z_{1},\ldots,z_{|T|})\]
and substituting the right hand side \(\partial_{t}f_{N}^{i_{1},\ldots,i_{|T|}}\) by the marginal equation (5.2) with \(k=|T|\), we obtain that
\[\partial_{t}\bigg{(}\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|}=1}^{N}w _{N,T}(i_{1},\ldots,i_{|T|})f_{N}^{i_{1},\ldots,i_{|T|}}(z_{1},\ldots,z_{|T|}) \bigg{)}=\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|}=1}^{N}w_{N,T}(i_{1},\ldots,i_{| T|})\sum_{m=1}^{|T|}\Bigg{\{}\] \[\bigg{[}-\partial_{z_{m}}(\mu(z_{m})f_{N}^{i_{1},\ldots,i_{|T|}}( z))+\frac{\sigma^{2}}{2}\partial_{z_{m}}^{2}f_{N}^{i_{1},\ldots,i_{|T|}}(z)- \nu(z_{m})f_{N}^{i_{1},\ldots,i_{|T|}}(z)\] \[+\delta_{0}(z_{m})\bigg{(}\int_{\mathbb{R}}\nu(u_{m})\Big{(}f_{N} ^{i_{1},\ldots,i_{|T|}}(u)-\{f_{N}^{i_{1},\ldots,i_{|T|}}(u)-f_{N}^{i_{1}, \ldots,i_{|T|}}(u-w_{N;i_{m}}^{i_{1},\ldots,i_{|T|}})\}\Big{)}\ \mathrm{d}u_{m}\bigg{)}\bigg{|}_{\forall n\neq m,\,u_{n}=z_{n}}\bigg{]}\] \[-\partial_{z_{m}}\bigg{[}\sum_{i\neq i_{1},\ldots,i_{k}}\!\!\!\!w_ {i_{m},i;N}\!\!\!\int_{\mathbb{R}}\nu(z_{|T|+1})\!\bigg{(}\int_{0}^{1}f_{N}^{i _{1},\ldots,i_{|T|},i}(z)-\{f_{N}^{i_{1},\ldots,i_{|T|},i}(z)\] \[-f_{N}^{i_{1},\ldots,i_{|T|},i}(z-rw_{N;i}^{i_{1},\ldots,i_{|T|}, i})\mathrm{d}r\bigg{)}\mathrm{d}z_{|T|+1}\bigg{]}\Bigg{\}}.\]
Noticing the identity \(w_{N,T+j}(i_{1},\ldots,i_{|T|+1})=w_{N,T}(i_{1},\ldots,i_{|T|})w_{i_{j},i_{|T|+1}}\), we see that all the marginals, except the two terms of the form \(\{f_{N}^{\prime}(\cdot)-f_{N}^{\prime}(\cdot-w)\}\), carry exactly the weights needed to be rewritten as observables; this yields (2.2) as the approximate hierarchy and (2.3) as the explicit form of the remainders.
We now turn to the proof of Proposition 2.4. It is worth noting that the main Gronwall estimate could also be written in the probabilistic language of Itô calculus. However, we prefer to keep the approach and notation consistent with the rest of the proofs.
Proof of Proposition 2.4.: To simplify the argument, we only present a formal calculation where the tensorized weight \(\eta^{\otimes|T|}\) is directly used as the test function, while, strictly speaking, the valid test functions for distributional solutions should have compact support. Given that the remaining coefficients are bounded Lipschitz and all terms in the subsequent calculation are non-negative, passing to the limit to justify the use of the unbounded weight on the dual side poses no problems.
The weighted total variation \(\||\tau_{N}|(T)\eta^{\otimes|T|}\|_{\mathcal{M}(\mathbb{R}^{|T|})}\) can be decomposed as
\[\||\tau_{N}|(T)(t,\cdot)\eta^{\otimes|T|}\|_{\mathcal{M}(\mathbb{R}^{|T|})} =\,\int_{\mathbb{R}^{|T|}}\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|}=1}^{N}\big{|}w_{N,T}(i_{1},\ldots,i_{|T|})\big{|}f_{N}^{i_{1},\ldots,i_{|T|}}(t,z)\eta^{\otimes|T|}(z)\ \mathrm{d}z\] \[=\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|}=1}^{N}\big{|}w_{N,T}(i_{1},\ldots,i_{|T|})\big{|}\int_{\mathbb{R}^{|T|}}f_{N}^{i_{1},\ldots,i_{|T|}}(t,z)\eta^{\otimes|T|}(z)\ \mathrm{d}z.\]
For any distinct indexes \(i_{1},\ldots,i_{|T|}\), we have
\[\int_{\mathbb{R}^{|T|}}f_{N}^{i_{1},\ldots,i_{|T|}}(t,z)\eta^{\otimes|T|}(z)\ \mathrm{d}z=\int_{\mathbb{R}^{N}}f_{N}(t,x)\prod_{i=1}^{|T|}\eta(x_{i_{l}})\ \mathrm{d}x.\]
The forthcoming estimate is not exclusive to our specific choice \(\eta=\eta_{\alpha}\); it holds for any weight function of the form
\[\eta(x)=\exp(h(x)),\quad\forall x\in\mathbb{R}\]
such that \(\|h^{\prime}\|_{L^{\infty}}\), \(\|h^{\prime\prime}\|_{L^{\infty}}\) are bounded and \(h(0)\leq h(x)\). Our choice of \(\eta=\eta_{\alpha}\) is clearly included by choosing \(h(x)=\sqrt{1+\alpha^{2}x^{2}}\), resulting in \(\|h^{\prime}\|_{L^{\infty}}\leq\alpha\) and \(\|h^{\prime\prime}\|_{L^{\infty}}\leq\alpha^{2}\). The
following inequalities are immediate consequences of the chain rule and the fundamental theorem of calculus.
**Lemma 5.1**.: _For any weight function of form \(\eta(x)=\exp(h(x))\) such that \(\|h^{\prime}\|_{L^{\infty}}\), \(\|h^{\prime\prime}\|_{L^{\infty}}\) are bounded and \(h(0)\leq h(x)\), one has that_
\[|\eta^{\prime}/\eta|(x)\leq\|h^{\prime}\|_{L^{\infty}},\quad|\eta^{\prime\prime }/\eta|(x)\leq\|h^{\prime\prime}\|_{L^{\infty}}+\|h^{\prime}\|_{L^{\infty}}^{2},\]
_and_
\[\eta(x+y)-\eta(x)\leq\|h^{\prime}\|_{L^{\infty}}|y|\exp(\|h^{\prime}\|_{L^{ \infty}}|y|)\eta(x).\]
_The last inequality can be extended to the tensorized case \(\eta^{\otimes k}(x)=\prod_{l=1}^{k}\eta(x_{i_{l}})\) as_
\[\eta^{\otimes k}(x+y)-\eta^{\otimes k}(x)\leq\|h^{\prime}\|_{L^{\infty}}\|y\|_ {\ell^{1}}\exp(\|h^{\prime}\|_{L^{\infty}}\|y\|_{\ell^{1}})\eta^{\otimes k}(x).\]
We are now ready to prove Proposition 2.4 under the more general assumption that \(\eta(x)=\exp(h(x))\). Since \(f_{N}\) solves (2.1) in the distributional sense, it is easy to verify that
\[\int_{\mathbb{R}^{N}}f_{N}(t,x)\prod_{l=1}^{|T|}\eta(x_{i_{l}}) \;\mathrm{d}x=\int_{\mathbb{R}^{N}}f_{N}(0,x)\prod_{l=1}^{|T|}\eta(x_{i_{l}}) \;\mathrm{d}x\] \[\quad+\int_{0}^{t}\int_{\mathbb{R}^{N}}f_{N}(s,x)\Bigg{[}\sum_{m= 1}^{|T|}\bigg{(}\mu(x_{i_{m}})(\eta^{\prime}/\eta)(x_{i_{m}})+\frac{1}{2} \sigma^{2}(\eta^{\prime\prime}/\eta)(x_{i_{m}})\bigg{)}\prod_{l=1}^{|T|}\eta(x_ {i_{l}})\] \[\quad+\sum_{j=i_{1},\ldots,i_{|T|}}\nu(x_{j})\bigg{(}\frac{\eta(0 )}{\eta(x_{j})}\prod_{l=1}^{|T|}\eta(x_{i_{l}}+w_{i_{l},j;N})-\prod_{l=1}^{|T| }\eta(x_{i_{l}})\bigg{)}\] \[\quad+\sum_{j\neq i_{1},\ldots,i_{|T|}}\nu(x_{j})\bigg{(}\prod_{l =1}^{|T|}\eta(x_{i_{l}}+w_{i_{l},j;N})-\prod_{l=1}^{|T|}\eta(x_{i_{l}})\bigg{)} \Bigg{]}\;\mathrm{d}x\mathrm{d}s.\]
By Lemma 5.1, we have that
\[\int_{\mathbb{R}^{N}}f_{N}(t,x)\prod_{l=1}^{|T|}\eta(x_{i_{l}}) \;\mathrm{d}x\leq\int_{\mathbb{R}^{N}}f_{N}(0,x)\prod_{l=1}^{|T|}\eta(x_{i_{l} })\;\mathrm{d}x\] \[\quad+\int_{0}^{t}\int_{\mathbb{R}^{N}}f_{N}(s,x)\Bigg{[}\sum_{m= 1}^{|T|}\bigg{(}\|\mu\|_{L^{\infty}}\|h^{\prime}\|_{L^{\infty}}+\frac{1}{2} \sigma^{2}(\|h^{\prime\prime}\|_{L^{\infty}}+\|h^{\prime}\|_{L^{\infty}}^{2} \bigg{)}\bigg{)}\prod_{l=1}^{|T|}\eta(x_{i_{l}})\] \[\quad+\sum_{j=1}^{N}\|\nu\|_{L^{\infty}}\;\|h^{\prime}\|_{L^{ \infty}}\sum_{m=1}^{|T|}|w_{i_{m},j;N}|\exp\Big{(}\|h^{\prime}\|_{L^{\infty}} \;\mathrm{max}_{j}\sum_{i}|w_{i,j;N}|\Big{)}\prod_{l=1}^{|T|}\eta(x_{i_{l}}) \Bigg{]}\;\mathrm{d}x\mathrm{d}s\] \[=\int_{\mathbb{R}^{N}}f_{N}(0,x)\prod_{l=1}^{|T|}\eta(x_{i_{l}}) \;\mathrm{d}x+\Bigg{[}\sum_{m=1}^{|T|}\bigg{(}\|\mu\|_{L^{\infty}}\|h^{\prime }\|_{L^{\infty}}+\frac{1}{2}\sigma^{2}(\|h^{\prime\prime}\|_{L^{\infty}}+\|h^ {\prime}\|_{L^{\infty}}^{2})\bigg{)}\] \[\quad+\sum_{j=1}^{N}\sum_{m=1}^{|T|}|w_{i_{m},j;N}|\;\|\nu\|_{L^{ \infty}}\|h^{\prime}\|_{L^{\infty}}\exp\Big{(}\|h^{\prime}\|_{L^{\infty}}\; \mathrm{max}_{j}\sum_{i}|w_{i,j;N}|\Big{)}\Bigg{]}\int_{0}^{t}\int_{\mathbb{R}^ {N}}f_{N}(s,x)\prod_{l=1}^{|T|}\eta(x_{i_{l}})\;\mathrm{d}x\mathrm{d}s,\]
where the summations of \(j=i_{1},\ldots,i_{|T|}\) and \(j\neq i_{1},\ldots,i_{|T|}\) are combined together by the simple fact that \(h(0)\leq h(x_{j})\), hence \(\eta(0)/\eta(x_{j})\leq 1\).
Furthermore, we have that
\[\sum_{j=1}^{N}\sum_{m=1}^{|T|}|w_{i_{m},j;N}|\leq|T|\;\mathrm{max}_{i}\sum_{j}| w_{i,j;N}|.\]
Hence by choosing
\[C_{\mathcal{W}} =\max\left(\max_{i}\sum_{j}|w_{i,j;N}|,\;\max_{j}\sum_{i}|w_{i,j;N }|\right),\] \[A_{\eta} =\left(\|\mu\|_{L^{\infty}}\|h^{\prime}\|_{L^{\infty}}+\frac{1}{ 2}\sigma^{2}(\|h^{\prime\prime}\|_{L^{\infty}}+\|h^{\prime}\|_{L^{\infty}}^{2})+ \|\nu\|_{L^{\infty}}\|h^{\prime}\|_{L^{\infty}}C_{\mathcal{W}}\exp(\|h^{\prime} \|_{L^{\infty}}C_{\mathcal{W}})\right)\!,\]
we conclude that
\[\int_{\mathbb{R}^{N}}f_{N}(t,x)\prod_{l=1}^{|T|}\eta(x_{i_{l}})\; \mathrm{d}x\] \[\quad\leq\int_{\mathbb{R}^{N}}f_{N}(0,x)\prod_{l=1}^{|T|}\eta(x_{i _{l}})\;\mathrm{d}x+\int_{0}^{t}|T|A_{\eta}\int_{\mathbb{R}^{N}}f_{N}(s,x)\prod_ {l=1}^{|T|}\eta(x_{i_{l}})\;\mathrm{d}x\;\mathrm{d}s.\]
By Gronwall lemma, this implies that
\[\int_{\mathbb{R}^{N}}f_{N}(t,x)\prod_{l=1}^{|T|}\eta(x_{i_{l}})\; \mathrm{d}x\leq\exp\big{(}|T|A_{\eta}t\big{)}\int_{\mathbb{R}^{N}}f_{N}(0,x) \prod_{l=1}^{|T|}\eta(x_{i_{l}})\;\mathrm{d}x.\]
Taking the summation over \(i_{1},\ldots,i_{|T|}\), we have that
\[\||\tau_{N}|(T)(t,\cdot)\eta^{\otimes|T|}\|_{\mathcal{M}(\mathbb{R}^{|T|})} \leq\exp\big{(}|T|A_{\eta}t\big{)}\,\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|}=1}^{N}\big{|}w_{N,T}(i_{1},\ldots,i_{|T|})\big{|}\int_{\mathbb{R}^{N}}f_{N}(0,x)\prod_{l=1}^{|T|}\eta(x_{i_{l}})\;\mathrm{d}x\] \[\leq C_{\eta}\big{(}M_{\eta}\exp(A_{\eta}t_{*})\big{)}^{|T|}.\]
Finally, by applying Lemma 3.2 to the left hand side, we immediately obtain (2.4), restated here,
\[\||\tau_{N}|(T)(t,\cdot)\|_{H^{-1\otimes|T|}_{\eta}}\leq C_{\eta}(T)\big{(}\| K\|_{L^{2}(\mathbb{R})}\exp(A_{\eta}t_{*})\big{)}^{|T|}\]
for all \(T\in\mathcal{T},t\in[0,t_{*}]\).
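For the reader's convenience, here is a small numerical sketch of the constants entering the Gronwall bound above, using the weight \(\eta_{\alpha}\) (so \(\|h'\|_{L^{\infty}}\leq\alpha\) and \(\|h''\|_{L^{\infty}}\leq\alpha^{2}\)). The coefficient bounds and the matrix are hypothetical placeholders; the formulas are the ones defining \(C_{\mathcal{W}}\) and \(A_{\eta}\).

```python
import numpy as np

rng = np.random.default_rng(3)
N = 50
W = rng.random((N, N)) / N                # hypothetical weights w_{i,j;N}

# C_W: the larger of the maximal row sum and the maximal column sum of |w|
C_W = max(np.abs(W).sum(axis=1).max(), np.abs(W).sum(axis=0).max())

# Hypothetical coefficient bounds (not taken from the paper)
mu_inf, nu_inf, sigma, alpha = 1.0, 2.0, 0.5, 0.1
h1, h2 = alpha, alpha**2                  # ||h'||_inf and ||h''||_inf for eta_alpha

A_eta = (mu_inf * h1 + 0.5 * sigma**2 * (h2 + h1**2)
         + nu_inf * h1 * C_W * np.exp(h1 * C_W))

T_size, t_star = 3, 1.0
print(C_W, A_eta, np.exp(T_size * A_eta * t_star))   # Gronwall growth factor
```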
Finally, we give the proof of Proposition 2.5.
Proof of Proposition 2.5.: We show the well-posedness of Vlasov equation (1.3)-(1.4) by a classical fixed point argument. Let us first define the mapping \(f\mapsto\mathcal{L}f\) as the solution of
\[\partial_{t}\mathcal{L}f(t,\xi,x)+\partial_{x}\Big{(}\mu^{*}_{f}(t,\xi,x)\mathcal{L}f(t,\xi,x)\Big{)}-\frac{\sigma^{2}}{2}\partial_{xx}\Big{(}\mathcal{L}f(t,\xi,x)\Big{)}=-\nu(x)\mathcal{L}f(t,\xi,x)+\delta_{0}(x)J_{f}(t,\xi).\]
If \(f\) is given, then \(J_{f}\) and \(\mu^{*}_{f}\) are determined, making the above identity a linear equation with respect to \(\mathcal{L}f\). We are going to see that if \(f\in L^{\infty}([0,t_{*}]\times[0,1];H^{-1}_{\eta}\cap\mathcal{M}_{+}(\mathbb{ R}))\), then \(\mathcal{L}f\) belongs to the same space.
By multiplying the equation by the weight function \(\eta\) and applying the Leibniz formula, we obtain that
\[\partial_{t}\mathcal{L}f(t,\xi,x)\eta(x)\] \[\quad=-\partial_{x}\Big{(}\mu^{*}_{f}(t,\xi,x)\mathcal{L}f(t,\xi,x)\eta(x)\Big{)}+\frac{\sigma^{2}}{2}\partial_{xx}\Big{(}\mathcal{L}f(t,\xi, x)\eta(x)\Big{)}-\nu(x)\mathcal{L}f(t,\xi,x)\eta(x)\] \[\quad\quad+\delta_{0}(x)\eta(0)J_{f}(t,\xi)+\mu^{*}_{f}(t,\xi,x)( \eta^{\prime}/\eta)(x)\mathcal{L}f(t,\xi,x)\eta(x)\] \[\quad\quad+\frac{\sigma^{2}}{2}\bigg{[}-\partial_{x}\Big{(}2( \eta^{\prime}/\eta)(x)\mathcal{L}f(t,\xi,x)\eta(x)\Big{)}+(\eta^{\prime\prime} /\eta)(x)\mathcal{L}f(t,\xi,x)\eta(x)\bigg{]}.\]
We start the a priori estimates for the linear mapping \(\mathcal{L}\) with the total mass. It is straightforward to verify that
\[\begin{split}\|\mathcal{L}f(t,\cdot,\xi)\|_{\mathcal{M}( \mathbb{R})}&\leq\|f(0,\cdot,\xi)\|_{\mathcal{M}(\mathbb{R})}+ \int_{0}^{t}J_{f}(s,\xi)\;\mathrm{d}s\\ &\leq\|f(0,\cdot,\xi)\|_{\mathcal{M}(\mathbb{R})}+\int_{0}^{t}\| \nu\|_{L^{\infty}}\|f(s,\cdot,\xi)\|_{\mathcal{M}(\mathbb{R})}\;\mathrm{d}s. \end{split} \tag{5.3}\]
Note that by choosing \(t_{1}=1/(2\|\nu\|_{L^{\infty}})\), we have that
\[\sup_{t\in[0,t_{1}]}\|f(t,\cdot,\xi)\|_{\mathcal{M}(\mathbb{R})}\leq 2\|f(0, \cdot,\xi)\|_{\mathcal{M}(\mathbb{R})}\implies\sup_{t\in[0,t_{1}]}\|\mathcal{L} f(t,\cdot,\xi)\|_{\mathcal{M}(\mathbb{R})}\leq 2\|f(0,\cdot,\xi)\|_{\mathcal{M}( \mathbb{R})}.\]
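The implication above is elementary arithmetic; the following one-line check, with hypothetical values for \(\|\nu\|_{L^{\infty}}\) and the initial mass, simply evaluates the right-hand side of (5.3) at the horizon \(t_{1}=1/(2\|\nu\|_{L^{\infty}})\) under the assumed bound on \(f\).

```python
nu_inf = 3.0                       # hypothetical bound ||nu||_{L^infty}
t1 = 1.0 / (2.0 * nu_inf)          # time horizon chosen in the text
M0 = 1.0                           # hypothetical initial mass ||f(0,.,xi)||_M

sup_f = 2.0 * M0                   # assumed bound sup_{t <= t1} ||f(t,.,xi)||_M
bound_Lf = M0 + t1 * nu_inf * sup_f    # right-hand side of (5.3)
print(bound_Lf, 2.0 * M0)          # 2*M0 <= 2*M0: the bound propagates to Lf
```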
Next, consider the \(\eta\)-weighted total moment,
\[\|\mathcal{L}f(t,\cdot,\xi)\eta\|_{\mathcal{M}(\mathbb{R})}\leq\|f(0,\cdot,\xi)\eta\|_{\mathcal{M}(\mathbb{R})}+\int_{0}^{t}\Big{\{}\eta(0)J_{f}(s,\xi)+\Big{[}\|\mu_{f}^{*}(s,\cdot,\xi)\|_{L^{\infty}}\|\eta^{\prime}/\eta\|_{L^{\infty}}+\frac{\sigma^{2}}{2}\|\eta^{\prime\prime}/\eta\|_{L^{\infty}}\Big{]}\|\mathcal{L}f(s,\cdot,\xi)\eta\|_{\mathcal{M}(\mathbb{R})}\Big{\}}\;\mathrm{d}s. \tag{5.4}\]

We next compare two solutions of the linearized problem in the \(H_{\eta}^{-1}\) norm: taking the difference of the equations satisfied by \(\mathcal{L}f\) and \(\mathcal{L}g\), multiplying by \(\eta\) and testing against \(\Lambda\star\big((\mathcal{L}f-\mathcal{L}g)\eta\big)\) expands the time derivative of the corresponding duality product.
Applying the Cauchy-Schwarz inequality, we obtain that
\[\frac{\mathrm{d}}{\mathrm{d}t}\bigg{(} \int_{\mathbb{R}}\Big{[}\Lambda\star\big{(}(\mathcal{L}f-\mathcal{L} g)\eta\big{)}\Big{]}(\mathcal{L}f-\mathcal{L}g)\eta\ \mathrm{d}x\bigg{)}\] \[\leq\frac{4}{\sigma^{2}}\|\mu_{f}^{*}(\mathcal{L}f-\mathcal{L}g) \|_{H_{\eta}^{-1}}^{2}+\frac{4}{\sigma^{2}}\|(\mu_{f}^{*}-\mu_{g}^{*})( \mathcal{L}g)\|_{H_{\eta}^{-1}}^{2}+4\|(\eta^{\prime}/\eta)(\mathcal{L}f- \mathcal{L}g)\|_{H_{\eta}^{-1}}^{2}\] \[\quad+\Big{(}4+\frac{\sigma^{2}}{2}\Big{)}\|(\mathcal{L}f- \mathcal{L}g)\|_{H_{\eta}^{-1}}^{2}+\|\nu(\mathcal{L}f-\mathcal{L}g)\|_{H_{ \eta}^{-1}}^{2}+\|\delta_{0}(J_{f}-J_{g})\|_{H_{\eta}^{-1}}^{2}\] \[\quad+\|\mu_{f}^{*}(\eta^{\prime}/\eta)(\mathcal{L}f-\mathcal{L}g )\|_{H_{\eta}^{-1}}^{2}+\|(\mu_{f}^{*}-\mu_{g}^{*})(\eta^{\prime}/\eta)( \mathcal{L}g)\|_{H_{\eta}^{-1}}^{2}+\frac{\sigma^{2}}{2}\|(\eta^{\prime\prime} /\eta)(\mathcal{L}f-\mathcal{L}g)\|_{H_{\eta}^{-1}}^{2}.\]
Applying Lemma 3.3, we further have that
\[\frac{\mathrm{d}}{\mathrm{d}t}\| (\mathcal{L}f-\mathcal{L}g)\|_{H_{\eta}^{-1}}^{2}\leq\Big{(} \frac{16}{\sigma^{2}}\|\mu_{f}^{*}\|_{W^{1,\infty}}^{2}+16\|\eta^{\prime}/\eta \|_{W^{1,\infty}}^{2}+\Big{(}4+\frac{\sigma^{2}}{2}\Big{)}\] \[+4\|\nu\|_{W^{1,\infty}}^{2}+4\|\mu_{f}^{*}\|_{W^{1,\infty}}^{2} \|\eta^{\prime}/\eta\|_{W^{1,\infty}}^{2}+2\sigma^{2}\|\eta^{\prime\prime}/\eta \|_{W^{1,\infty}}^{2}\Big{)}\|(\mathcal{L}f-\mathcal{L}g)\|_{H_{\eta}^{-1}}^{2}\] \[+\bigg{(}\frac{4}{\sigma^{2}}\|(\mathcal{L}g)\|_{H_{\eta}^{-1}}^{ 2}+4\|\eta^{\prime}/\eta\|_{W^{1,\infty}}^{2}\|(\mathcal{L}g)\|_{H_{\eta}^{-1} }^{2}\bigg{)}|\mu_{f}^{*}-\mu_{g}^{*}|^{2}+\|\delta_{0}\|_{H_{\eta}^{-1}}^{2} |J_{f}-J_{g}|^{2}.\]
Now let us consider the integration over \(\xi\in[0,1]\). Firstly, using that \(w\in\mathcal{W}\) combined with classical interpolation,
\[\int_{[0,1]} |\mu_{f}^{*}(t,\xi,x)-\mu_{g}^{*}(t,\xi,x)|^{2}\ \mathrm{d}\xi=\int_{[0,1]} \bigg{(}\int_{[0,1]}w(\xi,\zeta)\big{(}J_{f}(t,\zeta)-J_{g}(t,\zeta)\big{)}\ \mathrm{d}\zeta\bigg{)}^{2}\ \mathrm{d}\xi\] \[\leq\|w\|_{\mathcal{W}}^{2}\|J_{f}(t,\cdot)-J_{g}(t,\cdot)\|_{L_{ \xi}^{2}}^{2}.\]
Secondly, by Lemma 3.7,
\[\big{|}J_{f}(t,\xi)-J_{g}(t,\xi)\big{|}\leq\bigg{|}\int_{\mathbb{R}}\nu(x) \big{(}f(t,\xi,x)-g(t,\xi,x)\big{)}\ \mathrm{d}x\bigg{|}\leq C(\alpha)\|\nu\|_{W^{1,\infty}}\|f(t,\cdot,\xi)-g(t, \cdot,\xi)\|_{H_{\eta}^{-1}}.\]
Hence, we have that
\[\|J_{f}(t,\cdot)-J_{g}(t,\cdot)\|_{L_{\xi}^{2}}^{2}=\int_{[0,1]} \big{|}J_{f}(t,\xi)-J_{g}(t,\xi)\big{|}^{2}\ \mathrm{d}\xi\] \[\leq C(\alpha)^{2}\|\nu\|_{W^{1,\infty}}^{2}\int_{[0,1]}\|f(t, \cdot,\xi)-g(t,\cdot,\xi)\|_{H_{\eta}^{-1}}^{2}\ \mathrm{d}\xi=C(\alpha)^{2}\|\nu\|_{W^{1,\infty}}^{2}\|f-g\|_{L_{\xi}^{2}(H_{ \eta}^{-1})_{x}}^{2}.\]
Therefore, by integrating over \(\xi\in[0,1]\),
\[\|(\mathcal{L}f-\mathcal{L}g)(t,\cdot,\cdot)\|_{L_{\xi}^{2}(H_{ \eta}^{-1})_{x}}^{2} \leq\int_{0}^{t}M_{0}\|(\mathcal{L}f-\mathcal{L}g)(s,\cdot,\cdot)\|_ {L_{\xi}^{2}(H_{\eta}^{-1})_{x}}^{2}+M_{1}\|(f-g)(s,\cdot,\cdot)\|_{L_{\xi}^{2}( H_{\eta}^{-1})_{x}}^{2}\ \mathrm{d}s\] \[\leq\exp(M_{0}t)\int_{0}^{t}M_{1}\|(f-g)(s,\cdot,\cdot)\|_{L_{\xi} ^{2}(H_{\eta}^{-1})_{x}}^{2}\ \mathrm{d}s \tag{5.5}\]
where \(M_{0},M_{1}\) are required to satisfy that
\[M_{0}\geq \sup_{t\in[0,t_{2}]}\bigg{(}\frac{16}{\sigma^{2}}\|\mu_{f}^{*}\| _{L_{\xi}^{\infty}W_{x}^{1,\infty}}^{2}+16\|\eta^{\prime}/\eta\|_{W^{1,\infty}}^ {2}+\Big{(}4+\frac{\sigma^{2}}{2}\Big{)}\] \[\quad+4\|\nu\|_{W^{1,\infty}}^{2}+4\|\mu_{f}^{*}\|_{L_{\xi}^{ \infty}W_{x}^{1,\infty}}^{2}\|\eta^{\prime}/\eta\|_{W^{1,\infty}}^{2}+2\sigma^{2 }\|\eta^{\prime\prime}/\eta\|_{W^{1,\infty}}^{2}\bigg{)}\] \[M_{1}\geq \sup_{t\in[0,t_{2}]}\bigg{[}\bigg{(}\frac{4}{\sigma^{2}}\|( \mathcal{L}g)\|_{L_{\xi}^{\infty}(H_{\eta}^{-1})_{x}}^{2}+4\|\eta^{\prime}/\eta \|_{W^{1,\infty}}^{2}\|(\mathcal{L}g)\|_{L_{\xi}^{\infty}(H_{\eta}^{-1})_{x}}^{ 2}\Big{)}\|w\|_{\mathcal{W}}^{2}+\|\delta_{0}\|_{H_{\eta}^{-1}}^{2}\bigg{]}C( \alpha)^{2}\|\nu\|_{W^{1,\infty}}^{2}.\]
In addition, by \(w\in\mathcal{W}\) and Lemma 3.7, we can derive
\[\|\mu_{f}^{*}\|_{L_{\xi}^{\infty}W_{x}^{1,\infty}} \leq\|\mu\|_{W_{x}^{1,\infty}}+\sup_{\xi\in[0,1]}\bigg{|}\int_{0}^{ 1}w(\xi,\zeta)J_{f}(t,\zeta)\;\mathrm{d}\zeta\bigg{|}\] \[\leq\|\mu\|_{W_{x}^{1,\infty}}+\|w\|_{\mathcal{W}}\;\|J_{f}(t, \cdot)\|_{L^{\infty}}\] \[\leq\|\mu\|_{W_{x}^{1,\infty}}+\|w\|_{\mathcal{W}}\;C(\alpha)\| \nu\|_{W^{1,\infty}}\|f\|_{L_{\xi}^{\infty}(H_{\eta}^{-1})_{x}}.\]
When \(f,g,\mathcal{L}f,\mathcal{L}g\in E_{R;t_{2}}\), by Lemma 3.2, we have that
\[\|f\|_{L_{\xi}^{\infty}(H_{\eta}^{-1})_{x}}\leq\frac{R}{2},\quad\|(\mathcal{L }g)\|_{L_{\xi}^{\infty}(H_{\eta}^{-1})_{x}}\leq\frac{R}{2},\]
for \(t\in[0,t_{2}]\).
Hence \(M_{0},\;M_{1}\) in (5.5) can be chosen such that they only depend on \(R\) and the regularity of the various fixed coefficients in the system. By choosing sufficiently small \(t_{*}>0\), for example,
\[t_{*}\leq\min\left(t_{2},\;\frac{1}{3M_{1}},\;\frac{\log 2}{M_{0}}\right),\]
by (5.5) we conclude that \(\mathcal{L}\) is contracting on the set \(\mathcal{L}(E_{R;t_{*}})\) for the \(L_{\xi}^{2}(H_{\eta}^{-1})_{x}\) norm. Repeating the argument allows extending the weak solution to any finite time interval as usual, since the a priori estimates (5.3) and (5.4) do not blow up in finite time.
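The role of the smallness condition on \(t_{*}\) can be seen with a quick computation: combining (5.5) with the supremum over \([0,t_{*}]\) gives a contraction factor \(\exp(M_{0}t_{*})\,M_{1}t_{*}\), which the two constraints keep below \(2/3\). The values in the sketch below are hypothetical placeholders.

```python
import numpy as np

M0, M1, t2 = 4.0, 6.0, 1.0                  # hypothetical constants from (5.5)
t_star = min(t2, 1.0 / (3.0 * M1), np.log(2.0) / M0)

# From (5.5): sup_t ||Lf - Lg||^2 <= exp(M0*t_star) * M1 * t_star * sup_t ||f - g||^2
factor = np.exp(M0 * t_star) * M1 * t_star
print(t_star, factor)                        # factor <= 2 * (1/3) < 1: contraction
```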
We now turn to the derivation of the limiting hierarchy. Taking the time derivative of \(\tau_{\infty}(T)=\tau_{\infty}(T,w,f)\) in Definition 1.5, we first obtain
\[\partial_{t}\tau_{\infty} (T,w,f)(t,z)=\sum_{m=1}^{|T|}\bigg{[}-\partial_{z_{m}}\Big{(} \mu(z_{m})\tau_{\infty}(T)(t,z)\Big{)}+\frac{\sigma^{2}}{2}\partial_{z_{m}}^ {2}\tau_{\infty}(T)(t,z)\] \[-\nu(z_{m})\tau_{\infty}(T)(t,z)+\delta_{0}(z_{m})\bigg{(}\int_{ \mathbb{R}}\nu(u_{m})\tau_{\infty}(T)(t,u)\bigg{)}\bigg{|}_{\forall n\neq m, u_{n}=z_{n}}\] \[-\partial_{z_{m}}\bigg{(}\int_{[0,1]^{|T|}}w_{T}(\xi_{1},\dots, \xi_{|T|})f^{\otimes|T|}(t,z_{1},\xi_{1},\dots,z_{|T|},\xi_{|T|})\] \[\bigg{(}\int_{0}^{1}w(\xi_{m},\xi_{|T|+1})\int_{\mathbb{R}}\nu(z _{|T|+1})f(t,z_{|T|+1},\xi_{|T|+1})\;\mathrm{d}z_{|T|+1}\mathrm{d}\xi_{|T|+1} \bigg{)}\;\mathrm{d}\xi_{1},\dots,\xi_{|T|}\bigg{)}\bigg{]}.\]
The last term can be rewritten using the observables with one more leaf, resulting in the limiting hierarchy (2.6), restated here:
\[\partial_{t}\tau_{\infty}(T)(t,z) =\sum_{m=1}^{|T|}\Bigg{\{}\bigg{[}-\partial_{z_{m}}(\mu(z_{m})\tau_{\infty}(T)(t,z))+\frac{\sigma^{2}}{2}\partial_{z_{m}}^{2}\tau_{\infty}(T)(t,z)\] \[\quad-\nu(z_{m})\tau_{\infty}(T)(t,z)+\delta_{0}(z_{m})\bigg{(}\int_{\mathbb{R}}\nu(u_{m})\tau_{\infty}(T)(t,u)\;\mathrm{d}u_{m}\bigg{)}\bigg{|}_{\forall n\neq m,\,u_{n}=z_{n}}\bigg{]}\] \[\quad-\partial_{z_{m}}\bigg{[}\int_{\mathbb{R}}\nu(z_{|T|+1})\tau_{\infty}(T+m)(t,z)\;\mathrm{d}z_{|T|+1}\bigg{]}\Bigg{\}}.\]
### Quantitative stability
This subsection focuses on the proof of the main quantitative estimate of the article. The technical Lemma 5.2 about recursive differential inequalities is given separately in the next subsection.
Proof of Theorem 2.6.: For simplicity, let us recall the notation
\[\nu_{m}=1\otimes\dots\otimes\nu\otimes\dots\otimes 1,\]
where \(\nu\) appears in the \(m\)-th coordinate, i.e. \(\nu_{m}(z)=\nu(z_{m})\). The same convention applies to \(\mu\) and \(\eta\).
Define the difference \(\Delta_{N}(T)(t,z):=\tau_{N}(T)(t,z)-\tau_{\infty}(T)(t,z)\). By subtracting (2.6) from (2.2), one has that
\[\partial_{t}\Delta_{N}(T)(t,z)=\sum_{m=1}^{|T|}\bigg{\{}\bigg{[}-\partial_{z_{m}}\Big{(}\mu(z_{m})\Delta_{N}(T)(t,z)\Big{)}+\frac{\sigma^{2}}{2}\partial_{z_{m}}^{2}\Delta_{N}(T)(t,z)\] \[\quad-\nu(z_{m})\Delta_{N}(T)(t,z)+\delta_{0}(z_{m})\bigg{(}\int_{\mathbb{R}}\nu(u_{m})\Big{(}\Delta_{N}(T)(t,u)+\mathscr{R}_{N,T,m}(t,u)\Big{)}\;\mathrm{d}u_{m}\bigg{)}\bigg{|}_{\forall n\neq m,\,u_{n}=z_{n}}\bigg{]}\] \[\quad-\partial_{z_{m}}\bigg{[}\int_{\mathbb{R}}\nu(z_{|T|+1})\Big{(}\Delta_{N}(T+m)(t,z)+\tilde{\mathscr{R}}_{N,T+m,|T|+1}(t,z)\Big{)}\;\mathrm{d}z_{|T|+1}\bigg{]}\bigg{\}},\quad\forall T\in\mathcal{T}.\]
We highlight that, for any fixed \(N<\infty\), the above equalities and later inequalities involving \(\Delta_{N}(T)\) can be understood as recursive relations that hold for all \(T\in\mathcal{T}\). At first glance, one may think that the approximate hierarchy (2.2) is only defined for observables \(\tau_{N}(T)\) with \(|T|\leq N\). Nevertheless, by our formal definition that \(f_{N}^{i_{1},\ldots,i_{k}}\equiv 0\) if there are duplicated indices among \(i_{1},\ldots,i_{k}\), it is easy to verify that for any tree \(T\) such that \(|T|>N\),
\[\tau_{N}(T,w_{N},f_{N})(t,z):=\frac{1}{N}\sum_{i_{1},\ldots,i_{|T|}=1}^{N}w_{ N,T}(i_{1},\ldots,i_{|T|})f_{N}^{i_{1},\ldots,i_{|T|}}(t,z_{1},\ldots,z_{|T|})\equiv 0\]
as in each marginal there must be duplicated indices. By a similar discussion, we see that \(\mathscr{R}_{N,T,m}\equiv 0\) and \(\tilde{\mathscr{R}}_{N,T+m,|T|+1}\equiv 0\) when \(|T|>N\). With these formal definitions, it is then straightforward to show that the approximate hierarchy (2.2) holds for all \(T\in\mathcal{T}\).
By multiplying by the weight function \(\eta^{\otimes|T|}\) and integrating, we obtain that
\[\Big{(}\partial_{t}\Delta_{N}(T)(t,z)\Big{)}\eta^{\otimes|T|}(z)= \sum_{m=1}^{|T|}\bigg{\{}-\partial_{z_{m}}\Big{(}\mu_{m}\Delta_{N}(T)\eta^{ \otimes|T|}\Big{)}(t,z)+\frac{\sigma^{2}}{2}\partial_{z_{m}}^{2}\Big{(}\Delta _{N}(T)\eta^{\otimes|T|}\Big{)}(t,z)\] \[\quad-\Big{(}\nu_{m}\Delta_{N}(T)\eta^{\otimes|T|}\Big{)}(t,z)+ \Big{(}\mu_{m}(\eta_{m}^{\prime}/\eta_{m})\Delta_{N}(T)\eta^{\otimes|T|}\Big{)} (t,z)\] \[\quad+\delta_{0}(z_{m})\eta(z_{m})\bigg{(}\int_{\mathbb{R}}\Big{(} (\nu_{m}/\eta_{m})(\Delta_{N}(T)+\mathscr{R}_{N,T,m})\eta^{\otimes|T|}\Big{)} (t,u)\;\mathrm{d}u_{m}\Big{)}\bigg{|}_{\forall n\neq m,\,u_{n}=z_{n}}\] \[\quad-\partial_{z_{m}}\bigg{[}\int_{\mathbb{R}}\Big{(}(\nu_{|T|+ 1}/\eta_{|T|+1})(\Delta_{N}(T+m)+\tilde{\mathscr{R}}_{N,T+m,|T|+1})\eta^{ \otimes|T|+1}\Big{)}(t,z)\;\mathrm{d}z_{|T|+1}\bigg{]}\] \[\quad+\frac{\sigma^{2}}{2}\bigg{[}\partial_{z_{m}}\Big{(}-2(\eta _{m}^{\prime}/\eta_{m})\Delta_{N}(T)\eta^{\otimes|T|}\Big{)}+(\eta_{m}^{\prime \prime}/\eta_{m})\Delta_{N}(T)\eta^{\otimes|T|}\bigg{]}(t,z)\bigg{\}}.\]
Substituting \(\big{(}\partial_{t}\Delta_{N}(T)\big{)}\eta^{\otimes|T|}\) in the right hand side of
\[\quad\frac{\mathrm{d}}{\mathrm{d}t}\bigg{(}\frac{1}{2}\int_{\mathbb{R}^{|T|}}\Big{(}K^{\otimes|T|}\star\big{(}\Delta_{N}(T)\eta^{\otimes|T|}\big{)}(t,z)\Big{)}^{2}\;\mathrm{d}z\bigg{)}\] \[= \int_{\mathbb{R}^{|T|}}\bigg{(}K^{\otimes|T|}\star\big{(}\Delta_{N}(T)\eta^{\otimes|T|}\big{)}(t,z)\bigg{)}\bigg{(}K^{\otimes|T|}\star\big{(}\partial_{t}\Delta_{N}(T)\eta^{\otimes|T|}\big{)}(t,z)\bigg{)}\;\mathrm{d}z,\]
yields the following lengthy expression
\[\frac{\mathrm{d}}{\mathrm{d}t}\bigg{(} \frac{1}{2}\int_{\mathbb{R}^{|T|}}\Big{(}K^{\otimes|T|}\star\big{(} \Delta_{N}(T)\eta^{\otimes|T|}\big{)}(t,z)\Big{)}^{2}\;\mathrm{d}z\bigg{)}\] \[=\int_{\mathbb{R}^{|T|}}\sum_{m=1}^{|T|}\Bigg{\{}-\frac{\sigma^{2} }{2}\bigg{[}\partial_{z_{m}}K^{\otimes|T|}\star\big{(}\Delta_{N}(T)\eta^{ \otimes|T|}\big{)}(t,z)\bigg{]}^{2}\] \[\quad+\bigg{[}K^{\otimes|T|}\star\big{(}\Delta_{N}(T)\eta^{ \otimes|T|}\big{)}(t,z)\bigg{]}\bigg{[}-K^{\otimes|T|}\star\big{(}\nu_{m} \Delta_{N}(T)\eta^{\otimes|T|}\big{)}(t,z)\] \[\quad+K(z_{m})\eta(0)\bigg{(}\int_{\mathbb{R}}K^{\otimes|T|} \star\big{(}(\nu_{m}/\eta_{m})\Delta_{N}(T)\eta^{\otimes|T|}\big{)}(t,u)\; \mathrm{d}u_{m}\bigg{)}\bigg{|}_{\forall n\neq m,\,u_{n}=z_{n}}\] \[\quad+K(z_{m})\eta(0)\bigg{(}\int_{\mathbb{R}}K^{\otimes|T|} \star\big{(}(\nu_{m}/\eta_{m})\mathscr{R}_{N,T,m}\eta^{\otimes|T|}\big{)}(t,u) \;\mathrm{d}u_{m}\bigg{)}\bigg{|}_{\forall n\neq m,\,u_{n}=z_{n}}\] \[\quad+K^{\otimes|T|}\star\big{(}\mu_{m}(\eta_{m}^{\prime}/\eta_{ m})\Delta_{N}(T)\eta^{\otimes|T|}\big{)}(t,z)+\frac{\sigma^{2}}{2}K^{\otimes|T|} \star\big{(}(\eta_{m}^{\prime\prime}/\eta_{m})\Delta_{N}(T)\eta^{\otimes|T|} \big{)}(t,z)\bigg{]}\] \[\quad+\bigg{[}\partial_{z_{m}}K^{\otimes|T|}\star\big{(}\Delta_{ N}(T)\eta^{\otimes|T|}\big{)}(t,z)\bigg{]}\bigg{[}K^{\otimes|T|}\star\big{(}\mu_{m} \Delta_{N}(T)\eta^{\otimes|T|}\big{)}(t,z)\] \[\quad+\int_{\mathbb{R}}K^{\otimes|T|+1}\star\big{(}(\nu_{|T|+1}/ \eta_{|T|+1})\Delta_{N}(T+m)\eta^{\otimes|T|+1}\big{)}(t,z)\;\mathrm{d}z_{|T|+1}\] \[\quad+\int_{\mathbb{R}}K^{\otimes|T|+1}\star\big{(}(\nu_{|T|+1}/ \eta_{|T|+1})\tilde{\mathscr{R}}_{N,T+m,|T|+1}\eta^{\otimes|T|+1}\big{)}(t,z) \;\mathrm{d}z_{|T|+1}\] \[\quad+\frac{\sigma^{2}}{2}K^{\otimes|T|}\star\big{(}2(\eta_{m}^{ \prime}/\eta_{m})\Delta_{N}(T)\eta^{\otimes|T|}\big{)}(t,z)\bigg{]}\Bigg{\}}\; \mathrm{d}z.\]
We then apply the Cauchy-Schwarz inequality to obtain,
\[\frac{\mathrm{d}}{\mathrm{d}t}\bigg{(} \frac{1}{2}\|\Delta_{N}(T)\|_{H_{\eta}^{-1\otimes|T|}}^{2}\bigg{)} \leq\sum_{m=1}^{|T|}\Bigg{\{}\Big{(}2+\frac{\sigma^{2}}{4}\Big{)}\|\Delta_{N}( T)\|_{H_{\eta}^{-1\otimes|T|}}^{2}+\frac{1}{2}\|\nu_{m}\Delta_{N}(T)\|_{H_{ \eta}^{-1\otimes|T|}}^{2}\] \[\quad+\frac{1}{2}\|\mu_{m}(\eta_{m}^{\prime}/\eta_{m})\Delta_{N} (T)\|_{H_{\eta}^{-1\otimes|T|}}^{2}+\frac{\sigma^{2}}{4}\|(\eta_{m}^{\prime \prime}/\eta_{m})\Delta_{N}(T)\|_{H_{\eta}^{-1\otimes|T|}}^{2}\] \[\quad+\frac{1}{2}\|K\|_{L^{2}}^{2}\eta(0)^{2}\int_{\mathbb{R}^{|T| -1}}\bigg{(}\int_{\mathbb{R}}K^{\otimes|T|}\star\big{(}(\nu_{m}/\eta_{m}) \Delta_{N}(T)\eta^{\otimes|T|}\big{)}(t,z)\;\mathrm{d}z_{m}\bigg{)}^{2}\prod_{n \neq m}\;\mathrm{d}z_{n}\] \[\quad+\frac{2}{\sigma^{2}}\|\mu_{m}\Delta_{N}(T)\|_{H_{\eta}^{-1 \otimes|T|}}^{2}+\frac{\sigma^{2}}{2}\|2(\eta_{m}^{\prime}/\eta_{m})\Delta_{N}( T)\|_{H_{\eta}^{-1\otimes|T|}}^{2}\] \[\quad+\frac{2}{\sigma^{2}}\int_{\mathbb{R}^{|T|}}\bigg{(}\int_{ \mathbb{R}}K^{\otimes|T|+1}\star\big{(}(\nu_{|T|+1}/\eta_{|T|+1})\Delta_{N}(T+m) \eta^{\otimes|T|+1}\big{)}(t,z)\;\mathrm{d}z_{|T|+1}\bigg{)}^{2}\prod_{n=1}^{|T |}\;\mathrm{d}z_{n}\] \[\quad+\frac{2}{\sigma^{2}}\int_{\mathbb{R}^{|T|}}\bigg{(}\int_{ \mathbb{R}}K^{\otimes|T|+1}\star\big{(}(\nu_{|T|+1}/\eta_{|T|+1})\tilde{ \mathscr{R}}_{N,T+m,|T|+1}\eta^{\otimes|T|+1}\big{)}(t,z)\;\mathrm{d}z_{|T|+1} \bigg{)}^{2}\prod_{n=1}^{|T|}\;\mathrm{d}z_{n}\bigg{\}}. \tag{5.6}\]
This is where the proper choice of weak distance becomes critical, as we need to bound the various terms in the right-hand side by the norm \(\|\Delta_{N}(T)\|_{H_{\eta}^{-1\otimes|T|}}^{2}\). The commutator estimate in Lemma 3.3 directly bounds all the terms written with explicit \(H_{\eta}^{-1\otimes|T|}\)-norms, as the coefficients
\(\mu,\nu\) are \(W^{1,\infty}\) and \(\eta\) is smooth. For example
\[\|\nu_{m}\Delta_{N}(T)\|^{2}_{H_{\eta}^{-1\otimes|T|}}\leq 4\,\|\nu\|^{2}_{W^{1, \infty}(\mathbb{R})}\,\|\Delta_{N}(T)\|^{2}_{H_{\eta}^{-1\otimes|T|}}.\]
This leads to the simplified expression for some constant \(\tilde{C}_{0}\),
\[\frac{\mathrm{d}}{\mathrm{d}t}\bigg{(}\frac{1}{2}\|\Delta_{N}(T) \|^{2}_{H_{\eta}^{-1\otimes|T|}}\bigg{)}\leq\sum_{m=1}^{|T|}\bigg{\{}\tilde{C} _{0}\,\|\Delta_{N}(T)\|^{2}_{H_{\eta}^{-1\otimes|T|}}\\ +\frac{1}{2}\|K\|^{2}_{L^{2}}\eta(0)^{2}\int_{\mathbb{R}^{|T|-1}} \bigg{(}\int_{\mathbb{R}}K^{\otimes|T|}\star\big{(}(\nu_{m}/\eta_{m})\Delta_{N }(T)\eta^{\otimes|T|}\big{)}(t,z)\;\mathrm{d}z_{m}\bigg{)}^{2}\prod_{n\neq m} \;\mathrm{d}z_{n}\\ +\frac{1}{2}\|K\|^{2}_{L^{2}}\eta(0)^{2}\int_{\mathbb{R}^{|T|-1} }\bigg{(}\int_{\mathbb{R}}K^{\otimes|T|}\star\big{(}(\nu_{m}/\eta_{m})\mathscr{ R}_{N,T,m}\eta^{\otimes|T|}\big{)}(t,z)\;\mathrm{d}z_{m}\bigg{)}^{2}\prod_{n\neq m }\;\mathrm{d}z_{n}\\ +\frac{2}{\sigma^{2}}\int_{\mathbb{R}^{|T|}}\bigg{(}\int_{\mathbb{ R}}K^{\otimes|T|+1}\star\big{(}(\nu_{|T|+1}/\eta_{|T|+1})\Delta_{N}(T+m)\eta^{ \otimes|T|+1}\big{)}(t,z)\;\mathrm{d}z_{|T|+1}\bigg{)}^{2}\prod_{n=1}^{|T|} \;\mathrm{d}z_{n}\\ +\frac{2}{\sigma^{2}}\int_{\mathbb{R}^{|T|}}\bigg{(}\int_{\mathbb{ R}}K^{\otimes|T|+1}\star\big{(}(\nu_{|T|+1}/\eta_{|T|+1})\tilde{\mathscr{R}}_{N,T+m,|T|+1 }\eta^{\otimes|T|+1}\big{)}(t,z)\;\mathrm{d}z_{|T|+1}\bigg{)}^{2}\prod_{n=1}^{| T|}\;\mathrm{d}z_{n}\bigg{\}}. \tag{5.7}\]
The remaining integral terms in (5.6) can be bounded by first applying Lemma 3.8 and then Proposition 3.6. For example, consider the first remainder term and write, by Lemma 3.8,
\[\int_{\mathbb{R}^{|T|-1}}\bigg{(}\int_{\mathbb{R}}K^{\otimes|T|} \star\big{(}(\nu_{m}/\eta_{m})\mathscr{R}_{N,T,m}\eta^{\otimes|T|}\big{)}(t,z) \;\mathrm{d}z_{m}\bigg{)}^{2}\prod_{n\neq m}\;\mathrm{d}z_{n}\\ \leq C(\alpha)^{2}\|\nu\|^{2}_{W^{1,\infty}}\|\mathscr{R}_{N,T,m} \|^{2}_{H_{\eta}^{-1\otimes|T|}}.\]
Next, apply Proposition 3.6 to the right hand side to conclude that
\[\int_{\mathbb{R}^{|T|-1}}\bigg{(}\int_{\mathbb{R}}K^{\otimes|T|} \star\big{(}(\nu_{m}/\eta_{m})\mathscr{R}_{N,T,m}\eta^{\otimes|T|}\big{)}(t,z) \;\mathrm{d}z_{m}\bigg{)}^{2}\prod_{n\neq m}\;\mathrm{d}z_{n}\\ \leq C(\alpha)^{2}\|\nu\|^{2}_{W^{1,\infty}}\big{[}\exp\big{(}(2+2 \alpha)c(w,|T|)\big{)}-1\big{]}\||\tau_{N}|(T)\|^{2}_{H_{\eta}^{-1\otimes|T|}}.\]
The same method applies to the other integral terms in (5.7), which yields
\[\int_{\mathbb{R}^{|T|-1}}\bigg{(}\int_{\mathbb{R}}K^{\otimes|T|} \star\big{(}(\nu_{m}/\eta_{m})\Delta_{N}(T)\eta^{\otimes|T|}\big{)}(t,z)\; \mathrm{d}z_{m}\bigg{)}^{2}\prod_{n\neq m}\;\mathrm{d}z_{n}\\ \leq C(\alpha)^{2}\|\nu\|^{2}_{W^{1,\infty}}\|\Delta_{N}(T)\|^{2}_ {H_{\eta}^{-1\otimes|T|}},\]
\[\int_{\mathbb{R}^{|T|}}\bigg{(}\int_{\mathbb{R}}K^{\otimes|T|+1} \star\big{(}(\nu_{|T|+1}/\eta_{|T|+1})\Delta_{N}(T+m)\eta^{\otimes|T|+1}\big{)} (t,z)\;\mathrm{d}z_{|T|+1}\bigg{)}^{2}\prod_{n=1}^{|T|}\;\mathrm{d}z_{n}\\ \leq C(\alpha)^{2}\|\nu\|^{2}_{W^{1,\infty}}\|\Delta_{N}(T+m)\|^{2} _{H_{\eta}^{-1\otimes(|T|+1)}},\]
together with
\[\int_{\mathbb{R}^{|T|}}\bigg{(}\int_{\mathbb{R}}K^{\otimes|T|+1} \star\big{(}(\nu_{|T|+1}/\eta_{|T|+1})\tilde{\mathscr{R}}_{N,T+m,|T|+1}\eta^{ \otimes|T|+1}\big{)}(t,z)\;\mathrm{d}z_{|T|+1}\bigg{)}^{2}\prod_{n=1}^{|T|}\; \mathrm{d}z_{n}\\ \leq C(\alpha)^{2}\|\nu\|^{2}_{W^{1,\infty}}\big{[}\exp\big{(}(2+2 \alpha)c(w,|T|)\big{)}-1\big{]}\||\tau_{N}|(T)\|^{2}_{H_{\eta}^{-1\otimes|T|}}.\]
Inserting those bounds into the energy estimate (5.7), we obtain a recursive differential inequality: for all \(T\in\mathcal{T}\),
\[\frac{\mathrm{d}}{\mathrm{d}t}\|\Delta_{N}(T)(t,\cdot)\|^{2}_{H_{ \eta}^{-1\otimes|T|}}\leq\sum_{m=1}^{|T|}\Bigg{\{}\tilde{C}_{0}\|\Delta_{N}(T) (t,\cdot)\|^{2}_{H_{\eta}^{-1\otimes|T|}}+\tilde{C}_{1}\|\Delta_{N}(T+m)(t, \cdot)\|^{2}_{H_{\eta}^{-1\otimes(|T|+1)}}\\ +\varepsilon_{0}(T)\||\tau_{N}|(T)(t,\cdot)\|^{2}_{H_{\eta}^{-1 \otimes|T|}}+\varepsilon_{1}(T)\||\tau_{N}|(T+m)(t,\cdot)\|^{2}_{H_{\eta}^{-1 \otimes(|T|+1)}}\Bigg{\}}, \tag{5.8}\]
where we can even provide the explicit expressions for the constants
\[\tilde{C}_{0} =4+\frac{\sigma^{2}}{2}+4\bigg{(}\|\nu\|^{2}_{W^{1,\infty}}+\|\mu (\eta^{\prime}/\eta)\|^{2}_{W^{1,\infty}}+\frac{\sigma^{2}}{2}\|\eta^{\prime \prime}/\eta\|^{2}_{W^{1,\infty}}+\frac{4}{\sigma^{2}}\|\mu\|^{2}_{W^{1, \infty}}+2\sigma^{2}\|(\eta^{\prime}/\eta)\|^{2}_{W^{1,\infty}}\bigg{)}\] \[+\|K\|^{2}_{L^{2}}\eta(0)^{2}C(\alpha)^{2}\|\nu\|^{2}_{W^{1, \infty}},\] \[\tilde{C}_{1} =\frac{4C(\alpha)^{2}}{\sigma^{2}}\|\nu\|^{2}_{W^{1,\infty}},\] \[\varepsilon_{0}(T) =\|K\|^{2}_{L^{2}}\eta(0)^{2}C(\alpha)^{2}\|\nu\|^{2}_{W^{1, \infty}}\big{[}\exp\big{(}(2+2\alpha)c(w,|T|)\big{)}-1\big{]},\] \[\varepsilon_{1}(T) =\frac{4C(\alpha)^{2}}{\sigma^{2}}\|\nu\|^{2}_{W^{1,\infty}} \big{[}\exp\big{(}(2+2\alpha)c(w,|T|)\big{)}-1\big{]}.\]
We can now restrict the recursion relations by truncating them at any given depth \(n\geq 1\), meaning that we only consider the inequalities (5.8) for all \(T\in\mathcal{T}\) such that \(|T|\leq n-1\). In such a case, since
\[c(w,|T|)\leq|T|\big{(}\max_{i,j}|w_{i,j;N}|\big{)}\leq n\bar{w}_{N},\]
the coefficients \(\varepsilon_{0}\), \(\varepsilon_{1}\) can be replaced by the larger, depth-dependent expressions
\[\varepsilon_{0}(n) =\|K\|^{2}_{L^{2}}\eta(0)^{2}C(\alpha)^{2}\|\nu\|^{2}_{W^{1, \infty}}\big{[}\exp\big{(}(2+2\alpha)n\bar{w}_{N}\big{)}-1\big{]},\] \[\varepsilon_{1}(n) =\frac{4C(\alpha)^{2}}{\sigma^{2}}\|\nu\|^{2}_{W^{1,\infty}} \big{[}\exp\big{(}(2+2\alpha)n\bar{w}_{N}\big{)}-1\big{]}.\]
For a fixed depth \(n\geq 1\), \(\varepsilon_{0}(n)\) and \(\varepsilon_{1}(n)\) now vanish as \(\bar{w}_{N}\to 0\).
Let us now rescale the energy inequality through some \(\lambda^{|T|}\) factor: For all \(T\in\mathcal{T}\) such that \(|T|\leq n-1\),
\[\frac{\mathrm{d}}{\mathrm{d}t}\lambda^{|T|}\|\Delta_{N}(T)(t, \cdot)\|^{2}_{H_{\eta}^{-1\otimes|T|}}\\ \leq\sum_{m=1}^{|T|}\Bigg{\{}\tilde{C}_{0}\lambda^{|T|}\|\Delta_ {N}(T)(t,\cdot)\|^{2}_{H_{\eta}^{-1\otimes|T|}}+(\tilde{C}_{1}/\lambda)\lambda^ {|T|+1}\|\Delta_{N}(T+m)(t,\cdot)\|^{2}_{H_{\eta}^{-1\otimes(|T|+1)}}\\ +\varepsilon_{0}(n)\lambda^{|T|}\|\tau_{N}|(T)(t,\cdot)\|^{2}_{H_ {\eta}^{-1\otimes|T|}}+(\varepsilon_{1}(n)/\lambda)\lambda^{|T|+1}\||\tau_{N} |(T+m)(t,\cdot)\|^{2}_{H_{\eta}^{-1\otimes(|T|+1)}}\Bigg{\}}. \tag{5.9}\]
We also recall the a priori bound (2.8) for \(\tau_{N},\tau_{\infty}\) assumed in Theorem 2.6:
\[\sup_{t\leq t_{*}}\;\max_{|T|\leq\max(n,\;|T_{*}|)}\lambda^{\frac{|T|}{2}}\; \Big{(}\||\tau_{N}|(T,w_{N},f_{N})(t,\cdot)\|_{H_{\eta}^{-1\otimes|T|}}+\|\tau_ {\infty}(T)(t,\cdot)\|_{H_{\eta}^{-1\otimes|T|}}\Big{)}\leq C_{\lambda;\eta},\]
where \(T_{*}\in\mathcal{T}\) is the tree index in the final estimate (2.7). By a triangle inequality, this implies the following uniform bound of \(\Delta_{N}\),
\[\sup_{t\leq t_{*}}\;\max_{|T|\leq\max(n,\;|T_{*}|)}\lambda^{|T|}\|\Delta_{N}(T)( t,\cdot)\|^{2}_{H_{\eta}^{-1\otimes|T|}}\leq C_{\lambda;\eta}^{2}. \tag{5.10}\]
Denote
\[M_{k}(t) =\max_{|T|\leq k}\lambda^{|T|}\|\Delta_{N}(T)(t,\cdot)\|_{H_{\eta}^{ -1\otimes|T|}}^{2},\] \[C =\tilde{C}_{0}+\tilde{C}_{1}/\lambda,\] \[\varepsilon =\big{[}\varepsilon_{0}(n)+\varepsilon_{1}(n)/\lambda\big{]}C_{ \lambda;\eta}^{2},\] \[L =C_{\lambda;\eta}^{2},\] \[n =n,\quad n^{\prime}=|T_{*}|,\]
so that (5.9) and (5.10) can be summarized as follows,
\[\frac{\mathrm{d}}{\mathrm{d}t}M_{k}(t) \leq k\Big{(}CM_{k+1}(t)+\varepsilon\Big{)}, \forall 1\leq k\leq n-1, \tag{5.11a}\] \[M_{k}(t) \leq L, \forall 1\leq k\leq\max(n,\ n^{\prime}),\ t\in[0,t_{*}]. \tag{5.11b}\]
We now invoke the following result.
**Lemma 5.2**.: _Consider a sequence of non-negative functions \((M_{k}(t))_{k=1}^{\infty}\) on \(t\in[0,t_{*}]\) that satisfies the inequalities (5.11a)-(5.11b) with \(\big{[}\varepsilon/CL+(2\theta)^{n}\big{]}\leq 1\). Then_
\[\max_{1\leq k\leq\max(n,\ n^{\prime})}\big{[}\theta^{k}M_{k}(t)\big{]}\leq L( Ct/\theta+2)\,\max\bigg{(}\big{[}\varepsilon/CL+(2\theta)^{n}\big{]},\ \max_{1\leq k\leq n-1}\big{[}\theta^{k}M_{k}(0)\big{]}/L\bigg{)}^{\frac{1}{p^{ (Ct/\theta+1)}}}, \tag{5.12}\]
_holds for any \(1<p<\infty\), \(0<\theta<2^{-p^{\prime}}\) where \(1/p+1/p^{\prime}=1\), and any \(t\in[0,t_{*}]\)._
Assume for the time being that Lemma 5.2 holds and apply it to (5.9) and (5.10). Choose \(p=2\), \(\theta=1/8\) and substitute \(\varepsilon,C,L\) by its explicit expression to find that
\[\varepsilon/CL=\frac{\varepsilon_{0}(n)+\varepsilon_{1}(n)/\lambda}{\tilde{C}_{0}+\tilde{C}_{1}/\lambda}=C_{1}\big{[}\exp\big{(}(2+2\alpha)n\bar{w}_{N}\big{)}-1\big{]},\]
where \(C_{1}\) depends only on \(\lambda\), the \(W^{1,\infty}\)-regularity of the coefficients \(\mu\), \(\nu\) and the constant \(\sigma>0\) in (2.1), but neither on \(\bar{w}_{N}\) nor on \(n\). Choosing \(C_{0}=C/\theta\), and since \(\bar{w}_{N}\to 0\) as \(N\to\infty\), we deduce that for \(N\) large enough
\[\bar{\varepsilon}=\varepsilon/CL+(2\theta)^{n}=C_{1}\big{[}\exp\big{(}(2+2 \alpha)n\bar{w}_{N}\big{)}-1\big{]}+(1/4)^{n}\leq 1.\]
The conclusion of Lemma 5.2 hence holds, showing that
\[\max_{|T|\leq\max(n,\ |T_{*}|)}(\lambda/8)^{|T|}\|\tau_{N}(T,w_{N},f _{N})(t,\cdot)-\tau_{\infty}(T)(t,\cdot)\|_{H_{\eta}^{-1\otimes|T|}}^{2}\] \[\quad\leq C_{\lambda;\eta}^{2}\left(C_{0}t+2\right)\,\max\bigg{(} \bar{\varepsilon},\ \max_{|T|\leq n-1}(\lambda/8)^{|T|}\|\tau_{N}(T,w_{N},f _{N})(0,\cdot)-\tau_{\infty}(T)(0,\cdot)\|_{H_{\eta}^{-1\otimes|T|}}^{2}/C_{ \lambda;\eta}^{2}\bigg{)}^{\frac{1}{2^{(C_{0}t+1)}}}.\]
This can be further simplified to (2.7) by restricting the maximum on the left-hand side to the single term \(T=T_{*}\), taking the maximum on the right-hand side over \(|T|\leq\max(n,\ |T_{*}|)\), and choosing \(C_{2}\) in (2.7) as \(C_{2}=\max\big{(}C_{0}t+2,\ 2^{(C_{0}t+1)}\big{)}\).
### Proof of Lemma 5.2
Proof of Lemma 5.2.: Let us restate here the recursive differential inequality (5.11a),
\[\frac{\mathrm{d}}{\mathrm{d}t}M_{k}(t)\leq k\Big{(}CM_{k+1}(t)+\varepsilon \Big{)},\ \ \forall 1\leq k\leq n-1,\]
which directly yields
\[\frac{\mathrm{d}}{\mathrm{d}t}\Big{(}M_{k}(t)+(\varepsilon/C)\Big{)}\leq kC \Big{(}M_{k+1}(t)+(\varepsilon/C)\Big{)},\ \ \forall 1\leq k\leq n-1.\]
For any \(1\leq k\leq n-1\) and \(t\in[0,t_{*}]\), by inductively integrating the inequalities in time, we obtain that
\[\Big{(}M_{k}(t)+(\varepsilon/C)\Big{)} \leq C^{n-k}\int_{s}^{t}\binom{n-1}{k-1}(n-k)\,(t-r)^{n-k-1}\Big{(}M_{n}(r)+(\varepsilon/C)\Big{)}\ \mathrm{d}r\] \[\quad+\sum_{l=k}^{n-1}C^{l-k}\binom{l-1}{k-1}(t-s)^{l-k}\Big{(}M_{l}(s)+(\varepsilon/C)\Big{)}.\]
We estimate the increase on \(M_{k}\) within time steps of size
\[t-s=\theta/C.\]
First, we bound the constant terms,
\[C^{n-k}\int_{s}^{t}\binom{n-1}{k-1}\,(n-k)(t-r)^{n-k-1}(\varepsilon/C)\ \mathrm{d}r+\sum_{l=k}^{n-1}C^{l-k}\binom{l-1}{k-1}(t-s)^{l-k}(\varepsilon/C)\] \[=(\varepsilon/C)\bigg{\{}C^{n-k}\binom{n-1}{k-1}(t-s)^{n-k}+\sum_{l=k}^{n-1}C^{l-k}\binom{l-1}{k-1}(t-s)^{l-k}\bigg{\}}\] \[=(\varepsilon/C)\sum_{l=k}^{n}C^{l-k}\binom{l-1}{k-1}(t-s)^{l-k}\] \[\leq\theta^{-k}(\varepsilon/C)\sum_{l=k}^{n}\binom{l-1}{k-1}\theta^{l},\]
where the last inequality uses our choice of time step \((t-s)\leq\theta/C\).
On the other hand, for \(\theta\leq 1/2\),
\[\sum_{l=k}^{\infty}\binom{l-1}{k-1}\,\theta^{l}=\frac{1}{(\theta^{-1}-1)^{k}} \leq 1.\]
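Indeed, this is just the negative binomial series, recalled here for the reader's convenience:
\[\sum_{l=k}^{\infty}\binom{l-1}{k-1}\,\theta^{l}=\theta^{k}\sum_{m=0}^{\infty}\binom{m+k-1}{k-1}\,\theta^{m}=\frac{\theta^{k}}{(1-\theta)^{k}}=\frac{1}{(\theta^{-1}-1)^{k}},\]
and the right-hand side is at most \(1\) since \(\theta\leq 1/2\) forces \(\theta^{-1}-1\geq 1\).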
Hence,
\[C^{n-k}\int_{s}^{t}\binom{n-1}{k-1}\,(n-k)(t-r)^{n-k-1}(\varepsilon/C)\ \mathrm{d}r+\sum_{l=k}^{n-1}C^{l-k}\binom{l-1}{k-1}(t-s)^{l-k}(\varepsilon/C)\leq\theta^{-k}\,\frac{\varepsilon}{C}.\]
We now turn to the terms involving \(M_{l}(s)\) and \(M_{n}(r)\) (with \(s\leq r\leq t\)). For \(M_{n}(r)\) we have no choice but to take
\[M_{n}(r)\leq L\]
But for \(M_{l}(s)\), \(k\leq l\leq n-1\), we have
\[M_{l}(s)\leq\min\left(L,\ \max_{1\leq m\leq n-1}\big{[}\theta^{m}M_{m}(s) \big{]}\theta^{-l}\right),\]
together with any geometric average between the two terms. Choose \(\frac{1}{p}+\frac{1}{p^{\prime}}=1\) so that
\[M_{l}(s) \leq L^{\frac{1}{p^{\prime}}}\Big{(}\max_{1\leq m\leq n-1}\big{[} \theta^{m}M_{m}(s)\big{]}\theta^{-l}\Big{)}^{\frac{1}{p}}\] \[=L^{\frac{1}{p^{\prime}}}\max_{1\leq m\leq n-1}\big{[}\theta^{m} M_{m}(s)\big{]}^{\frac{1}{p}}\big{(}\theta^{\frac{1}{p}}\big{)}^{-l}.\]
Then we may write
\[C^{n-k}\int_{s}^{t}\binom{n-1}{k-1}(n-k)(t-r)^{n-k-1}M_{n}(r)\;\mathrm{d}r+\sum_{l=k}^{n-1}C^{l-k}\binom{l-1}{k-1}(t-s)^{l-k}M_{l}(s)\] \[\quad\leq C^{n-k}\binom{n-1}{k-1}\left(t-s\right)^{n-k}L+\sum_{l=k}^{n-1}C^{l-k}\binom{l-1}{k-1}(t-s)^{l-k}L^{\frac{1}{p^{\prime}}}\max_{1\leq m\leq n-1}\left[\theta^{m}M_{m}(s)\right]^{\frac{1}{p}}\!\left(\theta^{\frac{1}{p}}\right)^{-l}\] \[\quad\leq\theta^{-k}\bigg{\{}L\binom{n-1}{k-1}\theta^{n}+L^{\frac{1}{p^{\prime}}}\max_{1\leq m\leq n-1}\left[\theta^{m}M_{m}(s)\right]^{\frac{1}{p}}\sum_{l=k}^{n-1}\binom{l-1}{k-1}\left(\theta^{\frac{1}{p^{\prime}}}\right)^{l}\bigg{\}},\]
where we again use our choice of time step \((t-s)\leq\theta/C\) in the last inequality.
Observe that
\[\binom{n-1}{k-1}\theta^{n}\leq 2^{n-1}\theta^{n},\quad\sum_{l=k}^{n-1}\binom{l-1}{k-1}\left(\theta^{\frac{1}{p^{\prime}}}\right)^{l}\leq\frac{1}{(\theta^{-\frac{1}{p^{\prime}}}-1)^{k}}\leq 1\]
when choosing \(\theta^{\frac{1}{p^{\prime}}}\leq 1/2\), so that
\[C^{n-k}\int_{s}^{t}\binom{n-1}{k-1}(n-k)(t-r)^{n-k-1}M_{n}(r)\;\mathrm{d}r+\sum_{l=k}^{n-1}C^{l-k}\binom{l-1}{k-1}(t-s)^{l-k}M_{l}(s)\] \[\quad\leq\theta^{-k}\,\left(L(2\theta)^{n}+L^{\frac{1}{p^{\prime}}}\max_{1\leq m\leq n-1}\left[\theta^{m}M_{m}(s)\right]^{\frac{1}{p}}\right).\]
Combining those bounds, provided that \(\theta^{\frac{1}{p^{\prime}}}\leq 1/2\), we have that, for all \(1\leq k\leq n-1\),
\[M_{k}(t)\leq\theta^{-k}\bigg{\{}(\varepsilon/C)+L(2\theta)^{n}+L^{\frac{1}{p^ {\prime}}}\max_{1\leq m\leq n-1}\left[\theta^{m}M_{m}(s)\right]^{\frac{1}{p}} \bigg{\}}.\]
On the other hand, for \(n\leq k\leq\max(n,\ n^{\prime})\), we simply have \(M_{k}(t)\leq L\). As \(\theta^{-k+n}\geq 1\),
\[M_{k}(t)\leq L\leq\theta^{-k}\bigg{\{}L(2\theta)^{n}\bigg{\}},\]
and we can combine the two cases to obtain that
\[\max_{1\leq k\leq\max(n,\ n^{\prime})}\left[\theta^{k}M_{k}(t)\right]\leq( \varepsilon/C)+L(2\theta)^{n}+L^{\frac{1}{p^{\prime}}}\max_{1\leq k\leq n-1} \left[\theta^{k}M_{k}(s)\right]^{\frac{1}{p}}.\]
If \(t\leq\theta/C\) we are done, but otherwise we need to sum up the various bounds. Denote \(t_{j}=j\,\theta/C\) and iterate the previous estimate over consecutive time steps, which gives
\[\max_{1\leq k\leq\max(n,\ n^{\prime})}\left[\theta^{k}M_{k}(t_{j})\right]\] \[\quad\leq(\varepsilon/C)+L(2\theta)^{n}+L^{\frac{1}{p^{\prime}}}\max_{1\leq k\leq n-1}\left[\theta^{k}M_{k}(t_{j-1})\right]^{\frac{1}{p}}\] \[\quad\leq(\varepsilon/C)+L(2\theta)^{n}+L^{\frac{1}{p^{\prime}}}\bigg{\{}(\varepsilon/C)+L(2\theta)^{n}+L^{\frac{1}{p^{\prime}}}\max_{1\leq k\leq n-1}\left[\theta^{k}M_{k}(t_{j-2})\right]^{\frac{1}{p}}\bigg{\}}^{\frac{1}{p}}\] \[\quad\leq(\varepsilon/C)+L(2\theta)^{n}+L^{\frac{1}{p^{\prime}}}\bigg{\{}(\varepsilon/C)+L(2\theta)^{n}\bigg{\}}^{\frac{1}{p}}+L^{1-\frac{1}{p^{2}}}\max_{1\leq k\leq n-1}\left[\theta^{k}M_{k}(t_{j-2})\right]^{\frac{1}{p^{2}}}\] \[\quad\ldots\] \[\quad\leq\sum_{i=0}^{j-1}L^{1-\frac{1}{p^{i}}}\bigg{\{}(\varepsilon/C)+L(2\theta)^{n}\bigg{\}}^{\frac{1}{p^{i}}}\ +\ L^{1-\frac{1}{p^{j}}}\max_{1\leq k\leq n-1}\left[\theta^{k}M_{k}(0)\right]^{\frac{1}{p^{j}}},\]
where we use that \((a+b)^{1/p}\leq a^{1/p}+b^{1/p}\) by concavity.
For any \(t\geq 0\), we hence have with \(j(t)=\left\lfloor\frac{Ct}{\theta}\right\rfloor+1\),
\[\max_{1\leq k\leq\max(n,\ n^{\prime})}\big{[}\theta^{k}M_{k}(t)\big{]}\leq\sum_{ i=0}^{j(t)-1}L^{1-\frac{1}{p^{i}}}\bigg{\{}\varepsilon/C+L(2\theta)^{n}\bigg{\}}^{ \frac{1}{p^{i}}}\ +\ L^{1-\frac{1}{p^{j(t)}}}\max_{1\leq k\leq n-1}\big{[}\theta^{k}M_{k}(0) \big{]}^{\frac{1}{p^{j(t)}}}. \tag{5.13}\]
Finally, by the assumption that \(\big{[}\varepsilon/CL+(2\theta)^{n}\big{]}\leq 1\),
\[\forall i\leq j,\quad L^{1-\frac{1}{p^{i}}}\bigg{\{}\varepsilon/C+L(2\theta)^{n}\bigg{\}}^{\frac{1}{p^{i}}}=L\bigg{\{}\varepsilon/CL+(2\theta)^{n}\bigg{\}}^{\frac{1}{p^{i}}}\leq L\bigg{\{}\varepsilon/CL+(2\theta)^{n}\bigg{\}}^{\frac{1}{p^{j}}}.\]
Hence we can replace every \(i\) and every \(j(t)\) in (5.13) by \((Ct/\theta+1)\), which gives the looser bound (5.12), restated here
\[\max_{1\leq k\leq\max(n,\ n^{\prime})}\big{[}\theta^{k}M_{k}(t)\big{]}\leq L( Ct/\theta+2)\max\bigg{(}\big{[}\varepsilon/CL+(2\theta)^{n}\big{]},\sup_{1 \leq k\leq n-1}\big{[}\theta^{k}M_{k}(0)\big{]}/L\bigg{)}^{\frac{1}{p^{(Ct/ \theta+1)}}}.\]
|
2309.12624 | Quark: A High-Performance Secure Container Runtime for Serverless
Computing | Secure container runtimes serve as the foundational layer for creating and
running containers, which is the bedrock of emerging computing paradigms like
microservices and serverless computing. Although existing secure container
runtimes indeed enhance security via running containers over a guest kernel and
a Virtual Machine Monitor (VMM or Hypervisor), they incur performance penalties
in critical areas such as networking, container startup, and I/O system calls.
In our practice of operating microservices and serverless computing, we build
a high-performance secure container runtime named Quark. Unlike existing
solutions that rely on traditional VM technologies by importing Linux for the
guest kernel and QEMU for the VMM, we take a different approach to building
Quark from the ground up, paving the way for extreme customization to unlock
high performance. Our development centers on co-designing a custom guest kernel
and a VMM for secure containers. To this end, we build a lightweight guest OS
kernel named QKernel and a specialized VMM named QVisor. The QKernel-QVisor
codesign allows us to deliver three key advancements: high-performance
RDMA-based container networking, fast container startup mode, and efficient
mechanisms for executing I/O syscalls. In our practice with real-world apps
like Redis, Quark cuts down P95 latency by 79.3% and increases throughput by
2.43x compared to Kata. Moreover, Quark container startup achieves 96.5% lower
latency than the cold-start mode while saving 81.3% memory cost to the
keep-warm mode. Quark is open-source with an industry-standard codebase in
Rust. | Chenxingyu Zhao, Yulin Sun, Ying Xiong, Arvind Krishnamurthy | 2023-09-22T05:11:48Z | http://arxiv.org/abs/2309.12624v2 | # Quark: A High-Performance Secure Container Runtime for Serverless Computing
###### Abstract
Secure container runtimes serve as the foundational layer for creating and running containers, which is the bedrock of emerging computing paradigms like microservices and serverless computing. Although existing secure container runtimes indeed enhance security via running containers over a guest kernel and a Virtual Machine Monitor (VMM or Hypervisor), they incur performance penalties in critical areas such as networking, container startup, and I/O system calls.
In our practice of operating microservices and serverless computing, we build a high-performance secure container runtime named Quark. Unlike existing solutions that rely on traditional VM technologies by importing Linux for the guest kernel and QEMU for the VMM, we take a different approach to building Quark from the ground up, paving the way for extreme customization to unlock high performance. Our development centers on co-designing a custom guest kernel and a VMM for secure containers. To this end, we build a lightweight guest OS kernel named QKernel and a specialized VMM named QVisor. The QKernel-QVisor codesign allows us to deliver three key advancements: high-performance RDMA-based container networking, fast container startup mode, and efficient mechanisms for executing I/O syscalls. In our practice with real-world apps like Redis, Quark cuts down P95 latency by 79.3% and increases throughput by 2.43x compared to Kata. Moreover, Quark container startup achieves 96.5% lower latency than the cold-start mode while saving 81.3% memory cost to the keep-warm mode. Quark is open-source with an industry-standard codebase in Rust.
## 1 Introduction
Containerization is the de facto deployment manner for emerging cloud-computing paradigms such as microservices and serverless computing, offering significant benefits in aspects of portability, resource efficiency, and ease of scaling [1, 2, 3, 4, 5, 6, 7, 8]. Under the hood of containerization, container runtime (or container engine) is the critical software component that creates and runs containers over host operating systems.
In cloud providers' practice, secure container runtimes (also known as sandboxed runtimes) are widely used because of enhanced security, such as Google gVisor [9], AWS Firecracker [10], Alibaba runD [11], and Azure Sandboxing [12]. As Figure 1 shows, the key to enhancing security in secure container runtimes lies in running containers on a userspace guest kernel of a lightweight Virtual Machine (VM), rather than directly on the host OS. The lightweight VM is launched by a Virtual Machine Monitor (VMM). Introducing the guest kernel and VMM adds an extra layer of isolation between containers and the host OS, reducing the risk of vulnerabilities (e.g., privilege escalation [10, 13, 14]). In common practice, the guest kernel is based on a full-fledged Linux kernel, and the VMM is based on QEMU/KVM [15, 16], both of which are heavyweight (e.g., QEMU has over a million lines of code) and were previously designed for VMs rather than containers.
Although the involvement of guest kernel and VMM provides necessary security benefits, it introduces significant performance challenges. We indeed use existing secure runtimes like Kata [17] for customers' serverless platforms. However, we've identified the following key areas where performance enhancements are desired to improve the user experience:
\(\bullet\)**High-performance Network:** Microservices and serverless computing rely heavily on networked systems. On the one hand, complex monolithic services are decomposed into multiple containerized microservices connected via container networking. On the other hand, client requests and service responses are also carried over the container networking. Unfortunately, existing secure container runtimes, which use kernel network stacks, often fail to meet high-performance networking requirements. The reason is that container communications might traverse both the guest kernel's and host OS's network stacks, both of which are based on the Linux kernel and have been shown to struggle with achieving high throughput and low latency [18, 19, 20, 21, 22, 23, 24, 25]. As such, secure container runtimes should explore alternative, high-performance network stacks.
\(\bullet\)**Fast Startup:** In line with the microservices and serverless trend, containerized applications generally adopt ephemeral execution: they frequently start up to serve requests and then quickly clean up after responding. The container startup time accounts for a significant portion of the response latency [26, 27, 28], a critical factor for user experience. The cold-start approach results in prohibitively long latency (in the range of hundreds of milliseconds), while the keep-warm method leads to substantial memory overhead during idle time (in practice, thousands of containers run on a single machine [10, 11]). It falls upon secure container runtimes to ensure that containers are booted both rapidly and efficiently in terms of memory cost.
\(\bullet\)**Efficient Syscall:** Popular containerized applications for microservices, such as Redis [29] and Node.js [30], are I/O-intensive and heavily rely on initiating I/O syscalls for disk or network access. Given the involvement of both the guest kernel and VMM, these system calls must traverse multiple layers. Typical secure container runtimes generally rely on the hypercall mechanism to trap into the VMM from the guest kernel. The conventional hypercall approach results in frequent context-switching between the guest kernel and the VMM, thereby leading to significant latency in I/O system calls. Therefore, secure container runtimes play a crucial role in enhancing the efficiency of syscall execution.

Figure 1: Traditional _v.s._ Secure Container Runtimes.
In our practice, we build _Quark_, a high-performance secure container runtime that delivers significant improvements in the three critical areas previously discussed. The core principle behind Quark is the co-design of the guest kernel and the virtual machine monitor, which enables extreme customization to unlock high performance. To achieve such co-design, we build a customized guest kernel named _QKernel_ and a bespoke VMM named _QVisor_. QKernel serves as the user-space guest kernel in a lightweight VM, encompassing subsystems like network stacks, memory management, process management, and the syscall virtualization layer, functionally similar to a typical OS kernel like Linux. QVisor functions as a virtual machine monitor (also known as a hypervisor), launching guest kernel and interacting with the host OS and physical devices on host machines.
In the co-design of QKernel and QVisor, we highlight three critical advancements: 1) For container networking, we present _TCP Socket over RDMA_ (TSoR), which boosts network performance by transferring application TCP traffic over RDMA. Notably, TSoR requires no code modifications for containers that use standard POSIX socket APIs. 2) For container startup, we introduce the _Hibernation mode_, which cuts down the startup latency while efficiently reducing the memory cost during idle times. Hibernation mode is enabled with a customized design for memory management and container snapshots. 3) For container syscalls, we design the _QCall_ mechanism, which speeds up the syscall by mitigating the overhead associated with context switching between the guest kernel and the VMM. Further details are in later sections: TSoR in SS4, Hibernation mode in SS5, and QCall in SS6.
Quark is developed with \(\sim\)135K lines of open-source, industry-standard code in Rust. Quark is compatible with Docker and Kubernetes. In SS8, we evaluate Quark with end-to-end applications like Redis [29], Node.js [30], and Etcd [31] and micro-benchmarks like iperf [32]. Quark reduces the P95 latency of Redis by up to 79.3% and boosts throughput by 2.43x compared to Kata [17], a popular runtime. For micro-benchmarks, Quark increases the iperf throughput by 2.46x and reduces the NPtcp latency by 86.6%. Quark container startup achieves 96.5% lower latency than the cold-start mode while saving 81.3% of the memory cost compared to the keep-warm mode. More compelling results are in SS8.
## 2 Background and Motivation
### Secure Container Runtime
Secure container runtimes enhance the security of traditional container runtimes (e.g., runC [33]) by adding an extra layer of isolation: containers run inside lightweight virtual machines (VMs), which is stronger than traditional containers' process-level isolation. Traditional container runtimes like runC [33] run containers sharing the same host kernel, inherently exposing the host OS to vulnerabilities such as container escape attacks [13, 14]. To fortify the security boundary between the containerized processes and the host kernel, secure container runtimes employ a _sandboxed_ environment for containers. The combination of the guest kernel and the Virtual Machine Monitor (VMM) creates the sandboxed environment. One VMM process runs per guest kernel, while a group (pod) of containers can run over one guest kernel. Secure runtimes intercept the system calls from containerized applications, thereby precluding direct interactions between the containers and the host kernel.
Secure container runtimes have seen widespread deployment [12, 17, 34, 9, 10], with Kata [17] serving as a widely-used example; Kata runs containers within a VM with a Linux-based guest kernel and utilizes QEMU as its default VMM. Although the sandboxed environment composed of the guest kernel and VMM effectively reduces security risks for the host OS, it introduces performance overhead for containers. The overhead is especially notable when the guest kernel is built on a heavyweight Linux kernel, and the VMM is built on QEMU -- components originally designed for full-fledged VM virtualization as opposed to lightweight containerization. Next, we delve into the performance overhead.
### Performance Bottleneck
**Kernel Network Stack**: Currently, most of the secure container runtimes facilitate network connectivity by utilizing existing Container Network Interface (CNI) plugins such as Flannel [35], Weave [36], and Cilium [37]. These CNI solutions build the data path based on the kernel TCP/IP stack. It is widely reported that the Linux kernel's TCP/IP stack incurs performance overhead, posing challenges for achieving high throughput and low latency [18, 19, 20, 21, 22, 23, 24]. In the context of secure container runtimes, the overhead of kernel TCP/IP is particularly significant as network data typically traverses both the guest kernel's and the host kernel's network stacks, thereby amplifying inherent inefficiencies. As a result, high
performance networking solutions such as RDMA emerge as compelling alternatives for circumventing the bottlenecks associated with kernel-based network stacks. Some academic researchers have also begun harnessing RDMA to accelerate serverless computing [38, 39]. As Table 1 shows, Quark achieves lower latency and higher throughput by replacing the kernel TCP/IP stack with an RDMA-based network solution (test setup in SS 8).

Table 1: Overhead of Kernel TCP/IP Stack

| Network Solution | Latency | Throughput |
| --- | --- | --- |
| CNI based on Kernel TCP/IP | 64.09 us | 17.1 Gbps |
| Quark based on RDMA | 8.97 us | 37.4 Gbps |
**Hypercall and Context Switch:** A hypercall serves as a trap mechanism that enables a guest VM to request privileged operations from the VMM. The relationship between a hypercall and a VMM is similar to that between a syscall and an OS kernel. Given that secure container runtimes run containers on top of guest kernels and VMMs, they inherently introduce hypercall mechanisms into the system. However, a hypercall requires costly context-switching between the guest kernel and VMM (e.g., switching registers and protection domains). Such context-switching overhead has been widely reported as a performance bottleneck in prior work [40, 41, 42, 43]. To measure the impact of this overhead on containerized applications, we conducted one micro-test with Redis, which is hypercall-intensive due to frequent I/O operations. Redis Ping invokes sys_read, sys_write, and epoll_wait hypercalls. As Table 2 shows, Redis Ping with a non-optimized hypercall mechanism requires about 11 us of response latency. Specifically, hypercalls such as sys_read incur as much as 2 us latency, significantly influencing both throughput and latency metrics. Here, we present a preview of results showing the efficacy of QCall, an optimized hypercall mechanism that minimizes context-switching overhead (further details and test setup are discussed in later sections) and effectively increases throughput and reduces latency.
**Container Startup:** The response latency for user requests to containerized services generally comprises three parts: container startup, application initialization, and user request processing. To measure the impact of container startup on response latency, we conduct a micro-test using float processing as a containerized service (further details and test setup are discussed in SS8). Table 3 shows two modes of operating containers: Cold Start where containers are initialized only upon request arrival, and Keep Warm where containers are pre-initialized and maintained in a ready state. Our results indicate that container startup can take several hundred milliseconds, significantly affecting overall response latency. However, while mitigating startup latency, the keep-warm approach incurs idle-time memory overhead, constraining the density of deploying containers on the same host. In practical large-scale microservice deployments where a single machine may host thousands of containerized service instances [10, 44], the idle memory overhead can become prohibitively huge. Therefore, there's a pressing need for a container startup mode that simultaneously minimizes both latency and memory overhead.
## 3 Overview of Quark
### Principle: Co-design QKernel and QVisor
In this paper, we present our experience building a high-performance secure container runtime named _Quark_. The key design principle of Quark is to co-design the guest kernel and the virtual machine monitor (VMM). For this purpose, we design a specialized guest kernel, named _QKernel_, along with a bespoke VMM, named _QVisor_. As depicted in Figure 2, QKernel is functionally equivalent to the Linux kernel used in a VM, and QVisor is equivalent to QEMU. Importantly, the ground-up co-design of QKernel and QVisor delivers tremendous opportunities to flexibly address the performance overhead commonly present in existing secure container runtimes. Next, we give an overview of QKernel and QVisor, the two key components of the Quark secure container runtime. In later sections, we present the high-performance mechanisms of Quark enabled by the QVisor-QKernel co-design.
### QVisor: Hypervisor/VMM
QVisor (short for Quark Visor) is a VMM (also known as a hypervisor) that creates and manages lightweight VMs to run
containers. Similar to QEMU, a typical VMM, QVisor allocates and isolates host resources such as CPU, memory, and network interfaces. However, QVisor is more lightweight than QEMU, as it discards many heavyweight but non-essential features for containers, such as device emulation. More importantly, QVisor is co-designed with QKernel, facilitating targeted optimizations in functionalities like hypercalls, network stacks, and container startup, thereby surpassing the capabilities of typical VMMs like QEMU while serving sandboxed containers.

Table 2: Overhead of Hypercall

| Test | Latency | Throughput |
| --- | --- | --- |
| Redis Ping w/ Hypercall | 11.0 \(\mu\)s/req | 90.1K RPS |
| Redis Ping w/ QCall | 5.0 \(\mu\)s/req | 200K RPS |
| Hypercall Overhead (e.g., _sys\_read_) | 2.0 \(\mu\)s/req | - |

Table 3: Overhead of Container Startup

| Startup Mode | Response Latency | Idle Memory Cost |
| --- | --- | --- |
| Cold Start | 563 ms | - |
| Keep Warm | 1.4 ms | 40.82 MB |

Figure 2: Quark secure container runtime _v.s._ Linux VM.
### QKernel: Guest Kernel
QKernel (short for Quark Kernel) functions as the guest OS kernel within a lightweight VM. As depicted in Figure 2, QKernel encompasses multiple subsystems similar to those in the Linux Kernel--these include syscall virtualization layers, memory management, process management, a virtual file system, and a custom network stack: 1) The System Call Virtualization Layer of Quark supports POSIX-compliant syscalls for containers. This ensures compatibility with a broad range of container images without requiring any modifications to user code. Notably, standard TCP POSIX socket APIs such as _connect()_, _accept()_, and _send()_ are fully supported. 2) Under the hood of socket APIs, QKernel features its high-performance network stack, named TCP Socket over RDMA, which is fundamentally different from Linux Kernel's TCP/IP stack. Further details are covered in SS4. 3) The Process Management subsystem sets up vCPUs by leveraging the KVM facilities in the host kernel. It also handles hypercalls interacting with QVisor. Here, we introduce a more efficient hypercall mechanism, named QCall, made possible by the co-design of QKernel and QVisor. Detailed information is provided in SS6. 4) Memory Management involves allocating the memory provisioned by QVisor and handling the address translation between guest containers and host memory. This subsystem plays a pivotal role in optimizing container startup times, as detailed in SS5. 5) The Virtual File System (VFS) is to provide the filesystem interface for containers by handling various types of IO operations. QKernel fully supports access to virtual files such as _/dev_ and _/proc_. For physical files, QKernel supports container access to a host directory tree through file system passthrough.
## 4 Container Network: TSoR
### Rationale of Co-design
The co-design of QKernel and QVisor offers an opportunity to create a highly efficient network stack that spans from the NIC's device layer up to the TCP socket layer. Such co-design is advantageous in two key respects. On the one hand, QKernel handles system call virtualization, including support for POSIX socket APIs, laying the groundwork for transparently boosting network performance without requiring any code changes in the applications. Currently, most containerized applications use standard POSIX sockets for communications, whether they communicate synchronously (e.g., HTTP/gRPC [45]) or asynchronously (e.g., AMQP [46]). On the other hand, QVisor can directly access the physical NIC on host machines and even utilize kernel-bypass techniques such as RDMA. Leveraging these capabilities, we introduce an efficient network solution called _TCP Socket over RDMA_. This solution transparently accelerates applications that use POSIX socket interfaces by taking advantage of RDMA's low-latency, high-bandwidth, and low CPU utilization benefits.
### TCP Socket over RDMA
TSoR (short for TCP Socket over RDMA) is the networking solution of Quark container runtime. Applying the model of the _network-stack-as-a-service_[47, 48, 49], TSoR comprises two essential service/client components shown in Figure 3: 1) TSoR Service is the core component that provides the network connectivity for the container pods. A pod is a group of one or more containers running over the same QKernel with shared resources such as network namespace and IP address. TSoR Service manages RDMA connections and executes data transmission to other machines via RDMA NIC. TSoR Service also talks to the network orchestration control plane, e.g., Kubernetes API Server. 2) Each pod has a TSoR client. After QKernel intercepts containerized applications' POSIX socket API calls, the TSoR client will take over and talk to TSoR Service to set up the connection with peer nodes and transmit data over RDMA. In the following sections, we describe the TSoR client, TSoR service, and network operations they facilitate.
### TSoR Client
The primary role of the TSoR Client is to intercept socket API calls from containerized applications. QKernel implements all system calls required by POSIX socket APIs, including two types: 1) Control primitives such as _connect_, _listen_, _accept_, and _close_ are to set up/tear down the TCP connections; 2) Data primitives such as _read_, _write_, _send_, and _recv_ are used to process data transmission. In SS 4.5, we will describe how
POSIX socket API calls are processed from the end-to-end workflow.
TSoR Client takes over socket system calls and then communicates with TSoR Service using Shared Memory. Each client creates one shared memory region, which consists of two key data structures: 1) _Message Queue Pair_: The message queue pair consists of two shared memory queues: Submission Queue (SQ) and Completion Queue (CQ) 1. TSoR Clients send request messages to TSoR Service through SQ and receive response messages from TSoR Service through CQ. 2) _Shared Buffers_: Shared Buffers consist of read/write data buffers to store data for application send()/receive(). Shared buffers play a similar role as the socket buffers of standard TCP/IP stack. Note that shared buffers between the TSoR Client and TSoR Service also serve as the registered Memory Region (MR) for RDMA.
Footnote 1: Here, SQ and CQ are used for communication between TSoR Client and TSoR Service, which are different from Queue Pairs of RDMA connection.
### TSoR Service
In this section, we introduce the submodules of the TSoR Service, including the Client Manager, Control Plane Agent, and RDMA connection manager. Among them, the RDMA connection manager is the core submodule.
#### 4.4.1 Client Manager
The Client Manager is responsible for managing TSoR clients and facilitates communication with them through shared memory regions. The shared memory region maintains metadata and data structures associated with each TSoR client, which is described in the TSoR Client SS4.3.
#### 4.4.2 Control Plane Agent
TSoR Service needs to get connection-related metadata from the orchestration system control plane, e.g., Kubernetes API Server in the Kubernetes cluster. The connection-related metadata includes active cluster node list, Pod list, and cluster connection permission control policy. Based on the metadata, TSoR Service determines whether it is permitted to and how it sets up virtual TCP connections (which we refer to as an RDMA Data Channel) to map to real TCP connections in TSoR Clients.
#### 4.4.3 RDMA Connection Manager
RDMA Connection Manager handles RDMA Queue Pair (QP) connections, which use Reliable Connection (RC) transport modes [50]. RDMA Connection Manager is responsible for creating and cleaning up QP connections. QP connection is used to transfer TCP traffic over RDMA. There are two main challenges for RDMA Connection Manager design:
**Challenge #1: RDMA connection scalability**. In a typical cluster environment, micro-services within containers could set up a large number of concurrent TCP connections. However, existing RDMA networks fall short on scalability, as is widely reported in [19, 51, 52, 53]. When the number of QP connections is large, the aggregated performance degrades dramatically. The root cause is contention on the RDMA NIC's internal hardware resources, which is beyond the control of the container network. Given the high concurrency of scenarios using TCP connections, it is not practical to build a one-to-one mapping between TCP connections and RDMA QP connections.
**Challenge #2: RDMA connection setup latency**: RDMA connection setup consists of creating resources (e.g., QP), exchanging metadata information, and changing state. Usually, RDMA uses a TCP connection as a communication channel to exchange QP metadata. We measured the latency of creating a QP Connection based on TCP and found the latency is up to several milliseconds. For scenarios with frequently establishing short-lived TCP socket connections, the time cost is significant if every short-lived TCP connection requires creating a separate RDMA connection.
We provide two solutions to tackle the aforementioned challenges together:
**Solution #1: Multiplex Node-level RDMA connection**. To solve the RDMA connection scalability issue, we multiplex a single long-lived RDMA connection for all TCP connections between the same pair of nodes. Thus, the number of RDMA connections is determined by the number of cluster nodes, which is orders of magnitude less than the number of TCP connections. RDMA connection multiplexing enables TSoR to support a large number of concurrent TCP connections. As Figure 3 shows, TSoR introduces the concept of an RDMA Channel, which is mapped to a TCP socket connection. Each end of the RDMA channel has two data ring buffers: a read buffer and a write buffer. The write buffer on one end connects to the read buffer on the other end. Besides RDMA Data channels for data transfer, each pair of nodes maintains one RDMA Control Channel to exchange control messages. For example, nodes use control messages to tell remote peers about the available space of the read buffer for enforcing rate control.

Figure 3: TSoR Architecture
**Solution #2: Pre-established RDMA Connection.** Due to the long latency of RDMA connection setup, TSoR pre-establishes RDMA connections rather than creating them when TCP connections are requested by user applications. When a node joins the cluster, the TSoR Service on the node will start up and establish RDMA Connections to all its peer nodes in the cluster. At the time applications initiate TCP connections, the RDMA data channel can directly use the pre-established RDMA connection. By using the pre-established RDMA connection, TCP connection establishment does not need to pay the time cost for RDMA QP setup. Also, the low latency and reliable data path of RDMA can speed up the handshake process of TCP. In SS4.5.2, we will describe the detailed process of TCP connection establishment.
### TSoR Operation
#### 4.5.1 Data Transmission
Figure 4 shows the workflow of how two containers on different cluster nodes use TSoR to write()/read() data, which has the following main steps:
\(\bullet\)**Step #1**: Containerized application calls \(write\) which invokes the system call \(SysWrite\). The TSoR client intercepts the system call and interacts with the TSoR service. The TSoR Client plays as the producer for the write buffer by copying data from the application into the write buffer.
\(\bullet\)**Step #2**: TSoR clients enqueue one Write Request to the submission queue shared between the TSoR client and TSoR Service. Write Request is a signal to trigger the process of TSoR service.
\(\bullet\)**Step #3**: TSoR service dequeues the Write Request and transmits data to the peer node over RDMA connection if the remote read buffer still has space. TSoR Service plays as a consumer for the local write buffer by transferring data to the remote peer's read buffer using RDMA IB verb.
\(\bullet\)**Step #4**: When TSoR Service is notified upon the arrival of data through RDMA completion event, it enqueues one request into the CQ between the TSoR Service and TSOR client to indicate the read buffer has new arrival data.
\(\bullet\)**Step #5**: When the TSoR client is notified that the read buffer has data arrival, the TSoR client will consume data by copying data from the read buffer to the application buffer. The application finally receives the data from the remote peer.
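To make the producer/consumer roles in the steps above concrete, the following is a minimal, self-contained sketch of the sender-side write path (Steps 1-2). The `WriteRing` and `SqMsg` types and the `sys_write` helper are illustrative assumptions for exposition, not Quark's actual data structures.

```rust
// Simplified sketch of the sender-side write path: the TSoR Client copies
// application data into the shared write ring buffer and enqueues a Write
// Request so the TSoR Service can transfer it toward the peer's read buffer.
struct WriteRing {
    buf: Vec<u8>,
    head: usize, // consumed by the TSoR Service when posting RDMA writes
    tail: usize, // produced by the TSoR Client on SysWrite
}

impl WriteRing {
    fn free_space(&self) -> usize {
        self.buf.len() - (self.tail - self.head)
    }

    /// Copy as much application data as currently fits; returns bytes copied.
    fn produce(&mut self, data: &[u8]) -> usize {
        let n = data.len().min(self.free_space());
        for i in 0..n {
            let idx = (self.tail + i) % self.buf.len();
            self.buf[idx] = data[i];
        }
        self.tail += n;
        n
    }
}

enum SqMsg {
    WriteRequest { channel_id: u32 },
}

/// Steps 1-2: the intercepted SysWrite fills the shared write buffer and
/// signals the TSoR Service through the submission queue.
fn sys_write(ring: &mut WriteRing, sq: &mut Vec<SqMsg>, channel_id: u32, data: &[u8]) -> usize {
    let copied = ring.produce(data);
    if copied > 0 {
        sq.push(SqMsg::WriteRequest { channel_id });
    }
    copied
}
```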
We make several optimizations for the above data transmission workflow:
\(\bullet\)**Optimization #1 Pipeling**: To enhance data transfer efficiency, we implement pipelining in the Producer-Consumer workflow. On the sender side, the TSoR client fills the write buffer as the producer, while the TSoR service reads this buffer and transfers data to the receiver. Conversely, on the receiver side, the TSoR service acts as the producer for the read buffer, and the TSoR client serves as the consumer. On both the sender and receiver sides, the producer and consumer can work pipelining while transferring a stream of data packets. This pipelining increases data transfer throughput for maximizing the utility of high-throughput RDMA connections.
\(\bullet\)**Optimization #2 Signal Coalescing**: To minimize transmission overhead, we introduce _Signal Coalescing_. This approach allows TSoR clients to skip enqueuing the Write Request to notify the TSoR service to process when they find the write buffer already contains data. Correspondingly, the TSoR Service checks the ring buffer after each RDMA operation and continues to send the remaining data without requiring a new Write Request. Via the _Signal Coalescing_, TSoR client and service can save the cost of manipulating the SQ for most cases. SQ is only used to initiate the data transfer.
\(\bullet\)**Optimization #3 Idle Sleep and notification bitmap**: We enable the mechanism of idle sleep for TSoR Service. In Step-3, instead of continuously polling the submission queue, the service uses a hybrid mode that combines busy polling with event notification. In idle periods, the TSoR Service sleeps until awakened by an event notification from a client. After waking up, it reverts to busy polling for a brief period before returning to sleep mode. To manage multiple client requests efficiently, we employ a 2-layer bitmap, enabling rapid identification of the requesting TSoR client, thus further enhancing performance.
\(\bullet\)**Optimization #4 Lazy notification for read buffer available space**: In Step-3, the TSoR service needs to check whether the remote read buffer has available space before sending data over RDMA. If the buffer is full, the service temporarily stops data transfer. When the application consumes data in the read buffer, the available space will increase, and the TSoR service will notify the other side with the new space using the RDMA control channel. However, to avoid performance degradation due to excessive notifications, we set a threshold: notifications are only sent when more than half of the total ring buffer space becomes available.

Figure 4: TCP data transmission and connection setup.
#### 4.5.2 TCP Connection Establishment
In this section, we explain how TSoR clients establish TCP connections. Unlike the typical three-way handshake in standard TCP, TSoR uses a more efficient two-way handshake. This is made possible by leveraging pre-established RDMA connections between nodes. With the lower network latency of RDMA, TSoR can quickly establish TCP connections, thus reducing connection setup time.
Figure 4 shows the process of establishing a TCP connection with a two-way handshake as follows:
\(\bullet\)**Step #1:**_Container A_ initiates a TCP connect request by calling _connect_ which invokes system call _SysConnect_. The TSoR client intercepts _SysConnect_, obtains the destination IP address/port, and then interacts with the TSoR Service via enqueuing one TCP connect request into SQ.
\(\bullet\)**Step #2:** TSoR Service running on _Container A_'s host pops out the requests from SQ and extracts the destination container IP and port. TSoR Service looks up the peer cluster node hosting _Container B_, creates a local RDMA Data Channel, and then sends a connection request using RDMA Control Channel over RDMA connection.
\(\bullet\)**Step #3:**_Container B_ waits on connection request via _accept_ socket API call. After the TSoR Service running on _Container B_'s host receives the connection requests, two tasks are executed to accept a connection request: Firstly, the TSoR service enqueues a Connect Request into the CQ to inform the _Container B_ to establish a new connection. Secondly, the TSoR Service creates an RDMA Data channel and sends the response with read ring buffer information associated with the newly created RDMA Channel back to the peer node.
\(\bullet\)**Step #4:**_Container A_'s TSoR Service receives the response and then enqueue a TCP accept request into CQ to notify _Container A_ that one connection is established and is ready for data transmission.
\(\bullet\)**Step #5:** TSoR Client in _Container A_ pops out the response from CQ and then finishes the connection setup phase. The application is notified to be ready to send data.
## 5 Container Startup: Hibernation Mode
### Rationale of Co-design
The co-design of QKernel and QVisor introduces a highly efficient container startup mode called _Hibernation Mode_. Traditional container startup methods suffer from trade-offs: the cold-start approach incurs high latency, whereas the keep-warm method results in excessive idle-time memory consumption. The Hibernation mode aims to reconcile these drawbacks by preserving essential runtime objects--such as container processes, cgroups, and file system mappings--while deflating the memory footprint of idle containerized applications. Maintaining essential runtime objects in an active state minimizes startup latency while deflating memory during idle periods helps save memory usage. This unique mode is made possible through two critical mechanisms--memory reclaiming and memory swapping--both of which necessitate tight coordination between QKernel and QVisor. For memory reclaiming, QKernel handles the memory allocation, while QVisor hosts the manager for memory reclaiming. For memory swapping, QKernel requires customized page tables, while QVisor employs a dedicated swapping manager. Next, we delve into how such codesign delivers the Hibernate mode.
### Hibernation Mode
The Quark system introduces a hibernation mode that balances rapid container startup with efficient memory utilization. As Figure 5(a) illustrates, Quark containers can exist in one of five operational states: 1) Init: Container runtimes, including QKernel and QVisor, are not yet launched, and containerized applications remain uninitialized. 2) Warm: The container runtime process is active, and the containerized applications have been initialized. Containers in this state are ready to handle incoming user requests. 3) Running: User requests arrive, and containers are busy processing the requests. 4) Hibernation: Containers in hibernation mode are _deflated_ warm containers. Deflation here implies that the application processes are paused, and the associated memory is released back to the host kernel. 5) Wake-up: Containers in hibernation can be _inflated_ by restoring application memory and resuming application processes. Once awakened, these containers are ready to process incoming requests. Unlike containers in the warm state, those in the wake-up state employ on-demand memory swapping (more details in SS5.4).
To enable the transition from warm containers to hibernation mode, the following key steps are involved: 1) Pause application processes of containers and block the threads of container runtime. 2) Reclaim application memory pages and release them back to the host OS, which is described in SS5.3. 3) Create snapshots of the container's committed memory pages by swapping them out to disk storage, which is described in SS5.4.
### Memory Management and Reclamation
In hibernation mode, Quark reclaims unused application memory pages and returns them to the host machine, thereby reducing memory overhead while containers are in hibernation. To accomplish this, Quark employs the system call _madvise()_ with the advice parameter set to _MADV_DONTNEED_. This informs the host kernel that the application does not anticipate using these specific memory pages in the near future, enabling the host machine to reclaim these pages for other processes.
After executing the _madvise()_ call, the affected pages become zero-fill-on-demand pages, which are filled with zeros before being given to a process.
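As a minimal illustration of this reclamation step, the sketch below releases a page-aligned range back to the host with _madvise(MADV_DONTNEED)_ via the libc crate; the helper name `release_to_host` is hypothetical and error handling is simplified.

```rust
// Minimal sketch: return a reclaimed, page-aligned guest memory range to the
// host kernel. `addr`/`len` must describe memory the guest allocator has
// already removed from its free pool.
fn release_to_host(addr: *mut libc::c_void, len: usize) -> std::io::Result<()> {
    // After this call the range becomes zero-fill-on-demand: the host may reuse
    // the physical frames, and the next guest access observes zero pages.
    let ret = unsafe { libc::madvise(addr, len, libc::MADV_DONTNEED) };
    if ret == 0 {
        Ok(())
    } else {
        Err(std::io::Error::last_os_error())
    }
}
```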
To enable the above procedure of reclaiming the memory, the page allocator within the QKernel needs a customized design. Typical memory allocators used by Linux kernels, such as the Slab Allocator [54] and Buddy Allocator [55], maintain the free memory blocks in a free list, which is a linear linked list whose _next_ pointer is kept in the free memory blocks themselves. The free list works well for the host OS kernel. Unfortunately, such a free list design is not suitable for Quark to reclaim the free memory page blocks. This is because when we use _madvise()_ to reclaim free memory page blocks, the pages of the block become zero-filled, the _next_ pointer is cleared, and the free list data structure is broken. Thus, we propose the Bitmap Page Allocator to address this particular issue.
As shown in Figure 5(b), the Bitmap Page Allocator manages fixed-size memory blocks, each consisting of 1024 memory pages that are 4KB in size. The first page in each block serves as the control page, maintaining a two-layer hierarchical bitmap to indicate whether each page is free. The L1 bitmap, with 16 bits, narrows the search range by indexing the L2 bitmap, which has 1024 bits. Each bit of the L2 bitmap indicates the allocation status for one page (except for the control page). Each bit of the L1 bitmap corresponds to 64 bits in the L2 bitmap, marking the presence of at least one free page within the indexed 64 pages. To locate a free page, the allocator first identifies a non-zero bit in the L1 bitmap and then scans the corresponding 64 bits in the L2 to locate the exact free page.
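A minimal sketch of the two-layer bitmap lookup described above is shown below. The `BitmapBlock` struct and its methods are illustrative assumptions for exposition; a set bit means the page (or group) still has free space, and page 0 (the control page) is assumed to be permanently marked as allocated.

```rust
// Two-level bitmap free-page lookup for one 1024-page (4 MB) block.
struct BitmapBlock {
    l1: u16,        // bit i set => group i (64 pages) contains a free page
    l2: [u64; 16],  // bit j of word i set => page (i * 64 + j) is free
}

impl BitmapBlock {
    /// Find and claim one free 4 KB page; returns its index within the block.
    fn alloc_page(&mut self) -> Option<usize> {
        if self.l1 == 0 {
            return None; // the block is exhausted
        }
        let group = self.l1.trailing_zeros() as usize;      // first group with space
        let bit = self.l2[group].trailing_zeros() as usize; // first free page in it
        self.l2[group] &= !(1u64 << bit);                   // mark page allocated
        if self.l2[group] == 0 {
            self.l1 &= !(1u16 << group);                    // group is now full
        }
        Some(group * 64 + bit)
    }

    /// Return a previously allocated page to the block.
    fn free_page(&mut self, idx: usize) {
        let (group, bit) = (idx / 64, idx % 64);
        self.l2[group] |= 1u64 << bit;
        self.l1 |= 1u16 << group;
    }
}
```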
### Snapshot by Memory Swapping
Before transitioning the container into hibernation mode, Quark needs to _snapshot_ application data in memory by swapping the data in memory to the persistent storage, such as local disks, in preparation for future restoration. As shown in Figure 5(c), the Swapping Manager in QVisor is responsible for carrying out the swap-out procedures. Conversely, when a container is woken up, the Swapping Manager executes the corresponding swap-in operations. Next, we detail the swap-out and swap-in mechanisms, respectively.
The swap-out process unfolds as follows: 1) QKernel pauses application threads, allowing for the memory pages used by the application to be swapped out. 2) The Swapping Manager scans the guest application's page tables and marks each anonymous page's entry as _Not-Present_, which will cause a page fault upon future access. The page fault serves as a trigger for a subsequent swap-in operation, the details of which will be discussed later. 3) The memory pages are then written to a swap file stored on disk. 4) Finally, the swapped-out pages are returned to the host OS through a memory reclamation procedure described before.
The swap-in process unfolds as follows: 1) When a hibernated container is reactivated, accessing a swapped-out memory page triggers a page fault, starting the swap-in process. 2) The page fault handler, executing on the vCPU, traps from QKernel to QVisor to read the required memory page from the swap file. 3) The page table entry is then marked as _Present_, preventing further page faults for that particular page. Note that the swap-in process is conducted in an on-demand manner rather than loading all data from disk to memory all at once when the container is woken up.
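The swap-out and swap-in steps above can be summarized with the following conceptual sketch; `Pte` and `SwapManager` are illustrative assumptions, with an in-memory `Vec` standing in for the on-disk swap file.

```rust
use std::collections::HashMap;

struct Pte {
    present: bool,
    swap_offset: Option<u64>, // slot in the swap file when not present
}

struct SwapManager {
    page_table: HashMap<u64, Pte>, // guest virtual page number -> PTE
    swap_file: Vec<[u8; 4096]>,    // stand-in for the swap file on disk
}

impl SwapManager {
    /// Swap-out steps 2-3: remember the swap slot and mark the PTE not-present,
    /// so the next access faults and triggers an on-demand swap-in.
    fn swap_out(&mut self, vpage: u64, data: [u8; 4096]) {
        let off = self.swap_file.len() as u64;
        self.swap_file.push(data);
        self.page_table.insert(vpage, Pte { present: false, swap_offset: Some(off) });
        // Step 4: the physical page would now be released via the reclamation path.
    }

    /// Swap-in: called from the page-fault handler for a not-present page.
    fn swap_in(&mut self, vpage: u64) -> Option<[u8; 4096]> {
        let pte = self.page_table.get_mut(&vpage)?;
        let data = self.swap_file[pte.swap_offset? as usize];
        pte.present = true; // no further faults for this page
        pte.swap_offset = None;
        Some(data)
    }
}
```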
## 6 Container I/O Syscall: QCall
### Rationale of Co-design
The co-design of QKernel and QVisor unlocks an opportunity to implement a hypercall mechanism from the ground up, addressing the traditional overhead associated with context switches between the guest kernel and the VMM. To this end, we introduce _QCall_, which allows the QKernel to request privileged operations from QVisor. QCall retains the functionalities of traditional hypercalls while minimizing the overhead of context-switching.
### QCall
QCall (short for Quark Call) serves as a mechanism that allows the QKernel to execute privileged operations via QVisor. Diverging from traditional hypercall mechanisms, which typically employ a synchronous workflow by trapping into the VMM, QCall opts for an asynchronous approach. As Figure 6 shows, QCall is facilitated through two core components: 1) _QLib_ is a library that establishes shared queues between QKernel and QVisor via memory sharing. These queues accept jobs submitted by the vCPU threads running in QKernel. 2) _QCall Handling Thread_, a dedicated thread waiting in QVisor, is responsible for dequeuing jobs from QLib and executing the corresponding requests.

Figure 5: Quark enables Hibernate Mode via customizing the memory management and swapping.
The workflow for virtual threads requesting privileged operations unfolds as follows: Virtual threads in QKernel submit a job into QLib, subsequently entering a blocked state as they await job completion. Importantly, the QKernel then swaps out these blocked threads from the vCPU, allowing the vCPU to execute other runnable threads--ensuring the vCPU itself is never blocked. The QCall handling thread will dequeue and execute the job from QLib. Upon completion, QVisor notifies QKernel, transitioning the waiting threads to a _ready-to-execute_ state. The vCPU will then execute these threads. Note that QLib does not adopt a completion-queue design, which avoids the cost of virtual threads polling for a completion signal. Once the job is completed, the thread is transitioned to a ready-to-execute state and is executed by the vCPU as scheduling allows.
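A minimal sketch of this submit-and-block flow is given below. It is an assumption for exposition only: `std::sync::mpsc` channels stand in for the real shared-memory queues between QKernel and QVisor, and the `QCall` variants are hypothetical.

```rust
use std::sync::mpsc;
use std::thread;

enum QCall {
    OpenFile { path: String },
    EpollCreate,
}

struct Job {
    call: QCall,
    done: mpsc::Sender<i64>, // completion notification back to the virtual thread
}

fn main() {
    let (submit_tx, submit_rx) = mpsc::channel::<Job>();

    // QVisor side: the QCall handling thread drains the queue and executes
    // privileged operations on behalf of the guest.
    let handler = thread::spawn(move || {
        for job in submit_rx {
            let ret = match job.call {
                QCall::OpenFile { path } => {
                    println!("host opens {path}");
                    3 // pretend file descriptor
                }
                QCall::EpollCreate => 4,
            };
            let _ = job.done.send(ret); // wake the waiting virtual thread
        }
    });

    // QKernel side: a virtual thread submits a job and waits for completion;
    // in Quark the vCPU would switch to another runnable thread instead of blocking.
    let (done_tx, done_rx) = mpsc::channel();
    submit_tx
        .send(Job { call: QCall::OpenFile { path: "/etc/hosts".into() }, done: done_tx })
        .unwrap();
    let fd = done_rx.recv().unwrap();
    println!("QCall returned fd {fd}");

    drop(submit_tx);
    handler.join().unwrap();
}
```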
One of the advantages of QCall is that the vCPU never needs to trap from QKernel's context into QVisor's context: virtual threads invoke privileged operations by submitting jobs rather than by trapping into QVisor. This effectively removes the context-switching overhead associated with traditional hypercalls, where a trap into the VMM is required, followed by a context switch back to the guest kernel once execution completes. Furthermore, the asynchronous workflow allows the vCPU to execute other threads instead of blocking or busy polling.
### Host IO Operations via io_uring
For host I/O operations like file read/write, QKernel leverages io_uring [56] to directly interact with the host kernel, bypassing the QCall mechanism to further accelerate I/O performance. io_uring is an advanced asynchronous I/O interface in the Linux kernel that is designed to reduce overhead by enabling batched and asynchronous I/O requests. It employs a set of lock-free ring buffers that are shared between user and kernel spaces to handle the submission and completion of I/O tasks. This efficiency enables Quark to swiftly execute I/O operations. Importantly, while Quark uses io_uring for data access operations to enhance performance, it maintains control operations such as file opening through QVisor to enhance security. For example, opening a file generally involves setting permission controls, such as readability and writability. Also, control operations are generally less amenable to batching compared with data operations, so performance improvements from io_uring are limited for control operations.
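As an illustration of the data-path side only, here is a minimal, self-contained use of the standard liburing user-space API for a single read. It shows the submit/complete ring pattern described above but is not Quark's internal code, and the file path is arbitrary. Build with `-luring`.

```c
/* Minimal liburing read: prepare an SQE, submit, wait for the CQE. */
#include <fcntl.h>
#include <liburing.h>
#include <stdio.h>

int main(void) {
    struct io_uring ring;
    io_uring_queue_init(8, &ring, 0);            /* SQ/CQ rings shared with the kernel */

    int fd = open("/etc/hostname", O_RDONLY);
    if (fd < 0)
        return 1;
    char buf[256] = {0};

    struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
    io_uring_prep_read(sqe, fd, buf, sizeof(buf) - 1, 0);
    io_uring_submit(&ring);                      /* several SQEs could be batched here */

    struct io_uring_cqe *cqe;
    io_uring_wait_cqe(&ring, &cqe);              /* completion arrives asynchronously  */
    printf("read %d bytes: %s", cqe->res, buf);
    io_uring_cqe_seen(&ring, cqe);

    io_uring_queue_exit(&ring);
    return 0;
}
```

Several submission entries can be queued before a single `io_uring_submit`, which is where the batching benefit mentioned above comes from.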
## 7 Implementation
We developed Quark from the ground up using the Rust programming language, which features memory safety and high performance. Table 4 provides a detailed breakdown of the Lines of Code (LOC) across Quark's core subsystems. Notably, we have engineered the TSoR module as a separate component to facilitate sharing TSoR service among multiple sandboxes. Quark is available as an open-source project.
In the development of Quark, we place a strong emphasis on compatibility to seamlessly support container images without requiring any modifications to user code, recompilations, or dynamic pre-loading, which could introduce security concerns and additional burdens for our users in practical deployments. For container images targeting Linux, QKernel supports about 150 system calls. Notably, Quark fully supports TCP socket system calls while transparently improving the network performance with RDMA.
Quark is Kubernetes-ready and Docker-ready. Our implementation aims to be fully compatible with existing orchestration systems like Kubernetes and Docker. This ensures that Quark can be seamlessly integrated into existing Kubernetes clusters and take advantage of various tools within the Kubernetes ecosystem. To achieve Kubernetes compatibility, Quark adheres to the Container Runtime Interface (CRI)
\begin{table}
\begin{tabular}{c|c|c}
**Component** & **Main Submodule** & \(\sim\)**LOC** \\ \hline
\multirow{6}{*}{QKernel} & Syscall Layer & 14.6 K \\ \cline{2-3}
 & Memory Manager & 5.7 K \\ \cline{2-3}
 & File System & 34 K \\ \cline{2-3}
 & Thread Manager & 9.2 K \\ \cline{2-3}
 & Kernel Utility (e.g., timer) & 14.2 K \\ \cline{2-3}
 & Network Socket & 13.2 K \\ \hline
QVisor & Runtime & 28.4 K \\ \hline
\multirow{2}{*}{TSoR} & TSoR Service & 11.2 K \\ \cline{2-3}
 & TSoR Client & 5 K \\ \hline
\multicolumn{2}{c|}{In Total} & 135.5 K \\ \end{tabular}
\end{table}
Table 4: Quark Codebase
Figure 6: QCall mechanism
specification [57], the API set that Kubernetes defines for interaction with container runtimes. For Docker compatibility, Quark complies with the Open Container Initiative's (OCI) Runtime Specification [58]. Furthermore, our TSoR module is fully aligned with the Kubernetes Networking Model, allowing it to be orchestrated by networking control planes such as the Kubernetes API Server.
## 8 Evaluation
### Setup
**Testbed**. Our testbed comprises an RDMA-capable cluster with x86 servers connected to a 100Gbps Arista 716032-CQ switch. Each server is equipped with two Intel Xeon Gold 5218 processors, 96GB memory, and a 100Gbps dual-port Nvidia ConnectX-6 Dx NIC. We run Ubuntu 20.04 with kernel 5.15 and RoCEv2 for RDMA. We use Kubernetes v1.21 and Docker v20.10.
**Comparison Baselines**. We choose two common container runtimes: 1) _runC_[33] is the default runtime for Docker and represents a widely-adopted, non-sandboxed container runtime. By directly operating as a host OS process without involving a guest kernel and VMM, runC generally outperforms VM-based secure container runtimes, making it a strong baseline for performance comparisons. For container networking, we configure runC with Flannel [35], one widely-used Container Networking Interface (CNI) plugin. We use runC version 1.1.4 for experiments. 2) _Kata_[17] is a secure container runtime that runs containers inside a lightweight virtual machine with its own Linux guest kernel and QEMU as the VMM. Like runC, we configure Kata with Flannel to enable container networking. We use Kata version 1.13.
### End-to-end Application IO Performance
#### 8.2.1 Redis
Redis [29] is an in-memory key-value data store. We use the official Redis docker image (v7.0.5) and the built-in Redis-benchmark utility to collect the metrics of latency and throughput while running with different runtimes. We set up the server and client in different containers deployed on different hosts. We select SET, GET, and INCR operations where INCR means incrementing the number stored at _key_ by one.
Figure 7(a) presents the latency for executing SET, GET, and INCR operations with the default data size of 3 bytes. To focus on measuring I/O performance and avoid internal queuing within the application, we set up a single connection for this experiment. For SET, Quark's P95 latency is only 23 us, 79.3% lower than Kata's. For all test operations, Quark achieves significantly lower latency than the others. This is primarily attributable to Quark's RDMA-based TSoR networking and its efficient QCall mechanism for I/O operations.
Figure 7(b) presents the throughput comparisons with the metric of Requests Per Second (RPS) for SET, GET, and INCR operations. TSoR achieves higher throughput than the other solutions for all three operations. For the SET operation, Quark achieves about 42K RPS, 4.1x higher than Kata, which highlights the effectiveness of both the TSoR and QCall mechanisms.
Figure 7(c) shows the throughput comparisons while increasing the number of connections. Quark scales almost linearly as the number of connections grows. For the GET operation, TSoR achieves 42K RPS for a single connection and 117K RPS for five connections, which is 2.43x higher than Kata. When scaling to more connections, performance is bottlenecked by queue buildup within Redis rather than by the network.
#### 8.2.2 Node.js
Node.js [30] is a software platform for server-side networking applications that can act as a web server with support of HTTP and socket. We use node.js image to set up a web server and then use Apache HTTP server benchmarking tool (ab) [59] to generate requests from the client side. The server and client run on different host machines.
Figure 8(a) shows the response latency while varying the document length returned by the Node.js web server. Quark completes the request faster than others, especially when transferring documents with large sizes. For transferring a \(64KB\)
Figure 7: Redis Performance. Quark achieves lower response latency, higher throughput, and better scalability.
document, TSoR achieves 67.8% lower latency than Kata because the RDMA-based data path provides higher throughput with lower stack overhead.
Figure 8(b) shows the throughput for transferring documents of different sizes. For a \(64KB\) document, TSoR achieves 2.57x higher throughput than runC, the performance upper bound for the other solutions.
#### 8.2.3 Etcd
Etcd [31] is a distributed key-value store that uses Raft to achieve strong consensus. We use the etcd image (v3.0.0) to set up a server and run the official benchmarking tool with ten connections. The server and client run on different host machines. We select four common test cases for etcd: a _range_ request gets multiple keys; _run-put_ writes a single key within a transaction (txn); _stm_ is the implementation of software transactional memory; a _watch-get_ request tells etcd to notify the requester of gets to any provided keys.
Figure 8(c) shows the throughput as Requests Per Second (RPS) while executing the different benchmarking tests. To unify the scale of the different test cases, we normalize the RPS of the other baselines to TSoR's performance numbers. For these common test cases, TSoR achieves higher RPS than the other baselines. For example, for the typical _run-put_ test case, TSoR's RPS is 1.49x higher than runC's.
### Network Microbenchmark
**Throughput**: In Figure 9(a), we present a comparison of throughput performance for a single TCP connection between paired containers across hosts. We use iPerf3 to generate TCP traffic while varying the message size. We focus on the throughput performance for transferring messages with a relatively large size (larger than \(4KB\)) because, in practice, most throughput-hungry scenarios such as file transfers use relatively large messages. Varying the message size from \(4KB\) to \(256KB\), Quark consistently outperforms the other setups. The advantage of Quark's TSoR becomes even more pronounced as the message size increases, because TSoR's RDMA-based data path allows for greater maximum throughput than alternatives constrained by the kernel's TCP/IP stack. For a typical message size of \(64KB\), TSoR achieves 34.3 Gbps throughput, which is even higher than runC, a non-sandboxed runtime that operates without the overhead of a guest kernel and VMM. It is worth noting that Quark, while employing a sandboxed model, offers stronger security isolation than runC. Quark increases the iperf throughput by 2.46x for the \(256KB\) message size compared to Kata.
**Latency**: Figure 9(b) shows latency comparison for a cross-host TCP connection. We use Nptcp to measure the average latency for executing 1000 times message transferring with varying the message size. We focus on the latency performance for a small message which is common in typical microservices using RPCs. TSoR achieves constantly lower latency for small messages. For 64-byte messages, TSoR achieves 9.3 us latency which is 70.9% lower than runC and 86.8% lower than kata. Due to removing lots of layers of security and virtualization, runC generally plays as a performance ceiling for host OS TCP/IP stack-based container network solutions. By leveraging an RDMA-based data path, TSoR avoids the limitations imposed by the Linux kernel's TCP stack, achieving superior latency performance in comparison to runC.
**TCP Connection Establishment Time**. Table 5 shows the TCP connection establishment time.
Figure 8: Node.js and etcd performance. Quark achieves lower response latency and higher throughput.
Figure 9: TCP connection throughput and latency between a pair of containers across hosts with _iperf3_.
Using TSoR, the establishment of a TCP connection takes a much shorter time: TSoR needs less than half of the establishment time of runC. As described before, although TSoR requires one RTT between server and client, similar to the handshake of standard TCP, RDMA has a much lower RTT latency, so TSoR benefits from RDMA to establish connections faster. We also notice that TSoR's establishment time has much less variation than the others, because RDMA's low-latency performance is more stable than that of the host OS TCP/IP stack. As the experiment shows, TSoR can efficiently handle connection establishment for many short-lived TCP connections, which is challenging for much prior work [23, 60]. Short-lived TCP connections are a common scenario, particularly aligned with the short-term execution model of microservices and serverless computing.
### _Container Startup Performance_
**Request Response Latency**: Figure 10 shows the response latency for different modes of container startup upon the arrival of user requests. In this experiment, we set up four containerized services: Float Processing, Image Processing, Python Helloworld, and Golang Helloworld. All services are triggered via HTTP requests from the client. For _Cold_ mode, the container startup process includes runtime setup, application initialization, and application processing. For _Warm_ mode, containers are alive before requests arrive. For _Hibernation_ mode, containers are in hibernation, the customized mode enabled by Quark. We measure the end-to-end latency from the user side. From the results, we can see that the hibernation mode significantly reduces the response latency compared to the cold mode. Taking Float-op as an example, the hibernation mode's latency is 19.9 ms, 96.5% lower than the cold mode's. Also, the hibernation mode achieves a latency comparable to the warm mode. Note that the y-axis is log-scale. Additionally, these applications demonstrate that Quark can benefit different language runtimes such as Python, Golang, and Java.
**Idle Memory Cost:** Figure 11 shows the idle memory usage across the different container startup modes, using the same applications as in the response latency experiment. Memory consumption is measured using the Linux utility _pmap_ to report the Proportional Set Size (PSS). In our test, we report the PSS data with ten concurrently running containers. In the cold mode, containers are terminated after request processing, resulting in zero idle memory cost. The _wake-up_ mode refers to containers resuming from hibernation by restoring application memory and processes. Unlike the warm mode, the wake-up mode employs on-demand memory loading when swapping in application memory from disk. Our results indicate that hibernation mode significantly reduces idle memory consumption compared to the warm mode. For instance, hibernated containers consume 81.3% less memory than in the warm mode. Even during active processing, the wake-up mode maintains a lower memory footprint thanks to its on-demand memory loading technique.
### _Sandbox Memory Overhead_
In this experiment, we evaluate the memory overhead associated with the sandbox, which includes both the guest kernel and VMM. We initiate a container using the commonly used _busybox_ image on both Quark and Kata, then measure the memory consumption of the guest kernel and VMM processes. It's worth noting that runC, being a non-sandboxed container runtime, incurs no such memory overhead. As Table 6 shows, Quark's sandbox consumes considerably less memory compared to Kata's. This is primarily due to QKernel and QVisor being more lightweight than the Linux kernel and QEMU used by Kata, respectively.
## 9 Conclusion
In this work, we present Quark, a high-performance secure container runtime. We built it from scratch, combining a guest VM kernel called QKernel and a VMM called QVisor. The QKernel-QVisor co-design brings three major improvements: the performant networking solution TSoR, fast container startup using Hibernation mode, and the efficient syscall mechanism QCall. Comprehensive evaluations show that Quark delivers high performance. Its industry-standard, open-source codebase has great potential for prototyping future innovation.
\begin{table}
\begin{tabular}{c|c|c|c} Test & **Quark** & **runC** & **Kata** \\ \hline busybox memory usage (MB) & 11.8 & N/A & 184.3 \\ \end{tabular}
\end{table}
Table 6: Memory Overhead of Sandboxes.
Figure 11: Idle Memory Cost for Container Start Modes.
Figure 10: Response Latency for HTTP Requests. |
2309.09562 | Training Students' Abstraction Skills Around a CAFÉ 2.0 | Shaping first year students' mind to help them master abstraction skills is
as crucial as it is challenging. Although abstraction is a key competence in
problem-solving (in particular in STEM disciplines), students are often found
to rush that process because they find it hard and do not get any direct
outcome out of it. They prefer to invest their efforts directly in a concrete
ground, rather than using abstraction to create a solution.
To overcome that situation, in the context of our CS1 course, we implemented
a tool called CAFÉ 2.0. It allows students to actively and regularly practice
(thanks to a longitudinal activity) their abstraction skills through a
graphical programming methodology. Moreover, further than reviewing students'
final implementation, CAFÉ 2.0 produces a personalized feedback on how
students modeled their solution, and on how consistent it is with their final
code. This paper describes CAFÉ 2.0 in a general setting and also provides a
concrete example in our CS1 course context. This paper also assesses students'
interaction with CAFÉ 2.0 through perception and participation data. Finally,
we explain how CAFÉ 2.0 could be extended in another context than a CS1 course. | Géraldine Brieven, Lev Malcev, Benoit Donnet | 2023-09-18T08:16:22Z | http://arxiv.org/abs/2309.09562v1 | # Training Students' Abstraction Skills Around a Caffe 2.0
Géraldine Brieven, Lev Malcev, Benoit Donnet
Université de Liège, Institut Montefiore, Belgium
###### Abstract
Shaping first year students' mind to help them master abstraction skills is as crucial as it is challenging. Although abstraction is a key competence in problem-solving (in particular in STEM disciplines), students are often found to rush that process because they find it hard and do not get any direct outcome out of it. They prefer to invest their efforts directly in a concrete ground, rather than using abstraction to create a solution.
To overcome that situation, in the context of our CS1 course, we implemented a tool called Cafe 2.0. It allows students to actively and regularly practice (thanks to a longitudinal activity) their abstraction skills through a graphical programming methodology. Moreover, further than reviewing students' final implementation, Cafe 2.0 produces personalized feedback on how students modeled their solution, and on how consistent it is with their final code. This paper describes Cafe 2.0 in a general setting and also provides a concrete example in our CS1 course context. This paper also assesses students' interaction with Cafe 2.0 through perception and participation data. Finally, we explain how Cafe 2.0 could be extended in another context than a CS1 course.
Cafe 2.0, Computer-Assisted Learning, Automatic Feedback, Abstraction, Graphical Reasoning, CS1, Programming Challenge, Programming Environment
## I Introduction
For a first-year student, entering Higher Education means stepping into a completely different world compared to Secondary School. First-year students are expected to digest the topics they are taught on their own, with very limited guidance from the supervisors. On the one hand, students should become autonomous as it is part of the abilities they should develop and, on the other hand, supervisors often cannot dedicate enough time to each student individually, since they belong to a large and heterogeneous group. Moreover, the subjects students have to integrate are often larger and more complex, making the transition from Secondary School to Higher Education even steeper. This situation contributes to a high failure rate, as well as a high withdrawal ratio throughout the year [1], in particular in CS1 classes [2, 3, 4]. This is especially true in our country, where open access to Higher Education is the rule (with some exceptions in the Medicine and Engineering Faculties). The consequence is that we cannot make any kind of assumptions about a first-year student's background. This can be a significant drawback in some areas, like Computer Science [5], that strongly rely on Mathematical skills. Shaky foundations in Mathematics lead to poor abstraction capacities as well as a lack of rigor in problem solving, while abstraction and problem solving represent the core of the Computer Science curriculum. However, many students are not fully aware of that until they take the first evaluation, which reveals where they stand with respect to the curriculum requirements. In many cases, that checkpoint already comes too late in the semester: students feel demotivated and cannot make up the time they have lost as new topics keep being taught.
In that teaching context, one of our goals in our Introduction to Programming class (usually referred to as "CS1") is to keep students on board. To do so, regular activities are organised [6] to give them the opportunity to learn actively and to receive feedback. Most of their productions mobilize abstraction skills since any Computer Scientist should demonstrate such abilities [7, 8]. More generally, abstraction skills are involved in all STEM (Science, Technology, Engineering, and Mathematics) disciplines because they support problem and solution modeling, whatever the nature of the problem [9]. Thanks to abstraction, a whole class of equivalent problems (where only input parameters vary) can be addressed, rather than only some specific instances. In our course, students apply abstraction through the Graphical Loop Invariant Based Programming (Glibp) [10], consisting in representing and manipulating a drawing reflecting a solution supported by an iterative process (i.e., a loop in a piece of code). That context and those goals are illustrated in the rectangle labeled "Objectives" in Fig. 1.
Encompassing that rectangle, Fig. 1 expresses how abstraction can be taught with respect to those objectives. In practice, the only way to make a large group of students active and
Figure 1: Motivation and context around Cafe 2.0 and our CS1 course.
provide them with correct and personalized feedback is to go through a remote activity supported by an online system. We implemented such a learning tool, called Cafe 2.0 (standing for "Correction Automatique et Feedback des Etudiants"). It is currently applied only in the context of our CS1 class, in which students are exposed to C programming language concepts and a graphical programming methodology (Glibp). Cafe 2.0 automatically assesses students' programming exercises and provides students with high-quality feedback and feedforward information (i.e., what they should do to improve their solution). One key point of Cafe 2.0 is that it does not only focus on the program output but also on the cognitive abstraction process inherent to the program construction, which differs from a large majority of learning tools restricted to automatic code simulation and assessment [11, 12, 13, 14, 15, 16, 17, 18, 19]. That differentiation gives Cafe 2.0 the potential to be transposed to other (STEM) disciplines that, similarly to our course, would rely on a sequential resolution process founded on a scheme constructed upstream.
To make students use a learning tool, it needs to be integrated into the course via activities, as shown in the outer rectangle labeled "Activities" in Fig. 1. In our course, the _Programming Challenge Activity_ (Pca) [20] was created. It spans the four-month semester by regularly addressing statements students should solve by submitting solutions as many times as they want. Furthermore, for each submission, students receive personalized feedback (especially detailed feedback about the modeling of their solution through a drawing) and feedforward. In addition to that activity, we aim to offer students another kind of opportunity to train abstraction skills. To meet that purpose, the GameCodes [21] are currently being implemented. Contrary to the Pca, the GameCodes give students more freedom and guidance across their resolution.
In this paper, we carefully depict Cafe 2.0's features, defining them from both a generic and a specific perspective. Doing so, we highlight the interest and potential of Cafe 2.0 beyond the scope of our CS1 class. Then, this paper discusses two remote activities: the Pca (implemented over Cafe 2.0) and the GameCodes (under implementation). After that, we report how students received the Pca and Cafe 2.0 during our course organized in Academic Year 2022-2023. Finally, this paper explains how Cafe 2.0 could be extended to support new problem profiles (from other STEM fields). In particular, we define the checklist a resolution flow should follow to get the best from Cafe 2.0.
The remainder of this paper is organized as follows: Sec. II depicts Cafe 2.0, while Sec. III discusses activities in Cafe 2.0; Sec. IV presents perception and participation data we collected; Sec. V explains how Cafe 2.0 should be extended to support additional disciplines; Sec. VI positions this paper with respect to the state of the art; finally, Sec. VII concludes this paper by summarizing its main achievements.
## II Cafe 2.0
Cafe 2.0 has been implemented as a new version of Cafe 1.0. Initially, Cafe 1.0 emerged as a set of Python scripts integrated into a submission platform [22]. As shown in Fig. 2, to interact with Cafe 1.0, students need to solve some statement, format their solution in a text file with predefined placeholders, and then upload it on a submission platform. Each submission is instantaneously processed by Cafe 1.0, which computes the grades, highlights what should be adapted in the current submission (through the feedback), and provides pointers to the theoretical courses (through the feedforward). In this way, students get the opportunity to realize their misunderstandings and improve their subsequent submissions. Fig. 2 also highlights that, as supervisors, in addition to being timesaving and scalable, such a system allows us to keep track of students' behavior by collecting basic data related to their activity and performance.
However, in practice, that initial version has several drawbacks: (\(i\)) the absence of interactivity (since Cafe 1.0 has no interface), (\(ii\)) the lack of guidance across the resolution (since it has to be handled beforehand) [23], and (\(iii\)) the limited learning analytics collected, which prevents a student from tracking their progress and learning journey. Likely, those limitations were repelling some students from taking the full benefits of that learning approach. That is what leveraged the development of Cafe 2.0, turning Cafe 1.0 into a fully online platform and expanding it in order to make students' online-learning journey richer and more natural. The different upgrades from Cafe 1.0 are detailed in Sec. II-A.
### Cafe 2.0 Overview
In this subsection, Cafe 2.0 is presented from two perspectives : a generic one (providing a high level overview of Cafe 2.0, so that the reader can project it more easily in their own discipline ground) and a specific one (allowing to embody that generic view).
From a generic point of view, Fig. 3a illustrates the different functionalities Cafe 2.0 currently offers. Those functionalities target two types of end users: the students and the supervisors. Considering the students, they can interact with three different modules: (\(i\)) the _Activity Resolution_, where students can pick some statement, solve it, and submit it in order to receive personalized feedback and feedforward; (\(ii\)) the _Drawing Editor_, equipping students with graphical components they can drag and drop in order to design some solution according to an outline shown and detailed during the course; (\(iii\)) the _Progress Tracker_, whose goal is to depict where students stand in their learning journey, with respect to their activity and performance on the tool. Besides this, a supervisor interacts with three modules that
Figure 2: Illustration on how students interact with Cafe 1.0.
echo the ones intended for the students : (\(i\)) the _Statement Encoding_, through which a supervisor can define a new statement and parametrize its automatic assessment, feedback, and feedforward, in the context of an activity ; (\(ii\)) the _Drawing Editor_, that may be used to define a new statement, if the supervisor wants to include some schemes to be completed ; (\(iii\)) the _Learning Analytics Dashboard_, illustrating the students' learning behavior, based on the collected learning analytics. Such a feature will automatically distill students' performance thanks to the personalized feedback that is constructed, which also allows us to identify precisely the points of misunderstood material.
In Fig. 3a, the modules that are usable via an interface (i.e., the frontend) are comprised in the central rectangle ("Interface"). Then, the backend is represented on the right, where the main data that needs to be stored to support those modules is represented through cylinders. Finally, the backend also includes the _Correction and Feedback_ functionality responsible for handling a student's submission, knowing to which statement it is supposed to respond and relying on a misconception library where typical mistakes made by students have been stored.
Next, from a particular point of view, Fig. 3b refreshes Fig. 3a by specifying the different modules that have just been exposed in the context of our CS1 course. More precisely, Fig. 3b focuses on the modules that are currently implemented (colored in Fig. 3a) and numbers the flow that must be followed by a statement, from its definition to its resolution (which may rely on several improvement iterations). Considering the different modules, it can be noticed that the generic Drawing Editor defined previously has been instantiated as Glide. It provides pre-defined patterns and tutorials, specific to the programming methodology being taught in our course [10]. Next, the activity that is proposed as a concrete opportunity to practice is the _Programming Challenge Activity_ (Pca) [20]. It mainly consists in addressing statements to the students whose expected resolution notably relies on an interactive blank outline (referred to as the Blank Graphical Loop Invariant) to fill in, as shown on the right of Fig. 3b. All those modules implemented in the context of our CS1 course are detailed in the next subsections.
### _Activity Resolution Module_
Similarly to Cafe 2.0's overview, the _Activity Resolution_ is presented at two levels: a generic one and a specific one. First, Fig. 4a shows that any resolution supported by Cafe 2.0 should be composed of an _abstraction phase_ followed by a _concrete phase_. Then, each phase can include one or more sequential productions [24], one production being illustrated as a rounded rectangle. Additionally, some productions may also overlap those two phases in order to bridge them based on specific configurations of the solution. Finally, some locks can be defined between the productions, as represented through the diamond "edited". In Fig. 4a, it comes just after the "Main Representation", meaning that students need to first work on it before deriving some specific states and
Fig. 3: High level overview of Cafe 2.0 infrastructure.
developing their solution. With regard to that, Fig. 4b depicts that resolution process in the context of our CS1 course. It results in five productions in our case. More concretely, those are also shown in Fig. 5. On that figure, the four tabs at the top support the abstraction of the solution via the Blank Graphical Loop Invariant (described in Sec. II-D) and its transposition into concrete states thanks to movable bars. In particular: (\(i\)) "GLI" consists in filling in the Blank Graphical Loop Invariant, turning it into the student's own Graphical Loop Invariant; (\(ii\)) "Initial Representation" requires students to graphically manipulate their Graphical Loop Invariant to reflect the initial configuration of their solution. In this way, they can derive how the variables supporting their solution should be declared and initialized in their code; (\(iii\)) "Final Representation" corresponds to the graphical manipulation of the Graphical Loop Invariant to illustrate the final solution so that students can deduce under which condition their loop stops; (\(iv\)) "Loop Variant Function" is for proposing a function that gives the number of elements that still need to be processed in order to get the final solution. All those tabs are supposed to be completed in sequence. Students also have access to the code editor (bottom part of Fig. 5, labelled as "Code Editor"). The
Figure 4: Resolution module of Caffe 2.0.
Figure 5: Blank Graphical Loop Invariant in the Pca. It also shows how our tool follows the Glibp methodology with tabs, one for each step of the resolution process.
code may be pre-filled with a template [25] students must edit with their code. Students have also access to the "playground" mode in which they can compile and test their pieces of code. Once students are ready, they can submit their whole solution.
At that point, two benefits of decomposing students' resolution into pre-defined productions can be highlighted. First, it allows Cafe 2.0 to pave students' resolution with respect to a given methodology. Next, it frames their solution, making automatic correction and personalized feedback feasible, mainly thanks to the Blank Graphical Loop Invariant.
### _Correction and Feedback Module_
Fig. 6 presents how a given student's solution gets instantaneously assessed and commented, based on a predefined misconception library. Like previously, that process is represented in a general way (through Fig. 6a) while Fig. 6b instantiates it in the context of our course. Both figures show that the correction and feedback are performed based on a misconception library containing typical mistakes students tend to make. It is worth noticing that the misconception concept is broadly used in the STEM literature[26, 27]. In this paper, we similarly use the terms "misconception", "error", and "mistake" to point out "something that is done wrong".
#### III-C1 The misconception library
From a general point of view, any supervisor wanting to use Cafe 2.0 as an automatic assessment and feedback system should define a rubric checklist [28] beforehand, forming the _misconception library_. That rubric should be organised according to the productions a student's submission is made up of. For each production, typical mistakes should be identified (based on previous experiences, as presented in other studies [29]). Then, each mistake should be characterized by a unique error code, a nature (syntactic/semantic), a gravity factor (quantifying how serious the mistake is), a feedback message (explaining the error in detail), and, optionally, a corresponding reference to the course (i.e., feedforward). Once the misconception library has been fed, corresponding rule-based checks must be implemented and configured in order to catch each mistake in a given submission.
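As a rough illustration of what one library entry carries, the C sketch below lays out those fields in a struct; the field names, the error code, and the messages are invented for the example and do not reflect Cafe 2.0's actual internal schema (C is used here simply because it is the language taught in the course).

```c
/* Hypothetical shape of one misconception-library entry (illustration only). */
#include <stdio.h>

typedef enum { SYNTACTIC, SEMANTIC } mistake_nature_t;

typedef struct {
    const char       *error_code;   /* unique identifier of the mistake        */
    mistake_nature_t  nature;       /* syntactic or semantic                   */
    double            gravity;      /* how strongly the grade is impacted      */
    const char       *feedback;     /* message explaining the error            */
    const char       *feedforward;  /* optional pointer to the course material */
} misconception_t;

int main(void) {
    misconception_t m = {
        .error_code  = "GLI-03",
        .nature      = SEMANTIC,
        .gravity     = 0.25,
        .feedback    = "The processed and unprocessed areas of the drawing overlap.",
        .feedforward = "See the chapter on the Graphical Loop Invariant.",
    };
    printf("[%s] gravity %.2f: %s\n", m.error_code, m.gravity, m.feedback);
    return 0;
}
```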
#### III-C2 The Correction and Feedback Construction
Once those last set-ups are ready, the system can process a given submission. For a given student's submission, each production is digested by a dedicated checker module that detects any potential mistake defined in the misconception library. If a mistake is captured, the student's final grade gets impacted with respect to the gravity factor characterising the mistake. In addition to this, the corresponding feedback message and reference to the course are added to the list of comments that is eventually provided to the student. That list of comments is split on a per-production basis.
Fig. 6b shows an example of misconception library from which mistakes were detected in a given submission. Once the feedback has been received, a student may improve their solution and submit it again.
### _Production Modeling_
For a given production (a production being represented as a rounded rectangle in Fig. 4), in order to be able to capture errors, a tradeoff must be found between constraining the solution and leaving enough freedom to students. The more bounded the solution, the more predictable the students' answers with respect to the typical mistakes, which can then be caught through rule-based checks. Conversely, the more freedom students get, the easier the transposition of their own reasoning to the provided canvas. In practice, one way is to shape each production and model the expected solution using blank solution components whose semantics and relations with each other must be specified beforehand. Typically, in our course, solution components stand as fields to fill, instructions in the code, or components of the Blank Graphical Loop Invariant (movable bars and boxes). In particular, the Blank Graphical Loop Invariant is a blank drawing depicting only the general shape that a correct and rigorous Graphical Loop Invariant should follow [10, 24]. Students must then properly annotate the figure so that the drawing becomes their Graphical Loop Invariant modeling their own solution. An example of Blank Graphical Loop Invariant is provided in Fig. 7. Any Blank Graphical Loop Invariant always comes with two types of boxes: (\(i\)) red boxes, which host expressions (i.e., constants, variables, operations, or left blank) and are to be completed by students without support; (\(ii\)) green boxes, which host labels that students must drag and drop from a pre-defined list (see the list on the left of the Blank Graphical Loop Invariant in Fig. 5).
That list contains multiple choices, some of them being the expected answers, others being purely random. Doing so, we pave the way for an automatic correction of the Graphical Loop Invariant (with strong feedback and feedforward). This can be achieved thanks to the fact each box is numbered. In this way, when a student's solution gets corrected, each piece of the solution is easily pointed out, allowing to bring a rich feedback while still keeping it clear and smooth to digest for the student. To define a Blank Graphical Loop Invariant, a supervisor can use the drawing editor Glide.
### _Drawing Editor Module_
The Graphical Loop Invariant Drawing Editor (Glide) proposes to supervisors all the components a Graphical Loop Invariant can be composed of so that they can build up some Blank Graphical Loop Invariant, responding to a given problem.
Besides this, Glide also helps students in drawing their own Graphical Loop Invariant by proposing pre-defined graphical components [10] students must arrange and fill in. Fig. 8 shows a final representation of a Graphical Loop Invariant, illustrating how to compute the product of all integers belonging to a range specified as input. Furthermore, students can be guided across their composition by activating some step-by-step tutorials. Once a student considers their Graphical Loop Invariant is completed, they can submit it and some basic checks are performed. In particular, syntactic mistakes are detected (such as the lowerbound being further than the
upperbound or some description of what has been achieved so far that is missing). However, the Graphical Loop Invariant semantic is not verified, which means that the solution can be positively assessed by the Glide although the Graphical Loop Invariant does not make sense.
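To relate the drawing to the code students eventually submit: for the example of Fig. 8 (the product of all integers in an input range), the loop a student would typically derive from such a Graphical Loop Invariant resembles the following C sketch, where the invariant and the Loop Variant Function appear as comments and the variable names are ours.

```c
#include <stdio.h>

/* Product of all integers in [lower, upper]; assumes lower <= upper.
 * Loop Invariant (the drawing, in words): prod is the product of the
 * integers in [lower, i - 1]; the integers in [i, upper] remain to process.
 * Loop Variant Function: upper - i + 1 (number of integers left to process). */
long product_range(int lower, int upper) {
    long prod = 1;          /* initial representation: nothing processed yet */
    int  i = lower;
    while (i <= upper) {    /* final representation: stop once i > upper     */
        prod *= i;
        i++;
    }
    return prod;
}

int main(void) {
    printf("%ld\n", product_range(3, 6));   /* 3*4*5*6 = 360 */
    return 0;
}
```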
Similarly to that particular instance, a Drawing Editor could be implemented in other fields by equipping it with the adequate graphical components, modeling them in order to define rules on and drawing up a step-by-step tutorial guiding students in handling those components.
## III Designing an Activity
Automatic assessment and feedback can be fully beneficial only if it gets encapsulated in some activities with a sound pedagogical reflection behind [30]. In this section, two activities are introduced : the Programming Challenge Activity (Pca) [20] and the GameCodes[21]. The Pca is already available in Caffe 2.0 while the GameCodes are currently under construction.
### _The Pca (relying on the Blank Graphical Loop Invariant)_
The Pca is made up of six Challenges. A Challenge is a statement aligned to one or several theoretical chapters taught the week(s) before. In the fashion of the chapters in our CS1 course, the Challenges are cumulative, requiring a good level of understanding about the previous topics to be properly handled. Each Challenge consists in producing some pieces of code. For Challenges 2, 3 and 4, students must also graphically model their solution by filling some Blank Graphical Loop Invariant [10]. Regarding the modalities, any Challenge starts
Figure 8: Screenshot of Glide.
Figure 6: Correction and Feedback module of Caffe 2.0.
Figure 7: Blank Graphical Loop Invariant as solution ground.
Figure 9: Caffe 2.0 activity timeline over the semester.
on Wednesday, 06:00PM and finishes on Friday, 08:00PM. During this 2-day timeframe, a student can submit their solution up to three times, each submission receiving automatic feedback and feedforward. The latest submission determines the final mark and each Challenge accounts for 2% of the final mark for the course. After that certificative period, students are free to keep training, but it will not affect their final grades. It is worth noting that the first Challenge (called Challenge 0) does not count towards the final grade. Its only purpose is to make sure students grasped how to use Cafe 2.0. Finally, each student has the opportunity to play a _trump card_ allowing them to skip one of the Challenges (which will then not count towards the student's mark) [20]. Fig. 9 illustrates the timeline of the Pca for Academic Year 2022-2023. The six Challenges were spread between September 13th and December 16th. The blocus (i.e., a period during which no classes are organized and students are supposed to prepare themselves for the upcoming exams) was organized between December 16th and January 3rd for Academic Year 2022-2023. Fig. 9 also shows when the midterm and the final exam were organized.
### _The GameCodes_
Contrary to the Pca, the purpose of the GameCodes is to give students the opportunity to get a personalised learning experience across exercise resolution [21]. A GameCode corresponds to a large statement to solve through predefined resolution steps. In our case, one GameCode is defined per chapter of the course. Students are fully free to take them or not, as they are not certificative. Furthermore, when they solve a GameCode, students can choose how to handle it by jumping to the resolution step they want at any time, asking for tips or theoretical reminders, and submitting answers to get feedback.
## IV Preliminary Evaluation
This section discusses some observations from the data collected by Caffe 2.0. In particular, we focus on how students grasp the tool. Sec. IV-A explains the data we collected, while Sec. IV-B focuses on metrics of interest.
### _Dataset_
During Academic Year 2022-2023, 97 students (\(N_{r}\)=97) registered for our CS1 class. Among them, 76% were newcomers (first year at University), while 23% were either repeaters or had changed their programs.
We collected data throughout the semester thanks to Cafe 2.0 Learning Analytics module. Among the registered students, a maximum of 80 students (\(N_{c}\)=80 - 82.5% of \(N_{r}\)) connected at least once on Cafe 2.0.
A survey was conducted at the end of the final exam. 74 students (\(N_{s}\)=74 - 76.3% of \(N_{r}\)) shared their opinion. The survey included Likert scale and open questions, all related to different aspects of Cafe 2.0.
### _Results_
Results include students' view regarding Cafe 2.0 (Sec. IV-B1) as well as how they were active on Cafe 2.0 (Sec. IV-B2).
#### IV-B1 Perception
This paper addresses a particular focus on three questions students were asked in the survey. The first question compares the impact of Caffe 2.0's experience on students' motivation in learning, with respect to other (more classic) activities. Fig. 10 shows that the Pca (supported by Cafe 2.0 and described in Sec. III-A) comes as the second most stimulating activity, after the theoretical lessons. In particular, 60% of the respondents (strongly) agreed that the Pca was motivating, 25% had no opinion, and 15% did not embrace that online experience. More precisely, both the certificative and the formative periods seem relevant to practice the course. Taking a closer look, the opinions are slightly stronger regarding the formative period. Likely, on the one hand, some students were more autonomous, allowing them to take benefit from the Pca independently from any grading pressure while others did not take the formative periods as an opportunity to train. This is also underpinned by Fig. 12(b) (detailed below). One reason explaining some lower activity outside the certificate period could be that students got stuck despite the feedback, preventing them from progressing and achieving the Challenge.
Besides this, special interest was dedicated to the way students perceive the automated feedback. From a general point of view, Fig. 11 reflects that the feedback is well received by the students. More precisely, half of the respondents found it clear and understandable, 30% felt mitigated about it, and 20% could not understand it well. Despite some misunderstanding of the feedback, a majority of respondents (74%) felt boosted in improving their solution after receiving the feedback. In the same way, more than 60% of respondents could identify their gaps, focus on the corresponding theoretical supports to fill them, and better understand the topic. Lastly, it is also interesting to note that, although some students struggled to digest the feedback, few of them really felt discouraged.
Finally, students were asked about how relevant it would be to integrate some new functionalities into Cafe 2.0. From Fig. 12, it can be noticed that a large proportion of respondents (78%) would like to get more transparency regarding the mistakes they make. It suggests that students may lack self-assessment skills [31]. In the same vein, many students would appreciate visualizing their actual progress with respect to what has been taught in classroom activities so far. It may help them in self-regulating their time by being aware of where they actually stand with respect to the course expectations. Next, 45% of respondents expressed that they strongly need the GameCodes to be integrated into Cafe 2.0, which is fully consistent with the proportion of students whose learning got boosted by the GameCodes (see Fig. 10). That need for digitalising the GameCodes makes sense as, currently, the GameCodes are presented as interactive PDF files where the different pages represent the resolution steps, which results in heavy documents and a low quality of experience. Due to their dynamic nature, the GameCodes should definitely be transposed to an online platform where the content can be organised so that students easily access the information they need, the rest being fully hidden.
#### IV-B2 Participation
Fig. 13a depicts an upset plot [32] illustrating how students participated in the Pca. The figure is made up of three parts: the matrix (bottom right) shows the participation. A dot in the matrix means that at least one student has participated in that Challenge. If there are multiple black dots in a column, it corresponds to students having participated in multiple Challenges (e.g., the first column refers to students having participated in the six Challenges). The histogram on the bottom left gives the size of each matrix row, while the histogram on the top right gives the number of students in the corresponding column of the matrix. We see that Challenge 1 was the most carried out (82% of registered students) but the number of students dropped over time. This is normal in our country, as we face a large attrition rate during the first semester for newcomers. Only 23 students took the six Challenges (23.8% of registered students).
Then, Fig. 13b distills students' participation over time by highlighting, for each day of the semester, the number of students who started a session on Cafe 2.0. More specifically, the two red squares mark the two main evaluations of the course: the midterm and the final exam. Next, the black rectangles refer to the certificative periods of the Pca. Knowing that, we can note that the number of sessions consistently concentrates during those periods and just before the exam. We can also notice a higher number of sessions a few days before the midterm. The days in between were dedicated to midterms related to other courses, which is likely why few or no sessions got launched on those days. More generally, it appears that students get the most active only when they are facing some evaluation rather than regularly practicing to master what they are being taught. That statement corroborates many research results [33, 34, 23] stating that students have difficulty self-regulating. However, we can still notice a few connections spread along the semester, reflecting students who autonomously took benefit from Cafe 2.0.
## V Extending Cafe 2.0
In Sec. II, Cafe 2.0 was introduced as an interdisciplinary learning tool, aiming to train abstraction and problem-solving skills in general. This section consolidates that ambition by detailing the preparation to perform in order to fit Cafe 2.0 with a new problem profile, related to a STEM field.
### _General Consideration_
To integrate a new problem profile in Cafe 2.0, the following requirements must be met :
Requirement 1: The resolution should be paved by a sequence of production steps.
Requirement 2: The resolution should run through two phases: the abstraction one and the concrete one.
Requirement 3: The abstraction phase should rely on a graphical reasoning.
Requirement 4: The graphical representation should be dynamic, in such a way that it can be manipulated to illustrate different solution states (general ones and specific ones).
Requirement 5: The graphical representation should be made up of predefined graphical components. They can stand as placeholders or movable elements students must handle when they are designing a solution.
Besides this, independently of the discipline, an activity (which can be similar to the Pca, described in Sec. III-A) needs to be set up to make Cafe 2.0 stand as an integral part of the whole course activities. Finally, it is worth noticing that having first-year students as the target public is the most appropriate in order to avoid too complex solution modeling. Moreover, it is likely that first-year students are those who need this kind of support the most.
### _Application to Physics_
To emphasise the interdisciplinary potential of Cafe 2.0, we match it to a specific problem profile picked from a field other than Computer Science. In particular, we consider the following Kinematics problem in Physics:
A car of mass \(m=1200kg\) is parked on a slope of \(\alpha=30^{\circ}\). We would like to compute the magnitude of the friction forces so that the car is at rest.
Figure 11: Perception of the feedback.
Figure 12: Need for new functionalities.
Figure 10: Perception of the Pca with respect to other activities.
#### V-A1 Activity Resolution Setting
Considering that specific problem type, five productions could be defined: (\(i\)) representing the situation and identifying the forces applied to the object of interest (the car here); (\(ii\)) choosing a system and decomposing the different forces so that they follow the directions imposed by the system; (\(iii\)) deriving the mathematical expression(s) allowing one to formulate the friction forces; (\(iv\)) transposing the general representation into a particular case (in which the problem might be solvable intuitively); (\(v\)) using numerical data to compute the solution. Fig. 14a illustrates those productions and maps them to the two phases Cafe 2.0 supports (Requirement 2). Also, Fig. 14a shows that the resolution is done in sequence (Requirement 1). Further, the Force Diagram corresponds to Requirement 3. Finally, that drawing is manipulated in step 4, corresponding to Requirement 4.
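For illustration, productions (\(iii\)) and (\(v\)) could lead to the following derivation; this is only a sketch under the usual assumptions (the car is at rest, one axis is taken along the slope, and \(g\approx 9.81\,m/s^{2}\)):

\[\text{along the slope: }F_{f}-mg\sin\alpha=0,\qquad\text{perpendicular to it: }N-mg\cos\alpha=0\]
\[\Rightarrow\ F_{f}=mg\sin\alpha=1200\times 9.81\times\sin 30^{\circ}\approx 5886\ \text{N}.\]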
#### V-A2 Correction and Feedback Setting
To enable automatic Correction and Feedback, a misconception library needs to be defined and fed. That library should cover most of the mistakes students may fall into across their resolution, as explained in Sec. II-C and illustrated in Fig. 6. For instance, considering our Kinematics problem, for the second production where forces should be decomposed according to the system, one typical mistake might be that the students did not direct all the arrows according to the orthonormal system that is set. Another example of mistake could be that the second equation does not reflect the drawing above.
#### V-A3 Production Modeling Setting
Similarly to the Blank Graphical Loop Invariant, it is relevant to hide the key components from the expected pictorial representation of the situation. In this way, students must identify and link them on their own while still having benchmarks thanks to the
Figure 14: How Cafe 2.0 can fit to a Physics Introduction course.
Figure 13: Results on participation.
provided canvas. Furthermore, on the supervisor's side, those components need to be modeled (by having a specific semantic and relations with each other) in order to enable automatic personalized feedback. For the Kinematics problem of interest, a relevant blank diagram could be the one exposed in Fig. 14b. That figure relies on different kinds of graphical components, each of them being mapped to a specific color. Like in the Blank Graphical Loop Invariant, red boxes should host variables while green ones expect some description picked from a dropdown list. In addition to them, purple boxes stand for forces and movable arrows are illustrated in yellow. All the rest is fixed.
#### V-A4 Drawing Editor Setting
As suggested above, the graphical components that would be useful to formally represent a situation in Kinematics (Requirement 5) are variables, forces, predefined descriptions, and movable arrows. In addition to them, some relevant patterns should be identified and designed. Then, they should be integrated in the Drawing Editor so that, once a teacher or a student wants to depict a scenario in Kinematics, they can simply select a suitable pattern and drag and drop the graphical components to build up their pictorial representation.
## VI Related Work
Many automated systems providing programming exercises have already been proposed (e.g., [11, 12, 13, 14, 15, 16, 17]). Most of them apply test-based correction, i.e., the student's code is corrected through unit testing (except _UNLOCK_ [16], which tackles problem-solving skills in general, not just coding skills). _WebCAT_ [11] even makes students write their own tests too. _Kumar's Problets_ [12] enable step-by-step code execution as part of the feedback. Closer to Cafe 2.0, _Dodona_ [18] proposes programming assignments with automated feedback and harnesses the data to regulate the teaching materials. However, Dodona is specialized in practicing coding (considering different programming languages) in a collaborative environment (by allowing students to ask questions on a forum) while Cafe 2.0 focuses on students' abstract thinking, upstream of their code. Other tools also offer features depicting an abstract representation of pieces of code. Among them, _Virtual-C IDE_ [19] typically allows visualizing a C program's behavior, _Online Python Tutor_ [35] allows students to execute their code step by step and visualize the runtime state of each structure, and _JASM.IN_ [36] illustrates dynamic program state and provides state transition rules for the execution of Java language constructs. Cafe 2.0 differentiates itself from those three tools by guiding students in modeling their solution by themselves, so that they can rely on it to build their code. On the contrary, Virtual-C IDE and JASM.IN automatically post-represent the solution based on pieces of code students have submitted. Considering now the tools where students design their solution on their own, one can notably find the _Python turtle graphics library_, which was already demonstrated to improve abstraction skills [37]. However, Cafe 2.0 goes further than that as it bridges students' graphical representation and resulting code by checking the consistency between those two versions of the solution and provides automatic feedback with respect to that.
Automatic feedback has been extensively motivated, described, and discussed in previous studies [38, 39]. In particular, Keuning et al. [39] have shown that it is not that easy to tune feedback with respect to some specific needs (proper to the topic and the statement). If we position Cafe 2.0 with respect to feedback theoretical aspects, Cafe 2.0 implements "Answer-until-correct" (AUC) [40] feedback, as students can refresh their solution as much as they need to. It is similar to Singh et al. [41] where students get a numerical value (the number of required changes) and the suggestion(s) on how to correct the mistake(s). In addition, Cafe 2.0 meets the principles Nicol introduced around student's engagement, self-regulation, and academic experience (the only missing dimension being the social one) [42].
Finally, regarding the methodology Cafe 2.0 currently supports, other studies have promoted problem-solving through predefined steps [43]. Furthermore, the relevance of transitioning through an abstract representation of a solution has been corroborated as a way to prevent students from getting overwhelmed by specific problem instances [44].
## VII Conclusion
This paper describes Cafe 2.0, a tool whose purpose is to make students work regularly and actively in order to keep them on the right track by providing instantaneous personalized feedback. In practice, Cafe 2.0 supports online activities made up of problems to solve. For each new statement, besides its definition, the corresponding solution needs to be outlined and configured by a supervisor. In particular, the solution should be articulated through different types of productions, each framed by a canvas with specific placeholders whose semantics and typical resulting mistakes should be parametrized beforehand. This way, on the one hand, students get guided through their resolution and, on the other hand, automatic and personalized review becomes possible since students' solutions can be anticipated.
Cafe 2.0 is relevant to any course where abstract representation through drawings is the basis for constructing a solution. More specifically, in our CS1 course, abstraction is introduced in the context of building a loop piece of code through the Graphical Loop Invariant Based Programming methodology. Therefore, the central kind of canvas in a resolution flow is the Blank Graphical Loop Invariant, representing the shape of a Graphical Loop Invariant with different kinds of boxes to fill and movable bars, which allows visualizing the solution under construction at a specific iteration.
As further work, Cafe 2.0 could be transposed to other disciplines taught to first-year students. The first steps towards such an extension have already been taken in the context of a Physics course.
Besides this, Cafe 2.0's set of functionalities keeps being expanded in terms of activities and data processing. First, regarding the activities, in addition to the Pca, the GameCodes (Sec. III) are under implementation. Next, in the long run, we aim to implement a third kind of activity: the Cdb activity [45]. The purpose of this activity drastically differs from the Pca and the GameCode, as it aims to motivate our programming methodology by applying it in a real-life scenario rather than training it academically. Despite that difference, it remains very relevant to integrate such an activity in Cafe 2.0, as it also relies on a sequence of productions to submit in response to a given problem. The only difference is that, for each production, students get more freedom since the resulting "instantaneous" feedback is built by student reviewers rather than by a machine [46]. Therefore, the difficulty in putting such an activity in place is shifted from the automatic feedback configuration to the peer-review process, where teams should be defined, time should be punctuated, feedback should be supervised, and productions should move between students. Besides the activities, it is also planned to focus on the Progress Tracker and the Learning Analytics Dashboard in order to gain more transparency about students' learning activity and wisely adapt the content students should practice.
|
2309.06645 | Bregman Graph Neural Network | Numerous recent research on graph neural networks (GNNs) has focused on
formulating GNN architectures as an optimization problem with the smoothness
assumption. However, in node classification tasks, the smoothing effect induced
by GNNs tends to assimilate representations and over-homogenize labels of
connected nodes, leading to adverse effects such as over-smoothing and
misclassification. In this paper, we propose a novel bilevel optimization
framework for GNNs inspired by the notion of Bregman distance. We demonstrate
that the GNN layer proposed accordingly can effectively mitigate the
over-smoothing issue by introducing a mechanism reminiscent of the "skip
connection". We validate our theoretical results through comprehensive
empirical studies in which Bregman-enhanced GNNs outperform their original
counterparts in both homophilic and heterophilic graphs. Furthermore, our
experiments also show that Bregman GNNs can produce more robust learning
accuracy even when the number of layers is high, suggesting the effectiveness
of the proposed method in alleviating the over-smoothing issue. | Jiayu Zhai, Lequan Lin, Dai Shi, Junbin Gao | 2023-09-12T23:54:24Z | http://arxiv.org/abs/2309.06645v1 | # Bregman Graph Neural Network
###### Abstract
Numerous recent research on graph neural networks (GNNs) has focused on formulating GNN architectures as an optimization problem with the smoothness assumption. However, in node classification tasks, the smoothing effect induced by GNNs tends to assimilate representations and over-homogenize labels of connected nodes, leading to adverse effects such as over-smoothing and misclassification. In this paper, we propose a novel bilevel optimization framework for GNNs inspired by the notion of Bregman distance. We demonstrate that the GNN layer proposed accordingly can effectively mitigate the over-smoothing issue by introducing a mechanism reminiscent of the "skip connection". We validate our theoretical results through comprehensive empirical studies in which Bregman-enhanced GNNs outperform their original counterparts in both homophilic and heterophilic graphs. Furthermore, our experiments also show that Bregman GNNs can produce more robust learning accuracy even when the number of layers is high, suggesting the effectiveness of the proposed method in alleviating the over-smoothing issue.
Jiayu Zhai, Lequan Lin, Dai Shi, and Junbin Gao+Discipline of Business Analytics, The University of Sydney Business School
The University of Sydney, Camperdown, NSW 2006, Australia
[email protected], {lequan.lin, dai.shi, junbin.gao}@sydney.edu.au

**Keywords:** Graph Neural Networks, Over-smoothing, Heterophilic Graphs, Bregman Neural Networks
Footnote †: 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
## 1 Introduction
With the extraordinary ability to encode complex relationships among entities in a system, graph data are widely observed in many application domains, such as social networks [1, 2], biological networks [3], recommender systems [4, 5], transportation networks [6, 7], etc. Graphs normally model entities as nodes and then construct edges between node pairs to represent underlying relationships. In addition, node attributes are represented as graph signals. Traditional deep feed-forward neural networks (NNs) only consider the propagation of features (i.e., columns of the graph signal matrix), which leaves the connectivity among nodes unexploited. To overcome this limitation, graph neural networks (GNNs) are designed to additionally aggregate neighbouring node features in the direction of rows, contributing to better graph representation learning (GRL) and eventually outstanding predictive performance in various tasks [8].
Framing NNs as an optimization problem is a well-established research topic in the machine learning community [9, 10, 11]. Likewise, numerous recent research on GNNs focuses on the optimization formulation of GNN layers or the end-to-end GNN training. Some works have shown that GRL can be approximated by the solution of some optimization problem with the smoothness assumption of neighbouring node representations [12, 13, 14]. It has also been proven that the end-to-end training for GNNs can be formulated as a bilevel optimization problem, or alternatively, a faster multi-view single-level optimization framework [15]. In this work, we will consider the bilevel optimization formulation of GNNs, in which the upper-level problem shares the same purpose as optimizing the objective function, and the lower-level problem conducts GRL.
Unifying GNNs as optimization problems provides a new perspective for understanding and analyzing existing methods. For example, considering GNNs in node classification tasks, the smoothness assumption, which tends to homogenize the labels of connected nodes, can lead to several adverse effects such as over-smoothing and inappropriate message-passing for heterophilic graphs [14, 16]. Specifically, the so-called over-smoothing issue appears when node features become indistinguishable after several propagations through GNN layers. This phenomenon is more evident in graphs where connected nodes often share the same label, known as homophily. On the other hand, with heterophilic graphs, where connected nodes have different labels, the smoothing effect induced by GNNs can even lead to worse classification outcomes, because the model is prone to assign similar labels to connected nodes whose features become similar after smoothing.
The above-mentioned issues can be mitigated with the concept of "skip connection" [17]. For example, APPNP [18] combines the original node feature with the representation learned by each layer, which effectively preserves local information and helps mitigate over-smoothing issues. Such methods are also helpful with heterophilic graphs because they
mitigate the effect of smoothing in representation learning. It has been shown that designing NNs as a bilevel optimization problem that penalizes the Bregman distance between the representations of every two consecutive layers is reminiscent of, and even better than, applying skip connections [9, 19]. This method simplifies the network architecture by employing a set of invertible activation functions. However, it has no direct extension to GNNs, as the problem design is limited by the feature propagation of traditional NNs.
In this paper, we propose a novel bilevel optimization framework for GNNs, inspired by the notion of Bregman distance, that can effectively alleviate the adverse effects of smoothing. Similar to other bilevel designs, we develop the upper-level problem to optimize the overall objective function and the lower-level problem to conduct GRL. We show that the optimization framework can be easily applied to the computational format of GNNs by introducing the same set of activation functions used for Bregman NNs [9], and we name such architectures Bregman GNNs.
The contributions of this work include (1) a novel bilevel optimization framework for designing GNNs with Bregman distance; (2) an alternative solution to the adverse effects of smoothing with a set of specially-designed activation functions sharing a similar purpose with skip connection; (3) solid numerical experiment results to validate the effectiveness of the new framework.
## 2 The Proposed Framework
### Preliminaries
We denote \(\mathcal{G}=(\mathcal{V},\mathcal{E})\) as an undirected graph with a set of nodes \(\mathcal{V}\) and a set of edges \(\mathcal{E}\). \(\mathbf{Z}^{l}\in\mathbb{R}^{n\times d_{l}}\) denotes the node feature matrix at layer \(l\), where \(n\) is the number of nodes, and \(d_{l}\) is the embedding size. The graph adjacency matrix is denoted as \(\mathbf{A}\in\mathbb{R}^{n\times n}\) with \(\mathbf{A}_{ij}=1\) if node \(i\) is connected with \(j\), and \(\mathbf{A}_{ij}=0\) otherwise. We further let \(\mathbf{D}\in\mathbb{R}^{n\times n}\) be the degree matrix, where \(d_{i}=\sum_{j}\mathbf{A}_{ij}\). We now provide some necessary notations and definitions for the formulation of Bregman GNN layers.
**Definition 1** (Class of layer-wise functions \(\mathcal{F}\)[9]).: Define \(\{f_{l}\}_{l=0}^{L-1}:\mathbb{R}^{d_{l+1}}\times\mathbb{R}^{d_{l+1}}\to \mathbb{R}\) to be a specific set of bi-linear functions such that
\[f_{l}(\mathbf{z},\mathbf{z}_{i}^{l}\mathbf{M}_{l})=(\mathbf{z}_{i}^{l}\mathbf{ M}_{l})^{\top}\mathbf{E}_{l}^{\top}\mathbf{z}-\mathbf{b}_{l}^{\top}\mathbf{z}- \mathbf{c}_{l}^{\top}(\mathbf{z}_{i}^{l}\mathbf{M}_{l})+\delta_{l}, \tag{1}\]
where \(\mathbf{b}_{l},\mathbf{c}_{l}\in\mathbb{R}^{d_{l+1}}\) and \(\delta_{l}\in\mathbb{R}\). \(\mathbf{z}_{i}\in\mathbb{R}^{d_{l}}\) and \(\mathbf{z}\in\mathbb{R}^{d_{l+1}}\) are the feature vectors of sample \(i\) at layers \(l\) and \(l+1\), respectively. Finally, the matrix \(\mathbf{M}_{l}\in\mathbb{R}^{d_{l}\times d_{l+1}}\) is the weight matrix, and \(\mathbf{E}_{l}\in\mathbb{R}^{d_{l+1}\times d_{l+1}}\) is the parameter matrix representing the feature correlation. We note that such a design of \(\mathcal{F}\) guarantees a closed-form solution of the lower-level optimization of the problem defined in Eq. (4), and this form of \(\mathcal{F}\) has further been shown to enhance the performance of NNs in the work of [9]. We now show how to extend the notion of \(\mathcal{F}\) to **graph data**. Unlike feature propagation in NNs, where features are considered individually as single vectors, a GNN must propagate the feature matrix as a whole due to the connectivity of the nodes. Accordingly, one applies the matrix trace to each term in the definition of \(\mathcal{F}\), resulting in the following form:
\[f_{l}(\mathbf{Z},\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l}) =\text{tr}((\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})\mathbf{E}_{l} \mathbf{Z}^{\top})-\langle\mathbf{1}\times\mathbf{b}_{l}^{\top},\mathbf{Z}\rangle\] \[-\langle\mathbf{1}\times\mathbf{c}_{l}^{\top},\mathbf{A}\mathbf{ Z}^{l}\mathbf{M}_{l}\rangle+\delta_{l}, \tag{2}\]
where \(\mathbf{1}\) is the \(n\)-dimensional vector with all ones, and \(\langle\cdot,\cdot\rangle\) is the inner product between two matrices. We note that the inclusion of the inner product is due to the fact \(\langle\mathbf{A},\mathbf{B}\rangle=\mathrm{tr}(\mathbf{A}^{\top}\mathbf{B})\). Similarly, as we will show in Section 2.2, the form of \(\mathcal{F}\) under Eq. (2) also guarantees closed form solution of the low-level optimization problem defined in Eq. (5) for GNN. Additionally, to properly define the Bregman GNN layer, we further provide the notion of Bregman distance and proximity operator as follows.
**Definition 2** (Bregman distance[19]).: Bregman distance of the matrix \(\mathbf{P}\) from the matrix \(\mathbf{Q}\) is
\[D_{\phi}(\mathbf{P},\mathbf{Q})=\phi(\mathbf{P})-\phi(\mathbf{Q})-\langle \nabla\phi(\mathbf{Q}),\mathbf{P}-\mathbf{Q}\rangle,\]
where \(\phi\) is a Legendre function [20]. The Bregman distance is actually a general case of many distance measurements. For example, if \(\phi(\mathbf{P})=\frac{1}{2}\|\mathbf{P}\|^{2}\), then \(D_{\phi}(\mathbf{P},\mathbf{Q})=\frac{1}{2}\|\mathbf{P}-\mathbf{Q}\|^{2}\) is the square Euclidean distance.
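To make Definition 2 concrete, the following minimal PyTorch sketch (our own illustration, not code from the paper) evaluates \(D_{\phi}(\mathbf{P},\mathbf{Q})\) via automatic differentiation and checks the squared-Euclidean special case mentioned above.

```python
import torch

def bregman_distance(phi, P, Q):
    # D_phi(P, Q) = phi(P) - phi(Q) - <grad phi(Q), P - Q>
    Q = Q.detach().requires_grad_(True)
    grad_phi_Q = torch.autograd.grad(phi(Q), Q)[0]
    return phi(P) - phi(Q) - torch.sum(grad_phi_Q * (P - Q))

# Sanity check: phi(P) = 0.5 * ||P||^2 gives D_phi(P, Q) = 0.5 * ||P - Q||^2.
phi = lambda X: 0.5 * (X ** 2).sum()
P, Q = torch.randn(4, 3), torch.randn(4, 3)
assert torch.allclose(bregman_distance(phi, P, Q), 0.5 * ((P - Q) ** 2).sum())
```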
**Definition 3** (Bregman proximity operator [21]).: The Bregman proximity operator of \(g\) with respect to \(\phi\) is denoted by
\[\mathrm{prox}_{g}^{\phi}(\mathbf{P})=\operatorname*{argmin}_{\mathbf{Q}}\{g( \mathbf{Q})+\phi(\mathbf{Q})-\langle\mathbf{Q},\mathbf{P}\rangle\}. \tag{3}\]
In the next section, we show how bilevel optimization can be constructed for graph data based on these definitions.
### Bilevel optimization for graph data
We start by recalling bilevel optimization on the data input (i.e., images) in general NN. Given a standard training data set \(\{\mathbf{x}_{i},\mathbf{y}_{i}\}_{i=1}^{n}\) where \(\{\mathbf{x}_{i},\mathbf{y}_{i}\}\in\mathbb{R}^{d_{0}}\times\mathbb{R}^{c}\), one can denote the feature propagation of NN as the bilevel optimization problem as follows [9].
\[\operatorname*{minimize}_{\psi,\{f_{l}\}_{l=0}^{L-1}}\sum_{i=1}^{n}\ell\left(\psi\left(\mathbf{z}_{i}^{L}\right),\mathbf{y}_{i}\right)\quad\text{ where }\forall i\in[n], \tag{4}\] \[\left\{\begin{array}{l}\mathbf{z}_{i}^{0}=\mathbf{x}_{i}\\ \text{ for }l=0,1,\ldots,L-1,\\ \mathbf{z}_{i}^{l+1}=\operatorname*{argmin}_{\mathbf{z}\in\mathbb{R}^{d_{l+1}}}\{f_{l}\left(\mathbf{z},\mathbf{z}_{i}^{l}\mathbf{M}_{l}\right)+D\left(\mathbf{z},\mathbf{z}_{i}^{l}\mathbf{M}_{l}\right)+g(\mathbf{z})\},\end{array}\right.\]
where \(\psi\in\mathcal{B}\left(\mathbb{R}^{d_{L}},\mathbb{R}^{c}\right)\) is a Borel measurable function, \(\{f_{l}\}_{l=0}^{L-1}\in\mathcal{F}\left(\mathbb{R}^{d_{l+1}}\times\mathbb{R}^{d_{l+1}}\right)^{L}\), and \(g\in\Gamma_{0}(\mathbb{R}^{d})\) can
be treated as a simple convex function for regularization. The upper-level objective is the loss between the prediction \(\widehat{\mathbf{y}}_{i}=\psi(\mathbf{z}_{i}^{L})\) and the ground truth \(\mathbf{y}_{i}\), where \(\psi\) is a simple transformation such as a linear layer or a linear layer followed by a softmax operator. \(\ell\) is the loss function, such as cross-entropy for classification tasks and quadratic loss for regression. The lower-level optimization problem produces the layer-wise feature representation and can be further unrolled as an NN layer [9].
Now we analogize the notion of bilevel optimization from the scope of NN to the graph structured data. It is well-known that the core difference between the propagation in NN and GNN is whether the connectivity between nodes (or samples) is considered [22]. Specifically, unlike NN in which each node feature is propagated individually, the neighbouring information is gathered for each node via GNN propagation according to graph connectivity (i.e., adjacency matrix \(\mathbf{A}\)). Therefore, it is natural for one to generalize the bilevel optimization process defined in Eq. (4) by including the graph adjacency information. Accordingly, the lower-level objective becomes
\[\mathbf{Z}^{l+1} =\operatorname*{argmin}_{\mathbf{Z}\in\mathbb{R}^{n\times d_{l+1} }}\{f_{l}(\mathbf{Z},\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})+D_{\phi_{l}}( \mathbf{Z},\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})+g_{l}(\mathbf{Z})\}. \tag{5}\]
It is not difficult to verify that with the form of \(f\) defined in **Definition 1**, the optimization above can still have a closed form solution. The second term measures the closeness between the feature vectors in layer \(l\) and \(l+1\). The minimization of such term restricts the changes in the feature matrix between layers, thereby diluting the smoothing effects.
**Remark 1**. The form of the \(f_{l}\) can be seen as the negative energy in Restricted Boltzmann Machine [23]. The energy between two vectors \(\mathbf{u}\) and \(\mathbf{v}\) is defined as:
\[E(\mathbf{u},\mathbf{v})=-\mathbf{u}^{\top}\mathbf{E}\mathbf{v}-\mathbf{b}^{ \top}\mathbf{u}-\mathbf{c}^{\top}\mathbf{v}.\]
Thus, the optimization problem aims to maximize this energy.
### Bregman GNN layers
In this section, we show how the Bregman GNN layer is built. A demonstration of model architecture is provided in **Fig. 1**. According to Frecon et al. [9], many widely used activation functions (e.g., Relu and Arctan) can be written as the inverse gradient of strongly convex Legendre functions \(\phi\), and for some particular choice of \(g\) and \(\phi\), the Bregman proximity operator in Eq. (3) can be written as
\[\operatorname{prox}_{g}^{\phi}(\mathbf{P})=\nabla\phi^{-1}(\mathbf{P})=\rho( \mathbf{P}).\]
Since \(\operatorname{tr}((\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})\mathbf{E}_{l} \mathbf{Z}^{\top})=\langle(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})\mathbf{E }_{l},\mathbf{Z}\rangle\), Eq. (5) becomes
\[\mathbf{Z}^{l+1} =\operatorname*{argmin}_{\mathbf{Z}\in\mathbb{R}^{n\times d_{l+1 }}}\{f_{l}(\mathbf{Z},\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})+D_{\phi_{l}}( \mathbf{Z},\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})+g_{l}(\mathbf{Z})\}\] \[=\operatorname*{argmin}_{\mathbf{Z}\in\mathbb{R}^{n\times d_{l+1 }}}\{g_{l}(\mathbf{Z})+\phi(\mathbf{Z})-\langle\nabla\phi(\mathbf{A}\mathbf{ Z}^{l}\mathbf{M}_{l})\] \[\quad-(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})\mathbf{E}_{l}+ \mathbf{1}\times\mathbf{b}_{l}^{\top},\mathbf{Z}\rangle\}\] \[=\operatorname{prox}_{g}^{\phi}(\nabla\phi(\mathbf{A}\mathbf{Z}^{l }\mathbf{M}_{l})-(\mathbf{A}\mathbf{Z}^{l}\mathbf{W}_{l})\mathbf{E}_{l}+ \mathbf{1}\times\mathbf{b}_{l}^{\top})\] \[=\rho(\rho^{-1}(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})-(\mathbf{ A}\mathbf{Z}^{l}\mathbf{W}_{l})\mathbf{E}_{l}+\mathbf{1}\times\mathbf{b}_{l}^{ \top}), \tag{6}\]
where we have \(D_{\phi_{l}}(\mathbf{Z},\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})=\phi_{l}(\mathbf{Z})-\phi_{l}(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})-\langle\nabla\phi_{l}(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l}),\mathbf{Z}-\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l}\rangle\). If we further let \(\mathbf{W}_{l}=-\mathbf{E}_{l}\in\mathbb{R}^{d_{l+1}\times d_{l+1}}\) be the weight matrix, and \(\mathbf{b}_{l}\in\mathbb{R}^{d_{l+1}}\) be the bias, then Eq. (6) can be seen as a layer of GNN:
\[\mathbf{Z}^{l+1}=\rho(\rho^{-1}(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})+ \mathbf{A}\mathbf{Z}^{l}\mathbf{W}_{l}+\mathbf{1}\times\mathbf{b}_{l}^{\top}), \tag{7}\]
where \(\rho\) is the activation function and \(\rho^{-1}\) is its inverse. If \(\mathbf{Z}^{l}\) and \(\mathbf{Z}^{l+1}\) share the same dimension, i.e., \(n\times d_{l}\), then \(\mathbf{M}_{l}\in\mathbb{R}^{d_{l}\times d_{l}}\). \(\mathbf{W}_{l}\in\mathbb{R}^{d_{l}\times d_{l+1}}\) represents the weights in layer \(l\), and \(\mathbf{b}_{l}\in\mathbb{R}^{d_{l+1}}\) represents the biases in layer \(l\). Hence, the parameters the model should learn are \(\mathbf{M}_{l}\), \(\mathbf{W}_{l}\), and \(\mathbf{b}_{l}\). Regarding the term \(\rho^{-1}(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l})\) in the derivation of Eq. (6), applying the inverse activation function to \(\mathbf{A}\mathbf{Z}^{l}\mathbf{M}_{l}\) brings the feature representation of the previous layer into the present layer. This serves a similar purpose as a skip connection. Therefore, such a design helps the model maintain the desirable variation of node features, thus reducing the adverse effect of smoothing in GNN propagation. Finally, it is worth noting that the propagation in Eq. (7) can be applied to many existing spatial message-passing GNNs
\begin{table}
\begin{tabular}{c c c c c c c} \hline
**Datasets** & Class & Feature & Node & Edge & Train/val/test & Homophily\% \\ \hline
**Cora** & 7 & 1433 & 2708 & 5278 & 140/500/1000 & 82.5\% \\
**CiteSeer** & 6 & 3703 & 3327 & 4552 & 120/500/1000 & 72.1\% \\ \hline
**Actor** & 5 & 932 & 7600 & 26659 & 60\%/20\%/20\% & 21.4\% \\
**Texas** & 5 & 1703 & 183 & 279 & 60\%/20\%/20\% & 11.0\% \\ \hline \end{tabular}
\end{table}
Table 1: Statistics of the homophilic and heterophilic datasets
Figure 1: Illustration of Bregman GNN based on Equation (7). The model is composed of one hidden layer of classic GNN propagation, one hidden layer of Bregman-modified propagation with the invertible activation functions, and finally, the output layer to make predictions. This architecture can be extended by adding more hidden layers.
such as GCN [22] and GAT [25]. In the next section, we verify such enhancement power of Eq. (7) with comprehensive empirical studies.
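For illustration, a minimal PyTorch sketch of the propagation in Eq. (7) is given below. It is our own rendering rather than the released implementation: tanh/atanh is used as an example invertible pair \(\rho/\rho^{-1}\), and the clamp is added purely as a numerical safeguard to keep atanh within its domain.

```python
import torch
import torch.nn as nn

class BregmanGNNLayer(nn.Module):
    """One layer of Eq. (7): Z' = rho(rho^{-1}(A Z M_l) + A Z W_l + 1 b_l^T)."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.M = nn.Linear(in_dim, out_dim, bias=False)   # learnable M_l
        self.W = nn.Linear(in_dim, out_dim, bias=True)    # learnable W_l and bias b_l
        self.rho, self.rho_inv = torch.tanh, torch.atanh  # example invertible activation pair

    def forward(self, A, Z):
        AZ = A @ Z                                            # neighbourhood aggregation
        skip = self.rho_inv(self.M(AZ).clamp(-0.999, 0.999))  # rho^{-1}(A Z M_l)
        return self.rho(skip + self.W(AZ))                    # Eq. (7)

# Toy usage: 5 nodes, 8-dimensional features, self-loop-only adjacency.
A, Z = torch.eye(5), torch.randn(5, 8)
print(BregmanGNNLayer(8, 8)(A, Z).shape)  # torch.Size([5, 8])
```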
## 3 Experiments
The primary objective of our experiments is to test the performance of the proposed Bregman GNNs in comparison with their standard forms, which means the experiments are conducted in an ablation fashion. We first compare the performance of Bregman-enhanced GNNs with their standard forms to show their adaptive power on both homophilic and heterophilic graphs. Then, we provide the results of an over-smoothing experiment to show the effectiveness of the proposed method in alleviating over-smoothing. Our experiment code can be found at [https://github.com/jiayuzhai1207/BregmanGNN](https://github.com/jiayuzhai1207/BregmanGNN).
### Datasets and Implementation Details
For the first experiment, we choose 4 commonly-used datasets as shown in **Table 1**, including 2 homophilic graphs **Cora**[28] and **CiteSeer**[28], and 2 heterophilic graphs **Actor**[29] and **Texas**[29]. For the over-smoothing experiment, we only use **Actor**. The train/validation/test split follows the same split in [28] and [29]. In the first experiment, we choose 6 classic GNNs as baselines, and all networks have 3 layers including the output layer. We select this architecture because Bregman GNNs require at least 3 layers: 2 hidden layers to apply the inverse activation function, and the output layer for final classification. In the over-smoothing experiment, we choose GCN and GAT as baselines and set the number of layers in \([3,5,7,9]\). The average test accuracy and its standard deviation are calculated based on the results from 10 runs. Grid search is conducted for hyperparameter tuning. For Bregman GNNs, we select from a set of invertible activation functions that have been shown as Bregman proximity operators, such as ReLU, Tanh, ArcTan, and Softplus [9].
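For instance, one invertible pair \(\rho/\rho^{-1}\) that can be plugged into Eq. (7) is softplus; the short sketch below (our own, for illustration only) verifies the inversion numerically.

```python
import torch

softplus = lambda x: torch.log1p(torch.exp(x))      # rho(x) = log(1 + e^x)
softplus_inv = lambda y: torch.log(torch.expm1(y))  # rho^{-1}(y), defined for y > 0

x = torch.randn(6)
assert torch.allclose(softplus_inv(softplus(x)), x, atol=1e-5)
```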
### Results for Homophilic and Heterophilic Graphs
The experiment results are shown in **Table 2**. Overall, Bregman GNNs perform well compared to their standard counterparts across all datasets. For homophilic graphs, the Bregman architecture achieves consistent improvement over the standard baselines. Notably, the Bregman architecture enhances the accuracy of APPNP by 1.57% for **Cora** and by 1.37% for **Citeseer**. For heterophilic graphs, the Bregman architecture successfully improves the performance of ChebNet, GCN, and GAT for both **Texas** and **Actor** by 0.19% to 1.12%. APPNP continues to show the largest improvement from the Bregman enhancement for **Actor**. One possible reason is that the Bregman architecture provides one additional path for APPNP propagation to access source terms from previous layers, which further mitigates the adverse effect of smoothing. However, no improvement is observed for the Bregman form of GraphSAGE, yet its learning accuracy remains comparable to the standard form. Finally, in most cases, Bregman GNNs show lower standard deviations, indicating higher stability in the node classification task.
### Results for Over-smoothing Experiment
The experiment results are presented in **Fig. 2**. Evidently, the classification accuracy of both Bregman GCN and Bregman GAT is consistently higher than that of their standard counterparts as the number of layers increases. Therefore, we conclude that Bregman GNNs are more robust to the over-smoothing issue. Nevertheless, the overall decreasing trend in accuracy indicates that the over-smoothing issue is only alleviated, not fully addressed.
\begin{table}
\begin{tabular}{c|c|c|c|c|c|c|c|c} \hline & \multicolumn{2}{c|}{**Cora**} & \multicolumn{2}{c|}{**CiteSeer**} & \multicolumn{2}{c|}{**Texas**} & \multicolumn{2}{c}{**Actor**} \\ \hline
**Models** & **Bregman** & **Standard** & **Bregman** & **Standard** & **Bregman** & **Standard** & **Bregman** & **Standard** \\ \hline ChebNet [24] & \(81.22\pm 0.94\) & **81.46 \(\pm\) 0.54** & **71.70 \(\pm\) 0.50** & \(71.68\pm 1.20\) & **84.05 \(\pm\) 5.47** & \(83.51\pm 3.91\) & **35.92 \(\pm\) 0.84** & \(35.81\pm 1.16\) \\ GCN [22] & **82.58 \(\pm\) 0.84** & \(82.32\pm 0.69\) & **72.35 \(\pm\) 0.85** & \(71.51\pm 0.40\) & **63.78 \(\pm\) 5.31** & 63.24 \(\pm\) 4.55 & **29.05 \(\pm\) 0.68** & \(27.93\pm 0.79\) \\ GAT [25] & **82.19 \(\pm\) 0.69** & \(81.63\pm 0.71\) & **71.52 \(\pm\) 0.72** & \(70.31\pm 0.81\) & **63.24 \(\pm\) 3.86** & 62.70 \(\pm\) 3.15 & **29.45 \(\pm\) 0.52** & \(28.48\pm 0.70\) \\ APPNP [18] & **82.27 \(\pm\) 0.63** & \(80.70\pm 0.66\) & **72.67 \(\pm\) 0.60** & \(71.30\pm 0.78\) & 61.62 \(\pm\) 4.65 & **62.70 \(\pm\) 5.10** & **27.27 \(\pm\) 0.97** & 26.19 \(\pm\) 1.16 \\ GIN [26] & **80.36 \(\pm\) 0.76** & 80.04 \(\pm\) 1.26 & **69.82 \(\pm\) 0.79** & \(69.23\pm 0.79\) & **63.51 \(\pm\) 5.02** & 63.24 \(\pm\) 5.30 & **28.47 \(\pm\) 1.04** & 27.43 \(\pm\) 1.19 \\ GraphSAGE [27] & **81.74 \(\pm\) 0.66** & \(81.63\pm 0.47\) & **70.81 \(\pm\) 0.57** & \(70.53\pm 0.85\) & 83.51 \(\pm\) 4.75 & **83.78 \(\pm\) 3.82** & 35.34 \(\pm\) 0.68 & **35.60 \(\pm\) 0.70** \\ \hline \end{tabular}
\end{table}
Table 2: Comparison Experiment Results: Node Classification Accuracy (%)
Figure 2: Results on Actor for GCN and GAT with different number of layers. Bregman-enhanced GNNs show higher accuracy when the number of layers increases.
## 4 Conclusion
In this paper, we have proposed a novel bilevel optimization framework whose closed-form solution naturally defines a set of new network architectures called Bregman GNNs. Our experiments show the proposed framework can improve the performance of classic GNNs on both homophilic and heterophilic graphs and alleviate over-smoothing. However, it is worth noting that our method only mitigates and cannot fully resolve the over-smoothing issue. Future work may consider further improvements on this limitation.
|
2309.15639 | Enhancing Sharpness-Aware Optimization Through Variance Suppression | Sharpness-aware minimization (SAM) has well documented merits in enhancing
generalization of deep neural networks, even without sizable data augmentation.
Embracing the geometry of the loss function, where neighborhoods of 'flat
minima' heighten generalization ability, SAM seeks 'flat valleys' by minimizing
the maximum loss caused by an adversary perturbing parameters within the
neighborhood. Although critical to account for sharpness of the loss function,
such an 'over-friendly adversary' can curtail the outmost level of
generalization. The novel approach of this contribution fosters stabilization
of adversaries through variance suppression (VaSSO) to avoid such friendliness.
VaSSO's provable stability safeguards its numerical improvement over SAM in
model-agnostic tasks, including image classification and machine translation.
In addition, experiments confirm that VaSSO endows SAM with robustness against
high levels of label noise. | Bingcong Li, Georgios B. Giannakis | 2023-09-27T13:18:23Z | http://arxiv.org/abs/2309.15639v3 | # Enhancing Sharpness-Aware Optimization
###### Abstract
Sharpness-aware minimization (SAM) has well documented merits in enhancing generalization of deep neural networks, even without sizable data augmentation. Embracing the geometry of the loss function, where neighborhoods of 'flat minima' heighten generalization ability, SAM seeks 'flat valleys' by minimizing the maximum loss caused by an _adversary_ perturbing parameters within the neighborhood. Although critical to account for sharpness of the loss function, such an _'over-friendly adversary'_ can curtail the outmost level of generalization. The novel approach of this contribution fosters stabilization of adversaries through _variance suppression_ (VaSSO) to avoid such friendliness. VaSSO's _provable_ stability safeguards its numerical improvement over SAM in model-agnostic tasks, including image classification and machine translation. In addition, experiments confirm that VaSSO endows SAM with robustness against high levels of label noise.
## 1 Introduction
Although deep neural networks (DNNs) have advanced the concept of "learning from data" and markedly improved performance across several applications in vision and language (Devlin et al., 2018; Tom et al., 2020), their overparametrized nature gives them a tendency to overfit the training data (Zhang et al., 2021). This has led to concerns about generalization, a practically crucial perspective that typically suffers from a gap relative to the training performance.
Improving generalizability is challenging. Common approaches include (model) regularization and data augmentation (Srivastava et al., 2014). While it is the default choice to integrate regularization such as weight decay and dropout into training, these methods are often insufficient for DNNs, especially when coping with complicated network architectures (Chen et al., 2022). Another line of effort resorts to suitable optimization schemes that attempt to find a generalizable local minimum. For example, SGD is preferable to Adam on certain overparameterized problems since it converges to maximum margin solutions (Wilson et al., 2017). Decoupling weight decay from Adam also empirically facilitates generalizability (Loshchilov and Hutter, 2017). Unfortunately, the underlying mechanism remains unclear, and whether the generalization merits carry over to other intricate learning tasks calls for additional theoretical elaboration.
Our main focus, sharpness aware minimization (SAM), is a highly compelling optimization approach that facilitates state-of-the-art generalizability by exploiting sharpness of loss landscape (Foret et al., 2021; Chen et al., 2022). A high-level interpretation of sharpness is how violently the loss fluctuates within a neighborhood. It has been shown through large-scale empirical studies that sharpness-based measures highly correlate with generalization (Jiang et al., 2019). Several works have successfully explored sharpness for
generalization advances. For example, Keskar et al. (2016) suggest that the batchsize of SGD influences solution flatness. Entropy SGD leverages local entropy in search of a flat valley (Chaudhari et al., 2017). Different from prior works, SAM induces flatness by explicitly minimizing the _adversarially_ perturbed loss, defined as the maximum loss over a neighboring area. Thanks to such a formulation, SAM has elevated generalization merits on various tasks in vision and language domains (Chen et al., 2022; Zhang et al., 2022). The mechanism underlying SAM's success has been theoretically investigated based on arguments of implicit regularization; see e.g., (Andriushchenko and Flammarion, 2022; Wen et al., 2023; Bartlett et al., 2022).
The adversarial perturbation, or _adversary_ for short, is central to SAM's heightened generalization because it effectively measures sharpness through the loss difference with the original model (Foret et al., 2021; Zhuang et al., 2022; Kim et al., 2022). In practice however, this awareness of sharpness is undermined by what we term the _friendly adversary_. Confined by the stochastic linearization adopted for computational efficiency, SAM's adversary only captures the sharpness for a particular minibatch of data, and can become a friend on other data samples. Because the global sharpness is not captured accurately, the friendly adversary precludes SAM from attaining its utmost generalizability. The present work advocates variance suppressed sharpness aware optimization (VaSSO1) to alleviate 'friendliness' by stabilizing adversaries. With its _provably_ stabilized adversary, VaSSO showcases favorable numerical performance on various deep learning tasks.
Footnote 1: Vasso coincides with the Greek nickname for Vasiliki.
All in all, our contribution is summarized as follows.
* We identify the _friendly adversary_ as an issue hidden in SAM's stochastic linearization: it can completely wipe out the generalization merits.
* A novel approach, VaSSO, is proposed to tackle this issue. VaSSO is equipped with what we termed _variance suppression_ to streamline a principled means for stabilizing adversaries. The theoretically guaranteed stability promotes refined global sharpness estimates, thereby alleviating the issue of friendly adversary.
* A side result is tighter convergence analyses for VaSSO and SAM that i) remove the bounded gradient assumption; and ii) deliver a more flexible choice for hyperparameters.
* Numerical experiments confirm the merits of stabilized adversary in VaSSO. It is demonstrated on image classification and neural machine translation tasks that VaSSO is capable of i) improving generalizability over SAM model-agnostically; and ii) nontrivially robustifying neural networks under the appearance of large label noise.
**Notation**. Bold lowercase (capital) letters denote column vectors (matrices); \(\|\mathbf{x}\|\) stands for \(\ell_{2}\) norm of vector \(\mathbf{x}\); and \(\langle\mathbf{x},\mathbf{y}\rangle\) is the inner product of \(\mathbf{x}\) and \(\mathbf{y}\). \(\mathbb{S}_{\rho}(\mathbf{x})\) denotes the surface of a ball with radius \(\rho\) centered at \(\mathbf{x}\), i.e., \(\mathbb{S}_{\rho}(\mathbf{x}):=\{\mathbf{x}+\rho\mathbf{u}\mid\|\mathbf{u}\|=1\}\).
## 2 The known, the good, and the challenge of SAM
This section starts with a brief recap of SAM (i.e., the known), followed by refined analyses and positive results regarding its convergence (i.e., the good). Lastly, the _friendly adversary_ issue is explained in detail and numerically illustrated.
### The known
Targeting a minimum in a flat basin, SAM enforces a small loss over the entire neighborhood in the parameter space (Foret et al., 2021). This idea is formalized by the minimax problem
\[\min_{\mathbf{x}}\max_{\|\mathbf{\epsilon}\|\leq\rho}f\big{(}\mathbf{x}+\mathbf{ \epsilon}\big{)} \tag{1}\]
where \(\rho\) is the radius of the considered neighborhood, and the nonconvex objective is defined as \(f(\mathbf{x}):=\mathbb{E}_{\mathcal{B}}[f_{\mathcal{B}}(\mathbf{x})]\). Here, \(\mathbf{x}\) is the neural network parameter, and \(\mathcal{B}\) is a random batch of data. The merit of such a formulation resides in its implicit sharpness measure \(\max_{\|\mathbf{\epsilon}\|\leq\rho}f\big{(}\mathbf{x}+\mathbf{\epsilon}\big{)}-f(\mathbf{x})\), which effectively drives the optimization trajectory towards a desirable flat valley (Kim et al., 2022).
The inner maximization of (1) has a natural interpretation as finding an _adversary_. Critical as it is, obtaining an adversary calls for _stochastic linearization_ to alleviate computational concerns, i.e.,
\[\mathbf{\epsilon}_{t}=\operatorname*{arg\,max}_{\|\mathbf{\epsilon}\|\leq\rho}f( \mathbf{x}_{t}+\mathbf{\epsilon})\stackrel{{(a)}}{{\approx}} \operatorname*{arg\,max}_{\|\mathbf{\epsilon}\|\leq\rho}f(\mathbf{x}_{t})+\langle \nabla f(\mathbf{x}_{t}),\mathbf{\epsilon}\rangle\stackrel{{(b)}}{{ \approx}}\operatorname*{arg\,max}_{\|\mathbf{\epsilon}\|\leq\rho}f(\mathbf{x}_{t} )+\langle\mathbf{g}_{t}(\mathbf{x}_{t}),\mathbf{\epsilon}\rangle \tag{2}\]
where linearization \((a)\) relies on the first-order Taylor expansion of \(f(\mathbf{x}_{t}+\mathbf{\epsilon})\). This is typically accurate given the choice of a small \(\rho\). A stochastic gradient \(\mathbf{g}_{t}(\mathbf{x}_{t})\) then substitutes for \(\nabla f(\mathbf{x}_{t})\) in \((b)\) to reduce the computational burden of a full gradient. Catalyzed by the stochastic linearization in (2), SAM's adversary can be calculated in closed form
\[\boxed{\text{SAM:}\quad\mathbf{\epsilon}_{t}=\rho\frac{\mathbf{g}_{t}(\mathbf{x}_ {t})}{\|\mathbf{g}_{t}(\mathbf{x}_{t})\|}.} \tag{3}\]
SAM then adopts the stochastic gradient of adversary \(\mathbf{g}_{t}(\mathbf{x}_{t}+\mathbf{\epsilon}_{t})\) to update \(\mathbf{x}_{t}\) in a SGD fashion. A step-by-step implementation is summarized in Alg. 1, where the means to find an adversary in line 4 is presented in a generic form in order to unify the algorithmic framework with later sections.
```
1:Initialize:\(\mathbf{x}_{0},\rho\)
2:for\(t=0,\dots,T-1\)do
3: Sample a minibatch \(\mathcal{B}_{t}\), and define stochastic gradient on \(\mathcal{B}_{t}\) as \(\mathbf{g}_{t}(\cdot)\)
4: Find \(\mathbf{\epsilon}_{t}\in\mathbb{S}_{\rho}(\mathbf{0})\) via stochastic linearization; e.g., (4) for VaSSO or (3) for SAM
5: Calculate stochastic gradient \(\mathbf{g}_{t}(\mathbf{x}_{t}+\mathbf{\epsilon}_{t})\)
6: Update model via \(\mathbf{x}_{t+1}=\mathbf{x}_{t}-\eta\mathbf{g}_{t}(\mathbf{x}_{t}+\mathbf{ \epsilon}_{t})\)
7:endfor
8:Return:\(\mathbf{x}_{T}\)
```
**Algorithm 1** Generic form of SAM
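To make Alg. 1 concrete, a simplified PyTorch sketch of one SAM iteration (lines 3-6, with the adversary from Eq. (3)) is given below. It is our own illustration: `params` is a list of parameter tensors with `requires_grad=True`, and `loss_fn(params, batch)` is a placeholder for evaluating \(f_{\mathcal{B}_t}\); practical implementations typically wrap this logic in an optimizer class.

```python
import torch

def sam_step(params, loss_fn, batch, rho=0.05, lr=0.1):
    # Lines 3-4: stochastic gradient at x_t and adversary eps_t = rho * g_t / ||g_t||.
    loss_fn(params, batch).backward()
    grads = [p.grad.detach().clone() for p in params]
    g_norm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12
    eps = [rho * g / g_norm for g in grads]
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.add_(e)          # move to x_t + eps_t
            p.grad = None
    # Line 5: stochastic gradient at the perturbed point x_t + eps_t.
    loss_fn(params, batch).backward()
    # Line 6: undo the perturbation and descend with g_t(x_t + eps_t).
    with torch.no_grad():
        for p, e in zip(params, eps):
            p.sub_(e)
            p.sub_(lr * p.grad)
            p.grad = None
```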
### The good
To provide a comprehensive understanding about SAM, this subsection focuses on Alg. 1, and establishes its convergence for (1). Some necessary assumptions are listed below, all of which are common for nonconvex stochastic optimization (Ghadimi and Lan, 2013; Bottou et al., 2016; Mi et al., 2022; Zhuang et al., 2022).
**Assumption 1** (lower bounded loss).: \(f(\mathbf{x})\) _is lower bounded, i.e., \(f(\mathbf{x})\geq f^{*},\forall\mathbf{x}\)._
**Assumption 2** (smoothness).: _The stochastic gradient \(\mathbf{g}(\mathbf{x})\) is \(L\)-Lipschitz, i.e., \(\|\mathbf{g}(\mathbf{x})-\mathbf{g}(\mathbf{y})\|\leq L\|\mathbf{x}-\mathbf{y} \|,\forall\mathbf{x},\mathbf{y}\)._
**Assumption 3** (bounded variance).: _The stochastic gradient \(\mathbf{g}(\mathbf{x})\) is unbiased with bounded variance, that is, \(\mathbb{E}[\mathbf{g}(\mathbf{x})|\mathbf{x}]=\nabla f(\mathbf{x})\) and \(\mathbb{E}[\|\mathbf{g}(\mathbf{x})-\nabla f(\mathbf{x})\|^{2}|\mathbf{x}]= \sigma^{2}\) for some \(\sigma>0\)._
The constraint of (1) is never violated since \(\|\mathbf{\epsilon}_{t}\|=\rho\) holds for each \(t\); see line 4 in Alg. 1. Hence, the convergence of SAM pertains to the behavior of objective, where a tight result is given below.
**Theorem 1** (SAM convergence).: _Suppose that Assumptions 1 - 3 hold. Let \(\eta_{t}\equiv\eta=\frac{\eta_{0}}{\sqrt{T}}\leq\frac{2}{3L}\), and \(\rho=\frac{\rho_{0}}{\sqrt{T}}\). Then with \(c_{0}=1-\frac{3L\eta}{2}\) (clearly \(0<c_{0}<1\)), Alg. 1 guarantees that_
\[\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\big{[}\|\nabla f(\mathbf{x}_{t})\|^{2} \big{]}\leq\mathcal{O}\bigg{(}\frac{\sigma^{2}}{\sqrt{T}}\bigg{)}\quad\text{ and}\quad\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\big{[}\|\nabla f(\mathbf{x}_{t}+\mathbf{ \epsilon}_{t})\|^{2}\big{]}\leq\mathcal{O}\bigg{(}\frac{\sigma^{2}}{\sqrt{T}} \bigg{)}.\]
The convergence rate of SAM is the same as SGD up to constant factors, where the detailed expression hidden under big \(\mathcal{O}\) notation can be found in Appendix D. Our results eliminate the need for the bounded gradient assumption compared to existing analyses in (Mi et al., 2022; Zhuang et al., 2022). Moreover, Theorem 1 enables a much larger choice of \(\rho=\mathcal{O}(T^{-1/2})\) relative to (Andriushchenko and Flammarion, 2022), where the latter only supports \(\rho=\mathcal{O}(T^{-1/4})\).
A message from Theorem 1 is that _any_ adversary satisfying \(\mathbf{\epsilon}_{t}\in\mathbb{S}_{\rho}(\mathbf{0})\) ensures converge. Because the surface \(\mathbb{S}_{\rho}(\mathbf{0})\) is a gigantic space, it challenges the plausible optimality of the adversary and poses a natural question - _is it possible to find a more powerful adversary for generalization advances?_
### The challenge: friendly adversary
**Adversary to one minibatch is a friend of others.** SAM's adversary is'malicious' for minibatch \(\mathcal{B}_{t}\) but not necessarily for other data because it only safeguards \(f_{\mathcal{B}_{t}}(\mathbf{x}_{t}+\mathbf{\epsilon}_{t})-f_{\mathcal{B}_{t}}( \mathbf{x}_{t})\geq 0\) for a small \(\rho\). In fact, it can be shown that \(f_{\mathcal{B}}(\mathbf{x}_{t}+\mathbf{\epsilon}_{t})-f_{\mathcal{B}}(\mathbf{x} _{t})\leq 0\) whenever the stochastic gradients do not align well, i.e., \(\langle\mathbf{g}_{t}(\mathbf{x}_{t}),\mathbf{g}_{\mathcal{B}}(\mathbf{x}_{t} )\rangle\leq 0\). Note that such misalignment is common because of the variance in massive training datasets. This issue is referred to as _friendly adversary_, and it implies that the adversary \(\mathbf{\epsilon}_{t}\) cannot accurately depict the global sharpness of \(\mathbf{x}_{t}\). Note that the 'friendly adversary' also has a more involved interpretation, that is, \(\mathbf{g}_{t}(\mathbf{x}_{t})\) falls outside the column space of Hessian at convergence; see more discussions after (Wen et al., 2023, Definition 4.3). This misalignment of higher order derivatives undermines the inductive bias of SAM, thereby worsens generalization.
To numerically visualize the catastrophic impact of the friendly adversary, we manually introduce one by replacing line 4 of Alg. 1 with \(\tilde{\mathbf{\epsilon}}_{t}=\rho\tilde{\mathbf{g}}_{t}(\mathbf{x}_{t})/\|\tilde{\mathbf{g}}_{t}(\mathbf{x}_{t})\|\), where \(\tilde{\mathbf{g}}_{t}\) denotes the gradient on \(\tilde{\mathcal{B}}_{t}\), a randomly
Figure 1: (a) A friendly adversary erases the generalization merits of SAM; (b) \(m\)-sharpness may _not_ directly correlate with variance since noisy gradient degrades generalization; and (c) \(m\)-sharpness may not hold universally. Note that test accuracies in (a) and (b) are normalized to SGD.
sampled batch of the same size as \(\mathcal{B}_{t}\). This modified approach is denoted as SAM-db, and its performance for i) ResNet-18 on CIFAR10 and ii) ResNet-34 on CIFAR1002 can be found in Fig. 1(a). Note that the test accuracy is normalized relative to SGD for the ease of visualization. It is evident that the friendly \(\tilde{\mathbf{\epsilon}}_{t}\) in SAM-db almost erases the generalization benefits entirely.
Footnote 2: [https://www.cs.toronto.edu/~kriz/cifar.html](https://www.cs.toronto.edu/~kriz/cifar.html)
**Source of friendly adversary.** The major cause of the friendly adversary is the gradient variance, which equivalently translates to the lack of stability in SAM's stochastic linearization \((2b)\). An illustrative three-dimensional example is shown in Fig. 2, where we plot the adversary \(\mathbf{\epsilon}_{t}\) obtained from different realizations of \(\mathbf{g}_{t}\) in \((2b)\). The minibatch gradient is simulated by adding Gaussian noise to the true gradient. When the signal-to-noise ratio (SNR) is similar to a practical scenario (ResNet-18 on CIFAR10, shown in Fig. 2 (e)), it can be seen in Fig. 2 (c) and (d) that the adversaries _almost uniformly_ spread over the norm ball, which strongly indicates their deficiency for sharpness evaluation.
**Friendly adversary through the lens of Frank Wolfe.** Additional evidence supporting SAM's friendly adversary resides in its connection to stochastic Frank Wolfe (SFW), which also heavily relies on stochastic linearization (Reddi et al., 2016). The stability of SFW is known to be vulnerable - its convergence cannot be guaranteed without a sufficiently large batchsize. As thoroughly discussed in Appendix A, the means to obtain the adversary in SAM is tantamount to one step of SFW with a _constant_ batchsize. This signals the possible instability of SAM's stochastic linearization.
### A detailed look at friendly adversaries
The gradient variance is a major cause of SAM's friendly adversary and unstable stochastic linearization. At first glance, however, this seems to conflict with an _empirical_ note termed \(m\)-sharpness, which states that the benefit of SAM is clearer when \(\mathbf{\epsilon}_{t}\) is found using a subsampled \(\mathcal{B}_{t}\) of size \(m\) (i.e., with larger variance).
Since \(m\)-sharpness hinges heavily upon the loss curvature, it is unlikely to hold universally. For example, when a transformer is trained on the IWSLT-14 dataset, the test performance (BLEU) decreases with smaller \(m\) even if \(\rho\) is tuned carefully; see Fig. 1(c). On the theoretical side, an example is provided in (Andriushchenko and Flammarion, 2022, Sec. 3) suggesting that \(m\)-sharpness is not necessarily related to sharpness or generalization. Moreover, there also exist specific choices of \(m\) for which the \(m\)-sharpness formulation is ill-posed. We expand on this in Appendix B.
Even in the regime where \(m\)-sharpness is empirically observed, such as ResNet-18 on CIFAR10 and ResNet-34 on CIFAR100, we show through experiments that \(m\)-sharpness is _not_ a consequence of gradient variance, and thus does not contradict the friendly adversary issue tackled in this work.
**Observation 1. Same variance, different generalization.** Let \(m=128\) and batchsize \(b=128\). Recall the SAM-db experiment in Fig. 1(a). If \(m\)-sharpness were a direct result of gradient variance, it would be logical to expect SAM-db to have performance comparable to SAM, simply because their batchsizes (hence variances) for finding the adversary are the same. Unfortunately, SAM-db degrades accuracy. We further increase the variance of \(\tilde{\mathbf{g}}_{t}(\mathbf{x}_{t})\) by setting \(m=64\). The resultant algorithm is denoted as SAM-db-m/2. It does not catch up with SAM and performs even worse than SAM-db. These experiments validate that variance/stability correlates with the friendly adversary rather than \(m\)-sharpness.
**Observation 2. Enlarged variance degrades generalization.** We explicitly increase the variance when finding the adversary by adding Gaussian noise \(\mathbf{\zeta}\) to \(\mathbf{g}_{t}(\mathbf{x}_{t})\), i.e., \(\hat{\mathbf{\epsilon}}_{t}=\rho\frac{\mathbf{g}_{t}(\mathbf{x}_{t})+\mathbf{\zeta}}{\|\mathbf{g}_{t}(\mathbf{x}_{t})+\mathbf{\zeta}\|}\). After tuning the best \(\rho\) to compensate for the variance of \(\mathbf{\zeta}\), the test performance is plotted in Fig. 1(b). It can be seen that the generalization merits clearly decrease with larger variance on both ResNet-18 and ResNet-34. This again illustrates that the plausible benefit of \(m\)-sharpness does not stem from increased variance.
In sum, Observations 1 and 2 jointly suggest that gradient variance correlates with the friendly adversary rather than with \(m\)-sharpness; a full understanding of the latter is beyond the scope of the current work.
## 3 Variance-supressed sharpness-aware optimization (VaSSO)
This section advocates variance suppression to handle the friendly adversary. We start with the design of VaSSO, then establish its stability. We also touch upon implementation and possible extensions.
### Algorithm design and stability analysis
A straightforward attempt towards stability is to equip SAM's stochastic linearization with variance-reduced gradients such as SVRG and SARAH (Johnson and Zhang, 2013; Nguyen et al., 2017; Li et al., 2020). However, the requirement to compute a full gradient every few iterations is infeasible and hardly scales for tasks such as training DNNs.
The proposed variance suppression (VaSSO) overcomes this computational burden through a novel yet simple stochastic linearization. For a prescribed \(\theta\in(0,1)\), VaSSO is summarized below
\[\textbf{VaSSO:}\quad\mathbf{d}_{t}=(1-\theta)\mathbf{d}_{t-1}+ \theta\mathbf{g}_{t}(\mathbf{x}_{t}) \tag{4a}\] \[\boldsymbol{\epsilon}_{t}=\operatorname*{arg\,max}_{\|\boldsymbol{ \epsilon}\|\leq\rho}f(\mathbf{x}_{t})+\langle\mathbf{d}_{t},\boldsymbol{ \epsilon}\rangle=\rho\frac{\mathbf{d}_{t}}{\|\mathbf{d}_{t}\|}. \tag{4b}\]
Compared with (2) of SAM, the key difference is that VaSSO relies on slope \(\mathbf{d}_{t}\) for a more stable stochastic linearization as shown in (4b). The slope \(\mathbf{d}_{t}\) is an exponentially moving average (EMA) of \(\{\mathbf{g}_{t}(\mathbf{x}_{t})\}_{t}\) such that the change over consecutive iterations is smoothed. Noticing that \(\boldsymbol{\epsilon}_{t}\) and \(\mathbf{d}_{t}\) share the same direction, the relatively smoothed \(\{\mathbf{d}_{t}\}_{t}\) thus imply the stability of \(\{\boldsymbol{\epsilon}_{t}\}_{t}\) in VaSSO. Moreover, as \(\mathbf{d}_{t}\) processes information of different minibatch data, the global sharpness can be captured in a principled manner to alleviate the friendly adversary challenge.
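A minimal sketch of the adversary computation in (4) is given below (our own illustration; `grads` stands for the list of minibatch gradient tensors \(\mathbf{g}_{t}(\mathbf{x}_{t})\), the first-step initialization of \(\mathbf{d}\) is one common choice, and the hyperparameter values are only indicative). Replacing line 4 of Alg. 1 with this routine turns the generic template into VaSSO.

```python
import torch

class VassoAdversary:
    """Eq. (4): keep an EMA slope d_t and return eps_t = rho * d_t / ||d_t||."""

    def __init__(self, theta=0.4, rho=0.1):
        self.theta, self.rho, self.d = theta, rho, None

    def perturbation(self, grads):
        # (4a): d_t = (1 - theta) * d_{t-1} + theta * g_t(x_t)
        if self.d is None:
            self.d = [g.clone() for g in grads]   # initialize d_0 with the current gradient
        else:
            for d, g in zip(self.d, grads):
                d.mul_(1 - self.theta).add_(self.theta * g)
        # (4b): eps_t = rho * d_t / ||d_t||, with the norm taken over all parameters jointly
        d_norm = torch.sqrt(sum((d ** 2).sum() for d in self.d)) + 1e-12
        return [self.rho * d / d_norm for d in self.d]
```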
To theoretically characterize the effectiveness of VaSSO, our first result considers \(\mathbf{d}_{t}\) as a qualified strategy to estimate \(\nabla f(\mathbf{x}_{t})\), and delves into its mean square error (MSE).
**Theorem 2** (Variance suppression).: _Suppose that Assumptions 1 - 3 hold. Let Alg. 1 equip with i) \(\boldsymbol{\epsilon}_{t}\) obtained by (4) with \(\theta\in(0,1)\); and, ii) \(\eta_{t}\) and \(\rho\) selected the same as Theorem 1. VaSSO guarantees that the MSE of \(\mathbf{d}_{t}\) is bounded by_
\[\mathbb{E}\big{[}\|\mathbf{d}_{t}-\nabla f(\mathbf{x}_{t})\|^{2} \big{]}\leq\theta\sigma^{2}+\mathcal{O}\bigg{(}\frac{(1-\theta)^{2}\sigma^{2 }}{\theta^{2}\sqrt{T}}\bigg{)}. \tag{5}\]
Figure 2: (a) - (d) SAM’s adversaries spread over the surface; (e) SNR is in \([0.01,0.1]\) when training a ResNet-18 on CIFAR10, where the SNR is calculated at the first iteration of every epoch.
Because SAM's gradient estimate has a looser bound on MSE (or variance), that is, \(\mathbb{E}[\|\mathbf{g}_{t}-\nabla f(\mathbf{x}_{t})\|^{2}]\leq\sigma^{2}\), the shrunk MSE in Theorem 2 justifies the name of variance suppression.
Next, we quantify the stability invoked with the suppressed variance. It is convenient to start with necessary notation. Define the _quality_ of a stochastic linearization at \(\mathbf{x}_{t}\) with slope \(\mathbf{v}\) as \(\mathcal{L}_{t}(\mathbf{v}):=\max_{\|\boldsymbol{\epsilon}\|\leq\rho}f( \mathbf{x}_{t})+\langle\mathbf{v},\boldsymbol{\epsilon}\rangle\). For example, \(\mathcal{L}_{t}(\mathbf{d}_{t})\) and \(\mathcal{L}_{t}\big{(}\mathbf{g}_{t}(\mathbf{x}_{t})\big{)}\) are quality of VaSSO and SAM, respectively. Another critical case of concern is \(\mathcal{L}_{t}\big{(}\nabla f(\mathbf{x}_{t})\big{)}\). It is shown in (Zhuang et al., 2022) that \(\mathcal{L}_{t}\big{(}\nabla f(\mathbf{x}_{t})\big{)}\approx\max_{\| \boldsymbol{\epsilon}\|\leq\rho}f(\mathbf{x}_{t}+\boldsymbol{\epsilon})\) given a small \(\rho\). Moreover, \(\mathcal{L}_{t}\big{(}\nabla f(\mathbf{x}_{t})\big{)}-f(\mathbf{x}_{t})\) is also an accurate approximation to the sharpness (Zhuang et al., 2022). These observations safeguard \(\mathcal{L}_{t}(\nabla f(\mathbf{x}_{t}))\) as the anchor when analyzing the stability of SAM and VaSSO.
**Definition 1** (\(\delta\)-stability).: _A stochastic linearization with slope \(\mathbf{v}\) is said to be \(\delta\)-stable if its quality satisfies \(\mathbb{E}\big{[}|\mathcal{L}_{t}(\mathbf{v})-\mathcal{L}_{t}(\nabla f( \mathbf{x}_{t}))|\big{]}\leq\delta\)._
A larger \(\delta\) implies a more friendly adversary, hence is less preferable. We are now well-prepared for our main results on adversary's stability.
**Theorem 3** (Adversaries of VaSSO is more stable than SAM.).: _Suppose that Assumptions 1 - 3 hold. Under the same hyperparameter choices as Theorem 2, the stochastic linearization is \(\big{[}\sqrt{\theta}\rho\sigma+\mathcal{O}(\frac{\rho\sigma}{\theta T^{1/4}}) \big{]}\)-stable for VaSSO, while \(\rho\sigma\)-stable in SAM._
Theorem 3 demonstrates that VaSSO alleviates the friendly adversary problem by promoting stability. Qualitatively, VaSSO is roughly \(\sqrt{\theta}\in(0,1)\) times more stable relative to SAM, since the term in big \(\mathcal{O}\) notation is negligible given a sufficiently large \(T\). Theorem 3 also guides the choice of \(\theta\) - preferably small but not too small, otherwise the term in big \(\mathcal{O}\) is inversely amplified.
### Additional perspectives of VaSSO
Having discussed stability, this subsection proceeds with other aspects of VaSSO for a thorough characterization.
**Convergence.** Summarized in the following corollary, the convergence of VaSSO can be pursued as a direct consequence of Theorem 1. The reason is that \(\boldsymbol{\epsilon}_{t}\in\mathbb{S}_{\rho}(\mathbf{0})\) is satisfied by (4).
**Corollary 1** (VaSSO convergence).: _Suppose that Assumptions 1 - 3 hold. Choosing \(\eta_{t}\) and \(\rho\) the same as Theorem 1, then for any \(\theta\in(0,1)\), VaSSO ensures that_
\[\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\big{[}\|\nabla f(\mathbf{x}_{t})\|^{2} \big{]}\leq\mathcal{O}\bigg{(}\frac{\sigma^{2}}{\sqrt{T}}\bigg{)}\quad\text{ and}\quad\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\big{[}\|\nabla f( \mathbf{x}_{t}+\boldsymbol{\epsilon}_{t})\|^{2}\big{]}\leq\mathcal{O}\bigg{(} \frac{\sigma^{2}}{\sqrt{T}}\bigg{)}.\]
**VaSSO better reflects sharpness around an optimum.** Consider a near-optimal region where \(\|\nabla f(\mathbf{x}_{t})\|\to 0\). Suppose that we are in a big data regime where \(\mathbf{g}_{t}(\mathbf{x}_{t})=\nabla f(\mathbf{x}_{t})+\boldsymbol{\zeta}\) for some Gaussian random variable \(\boldsymbol{\zeta}\). The covariance matrix of \(\boldsymbol{\zeta}\) is assumed to be \(\sigma^{2}\mathbf{I}\) for simplicity, but our discussion can be extended to more general scenarios using arguments from von Mises-Fisher statistics (Mardia and Jupp, 2000). SAM has difficulty estimating the flatness in this case, since \(\boldsymbol{\epsilon}_{t}\approx\rho\boldsymbol{\zeta}/\|\boldsymbol{\zeta}\|\) is uniformly distributed over \(\mathbb{S}_{\rho}(\mathbf{0})\) regardless of whether the neighboring region is sharp. On the other hand, VaSSO has \(\boldsymbol{\epsilon}_{t}=\rho\mathbf{d}_{t}/\|\mathbf{d}_{t}\|\). Because the gradients \(\{\mathbf{g}_{\tau}(\mathbf{x}_{\tau})\}_{\tau}\) in a sharper valley tend to have larger magnitude, their EMA \(\mathbf{d}_{t}\) is helpful for distinguishing sharp from flat valleys.
**Memory efficient implementation.** Although at first glance VaSSO has to keep both \(\mathbf{d}_{t}\) and \(\boldsymbol{\epsilon}_{t}\) in memory, it can be implemented in a much more memory efficient manner. It is sufficient to store \(\mathbf{d}_{t}\) together
with the scalar \(\|\mathbf{d}_{t}\|\) so that \(\mathbf{\epsilon}_{t}\) can be recovered on demand through normalization; see (4b). Hence, VaSSO has the same memory consumption as SAM.
**Extensions.** VaSSO has the potential to boost the performance of other SAM family approaches by stabilizing their stochastic linearization through variance suppression. For example, adaptive SAM methods (Kwon et al., 2021; Kim et al., 2022) ensure scale invariance for SAM, and GSAM (Zhuang et al., 2022) jointly minimizes a surrogate gap with (1). Nevertheless, these SAM variants leverage the stochastic linearization in (2). It is thus envisioned that VaSSO can also alleviate the possible friendly adversary issues therein. Confined by computational resources, we only integrate VaSSO with GSAM in our experiments, and additional evaluation has been added to our research agenda.
## 4 Numerical tests
To support our theoretical findings and validate the effectiveness of variance suppression, this section assesses the generalization performance of VaSSO via various learning tasks across vision and language domains. All experiments are run on NVIDIA V100 GPUs.
### Image classification
**Benchmarks.** Building on top of the selected base optimizer such as SGD and AdamW (Kingma and Ba, 2014; Loshchilov and Hutter, 2017), the test accuracy of VaSSO is compared with SAM and two adaptive approaches, ASAM and FisherSAM (Foret et al., 2021; Kwon et al., 2021; Kim et al., 2022).
**CIFAR10.** Neural networks including VGG-11, ResNet-18, WRN-28-10 and PyramidNet-110 are trained on CIFAR10. Standard data augmentation including random crop, random horizontal flip, normalization and cutout (Devries and Taylor, 2017) is leveraged. The first three models are trained for \(200\) epochs with a batchsize of \(128\), and PyramidNet-110 is trained for \(300\) epochs using batchsize \(256\). A cosine learning rate schedule is applied in all settings. The first three models use initial learning rate \(0.05\), and PyramidNet adopts \(0.1\). Weight decay is chosen as \(0.001\) for SAM, ASAM, FisherSAM and VaSSO following (Du et al., 2022; Mi et al., 2022), but \(0.0005\) for SGD. We tune \(\rho\) from \(\{0.01,0.05,0.1,0.2,0.5\}\) for SAM and find that \(\rho=0.1\) gives the best results for ResNet and WRN, while \(\rho=0.05\) and \(\rho=0.2\) suit VGG and PyramidNet best, respectively. ASAM and VaSSO adopt the same \(\rho\) as SAM. FisherSAM uses the recommended \(\rho=0.1\) (Kim et al., 2022). For VaSSO, we tune \(\theta\in\{0.4,0.9\}\) and report the best accuracy, although VaSSO with both parameters outperforms SAM. We find that \(\theta=0.4\) works the best for ResNet-18 and WRN-28-10 while \(\theta=0.9\) achieves the best accuracy in other cases.
It is shown in Table 1 that VaSSO offers \(0.2\) to \(0.3\) accuracy improvement over SAM in all tested scenarios except for PyramidNet-110, where the improvement is about \(0.1\). These results illustrate that suppressed variance and the induced stabilized adversary are indeed beneficial for generalizability.
**CIFAR100.** The training setups on this dataset are the same as those on CIFAR10, except that the best choice of \(\rho\) for SAM is \(0.2\). The numerical results are listed in Table 2. It can be seen that SAM has a significant generalization gain over SGD, and this gain is further amplified by VaSSO. On all tested models, VaSSO improves the test accuracy of SAM by \(0.2\) to \(0.3\). These experiments once again corroborate the generalization merits of VaSSO as a blessing of the stabilized adversary.
**ImageNet.** Next, we investigate the performance of VaSSO on larger scale experiments by training ResNet-50 and ViT-S/32 on ImageNet (Deng et al., 2009). Implementation details are deferred to Appendix C. Note that the baseline optimizer is SGD for ResNet and AdamW for ViT. VaSSO is also integrated with GSAM (V+G) to demonstrate that the variance suppression also benefits other SAM type approaches (Zhuang et al., 2022). For ResNet-50, it can be observed that vanilla VaSSO outperforms other SAM variants,
and offers a gain of \(0.26\) over SAM. V+G showcases the best performance with a gain of \(0.28\) on top of GSAM. VaSSO and V+G also exhibit the best test accuracy on ViT-S/32, where VaSSO improves SAM by 0.56 and V+G outperforms GSAM by 0.19. These numerical improvements demonstrate that the stability of adversaries is indeed desirable.
### Neural machine translation
Having demonstrated the benefits of a suppressed variance on vision tasks, we then test VaSSO on German to English translation using a Transformer (Vaswani et al., 2017) trained on IWSLT-14 dataset (Cettolo et al., 2014). The fairseq implementation is adopted. AdamW is chosen as base optimizer in SAM and VaSSO because of its improved performance over SGD. The learning rate of AdamW is initialized to \(5\times 10^{-4}\) and then follows an inverse square root schedule. For momentum, we choose \(\beta_{1}=0.9\) and \(\beta_{2}=0.98\). Label smoothing is also applied with a rate of \(0.1\). Hyperparameter \(\rho\) is tuned for SAM from \(\{0.01,0.05,0.1,0.2\}\), and \(\rho=0.1\) performs the best. The same \(\rho\) is picked for ASAM and VaSSO as well.
The validation perplexity and test BLEU scores are shown in Table 4. It can be seen that both SAM and ASAM have better performance on validation perplexity and BLEU relative to AdamW. Although VaSSO with \(\theta=0.9\) has slightly higher validation perplexity, its BLEU score outperforms SAM and ASAM. VaSSO with \(\theta=0.4\) showcases the best generalization performance on this task, providing a \(0.22\) improvement in BLEU score relative to AdamW. This aligns with Theorems 2 and 3, which suggest that a small \(\theta\) is more beneficial to the stability of the adversary.
### Additional tests
Additional experiments are conducted to corroborate the merits of suppressed variance and stabilized adversary in VaSSO. In particular, this subsection evaluates several flatness related metrics after training a ResNet-18 on CIFAR10 for \(200\) epochs, utilizing the same hyperparameters as those in Section 4.1.
**Hessian spectrum.** We first assess Hessian eigenvalues of a ResNet-18 trained with SAM and VaSSO.
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline CIFAR10 & SGD & SAM & ASAM & FisherSAM & VaSSO \\ \hline
**VGG-11-BN** & 93.20\({}_{\pm 0.05}\) & 93.82\({}_{\pm 0.05}\) & 93.47\({}_{\pm 0.04}\) & 93.60\({}_{\pm 0.09}\) & **94.10\({}_{\pm 0.07}\)** \\
**ResNet-18** & 96.25\({}_{\pm 0.06}\) & 96.58\({}_{\pm 0.10}\) & 96.33\({}_{\pm 0.09}\) & 96.72\({}_{\pm 0.03}\) & **96.77\({}_{\pm 0.09}\)** \\
**WRN-28-10** & 97.08\({}_{\pm 0.16}\) & 97.32\({}_{\pm 0.11}\) & 97.15\({}_{\pm 0.05}\) & 97.46\({}_{\pm 0.18}\) & **97.54\({}_{\pm 0.12}\)** \\
**PyramidNet-110** & 97.39\({}_{\pm 0.09}\) & 97.85\({}_{\pm 0.14}\) & 97.56\({}_{\pm 0.11}\) & 97.84\({}_{\pm 0.18}\) & **97.93\({}_{\pm 0.08}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Test accuracy (%) of VaSSO on various neural networks trained on CIFAR10.
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline CIFAR100 & SGD & SAM & ASAM & FisherSAM & VaSSO \\ \hline
**ResNet-18** & 77.90\({}_{\pm 0.07}\) & 80.96\({}_{\pm 0.12}\) & 79.91\({}_{\pm 0.04}\) & 80.99\({}_{\pm 0.13}\) & **81.30\({}_{\pm 0.13}\)** \\
**WRN-28-10** & 81.71\({}_{\pm 0.13}\) & 84.88\({}_{\pm 0.10}\) & 83.54\({}_{\pm 0.14}\) & 84.91\({}_{\pm 0.07}\) & **85.06\({}_{\pm 0.05}\)** \\
**PyramidNet-110** & 83.50\({}_{\pm 0.12}\) & 85.60\({}_{\pm 0.11}\) & 83.72\({}_{\pm 0.09}\) & 85.55\({}_{\pm 0.14}\) & **85.85\({}_{\pm 0.09}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Test accuracy (%) of VaSSO on various neural networks trained on CIFAR100.
We focus on the largest eigenvalue \(\lambda_{1}\) and the ratio of the largest to the fifth largest eigenvalue \(\lambda_{1}/\lambda_{5}\). These measurements are also adopted in (Foret et al., 2021; Jastrzebski et al., 2020) to reflect the flatness of the solution, where smaller numbers are preferable. Because exact calculation of the Hessian spectrum is too expensive given the size of ResNet-18, we instead leverage the Lanczos algorithm for approximation (Ghorbani et al., 2019). The results can be found in Table 5. It can be seen that SAM indeed converges to a much flatter solution compared with SGD, and VaSSO further improves upon SAM. This confirms that the friendly adversary issue is indeed alleviated by the suppressed variance in VaSSO, which in turn boosts the generalization of ResNet18 as shown earlier in Section 4.1.
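For smaller models, the dominant eigenvalue \(\lambda_{1}\) can be estimated directly with power iteration on Hessian-vector products, of which the Lanczos procedure used above is a refinement that also yields further eigenvalues such as \(\lambda_{5}\). The sketch below is illustrative only; `loss_fn` is assumed to evaluate the loss on a fixed batch.

```python
import torch

def hessian_top_eigenvalue(loss_fn, params, num_iters=50):
    """Estimate lambda_1 of the Hessian via power iteration on Hessian-vector products."""
    # random unit starting vector, stored as one block per parameter tensor
    v = [torch.randn_like(p) for p in params]
    norm = torch.sqrt(sum((b ** 2).sum() for b in v))
    v = [b / norm for b in v]
    eig = 0.0
    for _ in range(num_iters):
        grads = torch.autograd.grad(loss_fn(), params, create_graph=True)
        gv = sum((g * b).sum() for g, b in zip(grads, v))
        hv = torch.autograd.grad(gv, params)                     # Hessian-vector product H v
        eig = sum((h * b).sum() for h, b in zip(hv, v)).item()   # Rayleigh quotient v^T H v
        norm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        v = [(h / norm).detach() for h in hv]
    return eig
```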
**Label noise.** It is known that SAM holds great potential to improve the robustness of neural networks in the presence of label noise in the training data (Foret et al., 2021). As the training loss landscape is largely perturbed by the label noise, this is a setting where the suppressed variance and stabilized adversaries are expected to be advantageous. In our experiments, we measure the performance of VaSSO in scenarios where a certain fraction of the training labels is randomly flipped. Considering \(\theta\in\{0.9,0.4,0.2\}\), the corresponding test accuracies are summarized in Table 6.
Our first observation is that VaSSO outperforms SAM at different levels of label noise. VaSSO delivers larger generalization improvements as the ratio of label noise grows. In the case of \(75\%\) label noise, VaSSO with \(\theta=0.4\) nontrivially outperforms SAM with an absolute improvement of more than \(5\), while VaSSO with \(\theta=0.2\) markedly improves SAM by roughly \(10\). In all scenarios, \(\theta=0.2\) showcases the best performance and \(\theta=0.9\) exhibits the worst generalization when comparing among the VaSSO variants. In addition, when fixing the choice of \(\theta\), e.g., \(\theta=0.2\), it is found that VaSSO has a larger absolute accuracy improvement over SAM under higher levels of label noise. These observations coincide with Theorem 3, which predicts that VaSSO is suitable for settings with larger label noise due to enhanced stability, especially when \(\theta\) is chosen small (but not too small).
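For reference, the label corruption described above can be generated with a short helper along the following lines; the function and sampling details are our own illustration of the protocol, not necessarily the exact recipe used in the experiments.

```python
import numpy as np

def flip_labels(labels, noise_ratio, num_classes=10, seed=0):
    """Randomly reassign a given fraction of labels to a different class."""
    rng = np.random.default_rng(seed)
    noisy = np.asarray(labels).copy()
    idx = rng.choice(len(noisy), size=int(noise_ratio * len(noisy)), replace=False)
    offsets = rng.integers(1, num_classes, size=len(idx))   # offsets in {1, ..., num_classes - 1}
    noisy[idx] = (noisy[idx] + offsets) % num_classes        # guaranteed to change the class
    return noisy
```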
\begin{table}
\begin{tabular}{c|c c c c c} \hline \hline & AdamW & SAM & ASAM & VaSSO (\(\theta=0.9\)) & VaSSO (\(\theta=0.4\)) \\ \hline val. ppl. & 5.02\({}_{\pm 0.03}\) & 5.00\({}_{\pm 0.04}\) & **4.99\({}_{\pm 0.03}\)** & 5.00\({}_{\pm 0.03}\) & **4.99\({}_{\pm 0.03}\)** \\ BLEU & 34.66\({}_{\pm 0.06}\) & 34.75\({}_{\pm 0.04}\) & 34.76\({}_{\pm 0.04}\) & 34.81\({}_{\pm 0.04}\) & **34.88\({}_{\pm 0.03}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance of VaSSO for training a Transformer on IWSLT-14 dataset.
\begin{table}
\begin{tabular}{c|c c c c c c} \hline \hline ImageNet & vanilla & SAM & ASAM & GSAM & VaSSO & V+G \\ \hline
**ResNet-50** & 76.62\(\pm_{0.12}\) & 77.16\(\pm_{0.14}\) & 77.10\(\pm_{0.16}\) & 77.20\(\pm_{0.13}\) & **77.42\(\pm_{0.13}\)** & **77.48\(\pm_{0.04}\)** \\
**ViT-S/32** & 68.12\(\pm_{0.05}\) & 68.98\(\pm_{0.08}\) & 68.74\(\pm_{0.11}\) & 69.42\(\pm_{0.18}\) & **69.54\(\pm_{0.15}\)** & **69.61\(\pm_{0.11}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 3: Test accuracy (%) of VaSSO on ImageNet, where V+G is short for VaSSO + GSAM.
## 5 Other related works
This section discusses additional related work on generalizability of DNNs. The possibility of blending VaSSO with other approaches is also entailed to broaden the scope of this work.
**Sharpness and generalization.** Since the study of Keskar et al. (2016), the relation between sharpness and generalization has been intensively investigated. It is observed that sharpness is closely correlated with the ratio between learning rate and batchsize in SGD (Jastrzebski et al., 2017). Theoretical understanding of the generalization error using sharpness-related measures can be found in, e.g., (Dziugaite and Roy, 2017; Neyshabur et al., 2017; Wang and Mao, 2022). These works justify the goal of seeking a flatter valley to enhance generalizability. Targeting a flatter minimum, approaches other than SAM have also been developed. For example, Izmailov et al. (2018) proposes stochastic weight averaging for DNNs. Wu et al. (2020) studies a similar algorithm to SAM while putting more emphasis on the robustness of adversarial training.
**Other SAM type approaches.** Besides the discussed ones such as GSAM and ASAM, (Zhao et al., 2022) proposes a variant of SAM by penalizing the gradient norm based on the observation where sharper valley tends to have gradient with larger norm. Barrett and Dherin (2021) arrive at a similar conclusion by analyzing the gradient flow. Exploiting multiple (ascent) steps to find an adversary is systematically studied in (Kim et al., 2023). SAM has also been extended to tackle the challenges in domain adaptation (Wang et al., 2023). However, these works overlook the friendly adversary issue, and the proposed VaSSO provides algorithmic possibilities for generalization benefits by stabilizing their adversaries. Since the desirable confluence with VaSSO can be intricate, we leave an in-depth investigation for future work.
**Limitation of VaSSO and possible solutions.** The drastically improved generalization of VaSSO comes at the cost of additional computation. Similar to SAM, VaSSO requires backpropagating twice per iteration. Various works have tackled this issue and developed lightweight SAM. LookSAM computes the extra stochastic gradient once every few iterations and reuses it in a fine-grained manner to approximate the additional gradient (Liu et al., 2022). ESAM obtains its adversary based on stochastic weight perturbation, and further saves computation by selecting a subset of the minibatch data for gradient computation (Du et al., 2022). The computational burden of SAM can be compressed by switching between SAM and SGD following a predesigned schedule (Zhao et al., 2022), or in an adaptive fashion (Jiang et al., 2023). SAF connects SAM with distillation for computational merits (Du et al., 2022). It should be pointed out that most of these works follow the stochastic linearization of SAM, hence can also encounter the friendly adversary issue. This opens the door to merging VaSSO with these approaches for generalization merits while keeping the computational overhead in check. This has been included in our research agenda.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline & SAM & VaSSO (\(\theta=0.9\)) & VaSSO (\(\theta=0.4\)) & VaSSO (\(\theta=0.2\)) \\ \hline
**25\% label noise** & 96.39\({}_{\pm 0.12}\) & 96.36\({}_{\pm 0.11}\) & 96.42\({}_{\pm 0.12}\) & **96.48\({}_{\pm 0.09}\)** \\
**50\% label noise** & 93.93\({}_{\pm 0.21}\) & 94.00\({}_{\pm 0.24}\) & 94.63\({}_{\pm 0.21}\) & **94.93\({}_{\pm 0.16}\)** \\
**75\% label noise** & 75.36\({}_{\pm 0.42}\) & 77.40\({}_{\pm 0.37}\) & 80.94\({}_{\pm 0.40}\) & **85.02\({}_{\pm 0.39}\)** \\ \hline \hline \end{tabular}
\end{table}
Table 6: Test accuracy (%) of VaSSO on CIFAR10 under different levels of label noise.
## 6 Concluding remarks
This contribution demonstrates that stabilizing the adversary through variance suppression consolidates the generalization merits of sharpness aware optimization. The proposed approach, VaSSO, provably facilitates stability over SAM. The theoretical merit of VaSSO reveals itself in numerical experiments, and catalyzes model-agnostic improvement over SAM across various vision and language tasks. Moreover, VaSSO nontrivially enhances model robustness against high levels of label noise. Our results corroborate VaSSO as a competitive alternative to SAM.
|
2309.13248 | Rethinking Amodal Video Segmentation from Learning Supervised Signals
with Object-centric Representation | Video amodal segmentation is a particularly challenging task in computer
vision, which requires to deduce the full shape of an object from the visible
parts of it. Recently, some studies have achieved promising performance by
using motion flow to integrate information across frames under a
self-supervised setting. However, motion flow has a clear limitation by the two
factors of moving cameras and object deformation. This paper presents a
rethinking to previous works. We particularly leverage the supervised signals
with object-centric representation in \textit{real-world scenarios}. The
underlying idea is the supervision signal of the specific object and the
features from different views can mutually benefit the deduction of the full
mask in any specific frame. We thus propose an Efficient object-centric
Representation amodal Segmentation (EoRaS). Specially, beyond solely relying on
supervision signals, we design a translation module to project image features
into the Bird's-Eye View (BEV), which introduces 3D information to improve
current feature quality. Furthermore, we propose a multi-view fusion layer
based temporal module which is equipped with a set of object slots and
interacts with features from different views by attention mechanism to fulfill
sufficient object representation completion. As a result, the full mask of the
object can be decoded from image features updated by object slots. Extensive
experiments on both real-world and synthetic benchmarks demonstrate the
superiority of our proposed method, achieving state-of-the-art performance. Our
code will be released at \url{https://github.com/kfan21/EoRaS}. | Ke Fan, Jingshi Lei, Xuelin Qian, Miaopeng Yu, Tianjun Xiao, Tong He, Zheng Zhang, Yanwei Fu | 2023-09-23T04:12:02Z | http://arxiv.org/abs/2309.13248v1 | # Rethinking Amodal Video Segmentation from Learning Supervised Signals with Object-centric Representation
###### Abstract
Video amodal segmentation is a particularly challenging task in computer vision, which requires deducing the full shape of an object from its visible parts. Recently, some studies have achieved promising performance by using motion flow to integrate information across frames under a self-supervised setting. However, motion flow is clearly limited by two factors: moving cameras and object deformation. This paper presents a rethinking of previous works. We particularly leverage the supervised signals with object-centric representation in real-world scenarios. The underlying idea is that the supervision signal of the specific object and the features from different views can mutually benefit the deduction of the full mask in any specific frame. We thus propose an Efficient object-centric Representation amodal Segmentation (EoRaS). Specifically, beyond solely relying on supervision signals, we design a translation module to project image features into the Bird's-Eye View (BEV), which introduces 3D information to improve current feature quality. Furthermore, we propose a multi-view fusion layer based temporal module which is equipped with a set of object slots and interacts with features from different views by an attention mechanism to fulfill sufficient object representation completion. As a result, the full mask of the object can be decoded from image features updated by object slots. Extensive experiments on both real-world and synthetic benchmarks demonstrate the superiority of our proposed method, achieving state-of-the-art performance. Our code will be released at [https://github.com/kfan21/EoRaS](https://github.com/kfan21/EoRaS).
Footnote †: \(\dagger\) co-first authors; \(\ast\) corresponding authors.
## 1 Introduction
Deep learning has demonstrated remarkable success in various computer vision tasks. Nevertheless, neural networks are limited to learning visible patterns in the data, and are typically challenged in reasoning about the broader and unseen components. Currently, most research in object detection and segmentation tasks concentrates on enhancing the visible part's performance, leaving few studies on inferring occluded information. Conversely, humans possess an innate ability to imagine and extrapolate, enabling us to easily complete an occluded part of an image based on prior knowledge. This critical capacity is instrumental in advancing deep learning models for real-world scenarios, such as medical diagnosis and autonomous driving. Therefore, the central issue addressed in this paper is the video amodal segmentation task, which aims to deduce an object's complete mask, whether it is partially obscured or not.
Prior studies on image amodal segmentation [22, 30, 32]
Figure 1: Illustrations of the difference between view prior, shape prior, and our model. While SaVos [35] draws support from the optical flow to realize the view prior, image-level amodal segmentation algorithms typically just utilize the shape prior brought in by the supervision signals. Consequently, they are limited by camera motion and complicated object types, respectively. Unlike the previous methods, beyond the mergence of those two priors, EoRaS utilizes view prior by object-centric learning and further introduces the BEV space where obstruction doesn’t exist, which enables our EoRaS to easily handle complex scenarios.
are over-reliant on prior knowledge, which actually hampers the model's generalization abilities, resulting in limited improvements under complex circumstances. For video amodal segmentation, Yao et al. [35] proposed that the occluded part of the current frame may appear in other frames, and therefore, information from all frames should be collected to fill in the occluded regions of any specific frame. While this method achieves promising results under the _self-supervised setting_, it fails when camera motion exists, as 2D warping is used to make connections within different frames, leading to distorted signals.
This paper aims to propose a better approach for video amodal segmentation by rethinking the importance of using supervised signals with object-centric representation. Such object-centric representations reflect the compositional nature of our world, and can potentially support more complex tasks like reasoning about relations between objects. While signals such as motion flow and shape priors have shown promising results, they are limited by moving cameras and complicated object types, respectively. In contrast, recent advances [6, 13, 21] in video object segmentation produce highly accurate object masks that are less sensitive to moving cameras, making them better suited as supervision signals. Surprisingly, such video object masks have not been fully exploited before.
To this end, we propose a novel approach that learns video amodal segmentation not only from observed object supervision signals in the current frame (_shape prior_) but also from integrated information of object features under different views (_view prior_). Our motivation is clearly shown in Fig. 1. By using visual patterns of other views to explain away occluded object parts in the current frame [31], our approach gets rid of optical flow and eliminates the shortcomings of mere reliance on shape priors. Our model is highly effective, even in complex scenarios.
In this paper, we propose a novel supervised method for the video amodal segmentation task that leverages a multi-view fusion layer based temporal module and a Bird's-Eye View (BEV) feature translation network. Rather than relying on warping the amodal prediction into the next frame using optical flow or using shape priors alone, we enhance the current frame features by incorporating feature information from different viewpoints and leveraging the supervision signals simultaneously. Specifically, we first extract front-view features from the videos using FPN50 [18]. Then, we employ a translation network to transform these front-view features into bird's-eye view features, which bring in 3D information through the usage of the intrinsic matrix. In contrast to some related work [28] extracting object-centric 3D representation by object reconstruction, the acquisition of BEV feature is simpler, faster, and easier to train. As each frame is equivalent to a unique view, features from both different frames and the BEV space, which carry shape information about the occluded part, are further utilized. We repurpose the vanilla object-centric representations [19] - object slots to integrate those information, which is accomplished by our novel multi-view fusion layer. Finally, we refine the front-view features using the updated object slots containing object information from multiple views and decode the full mask. Compared to previous methods [35], our model can handle scenarios with 3D viewing angle changes or complex object shapes better by leveraging shape knowledge and integrating information across multiple views simultaneously.
To evaluate our method, we conduct extensive experiments on real-world and synthetic amodal benchmarks. The results demonstrate that our model achieves outstanding performance compared to comparable models and effectively demonstrates the efficacy of our architecture.
In summary, our main contributions are listed below. (1) Our contribution lies in formulating the video amodal segmentation task using supervised signals for the first time. Our model efficiently learns the shape and view priors, enabling it to handle complex scenarios with ease. (2) We propose a novel approach to learning object-centric representations through a multi-view fusion layer based temporal module equipped with a set of object slots, which achieves significant improvement in the correlation of information from different views. (3) We introduce the novel concept of bird's-eye view features in our amodal task, which provides front-view features with 3D information, resulting in consistent benefits. (4) By utilizing the bird's-eye view generator and multi-view fusion layer based temporal module, our algorithm achieves remarkable improvement on both real-world and synthetic amodal benchmarks, highlighting the novelty of our approach.
## 2 Related Work
**Amodal segmentation** is a more challenging task than instance segmentation because it requires predicting the full shape of occluded objects through the visible parts. While previous literature has focused on using shape priors effectively through multi-level coding [22], variational autoencoder [14], shape prior memory codebook construction [32], mixing feature decoupling [16] or Bayesian model [29], relying solely on shape priors can lead to poor empirical performance due to distribution shifts between training data and real scenarios. To address this issue, [35] leverages spatiotemporal consistency and dense object motion to explain away occlusion. Although their work has made progress in video amodal segmentation, optical flow can cause object deformation in the presence of camera motion. In contrast, our proposed architecture introduces a novel approach that does not require optical flow and utilizes bird's-eye view features to bring in 3D information that enhances the learning of front-view features.
**Object-centric learning** aims at identifying all the objects from raw input for better understanding the complex scenes. Existing object-centric learning methods can be categorized into unsupervised and supervised methods. While unsupervised methods use image/scene reconstruction to extract object representations from images/scenes [2, 19, 25], supervised methods represent each object as a query embedding and pay much attention to obtaining a great initialization [3, 4, 7, 9, 13, 34]. Our EoRaS is more related to the supervised method in terms of constructing a set of learnable queries as an information container.
**BEV map generation** requires generating semantic maps in bird's-eye view space. Due to a lack of high-quality annotated data, most of the early work adopts weak supervision by utilizing stereo information [20, 21] or obtaining pseudo labels [27]. Others directly translate semantic segmentation maps from image space into bird's-eye view space [6, 26]. With the advent of large-scale annotated datasets, research on supervised methods has also made some progress. [23] and [24] respectively take advantage of a dense transformer layer and 1D sequence-to-sequence translations to learn a map representation. [1] and [17] instead blend features from multi-camera images to construct the BEV map. In our EoRaS, the bird's-eye view feature is utilized to integrate 3D information into the front-view feature. To the best of our knowledge, it's the first attempt to incorporate a BEV translation module in the amodal segmentation task.
## 3 Methodology
This paper focuses on the video amodal segmentation task. Specifically, given a video sequence \(\{I_{t}\}_{t=1}^{T}\) with \(K\) objects, EoRaS aims to predict the full mask \(\{M_{t}^{k}\}\) of each object in all frames, where \(k\) is the object index. In our EoRaS, the visible masks \(\{V_{t}^{k}\}\) also serve as supervision _but will not be utilized at the test phase_.
### Architecture
The overall architecture of EoRaS is shown in Figure 2. Our EoRaS is mainly comprised of four modules: (i) the feature encoding module which extracts the front-view feature \(f_{t}^{k}\) from the input frames; (ii) the BEV translation network which converts the front-view features into bird's-eye view angle \(b_{t}^{k}\) using the camera intrinsic matrix \(K\) and neural network; (iii) the multi-view fusion layer based temporal module which utilizes the object slots updated through the forward and backward streams to integrate the feature information from different views and fulfill the completion of each front-view feature; and (iv) the deconvolution network that estimates the full masks and visible masks of the current frame simultaneously.
**Feature Encoding Module** In this module, FPN50 [18] pretrained on ImageNet [5] is used to extract features from the input frames. These features are obtained from a frontal perspective and capture a lot of information but will fail to make inferences about the missing parts of the objects.
\[f_{t}^{k}=FPN(I_{t}^{k}) \tag{1}\]
**BEV Translation Network** The features from the bird's-eye view (BEV) are widely used and work well in autonomous driving research. Recall that features from different perspectives are likely to contain the missing part information and contribute to the full mask deduction of the
Figure 2: A schematic illustration of our method. The novelty of this architecture mainly lies in the BEV translation network and the multi-view fusion layer. The SP and SR represent the shape provider and receiver (see Section 3.1 for detail), respectively.
current frame. As obstruction doesn't exist in BEV space unless objects are stacked on top of each other, it is reasonable to introduce BEV as a special perspective to promote the completion of the front-view feature. Consider a horizontally placed camera; for each frame, a 3D volume feature \(V_{3D}\) is constructed in the camera coordinate system. As the BEV feature generation just involves the current frame, we omit the time subscript \(t\) and the object index \(k\) for simplicity.
Denote the camera intrinsic matrices as \(K\), we first focus on a single point \((x,y,z)\) in the camera coordinate space. By utilizing intrinsic matrices, this point can be easily projected into the image/feature plane and we denote its coordinate as \((u,v)\):
\[\left(\begin{array}{c}\lambda u\\ \lambda v\\ \lambda\end{array}\right)=K\left(\begin{array}{c}x\\ y\\ z\end{array}\right) \tag{2}\]
We use bilinear interpolation to obtain the feature at \((u,v)\) from the corresponding front-view feature \(f\). The obtained value at \((u,v)\) will act as the volume feature at \((x,y,z)\).
As shown in Figure 3(a), the 3D volume in the _camera coordinate_ system will be rasterized into a group of points \(p_{ijk}=(x_{i},y_{j},z_{k})\), where \(1\leq i\leq m,1\leq j\leq n,1\leq k\leq h\) and \(x_{i},y_{j},z_{k}\) are taken from three predefined 1D grids. \(x,y,z\) represent the directions of width, depth, and height, respectively. The value of \(V_{3D}\) is obtained by simply repeating the above process for each point. Further, by stacking the features of the volume obtained from different channels together, we get \(V_{3D}\in\mathbb{R}^{c\times m\times n\times h}\). Since our goal is to acquire BEV features, \(V_{3D}\) is rearranged to \(\mathbb{R}^{ch\times m\times n}\) and sent to a lightweight CNN for compression along the height dimension:
\[b_{t}^{k}=\mathrm{CNN}(V_{3D}.reshape(ch,m,n)) \tag{3}\]
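The lifting described above can be sketched as follows; this is a simplified, single-image illustration that assumes a pinhole intrinsic matrix \(K\), uses `grid_sample` for the bilinear interpolation, and leaves the exact grid resolutions and the compression CNN (`bev_conv`, e.g. a `Conv2d` over \(ch\) input channels) as placeholders.

```python
import torch
import torch.nn.functional as F

def front_view_to_bev(feat, K, x_grid, y_grid, z_grid, bev_conv):
    """feat: (C, H, W) front-view feature; K: (3, 3) intrinsics;
    x_grid / y_grid / z_grid: 1D tensors of lengths m, n, h (width / depth / height)."""
    C, H, W = feat.shape
    # rasterize the camera-frame volume into points p_ijk = (x_i, y_j, z_k)
    xs, ys, zs = torch.meshgrid(x_grid, y_grid, z_grid, indexing="ij")     # each (m, n, h)
    pts = torch.stack([xs, ys, zs], dim=-1).reshape(-1, 3)                 # (m*n*h, 3)
    # Eq. (2): (lambda*u, lambda*v, lambda)^T = K (x, y, z)^T
    proj = pts @ K.T
    u, v = proj[:, 0] / proj[:, 2], proj[:, 1] / proj[:, 2]
    # bilinear sampling of the front-view feature at (u, v), normalized to [-1, 1]
    grid = torch.stack([2 * u / (W - 1) - 1, 2 * v / (H - 1) - 1], dim=-1).view(1, 1, -1, 2)
    vol = F.grid_sample(feat.unsqueeze(0), grid, align_corners=True)       # (1, C, 1, m*n*h)
    m, n, h = len(x_grid), len(y_grid), len(z_grid)
    vol = vol.view(1, C, m, n, h)
    # stack channels with the height axis and compress it with a small CNN (Eq. (3))
    bev = bev_conv(vol.permute(0, 1, 4, 2, 3).reshape(1, C * h, m, n))     # (1, C', m, n)
    return bev
```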
**Multi-view Fusion Layer based Temporal Encoder** As the occluded part of a specific view may potentially appear in other frames, we can make full use of the information in each frame (equivalent to different perspectives) to refine the completion of the object shape. Specially, inspired by DETR [3] and Slot Attention [19], we would like to generate an object-centric feature utilizing both front-view and BEV representations.
A direct method is to follow [19], which uses ConvGRU to aggregate temporal information. However, the nested recurrent slot computation needed to gather the object information from each frame is expensive when processing videos. Here, we propose a more efficient attention-based encoder architecture named Multi-view Fusion Layer. Generally, in such a layer, three \(N\)-layer object attention blocks, which are a non-recurrent variant of slot attention, are carefully designed and closely connected. Features from different views and object slots serve as the inputs.
In particular, as shown in Figure 3(b), each object attention layer (\(\mathrm{ObjAttention}(SP,SR)\)) is stacked from self-attention, cross-attention, and feed-forward networks and serves as an information fusion network. The variable absorbing the missing shape information during the fusion process is named the shape receiver (SR), and the other one is dubbed the shape provider (SP) as it offers extra shape patches. The total forward process in \(\mathrm{ObjAttention}\) is formulated as,
\[\hat{SR}=SR+\mathrm{Attention}(SR,SR,SR) \tag{4}\]
\[\widetilde{SR}=\hat{SR}+\mathrm{Attention}(SP,\hat{SR},SP) \tag{5}\]
\[output=\mathrm{MLP}(\widetilde{SR}) \tag{6}\]
where \(\mathrm{Attention}(K,Q,V)\) and \(\mathrm{MLP}(\cdot)\) denote the multi-head attention module and a two-layer feed-forward network, respectively, and we omit all normalization layers. \(\mathrm{ObjAttention}\) first enhances the SR representation by renewing the information contained in itself, then extracts fresh properties from the SP.
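A possible instantiation of this layer with standard PyTorch modules is sketched below; the argument order follows the paper's \(\mathrm{Attention}(K,Q,V)\) convention, normalization layers are omitted as stated, and the head count and hidden width are placeholders.

```python
import torch.nn as nn

class ObjAttention(nn.Module):
    """Fusion layer: the shape receiver (SR) absorbs information from the shape provider (SP)."""
    def __init__(self, dim, num_heads=4, hidden=512):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.mlp = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, dim))

    def forward(self, sr, sp):
        # Eq. (4): renew the information contained in SR itself
        sr = sr + self.self_attn(sr, sr, sr)[0]
        # Eq. (5): Attention(K=SP, Q=SR, V=SP), i.e. SR queries the shape provider
        sr = sr + self.cross_attn(sr, sp, sp)[0]
        # Eq. (6): feed-forward network producing the fused output
        return self.mlp(sr)
```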
Figure 3: (a) A three-dimensional cuboid is built and rasterized in the camera coordinate. For each voxel, we use the intrinsic matrix to obtain its coordinates in the plane system, and use bilinear interpolation on the front-view features to obtain its feature. Then, a convolutional network is used to obtain the bev map. (b) Our object attention layer in the multi-view fusion layer is stacked by self-attention, cross-attention, and feedforward network. This layer is designed for shape information fusion and takes two variables as input. We nominate the variable to be updated as shape receiver (SR) and another one as shape provider (SP).
On the other hand, similar to [3], a set of object slots \(S_{0}\in\mathbb{R}^{n_{s}\times d}\) is initialized before the video is processed, where \(n_{s}\) denotes the number of slots and \(d\) is the feature dimension. In our model, \(S_{0}\) is set to be learnable and serves as a container that gathers shape information from various views.
With the above preparations, we now go into the details of our multi-view fusion layer. For each frame, we take advantage of the slots \(S_{t-1}\) from the last frame, which include object shape information from previous frames, and first provide them with fresh characteristics from the front-view feature \(f_{t}^{k}\) and BEV feature \(b_{t}^{k}\) under the current perspective:
\[S_{t}^{\prime}=\mathrm{ObjAttention}(SR=S_{t-1},SP=b_{t}^{k}) \tag{7}\]
\[S_{t}=\mathrm{ObjAttention}(SR=S_{t}^{\prime},SP=f_{t}^{k}) \tag{8}\]
Then, the updated slots will provide clues about the occluded part and help complete the front-view features of the current frame by setting the front-view features as SR in the object attention layer. Thus, we inversely enhance the front-view feature using the object slots by:
\[\hat{f}_{t}^{k}=\mathrm{ObjAttention}(SR=f_{t}^{k},SP=S_{t}) \tag{9}\]
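Given such a layer (see the `ObjAttention` sketch above), the recurrence of Eqs. (7)-(9) over the frames of a video reduces to a short loop; the three layer instances and the token shapes below are again illustrative.

```python
def fuse_views(front_feats, bev_feats, slots, bev_layer, front_layer, refine_layer):
    """front_feats / bev_feats: per-frame lists of (B, N, dim) token sequences;
    slots: (B, n_s, dim) learnable object slots S_0; the three layers are ObjAttention modules."""
    refined = []
    for f_t, b_t in zip(front_feats, bev_feats):
        slots = bev_layer(slots, b_t)              # Eq. (7): S'_t absorbs the BEV feature
        slots = front_layer(slots, f_t)            # Eq. (8): S_t absorbs the front-view feature
        refined.append(refine_layer(f_t, slots))   # Eq. (9): front-view tokens refined by the slots
    return refined, slots
```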
**Deconvolution Network** The deconvolution network (DeConv) serves as the mask predictor and takes the updated front-view features as input since it shares the same perspective as the full mask to be predicted. In our experiments, we simply construct several de-convolutional layers for this module.
\[\hat{M}_{t}^{k},\hat{V}_{t}^{k}=\mathrm{DeConv}(\hat{f}_{t}^{k}) \tag{10}\]
where \(\hat{M}_{t}^{k}\) and \(\hat{V}_{t}^{k}\) are the full and visible mask predictions of the current frame, respectively.
**Bi-directional Prediction** A cold start problem exists under the above framework since the first few frames may not be informative enough. Thus, backward prediction is added to solve this problem. We simply concatenate the forward and backward features and send them to the final deconvolution network.
### Loss Function for EoRaS
Our EoRaS is designed as an end-to-end framework and trained with the focal loss (\(\mathrm{Focal}()\)) using both the full mask and the visible mask as supervision signals. Note that discarding the visible mask loss does not heavily damage the model performance, as shown in Tab. 4. The overall loss function is
\[\mathcal{L}_{full}=\sum_{t=1}^{T}\sum_{k=1}^{K}\mathrm{Focal}(\hat{M}_{t}^{k}, M_{t}^{k}) \tag{11}\]
\[\mathcal{L}_{vis}=\sum_{t=1}^{T}\sum_{k=1}^{K}\mathrm{Focal}(\hat{V}_{t}^{k}, V_{t}^{k}) \tag{12}\]
\[\mathcal{L}=\mathcal{L}_{full}+\lambda\cdot\mathcal{L}_{vis}. \tag{13}\]
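A generic binary focal loss of the kind referred to in Eqs. (11)-(12) can be written as below, with \(\gamma=2\) as reported in the implementation details and no class-balancing weight; this is a textbook formulation and not necessarily the exact variant in the released code.

```python
import torch

def focal_loss(pred_logits, target, gamma=2.0, eps=1e-8):
    """Binary focal loss between predicted mask logits and a {0,1} target mask."""
    p = torch.sigmoid(pred_logits)
    p_t = p * target + (1 - p) * (1 - target)        # probability assigned to the true class
    return (-((1 - p_t) ** gamma) * torch.log(p_t + eps)).mean()
```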
## 4 Experiments
To fully evaluate our model, we conduct extensive experiments on both real-world and synthetic amodal segmentation benchmarks, including Movi-B, Movi-D, and KITTI datasets, with the visualization in Fig. 4.
**Movi Dataset**[11] is a _synthetic_ dataset consisting of random scenes and objects created by Kubric [11]. In our experiments, we consider two datasets (Movi-B and Movi-D) with different objects and different levels of occlusions. _We extract the amodal information during generation of the two datasets_. The objects in Movi-B and Movi-D are from the CLEVR [15], which consists of 11 relatively regular object shapes, and Google Scanned Objects [8], which contains 1030 realistic objects, respectively. Both datasets use the background from Poly Haven. To create situations with serious occlusion, all objects are set to be static and stacked closely together. Videos are created by setting the camera to rotate around the objects. Overall, compared with Movi-B, Movi-D has a more complex object shape and lower camera viewing angle with more serious occlusion.
**KITTI Dataset**[10] is currently the largest real-world autonomous driving evaluation dataset. It has been widely used in many vision tasks, such as object detection and optical flow prediction. [22] annotated some images in KITTI with amodal information and [35] matched these images to its original video frame. Note that since these videos are not sufficiently annotated, it is a weakly supervised scenario. For a fair comparison, we follow the same data split in [35]. The visible masks and object tracks are extracted by PointTrack [33]. It is noteworthy that only the car category is annotated in this dataset.
### Competitors and Settings
**Competitors** We compare our method with the following related methods: (1) **VM (Visible Mask)**, directly use
Figure 4: Visualization of datasets. The first and second rows show images from the Movi-B and Movi-D, respectively. The remaining four images belong to the KITTI.
the ground truth visible mask as the amodal prediction; (2) **Convex**, take the convex hull of the visible mask as the amodal mask; (3) **PCNET**[36], a self-supervised image-level amodal completion method that in turn recovers occlusion ordering and completes amodal masks and content; (4) **AISFormer**[30], an image-level amodal segmentation model equipped with a transformer-based mask head, which recently achieved the new state-of-the-art; (5) **SaVos**[35], a recent state-of-the-art method in the field of self-supervised video amodal segmentation, modified to a supervised version by removing the 2D warping and adding the supervised signal for a fair comparison (we also ran additional experiments involving the warping operation, but the results are clearly inferior); (6) **BiLSTM**[12], a variant of our proposed method for which we keep the same FPN50 backbone but utilize BiLSTM to aggregate temporal information across frames.
**Implementations** Results on all datasets are reported in terms of mIOU metrics for both full mask and occluded regions. Since most amodal segmentation algorithms use the visible mask or the bounding boxes of the visible part as model input, the estimation results of the visible area may be more confident, and the mIOU of the occluded area can better reflect the model performance. On all datasets, the mIOU metric of the occluded part is only computed on those partially occluded objects. We use AdamW as optimizer with batch size 4 for 50 epochs. The learning rate is set to \(1e-5\) on Movi datasets and \(1e-4\) on the KITTI dataset. Exponential learning rate decay is used where the decay rate is 0.95. The weight decay is \(5e-4\). And the \(\gamma\) in focal loss is set to 2. We set \(\lambda=1\), \(n_{s}=8\), \(N=2\) and train our model on four Tesla T4 GPUs using PyTorch.
### Results on Movi Datasets
As shown in Table 1, compared with supervised SaVos, our EoRaS achieves extremely significant performance improvements on both Movi datasets. In particular, by applying our algorithm, the prediction of the full mask of the objects in the two datasets is improved by 8.50% and 8.83%, respectively. The improvements are more remarkable in the prediction of occluded parts. For the performance on
\begin{table}
\begin{tabular}{c|c|c c} \hline \hline \multirow{2}{*}{DATASET} & \multirow{2}{*}{METHODS} & \multicolumn{2}{c}{Metrics} \\ \cline{3-4} & & mIoU\({}_{full}\) & mIoU\({}_{occ}\) \\ \hline \multirow{6}{*}{Movi-B} & VM & 59.19 & - \\ & Convex & 64.21 & 18.42 \\ & PCNET & 65.79 & 24.02 \\ & AISFormer & 77.34 & 43.53 \\ & SaVos-sup. & 70.72 & 33.61 \\ & BiLSTM & 77.93 & 46.21 \\ \cline{2-4} & EoRaS _(Ours)_ & **79.22** & **47.89** \\ \hline \multirow{6}{*}{Movi-D} & VM & 56.92 & - \\ & Convex & 60.18 & 16.48 \\ & PCNET & 64.35 & 27.31 \\ \cline{1-1} & AISFormer & 67.72 & 33.65 \\ \cline{1-1} & SaVos-sup. & 60.61 & 22.64 \\ \cline{1-1} & BiLSTM & 68.43 & 36.00 \\ \cline{1-1} \cline{2-4} & EoRaS _(Ours)_ & **69.44** & **36.96** \\ \hline \multirow{6}{*}{KITTI} & VM & 74.75 & - \\ & Convex & 78.62 & 8.29 \\ \cline{1-1} & PCNET & 81.58 & 17.90 \\ \cline{1-1} & AISFormer & 86.42 & 51.04 \\ \cline{1-1} & SaVos-sup. & 83.09 & 37.33 \\ \cline{1-1} & BiLSTM & 86.68 & 49.95 \\ \cline{1-1} \cline{2-4} & EoRaS _(Ours)_ & **87.07** & **52.00** \\ \hline \hline \end{tabular}
\end{table}
Table 1: The performance of EoRaS on real-world and synthetic video amodal benchmarks.
Figure 5: Qualitative comparison between our EoRaS and competitors. From top to down, the three rows are from Movi-B, Movi-D, and KITTI, respectively.
the occluded part, our EoRaS achieves 14.28% improvement on the Movi-B over the baseline SaVos, and surprisingly improves by 14.32% on the Movi-D. Moreover, the performance of EoRaS also exceeds the recent state-of-the-art image-level algorithm AISFormer by a clear margin on both datasets. And it's noteworthy that EoRaS outperforms the combination of FPN50 and BiLSTM/Transformer by at least 1% in plenty of experiments, showing the effectiveness of introducing the BEV module. Additionally, despite the usage of ground truth visible mask in Convex and PCNET, EoRaS still exhibits amazing power. We also present the qualitative results in Figure 5. Obviously, the full masks deduced by EoRaS are the closest to the original object shape among all the competitors. Above all, EoRaS is more suitable for solving the video amodal segmentation task, and leads to the new state-of-the-art.
### Results on KITTI Dataset
The experimental results on the KITTI dataset are shown in Table 1. For objects in real scenes, our EoRaS can still exceed all the current state-of-the-art methods. Compared with the image-level baseline, we achieve 0.65% and 0.96% improvement for the full and occluded mask prediction, respectively. Compared with the supervised SaVos, EoRaS achieves an enormous improvement, \(\sim\)4% on the full shape and \(\sim\)15% on the missing part. Furthermore, other video-level baselines consistently underperform our EoRaS by \(\sim\)2% on the deduction of the occluded part. The qualitative comparison in Figure 5 clearly exhibits the great precision of EoRaS. The above evidence is sufficient to demonstrate the effectiveness of EoRaS under weakly supervised settings.
## 5 Further Analysis
**Effectiveness of Temporal and BEV Modules** As shown in Table 2, on Movi-B dataset, our temporal module brings about \(\sim\)2.3% performance improvement in occluded part prediction. After plugging in the BEV module, the occluded mIOU is further improved by 1.06%. Additionally, Bi-direction prediction also plays an important role in our model as it brings 1.38% performance improvement for the missing part deduction. On the Movi-D dataset, the improvements brought in by those modules are also significant, as presented in the right table. Some visualizations derived from different architectures are presented in Figure 6. It's clear that both temporal and BEV modules are capable of improving the smoothness and shape similarity of full masks. These experiments fully prove the effectiveness of the modules proposed in this paper, and also verify the correctness of our hypothesis that feature information from different perspectives can benefit the completion of object shape in any specific frame/view.
**Sensitivity Analysis of Slot Number** To analyze the sensi
\begin{table}
\begin{tabular}{c|c c c|c c} \hline \hline \multirow{2}{*}{No.} & \multicolumn{3}{c|}{Designs} & \multicolumn{2}{c}{Metrics} \\ \cline{2-6} & Temporal & Bi-direction & BEV & mIoU\({}_{full}\) & mIoU\({}_{occ}\) \\ \hline
1 & \(\times\) & \(\times\) & \(\times\) & 76.93 & 44.55 \\
2 & ✓ & ✓ & \(\times\) & 78.66 & 46.83 \\
3 & ✓ & \(\times\) & ✓ & 78.42 & 46.51 \\
4 & ✓ & ✓ & ✓ & **79.22** & **47.89** \\ \hline \hline \end{tabular}
\end{table}
Table 2: Ablation study of our temporal and BEV module on Movi-B (left) and Movi-D (right) datasets.
\begin{table}
\begin{tabular}{c|c|c c} \hline \hline Dataset & \# Slots & mIoU\({}_{full}\) & mIoU\({}_{occ}\) \\ \hline & 8 & 79.22 & 47.89 \\ & 16 & 79.22 & 47.79 \\ & 32 & 79.29 & 47.88 \\ Movi-B & 64 & 79.22 & 47.78 \\ & 128 & 79.19 & 47.73 \\ & 256 & 79.20 & 47.75 \\ \hline & 8 & 69.44 & 36.96 \\ & 16 & 69.42 & 36.92 \\ Movi-D & 32 & 69.50 & 37.27 \\ & 64 & 69.38 & 37.01 \\ & 128 & 69.47 & 37.02 \\ & 256 & 69.45 & 37.06 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Sensitivity Analysis of Slot Number. Despite the diverse settings of slot number, the performance of EoRaS just changes slightly, demonstrating the robustness against hyper-parameter \(n_{s}\).
Figure 6: Visualizations derived from models with/without proposed temporal or BEV module. Clearly, the complete model deduces the best full mask, and both the temporal and BEV module can bring consistent benefits.
tivity of the performance to the choice of \(n_{s}\), we conduct experiments by widely tuning the slot number. The results are presented in Table 3 and indicate that the number of slots has almost no impact on the performance of our model. This phenomenon demonstrates the robustness of our model against diverse choices of the slot number.
**Different choices of \(\lambda\)** We conduct experiments to analyze the effect of \(\lambda\) on the performance of our model, and the results are presented in Table 4. First of all, the utilization of visible masks in supervision signals will benefit the model training as also shown in previous modal segmentation algorithms. But the way that EoRaS differs lies in the insensitivity to the choice of \(\lambda\) once the visible mask is added, which demonstrates the superiority of EoRaS.
**Open Set Segmentation** To evaluate the capacity of out-of-distribution generalization, we conduct open set segmentation experiments on Movi-D and KITTI datasets. Models are pretrained on the relatively simple Movi-B dataset. As presented in Table 5, EoRaS achieves the best accuracy among all competitors. Concretely, compared with supervised Savos, EoRaS outperforms by at least 6%, showing strong dominance. Again, the image-level SOTA algorithm underperforms EoRaS by \(\sim\)2% on the occluded part deduction, indicating that the integration of information from different views indeed benefits the generalization ability.
**Test-time Assistance by Ground Truth Visible Mask (GTVM)** As in SaVos and PCNET, we explore the utilization of GTVM at the test phase. On the one hand, post-processing (PP), i.e., taking the intersection of the predicted full mask and GTVM, is feasible. On the other hand, containing partial shape information, GTVM may be capable of serving as shape guidance (SG) for mask completion. To this end, we simply train our model with the concatenation of images and visible masks. The experimental results are presented in Table 6. Overall, the introduction of GTVM brings in huge benefits, which is in line with [35, 36]. Despite the usage of GTVM in those algorithms, our EoRaS still outperforms them by a large margin (see Table 1), demonstrating its effectiveness.
## 6 Conclusion
In this paper, we proposed a brand-new pipeline named EoRaS to cope with the video amodal segmentation task. Based on the assumption that both the supervision signals (shape prior) and the features from different perspectives (view prior) will benefit the deduction of the full mask under any specific view, the multi-view fusion layer based temporal encoder and BEV translation network are designed to integrate 3D information and front-view shape patches from different frames respectively in an object-centric pattern. Utilizing those modules, our EoRaS eliminates the optical flow usage and the over-reliance on shape priors, achieving high efficiency even in complex scenarios. We conduct experiments on both real-world and synthetic video amodal benchmarks, including Movi-B, Movi-D, and KITTI datasets. The empirical results demonstrate that our EoRaS achieves the new state-of-the-art performance.
**Acknowledgements.** This work is supported by China Postdoctoral Science Foundation (2022M710746). Yanwei Fu is with the School of Data Science, Shanghai Key Lab of Intelligent Information Processing, Fudan University, and Fudan ISTBI--ZJNU Algorithm Centre for Brain-inspired Intelligence, Zhejiang Normal University, Jinhua, China.
\begin{table}
\begin{tabular}{c|c|c c} \hline \hline \multirow{2}{*}{Dataset} & \multirow{2}{*}{Methods} & \multicolumn{2}{c}{Metrics} \\ \cline{3-4} & & mIoU\({}_{full}\) & mIoU\({}_{occ}\) \\ \hline \multirow{4}{*}{Movi-B} & EoRaS & 79.22 & 47.89 \\ & +PP\({}^{*}\) & 79.38 & 47.66 \\ & +PP & 81.20 & 47.89 \\ & +SG & **81.76** & **49.39** \\ \hline \multirow{4}{*}{Movi-D} & EoRaS & 69.44 & 36.96 \\ & +PP\({}^{*}\) & 69.95 & 36.81 \\ \cline{1-1} & +PP & 72.76 & 36.96 \\ \cline{1-1} & +SG & **74.10** & **38.33** \\ \hline \hline \end{tabular}
\end{table}
Table 6: The performance of EoRaS while using GTVM at the test phase on the Movi datasets. PP\({}^{*}\) and PP mean that the predicted and the ground truth visible mask are used in post-processing, respectively. SG represents the model trained with the concatenation of images and visible masks.
\begin{table}
\begin{tabular}{c|c|c c} \hline \hline \multirow{2}{*}{Methods} & \multirow{2}{*}{Target} & \multicolumn{2}{c}{Metrics} \\ \cline{3-4} & & mIoU\({}_{full}\) & mIoU\({}_{occ}\) \\ \hline \multirow{2}{*}{AISFormer [30]} & Movi-D & 62.94 & 28.65 \\ & KITTI & 71.36 & 29.84 \\ \hline \multirow{2}{*}{SaVos-Sup. [35]} & Movi-D & 57.19 & 25.85 \\ & KITTI & 65.49 & 21.82 \\ \hline \multirow{2}{*}{EoRaS (_Ours_)} & Movi-D & **63.98** & **31.22** \\ & KITTI & **71.73** & **31.35** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Open set segmentation on Movi-D and KITTI datasets. We use EoRaS pretrained on the Movi-B dataset and conduct transfer learning experiments without finetuning. Our EoRaS achieves the highest performance, indicating its great generalization ability.
Table 4: Performance of EoRaS under different \(\lambda\).
2309.10741 | Symmetry Lie Algebras of Varieties with Applications to Algebraic
Statistics | The motivation for this paper is to detect when an irreducible projective
variety V is not toric. We do this by analyzing a Lie group and a Lie algebra
associated to V. If the dimension of V is strictly less than the dimension of
the above mentioned objects, then V is not a toric variety. We provide an
algorithm to compute the Lie algebra of an irreducible variety and use it to
provide examples of non-toric statistical models in algebraic statistics. | Aida Maraj, Arpan Pal | 2023-09-19T16:36:55Z | http://arxiv.org/abs/2309.10741v3 | # Symmetry Lie Algebras of Varieties with Applications to Algebraic Statistics
###### Abstract
The motivation for this paper is to detect when an irreducible projective variety \(V\) is not toric. This is achieved by analyzing a Lie group and a Lie algebra associated with \(V\). If the dimension of \(V\) is strictly less than the dimension of the aforementioned objects, then \(V\) is not a toric variety. We provide an algorithm to compute the Lie algebra of an irreducible variety and use it to present examples of non-toric statistical models in algebraic statistics.
**Keywords**: _symmetry Lie group, symmetry Lie Algebra, toric variety, toric ideal, statistical model_
## 1 Introduction
This paper is motivated by the need to classify statistical models with toric structure in algebraic statistics, as many statistical models can be described as zero sets of polynomial ideals intersected with the probability space. We say that a statistical model has a toric structure if its vanishing ideal is prime and generated by binomials, possibly after a linear change of variables. The toric structure in a statistical model is of interest due to its importance in applications: generating sets of toric ideals produce Markov bases and contribute to hypothesis testing algorithms [14, 15], contribute to facilitating maximum likelihood degree computations [1, 13], toric varieties are intrinsically linked to smoothness criteria in exponential families [1], and the polytope associated to a toric model is useful when studying the existence of maximum likelihood estimates [15]. Numerous papers, including [1, 1, 13, 14, 15, 16, 17, 18, 19], bear witness to the interest in statistical models with toric structures. While all these papers provide sufficient conditions for a statistical model to be toric, the first example of a statistical model (staged tree model) with non-toric structure was only recently provided by Nicklasson in [14].
This present paper proposes the symmetry Lie group associated to a homogeneous prime ideal \(I\subseteq\mathbb{C}[x_{1},\ldots,x_{n}]\) (Definition 9) and the symmetry Lie group of an irreducible projective variety \(V\subseteq\mathbb{C}^{n}\) (Definition 16) as efficient ways to distinguish non-toric varieties. The two groups sit naturally in \(\operatorname{GL}_{n}(\mathbb{C})\) as stabilizers of the ideal/variety under natural actions of matrix multiplication. They agree when \(I\) is the vanishing ideal \(I(V)\) of variety \(V\). A fundamental observation of the paper is that if the dimension of the symmetry Lie algebra for variety \(V\) or its vanishing ideal \(I(V)\) is strictly less than the dimension of \(V\), then \(V\) cannot be a toric variety.
**Theorem 1**.: _(see also Theorem 24) Let \(V\) be an irreducible projective variety with vanishing ideal \(I(V)\). Let \(G_{I(V)}\) be the symmetry Lie group for \(I(V)\) as defined in Definition 9. If \(\dim(G_{I})<\dim(V)\) then \(V\) is not a toric variety._
Unfortunately, Lie groups tend to be challenging to compute. Since Theorem 1 requires only the dimension of the symmetry Lie group, we are free to work with the Lie algebras of these groups. According to standard Lie theory literature, the dimension of a Lie group as a manifold is equal to the vector space dimension of its Lie algebra. Moreover, Lie algebras offer a friendlier structure, leading to the formulation of the following theorem and the subsequent development of an algorithm, which we implement using SageMath.
**Theorem 2**.: _(Theorem 26 restated) Let \(I\subseteq\mathbb{C}[x_{1},\ldots,x_{n}]\) be a homogeneous prime ideal minimally generated by polynomials of degree at most \(d\). Let \(\mathscr{B}([I]_{d})=\{f_{1},\ldots,f_{k}\}\) be a finite basis for the d-th graded component \([I]_{d}\) of ideal \(I\) as in Proposition 5. Take \(g\in M_{n}(\mathbb{C})\) to be an \(n\times n\) matrix with unknown entries \(gij\). For each \(f_{i}\in\mathscr{B}([I]_{d})\) consider the matrix_
\[M_{i}(g):=\left(\begin{matrix}\overrightarrow{f_{1}}&\overrightarrow{f_{2}} &\ldots&\overrightarrow{f_{k}}&\overrightarrow{g\ast f_{i}}\end{matrix} \right),\]
_where the \(\ast\) action is introduced in Definition 19, and \(\overrightarrow{f_{i}}\) is the vector representation of polynomial \(f_{i}\) in \([I]_{d}\). Then the symmetry Lie algebra for \(I\) is the set of all matrices \(g\in\mathbb{C}^{n\times n}\) such that \(\operatorname{rank}(M_{i}(g))=k\) for \(i=1,\ldots,k\)._
The observation and its implementation allow us to provide other statistical models with non-toric structure, worked in detail in Section 5. This includes disproving Conjecture 6.8 in [1] that not all staged tree models with one stage have toric structure. We also provide the example of a Gaussian graphical model whose variety is not toric, which to the knowledge of the authors, is first such example. We also discuss ways to use symmetry Lie algebras to find statistical models with toric structure.
Stabilizers of ideals under group actions are very common in literature and have already shown to be useful in analyzing questions related to binomial generators [14]. Lie groups are not new to algebraic statistics either - Draisma, Kuhnt and Zwiernik [13, 14] use a different symmetry Lie group for Gaussian graphical models that acts on the covariance matrices of the model via conjugation in problems related to their maximum likelihood estimate.
**The structure of the paper**. In Section 2 we recall definitions of a toric variety, ideal with toric structure, graded components of a homogeneous polynomial ideal, Lie groups and Lie algebras. We also include relevant results about them, which the reader may find useful later in the paper. Section 3 concerns introducing the symmetry Lie group of an ideal (see Definition 9) and of a variety (see Definition 16). We show in Proposition 17 that the two definitions agree when working with the vanishing ideal of a variety. Section 3 concludes with a proof of Theorem 1. Section 4 concerns with the Lie algebra for the symmetry Lie group of a homogeneous prime ideal, with a focus on describing this object practically via a group action 19. Then we state Theorem 24, which is the Lie algebra version of Theorem 1. The rest of Section 4 is in the service to facilitate computations of the symmetry Lie algebras of homogeneous prime ideals, hence proving Theorem 2, and providing an algorithm for it attached to this paper. In Section 5 we apply the methods developed in this paper to varieties arriving from staged tree models and Gaussian graphical models in algebraic statistics. We end with a discussion on other possible applications of symmetry Lie algebras.
Preliminaries
We start with a list of notations that is kept uniform throughout the paper, unless specifically stated otherwise:
* \(\mathbb{C}^{n}\) is the \(n\)-dimensional affine space with entries in the complex numbers \(\mathbb{C}\)
* \(\mathbb{T}_{n}\) is the \(n\)-dimensional algebraic torus isomorphic to \((\mathbb{C}\setminus\{0\})^{n}\)
* \(R\) denotes the polynomial ring \(\mathbb{C}[x_{1},\ldots,x_{n}]\)
* \(M_{n}(\mathbb{C})\) is the ring of \(n\times n\) matrices with entries in \(\mathbb{C}\)
* \(\mathrm{GL}_{n}(\mathbb{C})\) is the general linear group of invertible \(n\times n\) matrices with entries in \(\mathbb{C}\)
* \(V\) denotes an irreducible variety in \(\mathbb{C}^{n}\)
* \(I\) is a homogeneous prime ideal in \(R\)
We briefly recall the definitions of a toric variety, toric ideal, graded components of a polynomial ideal over a standard graded polynomial ring, Lie groups and Lie algebras. For details on toric varieties, ideals and graded components we recommend [11, 12]. Books [13] and [14] offer comprehensive insights into Lie groups.
**Toric varieties.** The following are two equivalent definitions for an _affine toric variety_. An affine toric variety is an irreducible affine variety \(V\) containing a torus \(\mathbb{T}_{r}\) as a Zariski open subset such that the action of \(\mathbb{T}_{r}\) on itself extends to an algebraic action of \(\mathbb{T}_{r}\) on \(V\), that is, an action \(\mathbb{T}_{r}\times V\to V\) given by a morphism.
For the second definition let \(\mathcal{A}=\{\mathbf{a}_{1},\ldots,\mathbf{a}_{n}\}\) be a lattice for the torus \(\mathbb{T}_{r}\). Consider the map
\[\Phi_{\mathcal{A}}:\mathbb{T}_{r}\longrightarrow(\mathbb{C}\setminus\{0\})^{n },\quad\Phi_{\mathcal{A}}(\mathbf{t})=(\chi^{\mathbf{a}_{1}}(\mathbf{t}), \ldots,\chi^{\mathbf{a}_{n}}(\mathbf{t})) \tag{1}\]
where \(\chi^{\mathbf{a}_{i}}\) are morphisms from \(\mathbb{T}_{r}\) to \(\mathbb{C}\setminus\{0\}\) (characters) of the torus \(\mathbb{T}_{r}\). If \(\mathbb{T}_{r}=(\mathbb{C}\setminus\{0\})^{r}\), as in our situation, then each character has form \(\chi^{\mathbf{a}_{i}}(\mathbf{t})=t_{1}^{a_{1i}}\cdots t_{r}^{a_{ri}}\) for \(\mathcal{A}\subseteq\mathbb{Z}^{r}\). An affine variety \(V\) is a toric affine variety if it is the Zariski closure of image of \(\Phi_{\mathcal{A}}\) for some character lattice \(\mathcal{A}\) of some torus \(\mathbb{T}_{r}\). In particular, from the two definitions one has that in a toric variety, the map in Equation (1) induces the group action
\[\tilde{\Phi}_{\mathcal{A}}:\mathbb{T}_{r}\times\mathbb{C}^{n}\longrightarrow \mathbb{C}^{n},\quad\tilde{\Phi}_{\mathcal{A}}(\mathbf{t},v)=(\chi^{\mathbf{ a}_{1}}(\mathbf{t})v_{1},\ldots,\chi^{\mathbf{a}_{n}}(\mathbf{t})v_{n}) \tag{2}\]
such that \(\tilde{\Phi}_{\mathcal{A}}(\mathbb{T}_{r}\times V)\subseteq V\).
A common fact that we will use in this paper is that for a given toric variety \(V\), the dimension of the torus is bigger than or equal to the dimension of the variety. We state this formally as a proposition.
**Proposition 3**.: _Let \(V\) be an irreducible toric variety of dimension \(\dim(V)\) given as an orbit closure under a torus \(\mathbb{T}_{r}\) action. Then \(\dim(V)\leq\dim(\mathbb{T}_{r})\)._
**Toric ideals.** An ideal \(I\subseteq R\) is said to be _toric_ if it is prime and it has a generating set made of binomials. Equivalently, the ideal \(I\) is toric if and only if it can be written as kernel of some monomial map from a polynomial ring to a Laurent ring. The ideal \(I\) has a _binomial structure_ if if it has a binomial set of generators in variables \(x_{1},\ldots,x_{n}\), or if there exists an invertible linear change of variables such that the ideal has a binomial generating set in the new variables. We say that \(I\) has a _toric structure_ if it is prime and has a binomial structure.
**Example 4**.: _(bias coin model in algebraic statistics, see [14]) The prime ideal \(I=\langle x_{1}x_{3}-x_{2}x_{3}-x_{2}^{2}\rangle\) is clearly not a binomial ideal in variables \(x_{1},x_{2},x_{3}\). Consider the change of variables \(x_{1}=p_{1}\), \(x_{2}=p_{2}+p_{1}\) and \(x_{3}=p_{3}-p_{2}\). The generator of \(I\) takes the form_
\[x_{1}x_{3}-x_{2}x_{3}-x_{2}^{2}=p_{1}(p_{3}-p_{2})-(p_{2}-p_{1})(p_{3}-p_{2})-( p_{2}-p_{1})^{2}=p_{1}^{2}-p_{2}p_{3}.\]
_So, \(I\) is toric and is generated by the binomial \(p_{1}^{2}-p_{2}p_{3}\) in the polynomial ring \(\mathbb{C}[p_{1},p_{2},p_{3}]\)._
A common way to show that an irreducible variety is toric is by proving that it is the vanishing of a prime ideal with binomial structure. The binomial structure approach has previously been taken by Katthan, Michalek and Miller in [13]. Other relevant literature is found in questions related to computing the bimonial part of a polynomial ideal [11], the sparse/short generators of a polynomial ideal [10, 11], and shifted toricity [12].
**Graded components of an ideal.** As a standard graded polynomial ring, \(R\) is a direct sum of its graded components; that is, \(R=\oplus_{d\geq 0}[R]_{d}\), where
\[[R]_{d}=\{p\in R\mid p\text{ is homogeneous of degree }d\}\cup\{0\}\]
is the vector space of all homogeneous polynomials in \(R\) of degree \(d\). We refer to the set of monomials of degree \(d\) in \(R\) as _the standard basis_ for \([R]_{d}\), and denote it \(\mathscr{B}([R]_{d})\). Similarly, a homogeneous ideal \(I\subseteq R\) has \(I=\oplus_{d\geq 0}[I]_{d}\), where the vector space
\[[I]_{d}=\{p\in I\mid p\text{ is homogeneous of degree }d\}\cup\{0\}\]
is referred to as the _d-th graded component of \(I\)_. One can simply describe a basis for \([I]_{d}\) as in the following proposition.
**Proposition 5**.: _Let \(I\subseteq R\) be an ideal generated by homogeneous polynomials \(p_{1},\ldots,p_{k}\) of degrees \(d_{1},\ldots,d_{k}\), respectively. Then, for each \(d\in\mathbb{N}\),_
\[\mathscr{S}([I]_{d})=\{m_{i}p_{i}\mid 1\leq i\leq k,d_{i}\leq d,m_{i}\in \mathscr{B}([R]_{d-d_{i}})\}\]
_is a spanning set for the vector space \([I]_{d}\). Consequently, any linearly independent set \(\mathscr{B}([I]_{d})\subseteq\mathscr{S}([I]_{d})\) is a basis for \([I]_{d}\)._
Proof.: An element \(m_{i}p_{i}\in\mathscr{S}([I]_{d})\) is a degree-\(d\) polynomial in \(I\), implying \(m_{i}p_{i}\in[I]_{d}\). As \([I]_{d}\) is a linear space, linear combinations of these elements are in \([I]_{d}\), implying \(\operatorname{Span}(\mathscr{S}([I]_{d}))\subseteq[I]_{d}\). Conversely, consider \(p\in[I]_{d}\). Given that \(p\in I\), we can express \(p\) as \(p=\sum\limits_{i=1}^{k}f_{i}p_{i}\), where \(f_{i}\in[R]_{d-d_{i}}\). So, \(p\) is a linear combination of polynomials \(m_{i}p_{i}\) for \(m_{i}\in\mathscr{B}([R]_{d})\), as desired.
**Lie groups and Lie algebras.** A _Lie group_ is a group that is also a finite-dimensional real smooth manifold, in which the group operations of multiplication and inversion are smooth maps. The general linear group \(\operatorname{GL}_{n}(\mathbb{C})\) is a classical example of a Lie group. We will need Cartan's theorem.
**Theorem 6**.: [13, Corollary 3.45] _Any closed subgroup of \(\operatorname{GL}_{n}(\mathbb{C})\) is a Lie group._
The _Lie algebra_\(\mathfrak{g}\) of a Lie group \(G\) is the tangent space of \(G\) at the identity. For a Lie group in \(\operatorname{GL}_{n}(\mathbb{C})\), its Lie algebra has the particular form (see [13, Section 3.3])
\[\mathfrak{g}=\{g\in M_{n}(\mathbb{C})\mid e^{tg}\in G\text{ for all }t\in(- \epsilon,\epsilon)\}, \tag{3}\]
for some real number \(\epsilon>0\). It is a common fact the dimension of a Lie group as a manifold is equal to the dimension of its Lie algebra as a linear space.
**Proposition 7**.: [13, Proposition 4.61] _Let \(\mathfrak{g}\) be the Lie algebra of the Lie group \(G\). Then \(\dim(\mathfrak{g})=\dim(G)\)._
Symmetry Lie Groups
**Symmetry Lie groups of homogeneous prime ideals.** Consider the group action of \(\text{GL}_{n}(\mathbb{C})\) on the polynomial ring \(R=\mathbb{C}[x_{1},\ldots,x_{n}]\),
\[\text{GL}_{n}\times R\to R,\ (g,p)\mapsto g\cdot p \tag{4}\]
with the rules:
* \(g\cdot c=c\) for constant polynomial \(c\in R\)
* \(g\cdot x_{i}=\sum\limits_{j=1}^{n}g_{ij}x_{j}\) for variable \(x_{i}\in R\)
* \(g\cdot p(x)q(x)=(g\cdot p(x))(g\cdot q(x))\) for any two polynomials \(p(x),q(x)\in R\)
extended linearly to \(R\). Alternatively, one can think of this group action as substituting variable \(x_{i}\) in \(p(x)\) with \(g_{i}x\), where \(g_{i}\) is the \(i\)-th column of \(g\) and \(x=[x_{1},\ldots,x_{n}]^{-1}\) is the vector of variables.
**Example 8**.: _Take polynomial \(p(x)=x_{1}^{2}+x_{2}^{2}+x_{1}x_{2}\in\mathbb{C}[x_{1},x_{2}]\) and matrix \(g=\begin{bmatrix}g_{11}&g_{12}\\ g_{21}&g_{22}\end{bmatrix}\in\text{GL}_{2}(\mathbb{C})\). Then_
\[g\cdot p(x)=(g_{11}^{2}+g_{21}^{2}+g_{11}g_{21})x_{1}^{2}+(g_{12}+g_{22}+g_{12 }g_{22})x_{2}^{2}+(2g_{11}g_{12}+2g_{21}g_{22}+g_{11}g_{22}+g_{12}g_{21})x_{1}x _{2}.\]
Now we consider the group that acts as stabilizer for an ideal \(I\) of \(R\).
**Definition 9**.: _Let \(I\subseteq R\) be an ideal. The stabilizer of \(I\) is_
\[G_{I}=\{g\in\text{GL}_{n}(\mathbb{C})\mid g\cdot p\in I,\forall p\in I\}.\]
We will show in Theorem 15 that when \(I\) is a homogeneous prime ideal, \(G_{I}\) is a Lie group in \(\text{GL}_{n}(\mathbb{C})\), which we refer to as _the symmetry Lie group_ of \(I\). First we simplify our problem and show that a generating set of \(I\) is sufficient to fully determine the stabilizer \(G_{I}\).
**Lemma 10**.: _Let \(I\) be a homogeneous ideal in \(R\) generated by polynomials \(p_{1},\ldots,p_{k}\). Then \(G_{I}=\{g\in\text{GL}_{n}(\mathbb{C})\mid g\cdot p_{i}\in I,1\leq i\leq k\}\)._
Proof.: Denote \(G_{I}^{\prime}=\{g\in\text{GL}_{n}(\mathbb{C})\mid g\cdot p_{i}\in I,1\leq i \leq k\}\). It is clear that \(G_{I}\subseteq G_{I}^{\prime}\). Take a matrix \(g\in G_{I}^{\prime}\). A polynomial \(p\in I\) has form \(p=\sum\limits_{i=1}^{k}q_{i}p_{i}\) for some polynomials \(q_{1},\ldots,q_{k}\in R\). Since \(g\cdot p_{i}\in I\) for \(i=1,\ldots k\), one has that \(g\cdot p=\sum\limits_{i=1}^{k}(g\cdot q_{i})(g\cdot p_{i})\) is also in \(I\), and hence \(g\in G_{I}\).
**Example 11**.: _Consider the ideal \(I=\langle p(x)\rangle\) where \(p(x)=x_{1}^{2}+x_{2}^{2}+x_{1}x_{2}\) as in Example 8. Then \(G_{I}\) contains the invertible matrices \(g\in\text{GL}_{n}(\mathbb{C})\) such that_
\[g_{11}^{2}+g_{21}^{2}+g_{11}g_{21}=g_{12}^{2}+g_{22}^{2}+g_{12}g_{22}=2g_{11}g _{12}+2g_{21}g_{22}+g_{11}g_{22}+g_{12}g_{21}.\]
_Solving these equations one has that \(G_{I}\) is a two dimensional manifold containing the two matrices_
\[\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\text{ and }\begin{pmatrix}0&1\\ -1&-1\end{pmatrix}.\]
Note that the action (4) preserves degrees; that is, the polynomials \(p\) and \(g\cdot p\) have the same degree. This property allows us to safely define the symmetry Lie group for vector spaces \([I]_{d}\) of a homogeneous prime ideal \(I\).
**Definition 12**.: _Let \(I\subseteq R\) be a homogeneous ideal. The stabilizer of the vector space \([I]_{d}\) is_
\[G_{[I]_{d}}=\{g\in\text{GL}_{n}(\mathbb{C})\mid g\cdot p\in[I]_{d},\forall p\in[ I]_{d}\}.\]
We can similarly state an analogous version to Lemma 10 for \(G_{[I]_{d}}\).
**Lemma 13**.: _Let \(I\) be a homogeneous ideal in \(R\) generated by the homogeneous polynomials \(p_{1},\ldots,p_{k}\) of degrees \(d_{1},\ldots,d_{k}\), respectively. For each \(d\in\mathbb{N}\), let \(\mathscr{B}([I]_{d})\) be as in Proposition 5. Then, \(G_{[I]_{d}}=\{g\in\text{GL}_{n}(\mathbb{C})\mid g\cdot p\in[I]_{d},\forall p \in\mathscr{B}([I]_{d})\}\)._
Proof.: Denote \(G^{\prime}_{[I]_{d}}=\{g\in\text{GL}_{n}(\mathbb{C})\mid g\cdot p\in[I]_{d}, \forall p\in\mathscr{B}([I]_{d})\}\). It is clear that \(G^{\prime}_{[i]_{d}}\subseteq G_{[i]_{d}}\). Take \(g\in G^{\prime}_{[I]_{d}}\). Since \(I_{d}=\text{span}(\mathscr{B}([I]_{d}))\) as a vector space, a polynomial \(p\in[I]_{d}\) has form \(p=\sum\limits_{i=1}^{\ell}c_{i}f_{i}\) for \(c_{i}\in\mathbb{C}\) and \(f_{i}\in\mathscr{B}([I]_{d})\). So, \(g\cdot p=\sum\limits_{i=1}^{\ell}c_{i}(g\cdot f_{i})\) is in \([I]_{d}\) since each \(g\cdot f_{i}\in[I]_{d}\), as desired.
Now let us look at the example of the irrelevant maximal ideal in \(R\).
**Example 14**.: _Let \(I=\langle x_{1},\ldots,x_{n}\rangle\) in \(R\). Then \([I]_{d}=[R]_{d}\) for all \(d\in\mathbb{N}\). So,_
\[G_{I}=\{g\in\text{GL}_{n}(\mathbb{C})\mid g\cdot x_{i}\in I,1\leq i\leq n\}= \text{GL}_{n}(\mathbb{C}),\]
_and, for each \(d\in\mathbb{N}\), we have_
\[G_{[I]_{d}}=\{g\in\text{GL}_{n}(\mathbb{C})\mid g\cdot m\in[R]_{d},\forall m \in\mathscr{B}([R]_{d})\}=\text{GL}_{n}(\mathbb{C}).\]
In Example 14, \(G_{I}=G_{[I]_{d}}\) for any \(d\in\mathbb{N}\). A similar statement holds in a more general setting.
**Theorem 15**.: _Let \(I\) be a homogeneous prime ideal in \(R\) generated by polynomials \(p_{1},\ldots,p_{k}\) of degrees \(d_{1},\ldots,d_{k}\), respectively. Then, \((G_{[I]_{d}})_{d\in\mathbb{N}}\) is a non-increasing sequence of Lie groups with respect to inclusion. Moreover, \(G_{I}=G_{[I]_{d}}\) for \(d\geq\max\{d_{1},\ldots,d_{k}\}\)._
Proof.: We will use Theorem 6 to show that each \(G_{[I]_{d}}\) is a Lie group. Pick \(d\in\mathbb{N}\). The identity matrix \(Id_{n}\in G_{[I]_{d}}\) since \(Id_{n}\cdot p=p\in I_{d}\) for any \(p\in[I]_{d}\). For any \(g,h\in G_{[I]_{d}}\) and \(p\in I_{d}\), \(gh\cdot p=g\cdot(h\cdot p)\in[I]_{d}\). For \(g\in G_{[I]_{d}}\), the map \(\tilde{g}:R_{d}\to R_{d}\) given by \(\tilde{g}(p)=g\cdot p\) is an invertible linear transformation with inverse \(\tilde{g}^{-1}\). Now \(\tilde{g}([I]_{d})\subseteq[I]_{d}\). Given that \([I]_{d}\) is of finite dimension, \(\tilde{g}([I]_{d})=[I]_{d}\), and so \(g^{-1}\in G_{[I]_{d}}\). Lastly, take a sequence of matrices \(g_{i}\in G_{[I]_{d}}\) converging to \(g\). Then \(g\cdot p=(\lim\limits_{i\to\infty}g_{i})\cdot p=\lim\limits_{i\to\infty}(g_{i} \cdot p)\in[I]_{d}\). Hence, \(G_{[I]_{d}}\) is a closed subgroup of \(\text{GL}_{n}(\mathbb{C})\).
Next, we will show that the sequence of the Lie groups \((G_{[I]_{d}})_{d\in\mathbb{N}}\) is non-increasing. We can safely assume \(I\neq\langle x_{1},\ldots,x_{n}\rangle\). The case \(I=\langle x_{1},\ldots,x_{n}\rangle\) was proved true in Example 14. Since \(I\neq\langle x_{1},\ldots,x_{n}\rangle\), there is some variable \(x_{t}\in[R]_{1}\) not in \(I\). Take \(g\in G_{[I]_{d}}\). Consider the linear form \(g^{-1}\cdot x_{t}\in[R]_{1}\). For arbitrary \(p\in G_{[I]_{d-1}}\), by property (3) of the group action 4,
\[g\cdot((g^{-1}\cdot x_{t})p)=(g\cdot(g^{-1}\cdot x_{t}))(g\cdot p)=x_{t}(g\cdot p )\in[I]_{d}\subseteq I.\]
Since \(I\) is prime and \(x_{t}\notin I\), one must have that \(g\cdot p\in I\). Our group action preserves the degree of a polynomial, so \(g\cdot p\in[I]_{d-1}\), and consequentially \(g\in G_{[I]_{d-1}}\), which concludes that \(G_{[I]_{d}}\subseteq G_{[I]_{d-1}}\).
Now, for the final part of the theorem, it is enough to show that \(G_{[I]_{d}}\subseteq G_{I}\) for \(d=\max\{d_{1},\ldots,d_{k}\}\). Take \(g\in G_{[I]_{d}}\). By Lemma 10, it is enough to show that \(g\cdot p_{i}\in I\), for any \(1\leq i\leq k\). If \(p_{i}\in[I]_{d}\subseteq I\) we are done. Otherwise, recall that \(qp_{i}\in[I]_{d}\) for all standard basis element \(q\in\mathscr{B}([R]_{d-d_{i}})\). In particular, this is true for \(q=(g^{-1}\cdot x_{t})^{d-d_{i}}\). Thus we have
\[g\cdot((g^{-1}\cdot x_{t})^{d-d_{i}}p_{i})=x_{t}^{d-d_{i}}(g\cdot p_{i})\in[I]_{d }\subseteq I,\]
where \(x_{t}\) is one of the variables not in \(I\), as earlier. Given that \(I\) is a prime ideal and \(x_{t}\notin I\), one has that \(g\cdot p_{i}\in I\), as desired. In particular, \(G_{I}\) is a Lie group.
**Symmetry Lie groups of irreducible varieties.** One can similarly define the symmetry Lie group of an irreducible variety. Consider the general linear group \(\text{GL}_{n}(\mathbb{C})\) acting on \(\mathbb{C}^{n}\) with the rule:
\[\text{for each }v\in\mathbb{C}^{n}\text{ and }g=(g_{ij})_{n\times n}\in M_{n}( \mathbb{C})\text{, one has }g\bullet v=g^{-1}v,\]
where the point \(v\) is interpreted as an \(n\times 1\) vector.
**Definition 16**.: _The stabilizer of a variety \(V\) in \(\mathbb{C}^{n}\) is the set_
\[G_{V}=\{g\in\text{GL}_{n}(\mathbb{C})\ |\ g\bullet v\in V,\forall v\in V\}.\]
This definition is chosen to coincide with the symmetry Lie group of the vanishing ideal of that variety.
**Proposition 17**.: _Let \(V\) be an irreducible variety in \(\mathbb{P}^{n-1}\) and let \(I\subseteq R\) be its vanishing ideal. Then \(G_{I}=G_{V}\). In particular, \(G_{V}\) is a Lie group when \(V\) is a irreducible projective variety._
Proof.: The definitions of the two group actions imply that for any \(p\in I\) and \(v\in V\), one has
\[p(g\bullet v)=(g^{-1}\cdot p)(v). \tag{5}\]
Hence,
\[g\in G_{V} \longleftrightarrow p(g\bullet v)=0\text{ for all }p\in I,v\in V \quad\text{(by the definition of }G_{V})\] \[\longleftrightarrow(g^{-1}\cdot p)(v)=0\text{ for all }p\in I,\ v\in V \quad\text{(by Equation \eqref{eq:G_V})}\] \[\longleftrightarrow g^{-1}\in G_{I}\quad\text{(by the definition of }G_{I})\] \[\longleftrightarrow g\in G_{I}\quad\text{(since }G_{I}\text{ is a group).}\]
The final part arises from the correspondence between prime homogeneous ideals and irreducible projective varieties.
We will refer to \(G_{V}\) as _the symmetry Lie group_ of \(V\). Now, we are ready to complete the proof of Theorem 1.
Proof of Theorem 1.: Suppose that \(V\subseteq\mathbb{C}^{n}\) is a toric variety with torus \(\mathbb{T}_{r}\). By Proposition 3, \(\dim(V)\leq\dim(\mathbb{T}_{r})\). Next, we show that \(\dim(\mathbb{T}_{r})\leq\dim(G_{V})\) by providing an embedding of \(\mathbb{T}_{r}\) in \(G_{V}\). Start with the embedding \(\iota\)
\[\iota:\mathbb{T}_{n}\to GL_{n}(\mathbb{C}),\ (t_{1},t_{2},\dots,t_{n}) \rightarrow\begin{pmatrix}\dfrac{1}{t_{1}}&0&\dots&0\\ 0&\dfrac{1}{t_{2}}&\dots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\dots&\dfrac{1}{t_{n}}\end{pmatrix}\]
and consider the composition \(\iota\circ\Phi_{\mathcal{A}}\) for some \((\Phi_{\mathcal{A}})\) in Equation (1). Note that for \(t\in\mathbb{T}_{r}\) and \(v\in V\),
\[\iota\circ\Phi_{\mathcal{A}}(\mathbf{t})\bullet v=(t^{a_{1}}v_{1},\cdots,t^{ a_{n}}v_{n})=\tilde{\Phi}_{\mathcal{A}}(\mathbf{t},v)\in V,\text{ for }\tilde{\Phi}_{\mathcal{A}}\text{ as in Equation \eqref{eq:G_V}.}\]
Hence \(\iota\circ\Phi_{\mathcal{A}}(\mathbb{T}_{r})\subseteq G_{V}\), which makes \(\iota\circ\Phi_{\mathcal{A}}\) a desired embedding. The chain of inequalities \(\dim(V)\leq\dim(\mathbb{T}_{r})\leq\dim(G_{V})\) concludes the proof.
The rest of the article concerns with only homogeneous prime ideals in polynomial rings. Proposition 17 allows for interpretations of the results to irreducible varieties.
Symmetry Lie Algebras
Let \(G_{I}\) be the symmetry Lie group of a homogeneous prime ideal \(I\) in \(R\). The _Lie algebra_ for \(G_{I}\), denoted \(\mathfrak{g}_{I}\), is
\[\mathfrak{g}_{I}=\{g\in M_{n}(\mathbb{C})\mid e^{tg}\in G_{I},\forall t\in(- \epsilon,\epsilon)\},\]
for some real value \(\epsilon>0\). We will refer to it as the _symmetry Lie algebra of the prime ideal \(I\)_.
**Example 18**.: _Let \(I=\langle x_{1},\ldots,x_{n}\rangle\). By Example 14, \(G_{I}=\text{GL}_{n}(\mathbb{C})\), and so \(\mathfrak{g}_{I}=M_{n}(\mathbb{C})\)._
Next, we describe \(\mathfrak{g}_{I}\) (see Theorem 23) in terms of a group action as Definition 19 dictates.
**Definition 19**.: _Let \(I\subseteq R\) be an ideal. The \(*\)-stabilizer for \(I\) is the set_
\[\mathfrak{g}_{I}^{*}=\{g\in M_{n}(\mathbb{C})\mid g*p\in I,\text{ for }p\in I\}, \tag{6}\]
_where \(*\) is a group action of \(M_{n}(\mathbb{C})\) in \(R\) determined by the rules:_
1. \(g*c=0\) _for any constant_ \(c\in R\)__
2. \(g*x_{i}=g\cdot x_{i}\) _for any variable_ \(x_{i}\in R\)__
3. \(g*(p_{1}p_{2})=(g*p_{1})p_{2}+p_{1}(g*p_{2})\)_, for any_ \(p_{1},p_{2}\in R\)__
_extended linearly to \(R\)._
**Example 20**.: _Take \(I=\langle x_{1}^{2}+x_{2}^{2}+x_{1}x_{2}\rangle\). Then \(\begin{pmatrix}0&1\\ -1&-1\end{pmatrix}*(x_{1}^{2}+x_{2}^{2}+x_{1}x_{2})=-x_{1}^{2}-x_{2}^{2}-x_{1}x _{2}\in I\)._
To prove Theorem 23, we require the following two lemmas. Lemma 21 establishes a connection between the definition of \(*\) and the tangent space of \(G_{I}\), while Lemma 22 transfers the definition of \(\mathfrak{g}_{I}^{*}\) to a generating set of \(I\).
**Lemma 21**.: _Let \(g(t)\subseteq G_{I}\) be a smooth curve for \(t\in(-1,1)\) such that \(g(0)\) is the identity matrix. Then for any \(p\in R\),_
\[\frac{d}{dt}\big{|}_{t=0}(g(t)\cdot p)=g^{\prime}(0)*p. \tag{7}\]
Proof.: First notice that if \(p=cp_{1}+p_{2}\) for polynomials \(p_{1},p_{2}\in R\), and constant \(c\in\mathbb{C}\), the linearity of differentiation implies
\[\frac{d}{dt}\big{|}_{t=0}g(t)\cdot(cp_{1}+p_{2})=c\frac{d}{dt}\big{|}_{t=0}g(t )\cdot p_{1}+\frac{d}{dt}\big{|}_{t=0}g(t)\cdot p_{2}.\]
Hence, following the definition of \(*\) action, it is enough to check that Equation (7) holds for any monomial in \(R\).
If \(p\) is a constant monomial \(c\in R\), then
\[\frac{d}{dt}\big{|}_{t=0}(g(t)\cdot c)=\frac{d}{dt}\big{|}_{t=0}(c)=0=g^{ \prime}(0)*c.\]
If \(p\) is some variable \(x_{j}\in R\), then
\[\frac{d}{dt}\big{|}_{t=0}(g(t)\cdot x_{j})=\frac{d}{dt}\big{|}_{t=0}\sum_{i=1} ^{n}g(t)_{ij}x_{i}=\sum_{i=1}^{n}g^{\prime}(0)_{ij}x_{i}=g^{\prime}(0)\cdot x _{j}.\]
Suppose Equation (7) holds for any monomial of degree at most \(d\). A monomial of degree \(d+1\) in \(R\) has form \(x_{j}m\), where \(x_{j}\) is some variable in \(R\) and \(m\) is a monomial of degree \(d\) in \(R\). We have,
\[\frac{d}{dt}\big{|}_{t=0}(g(t)\cdot x_{j}m) =\frac{d}{dt}\big{|}_{t=0}(g(t)\cdot x_{j})(g(t)\cdot m)\quad(\text {property (\ref{eq:1}) of }\cdot\text{action})\] \[=(g(0)\cdot x_{j})\frac{d}{dt}\big{|}_{t=0}(g(t)\cdot m)+(g(0) \cdot m)\frac{d}{dt}\big{|}_{t=0}(g(t)\cdot x_{i})\quad(\text{product rule of differenation})\] \[=x_{j}(g^{\prime}(0)*m)+m(g^{\prime}(0)*x_{j})\quad(g(0)=Id_{n} \text{ and induction hypothesis})\] \[=g^{\prime}(0)*x_{j}m.\quad(\text{property (\ref{eq:1}) of }\ast\text{ action}).\qed\]
**Lemma 22**.: _Let \(I\) be a prime ideal generated by homogeneous polynomials \(p_{1},\dots,p_{k}\). Then \(\mathfrak{g}_{I}^{*}=\{g\in M_{n}(\mathbb{C})\mid g*p_{i}\in I,\text{ for }i=1,\dots,k\}\)._
Proof.: Denote \(\mathfrak{g}_{I}^{\prime}=\{g\in M_{n}(\mathbb{C})\mid g*p_{i}\in I,\text{ for }i=1,\dots,k\}\). Clearly \(\mathfrak{g}_{I}*\subseteq\mathfrak{g}_{I}^{\prime}\). Now take \(g\in\mathfrak{g}_{I}^{\prime}\). A polynomial \(p\) in \(I\) is of form \(p=\sum\limits_{i=1}^{k}f_{i}p_{i}\in I\), where \(f_{1},\dots,f_{k}\in R\). Since \(g*p_{i}\in I\) for \(i=1\dots k\), we have
\[g*p=g*\sum\limits_{i=1}^{k}f_{i}p_{i}=\sum\limits_{i=1}^{k}g*(f_{i}p_{i})= \sum\limits_{i=1}^{k}[f_{i}(g*p_{i})+(g*f_{i})g_{i}]\in I.\]
Hence, \(g\in\mathfrak{g}_{I}*\), as desired.
**Theorem 23**.: _Let \(I\) be a homogeneous prime ideal generated by homogeneous polynomials \(p_{1},\dots,p_{k}\). Then,_
\[\mathfrak{g}_{I}=\{g\in M_{n}(\mathbb{C})\mid g*p_{i}\in I,\text{ for }i=1,\dots,k\}.\]
Proof.: Take \(g\in\mathfrak{g}_{I}\); that is, \(g(t)=e^{tg}\in G_{I}\) for \(t\) is some open interval. For any \(p\in I\), \(g(t)\cdot p\in I\) implies \(\frac{d}{dt}\big{|}_{t=0}(g(t)\cdot p)\in I\), and so, by Lemma 21, \(g^{\prime}(0)*p\in I\). Since, \(g^{\prime}(0)=g\), we have \(g*p\in I\) and hence, \(g\in\mathfrak{g}_{I}^{*}\).
Conversely, take \(g\in\mathfrak{g}_{I}^{*}\) and arbitrary polynomial \(p\in I\). In order to show that \(g\in\mathfrak{g}_{I}\), we need to prove that there is some value \(\epsilon>0\) such that \(g(t)=e^{tg}\in G_{I}\) for \(t\in(-\epsilon,\epsilon)\). By Lemma 22 we need to work only the generators of \(I\). For each generator \(p_{j}\in I\), let \(S_{j}=\{t\in\mathbb{R}\mid g(t)\cdot p_{j}\in I\}\), the stabilizer of \(p_{j}\). Note that \(0\in S_{j}\) for any \(1\leq j\leq k\). Moreover, \(0\) is an interior point of each \(S_{j}\). Indeed, suppose there is some \(j\) for which \(0\) is a boundary point. This implies that there is a sequence \((t_{i})_{i\in\mathbb{N}}\) in the complement of \(S_{j}\) that converges to \(0\). Hence, for any \(i\in\mathbb{N}\),
\[g(t_{i})\cdot p_{j}\notin I\to g(t_{i})\cdot p_{j}-p_{j}\notin I\to\frac{g(t_{i })\cdot p_{j}-p_{j}}{t_{i}}\notin I\to\lim\limits_{i\to\infty}\frac{g(t_{i}) \cdot p_{j}-g(0)\cdot p_{j}}{t_{i}}\notin I.\]
Since \(t_{i}\to 0\) as \(i\to\infty\), by definition of differentiation and Lemma 21 we have
\[\lim\limits_{i\to\infty}\frac{g(t_{i})\cdot p_{j}-g(0)\cdot p_{j}}{t_{i}}=\frac {d}{dt}\big{|}_{t_{i}=0}(g(t_{i})\cdot p_{j})=g^{\prime}(0)*p_{j}=g*p_{j} \notin I,\]
which contradicts \(g\in\mathfrak{g}_{I}^{*}\). So, \(0\) is an interior point of each set \(S_{j}\). Hence there is some open interval \((-\epsilon,\epsilon)\) for \(\epsilon>0\) in the intersection of \(S_{j}\)-s and \(g\in\mathfrak{g}_{I}\), as desired.
Now we are ready to rephrase Theorem 1 in terms of symmetry Lie algebras.
**Theorem 24**.: _[Theorem 1 revised] Let \(I\) be a homogeneous prime ideal and let \(\mathfrak{g}_{I}\) be its symmetry Lie algebra. If \(\dim(\mathfrak{g}_{I})<\dim(I)\) then \(V(I)\) is not a toric variety._
Proof.: See Proposition 7.
For the remainder of this section, we will employ the \(*\) action defined in Definition 19 to develop an algorithm for computing the symmetry Lie algebra of a homogeneous prime ideal. To accomplish this, we will revisit the graded components of an ideal and reinterpret polynomials in them as vectors in a linear space.
Recall from the preliminaries section that, for each \(d\in\mathbb{N}\), the graded component \(R_{d}\) of the polynomial ring \(R=\mathbb{C}[x_{1},\ldots,x_{n}]\) is a vector space with dimension \(\binom{n+d-1}{d}\). It has a standard basis \(\mathscr{B}([R]_{d})\), consisting of monomials of degree \(d\) in \(R\). We fix an order on the monomials in \(R_{d}\); for simplicity, in this paper, we choose the graded reverse lexicographic order, but any order works.
**Definition 25**.: _Let \(\mathscr{B}([R]_{d})\) be the standard basis of \([R]_{d}\), ordered. The vector representation of a polynomial \(p\in[R]_{d}\) is the vector \(\overrightarrow{p}\in\mathbb{C}^{\binom{n+d-1}{d}}\) of coefficients of \(p\) in the given order._
For instance, \(p(x)=x_{1}^{2}+2x_{1}x_{3}-x_{2}x_{3}\in\mathbb{C}[x_{1},x_{2},x_{3}]\) has \(\overrightarrow{p}=[1\ 0\ 0\ 2\ -1\ 0]^{T}\) in \([R]_{2}\).
We recall the basis \(\mathscr{B}([I]_{d})\) for the \(d\)-th graded component of a homogeneous ideal \(I\) from Proposition 5 and Theorem 2.
**Theorem 26**.: _Let \(I\subseteq R\) be a homogeneous prime ideal minimally generated by polynomials of degree at most \(d\). Let \(\mathscr{B}([I]_{d})=\{f_{1},\ldots,f_{k}\}\) be a basis for \([I]_{d}\). Let \(g\in M_{n}(\mathbb{C})\) be the \(n\times n\) matrix whose entries \(g_{ij}\) are unknown. For each \(f_{i}\in\mathscr{B}([I]_{d})\) consider the matrix_
\[M_{i}(g):=\left(\overrightarrow{f_{1}}\ \ \overrightarrow{f_{2}}\ \ \ldots\ \ \overrightarrow{f_{k}}\ \ \overrightarrow{g\ast f_{i}}\right).\]
_Then, \(\mathfrak{g}_{I}=\{g\in M_{n}(\mathbb{C})\mid\operatorname{rank}(M_{i}(g))=k, \text{ for }i=1,\ldots,k\}\)._
Proof.: For each \(d\in\mathbb{N}\), let \(\mathfrak{g}_{[I]_{d}}\) be the Lie algebra of the symmetry Lie group \(G_{[I]_{d}}\). Analogous to the proof of Theorem 23, since the basis \(\mathscr{B}([I]_{d})\) is finite, one shows that
\[\mathfrak{g}_{[I]_{d}}=\{g\in\mathfrak{g}_{n}(\mathbb{C})\mid g\ast f_{i}\in [I]_{d},\forall f_{i}\in\mathscr{B}([I]_{d})\}.\]
By Theorem 15, \(G_{I}=G_{[I]_{d}}\). Therefore, their associated Lie algebras \(\mathfrak{g}_{I}\) and \(\mathfrak{g}_{[I]_{d}}\) are equal:
\[\mathfrak{g}_{I}=\{g\in\mathfrak{g}_{n}(\mathbb{C})\mid g\ast f_{i}\in[I]_{d}, \forall f_{i}\in\mathscr{B}([I]_{d})\}.\]
Now, for each \(1\leq i\leq k\), one has that \(g\ast f_{i}\in[I]_{d}\) if and only if \(g\ast f_{i}\) is a linear combination of the basis elements \(\mathscr{B}([I]_{d})\), if and only if \(\overrightarrow{g\ast f_{i}}\) is a linear combination of \(\overrightarrow{f_{1}},\ldots,\overrightarrow{f_{k}}\), if and only if the matrix \(M_{i}(g)\) has rank \(k\), as desired.
The implementation of the algorithm can be found on GitHub at the following URL: [https://github.com/arpan-pal/Toric_via_symmetry](https://github.com/arpan-pal/Toric_via_symmetry). Here is an illustration.
``` In[]:R=PolynomialRing(QQ,['x','y','z'])R.inject_variables()Lie=symmal([x^2+y^2+z^2,x*y],3)Out[]:Definingx,y,zDefiningx,y,zDefiningx,y,z,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gllll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gllll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gllll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gllll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gllll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gll,gllll,gll,gll,gll,gll,gllll,gll,gll,gllll,gll,gll,gllll,gll,gllll,gll,gll,gll,gll,gll,gll,gllll,gll,gll,gllll,gll,gll,gllll,gll,gll,gll,gllll,gll,gll,gllll,gll,gll,gll,gll,gll,gll,gll,gllll,gll,gllll,gllll,gll,gllll,gll,gllll,gllll,gll,gllll,gllll,gll,gll,gllll,gllll,gllll,gllll,gllll,gllll,gll,gllll,gllll,gllll,gllll,gll,gllll,gll,gllll,gll,gllll,gllll,gll,gllll,gll,gllll,gll,gll,gllll,gllll,gll,gllll,gllll,gllll,gllll,gllll,gllll,gllll,gllllll,gllll,gllll,gllll,gllll,gll,gllllll,gllll,gll,gllllll,gll,gllllll,gllllll,gllllll,gllll,gllllll,gllllll,gllllll,gllll,gllllll,gllll,gllll,gllll,gllll,gllll,gllllll,gllll,gllllll,gllllll,gllll,gllllll,gllllll,gllllll,gllllll,gllllll,gllll,gllllll,gllllll,gllllllll,gllll,gllllll,gllllll,gllllll,gllll,gllllllll,gllllllll,gllllll,gllllllll,gllll,gllllll,gllllllll,gllllllll,gllllll,gllllllll,gllllll,gllllll,gllllllll,gllllll,gllllllll,gllllllll,gllllllll,gllllllllll,gllllllll,gllllllllll,gllllllllll,gllllllll,gllllllllllll,gllllllllll,gllllllllll,gllllllll,gllllllll,gllllllllllll,gllllllll,gllllllllllll,gllllllllllll,gllllllllll,gllllllllllll,gllllllllll,gllllllllllll,gllllllllllllll,gllllllllllll,gllllllllllllll,gllllllllllll,gllllllllllll,gllllllllllllll,gllllllllllll,gllllllllllllll,gllllllllllllllllll,gllllllllllllllllll,gllllllllllllllllllll,g
Applications to Algebraic Statistics
As discussed in the introduction, the classification of statistical models with toric structure is of interest in algebraic statistics. When the vanishing ideal of a statistical model is not toric, a preferred method to check if its variety is toric is by searching for linear transformations under which the vanishing ideal becomes toric. The binomial structure has been successfully investigated in phylogenetics [11, 12], staged tree models [10, 13], and several Bayesian networks [1, 1, 14]. In this section, we apply Theorem 24 and its implementation in Theorem 2 to provide examples of ideals for staged tree models and Gaussian graphical models that cannot be turned toric under any change of variables. Of course, there is much more space left for exploration, and we encourage the interested reader to delve into it.
**Staged tree models.** Staged tree models are discrete statistical models encoding relationships between events. They are realizable as rooted trees with colored vertices and labeled edges directed away from the root, called staged trees. Vertices represent events, edge labels represent conditional probabilities, and the colors on the vertices represent an equivalence relation--vertices of the same color have the same outgoing edge labels. We use \(\theta_{ij}\) to denote the label associated with an edge \([i,j]\). A key constraint is that the sum of the labels of all edges emanating from the same vertex in a staged tree must be equal to one. The staged tree model is defined as the set of points in \(\mathbb{R}^{n}\) parametrized by multiplying edge labels along the root-to-leaf paths \(\lambda_{1},\ldots,\lambda_{n}\) in the staged tree \(\mathcal{T}\). In algebro-geometric terms, the staged tree model consists of points inside the toric variety \(V(\ker\varphi_{\mathcal{T}})\), where
\[\varphi_{\mathcal{T}}:\mathbb{R}[x_{1},\ldots,x_{n}]\to\mathbb{R}[\Theta,z]/ \langle\sum_{j}\theta_{ij}-z\rangle,\quad x_{r}\mapsto z^{n-\ell(\lambda_{r}) }\prod_{[i,j]\in E(\lambda_{r})}\theta_{ij}\text{ for }r=1,\ldots,n. \tag{8}\]
For an introduction to staged tree models, we refer the reader to [1]. Detailed information on toric staged tree models after a linear change of variables can be found in [10]. The latter paper poses several open questions, two of which we address here. The first example of a non-toric staged tree model (and Bayesian network) is credited to Nicklasson [13]. The code associated with [10] includes an implementation of Equation (8) that we utilize for faster computations.
**Example 27**.: _The discussion section in [10] raises the question of whether \(\ker(\varphi_{\mathcal{T}})\) for the staged tree \(\mathcal{T}\) in Figure 1 becomes toric after a linear change of variables. We provide a negative answer here, and below are the details._
_The ideal \(\ker_{\mathcal{T}}\mathbb{C}[x_{1},\ldots,x_{8}]\) has dimension \(5\) and generated by the three quadratics_
\[p_{1}=(x_{1}+x_{2})x_{8}-(x_{3}+x_{7})x_{8},\ p_{2}=(x_{7}+x_{8})x_{1}-(x_{5}+ x_{6})x_{2},\text{ and }p_{3}=x_{3}x_{6}-x_{4}x_{5}.\]
_So, \(\mathscr{B}([I]_{2})=\{p_{1},p_{2},p_{3}\}\), and_
\[\overrightarrow{p_{1}}=[0,0,0,\ 0,0,1,1,0,0,0,-1,-1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0]^{T},\] \[\overrightarrow{p_{2}}=[0,0,0,0,0,0,0,1,0,0,0,0,0,0,1,0,0,0,0,-1,0, 0,0,0,-1,0,0,0,0,0,0,0,0,0,0]^{T},\] \[\overrightarrow{p_{3}}=[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0, 0,-1,0,0,0,0,0,0,0,0,0,0,0]^{T}.\] \[\overrightarrow{g\cdot p_{1}^{*}}=[g_{18}+g_{28},g_{17}+g_{27}-g_{38 }-g_{48},g_{16}+g_{26},g_{15}+g_{25},g_{14}+g_{24}-g_{78},g_{13}+g_{23}-g_{78},\] \[g_{12}+g_{22}+g_{88},g_{11}+g_{21}+g_{88},-g_{37}-g_{47},-g_{36}- g_{46},-g_{35}-g_{45},-g_{34}-g_{44}-g_{77},-g_{33}-\] \[g_{43}-g_{77},-g_{32}-g_{42}+g_{87},-g_{31}-g_{41}+g_{87},0,0,-g_ {76},-g_{76},g_{86},g_{86},0,-g_{75},-g_{75},g_{85},g_{85},\] \[-g_{74},-g_{73}-g_{74},-g_{72}+g_{84},-g_{71}+g_{84},-g_{73},-g_{7 2}+g_{83},-g_{71}+g_{83},g_{82},g_{81}+g_{82},g_{81}]^{T}.\]
_We can now compute \(M_{1}(g)=\left(\overrightarrow{p_{1}}\ \ \overrightarrow{p_{2}}\ \ \overrightarrow{p_{3}}\ \ \overrightarrow{g\cdot p_{1}}\right)\). Similarly one computes \(M_{2}(g)\) and \(M_{3}(g)\). Via Algorithm 2 we obtain that \(\mathfrak{g}_{\ker(\varphi_{7})}\) is the \(4\) dimensional vector space generated by_
\[\begin{pmatrix}1&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&1&1&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&1&0&0\\ 0&0&0&0&0&1&0\\ 0&0&0&0&0&1\end{pmatrix},\ \begin{pmatrix}0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&-1&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&-1&-1&0&0\\ 0&0&0&0&0&0&-1&0\\ 0&0&0&0&0&0&-1\end{pmatrix},\ \begin{pmatrix}0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&-1&0&0&0&0\\ 0&0&0&0&-1&-1&0&0\\ 0&0&0&0&0&0&-1\end{pmatrix},\ \begin{pmatrix}0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&-1&-1&0&0\\ 0&0&0&0&0&0&-1\end{pmatrix},\ \begin{pmatrix}0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&-1&-1&0&0\\ 0&0&0&0&0&0&-1\end{pmatrix},\ \begin{pmatrix}0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&-1\end{pmatrix}.\end{pmatrix}\]
By Theorem 24, \(\ker(\varphi_{7})\) is not toric under any linear change of variables.
**Example 28**.: _We use Theorem 1 on the ideal of the staged tree \(\mathcal{T}\) in Figure 2 to disprove Conjecture 1 in [10] that all one staged trees are toric. The ideal \(\ker(\varphi_{7})\) of the one-stage tree in Figure 2 is of dimension \(3\) and minimally generated by the two by two minors of the matrix_
\[\begin{pmatrix}x_{3}+...+x_{9}&x_{3}&x_{5}&x_{7}\\ x_{1}&x_{5}+...+x_{9}&x_{6}&x_{8}\\ x_{2}&x_{4}&x_{7}+x_{8}+x_{9}&x_{9}\end{pmatrix}.\]
_Using Algorithm 2 and simplifications (details of implementation in GitHub) we get that its
Figure 1:
Figure 2:
symmetry Lie algebra is a \(2\) dimensional vector space generated by_
\[\begin{pmatrix}1&0&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0&0\\ 0&0&1&0&0&0&0&0&0\\ 0&0&0&1&0&0&0&0&0\\ 0&0&0&0&1&0&0&0\\ 0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&1&0&0\\ 0&0&0&0&0&0&1&0\\ 0&0&0&0&0&0&0&1\end{pmatrix},\quad\begin{pmatrix}-2&0&0&0&0&0&0&0&0\\ 0&-3&0&0&0&0&0&0&0\\ 0&0&-3&0&0&0&0&0&0\\ 0&1&0&-4&0&0&0&0&0\\ 0&0&1&0&-4&0&0&0&0\\ 0&0&0&2&2&-2&0&0&0\\ 1&1&1&1&1&-1&0&0\\ 0&0&0&0&0&0&-1&0\\ 1&1&1&1&1&1&1&1&0\end{pmatrix}.\]
_By Theorem 1, \(\ker(\varphi_{\mathcal{T}})\) is not toric under any linear change of variables._
A naive hope is for a staged tree model whose tree contains a subtree with non-toric structure to not have a toric structure. As the example below shows, in general, this is not true.
**Example 29**.: _Consider the one staged tree model \(\mathcal{T}^{\prime}\) of depth \(4\) and has the maximum number of edges. The staged tree \(\mathcal{T}^{\prime}\) has the staged tree Figure 2 as its subtree. By [12, Lemma 6.1], all maximal one stage trees have toric vanishing ideals, \(\ker(\varphi_{\mathcal{T}^{\prime}})\) is toric - it has a minimal generating set of \(66\) linear binomials and \(75\) quadratic binomials._
However, we suspect the following to hold.
**Conjecture 30**.: _Let \(\mathcal{T}\) with non-toric \(V(\ker\varphi_{\mathcal{T}})\) that uses color set \(S\). Let \(\mathcal{T}^{\prime}\) be a staged tree such that its restriction to color set \(S\) gives \(\mathcal{T}\). Then \(V(\ker\varphi_{\mathcal{T}^{\prime}})\) is not a toric variety._
**Gaussian graphical models.** A Gaussian graphical model is a collection of multivariate Gaussian distributions in which a graph encodes conditional independence relations among the random variables. Its set of concentration matrices, that is, the inverses of covariance matrices, is a linear space of symmetric matrices intersected with the cone of positive definite matrices. For a graph \(G\), this linear space is the set of all \(n\times n\) symmetric matrices \(K=(k_{ij})\) with \(k_{ij}=0\) if \([i,j]\) is an edge in \(G\). Its inverse space, the space of covariance matrices, on the contrary, is most often not a friendly variety. It is found as the vanishing of the kernel of the rational map
\[\rho_{G}:\mathbb{R}[\Sigma]\to\mathbb{R}(K),\ \rho_{G}(\sigma_{ij})=\frac{(- 1)^{i+j}K_{[n]\setminus\{i\},[n]\setminus\{j\}}}{\det(K)}, \tag{9}\]
where \(K_{[n]\setminus\{i\},[n]\setminus\{j\}}\) is the \(ij\)-th minor of the symmetric matrix \(K\).
Misra and Sullivant show in [13] that block graphs produce toric Gaussian graphical models. In the next example we show that the ideal \(\rho_{\mathcal{S}}\) arriving from a four cycle is not toric under any change of variables. To the knowledge of the authors this is the first example of a non-toric Gaussian graphical model.
**Example 31**.: _Consider the four cycle \(G\) with edges \([1,2],[2,3],[3,4],[1,4]\) as in Figure 3._
_The ideal \(\ker(\rho_{G})\) of the Gaussian graphical model in Figure 3 is of dimension \(8\) and generated by_
\[p_{1} =\sigma_{23}\sigma_{14}\sigma_{24}-\sigma_{13}\sigma_{24}^{2}- \sigma_{22}\sigma_{14}\sigma_{34}+\sigma_{12}\sigma_{24}\sigma_{34}+\sigma_{22 }\sigma_{13}\sigma_{44}-\sigma_{12}\sigma_{23}\sigma_{44},\] \[p_{2} =\sigma_{13}\sigma_{23}\sigma_{14}-\sigma_{24}\sigma_{13}^{2}- \sigma_{12}\sigma_{33}\sigma_{14}+\sigma_{11}\sigma_{33}\sigma_{24}+\sigma_{12 }\sigma_{13}\sigma_{34}-\sigma_{11}\sigma_{23}\sigma_{34}.\]
_The ideal \(\ker(\rho_{G})\) of the Gaussian graphical model in Figure 3 is of dimension \(8\) and generated by_
\[p_{1} =\sigma_{23}\sigma_{14}\sigma_{24}-\sigma_{13}\sigma_{24}^{2}- \sigma_{22}\sigma_{14}\sigma_{34}+\sigma_{12}\sigma_{24}\sigma_{34}+\sigma_{22 }\sigma_{13}\sigma_{44}-\sigma_{12}\sigma_{23}\sigma_{44},\] \[p_{2} =\sigma_{13}\sigma_{23}\sigma_{14}-\sigma_{24}\sigma_{13}^{2}- \sigma_{12}\sigma_{33}\sigma_{14}+\sigma_{11}\sigma_{33}\sigma_{24}+\sigma_{12 }\sigma_{13}\sigma_{34}-\sigma_{11}\sigma_{23}\sigma_{34}.\]
_The ideal \(\ker(\rho_{G})\) of the Gaussian graphical model in Figure 3 is of dimension \(8\) and generated by_
\[p_{1} =\sigma_{23}\sigma_{14}\sigma_{24}-\sigma_{13}\sigma_{24}^{2}- \sigma_{22}\sigma_{14}\sigma_{34}+\sigma_{12}\sigma_{24}\sigma_{34}+\sigma_{2 2}\sigma_{13}\sigma_{44}-\sigma_{12}\sigma_{23}\sigma_{44},\] \[p_{2} =\sigma_{13}\sigma_{23}\sigma_{14}-\sigma_{24}\sigma_{13}^{2}- \sigma_{12}\sigma_{33}\sigma_{14}+\sigma_{11}\sigma_{33}\sigma_{24}+\sigma_{12 }\sigma_{13}\sigma_{34}-\sigma_{11}\sigma_{23}\sigma_{34}.\]
_The ideal \(\ker(\rho_{G})\) of the Gaussian graphical model in Figure 3 is of dimension \(8\) and generated by_
\[p_{1} =\sigma_{23}\sigma_{14}\sigma_{24}-\sigma_{13}\sigma_{24}^{2}- \sigma_{22}\sigma_{14}\sigma_{34}+\sigma_{12}\sigma_{24}\sigma_{34}+\sigma_{2 2}\sigma_{13}\sigma_{44}-\sigma_{12}\sigma_{23}\sigma_{44},\] \[p_{2} =\sigma_{13}\sigma_{23}\sigma_{14}-\sigma_{24}\sigma_{13}^{2}- \sigma_{12}\sigma_{33}\sigma_{14}+\sigma_{11}\sigma_{33}\sigma_{24}+\sigma_{12 }\sigma_{13}\sigma_{34}-\sigma_{11}\sigma_{23}\sigma_{34}.\]
_The ideal \(\ker(\rho_{G})\) of the Gaussian graphical model in Figure 3 is of dimension \(8\) and generated by_
\[p_{1} =\sigma_{23}\sigma_{14}\sigma_{24}-\sigma_{13}\sigma_{24}^{2}- \sigma_{22}\sigma_{14}\sigma_{34}+\sigma_{12}\sigma_{24}\sigma_{34}+\sigma_{2 2}\sigma_{13}\sigma_{44}-\sigma_{12}\sigma_{23}\sigma_{44},\] \[p_{2} =\sigma_{13}\sigma_{23}\sigma_{14}-\sigma_{24}\sigma_{13}^{2}- \sigma_{12}\sigma_{33}\sigma_{14}+\sigma_{11}\sigma_{33}\sigma_{24}+\sigma_{12 }\sigma_{13}\sigma_{34}-\sigma_{11}\sigma_{23}\sigma_{34}.\]
_The ideal \(\ker(\rho_{G})\) of the Gaussian graphical model in Figure 3 is of dimension \(8\) and generated by_
\[p_{1} =\sigma_{23}\sigma_{14}\sigma_{24}-\sigma_{13}\sigma_{24}^{2}- \sigma_{22}\sigma_{14}\sigma_{34}+\sigma_{12}\sigma_{24}\sigma_{34}+\sigma_{2 2}\sigma_{13}\sigma_{44}-\sigma_{12}\sigma_{23}\sigma_{44},\] \[p_{2} =\sigma_{13}\sigma_{23}\sigma_{14}-\sigma_{24}\sigma_{13}^{2}- \sigma_{12}\sigma_{33}\sigma_{14}+\sigma_{11}\sigma_{33}\sigma_{24}+\sigma_{12 }\sigma_{13}\sigma_{34}-\sigma_{11}\sigma_{23}\sigma_{34}.\]
_The ideal \(\ker(\rho_{G})\) of the Gaussian graphical model in Figure 3 is of dimension \(8\) and generated by_
\[p_{1} =\sigma_{23}\sigma_{14}\sigma_{24}-\sigma_{13}\sigma_{24}^{2}- \sigma_{22}\sigma_{14}\sigma_{34}+\sigma_{12}\sigma_{24}\sigma_{34}+\sigma_{22 }\sigma_{13}\sigma_{44}-\sigma_{12}\sigma_{23}\sigma_{44},\] \[p_{2} =\sigma_{13}\sigma_{23}\sigma_{14}-\sigma_{24}\sigma_{13}^{2}- \sigma_{12}\sigma_{33}\sigma_{14}+\sigma_{11}\sigma_{33}\sigma_{24}+\sigma_{12} \sigma_{13}\sigma_{34}-\sigma_{11}\sigma_{23}\sigma_{34}.\]
_Using Algorithm in 2, one has that the symmetry Lie algebra is a \(4\) dimensional vector space generated by_
\[\begin{pmatrix}1&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&-1&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&-1&0&0&0&0\\ 0&0&0&0&-1&0&0&0\\ 0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&-1&0\\ 0&0&0&0&0&0&0&0&-1\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0&0&0&0\\ 0&1&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0\end{pmatrix},\]
_By Theorem 1, \(\ker(\rho_{G})\) is not toric under any linear change of variables._
A natural question emerges:
**Question 32**.: _Let \(G\) be a graph, such that \(V(\ker(\rho_{G})\) is not toric. Let \(G^{\prime}\) be a graph with \(G\) as subgraph. Is it true that \(V(\ker(\rho_{G^{\prime}})\) is not a toric variety?_
#### Colored Gaussian graphical models.
Colored Gaussian graphical models are generalizations of Gaussian graphical models. The graph is colored, and in addition to \(k_{ij}=0\) whenever \([i,j]\) is a missing edge in the graph, one has that \(k_{ii}=k_{jj}\) when vertices \(i\) and \(j\) have the same color, and \(k_{ij}=k_{uv}\) when the edges \([i,j]\) and \([u,v]\) have the same color. The vanishing ideal for a colored graph \(\mathcal{G}\) is the kernel of the map in Equation (9) adapted to the subset of parameters in \(K\). For an introduction to these models we recommend [11] and for work on the colored Gaussian graphical models with toric vanishing ideals see [10]. Note that Example 31 also serves as the first example of a _colored_ Gaussian graphical model whose vanishing ideal is not toric under any linear change of variables.
The ideal in the next example uses symmetry Lie algebras on an ideal arriving from a colored Gaussian graphical model to discover its toric structure.
**Example 33**.: _Consider ideal \(I=\ker(\rho_{\mathcal{G}})\subseteq R=\mathbb{C}[\sigma_{11},\sigma_{12}, \ldots,\sigma_{33}]\) for the colored graph \(\mathcal{G}\) in Figure 4. This is a \(4\) dimensional ideal generated by_
\[p_{1}=\sigma_{12}\sigma_{13}-\sigma_{11}\sigma_{23}\text{ and }p_{2}=\sigma_{12}^{2}- \sigma_{11}\sigma_{22}-\sigma_{13}^{2}+\sigma_{11}\sigma_{33}.\]
_Using Algorithm in 2, one finds that the symmetry Lie algebra of \(I\) is the \(11\) dimensional vector space of matrices generated by_
\[\begin{pmatrix}1&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&-1&0\\ 0&1&0&-1\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0&0\\ 1&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&1&0&0\\ -2&0&0&0\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0&0\\ 1&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&1&0&0\\ 0&-2&0&0&2\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&-1&0&0&0\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&-1&0&0&1\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&-1&0&0&1\end{pmatrix},\]
\[\begin{pmatrix}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 1&0&0&0&0\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&1&0&0&0\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&1&0&0\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&1&0\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&1&0\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&1&0\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&1&0\end{pmatrix},\]
\[\begin{pmatrix}0&0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&1&0\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&1&0\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&1&0\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&1&0\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&1&0\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&1&0\end{pmatrix},\quad\begin{pmatrix}0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&1&0\end{pmatrix},\]
\[\begin{pmatrix}0&0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&0&0&0&0&0\\ 0&0&0&0&0&0\\ 0&0&
_Since \(\dim(\mathfrak{g}_{I})\geq\dim(I)\), Theorem 1 is inconclusive. We will show that \(I\) is toric after an appropriate linear change of variables by further analyzing \(\mathfrak{g}_{I}\). We are in search for a torus in \(\mathfrak{g}_{I}\) of dimension at least \(4\). Consider the invertible matrix \(B\):_
\[B=\left(\begin{array}{cccccc}0&0&0&0&0&1\\ 0&-I&I&0&0&0\\ 0&0&0&0&1&0\\ 0&1&1&0&0&0\\ 1&0&0&1&0&0\\ 2I&0&0&-2I&1&0\end{array}\right),\]
_and apply a change of basis in \(\mathfrak{g}_{I}\) with respect to matrix \(B\). The set \(B^{-1}*A*B\) for each basis element \(A\) listed below is now a basis for \(\mathfrak{g}_{I}\), containing exactly \(4\) diagonal matrices._
_Consider the change of variables in \(R\) induced by rows of \(B\):_
\[\sigma_{11} \mapsto s_{33}\] \[\sigma_{12} \mapsto-is_{12}+is_{22}\] \[\sigma_{22} \mapsto s_{23}\] \[\sigma_{13} \mapsto s_{12}+s_{22}\] \[\sigma_{23} \mapsto s_{11}+s_{13}\] \[\sigma_{33} \mapsto 2is_{11}-2is_{13}+s_{23}.\]
_This change of variables sends the original generating polynomials \(p_{1},p_{2}\) of \(I\) to_
\[p_{1}^{\prime}=-is_{12}^{2}+is_{22}^{2}-s_{11}s_{33}-s_{13}s_{33}\text{ and }p_{2}^{\prime}=-2s_{12}^{2}-2s_{22}^{2}+2is_{11}s_{33}-2is_{13}s_{33}.\]
_The polynomials \(p_{1}^{\prime},p_{2}^{\prime}\) are not binomials, but_
\[q_{1}=\frac{2p_{1}^{\prime}+ip_{2}^{\prime}}{4}=is_{12}^{2}+s_{11}s_{33}\text{ and }q_{2}=\frac{2p_{1}^{\prime}-ip_{2}^{\prime}}{4}=is_{22}^{2}-s_{13}s_{33}\]
_are, and hence \(I=\langle p_{1}^{\prime},p_{2}^{\prime}\rangle=\langle q_{1},q_{2}\rangle\), is toric in variables \(s_{11},\ldots,s_{33}\) induced by matrix \(B\)._
### Discussion
In Theorem 1, we saw that the dimension of the symmetry Lie algebra of an irreducible projective variety \(V\) provides a necessary condition for \(V\) to be toric, and Example 33 signals that the symmetry Lie algebra of a variety can be used to provide sufficient conditions that guarantee that a given variety is toric. So, we ask:
**Question 34**.: _Can symmetry Lie algebras detect when \(V\) is a toric variety? Alternatively, can symmetry Lie algebras detect when there is a linear change of variables under which a prime ideal is toric?_
While we leave the answer to this question for possible future work, we discuss another example of a toric variety for which the change of basis matrix does not induce the right change of variables.
**Example 35**.: _Consider the ideal \(I\) generated by \(p=x^{2}+y^{2}+z^{2}\) in \(\mathbb{C}[x,y,z]\). The symmetry Lie algebra of \(I\) is the \(4\) dimensional vector space of matrices generated by_
\[\begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&1\end{pmatrix},\begin{pmatrix}0&1&0\\ -1&0&0\\ 0&0&0\end{pmatrix},\begin{pmatrix}0&0&1\\ 0&0&0\\ -1&0&0\end{pmatrix},\begin{pmatrix}0&0&0\\ 0&0&1\\ 0&-1&0\end{pmatrix} \tag{10}\]
_Take the invertible matrix_
\[B=\left(\begin{array}{ccc}1&0&0\\ 0&i&-0.5-0.5i\\ 0&i&0.5+0.5i\end{array}\right). \tag{11}\]
_Then \(B^{-1}AB\) for \(A\) in (10) is the list of matrices_
\[\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&0&1\end{array}\right),\left(\begin{array}{ccc}0&i&-0.5-0.5i\\ -0.5i&0&0\\ -0.5+0.5i&0&0\end{array}\right),\left(\begin{array}{ccc}0&i&0.5+0.5i\\ -0.5i&0&0\\ 0.5-0.5i&0&0\end{array}\right),\left(\begin{array}{ccc}0&0&0\\ 0&1&0\\ 0&0&-1\end{array}\right) \tag{12}\]
_two of which are upper triangular. The change of variables induced by \(B\) maps \(p\) to_
\[p^{\prime}=x^{2}+(iy_{1}+(-0.5-0.5i)z)^{2}+(y+(0.5+0.5i)z)^{2}=x^{2}+2yx+iz^{2}.\]
_Clearly the ideal generated by \(p^{\prime}\) is not toric. On the other side, the ideal \(I\) is toric in the new variables \(x=x^{\prime}\), \(y=y+iz\) and \(z^{\prime}=-iy+iz\) with generator \(q=(x^{\prime})^{2}+y^{\prime}z^{\prime}\). Consider the induced change of variable matrix_
\[B=\begin{pmatrix}1&0&0\\ 0&i&-i\\ 0&i&1\end{pmatrix}\]
_and compute \(B^{-1}AB\) to any matrix \(A\) in Equation (10). We have the list_
\[\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&0&1\end{array}\right),\left(\begin{array}{ccc}0&i&-i\\ -0.5-0.5i&0&0\\ -0.5+0.5i&0&0\end{array}\right),\left(\begin{array}{ccc}0&i&1\\ 0.5-0.5i&0&0\\ 0.5-0.5i&0&0\end{array}\right),\left(\begin{array}{ccc}0&0&0\\ 0&1&-1-i\\ 0&0&-1\end{array}\right)\]
_two of which are upper triangular. From here it is clear that one can apply the matrix_
\[C=\left(\begin{array}{ccc}1&0&0\\ 0&1&0.5+0.5i\\ 0&0&1\end{array}\right)(\text{notice, }B=B^{\prime}C)\]
_to turn these matrices into the list of matrices in (12)._
## Acknowledgments
This project was initiated at the Texas Algebraic Geometry Symposium 2022 held at Texas A&M University. The authors are grateful to JM Landsberg for suggesting the problem and for mentoring. We thank Thomas Yahl for the useful discussions and Lisa Nicklasson for comments on a previous version of this paper. Aida Maraj was partially funded by the National Science Foundation (Grant No. DMS-2306672). |
2309.10999 | Pointing-and-Acquisition for Optical Wireless in 6G: From Algorithms to
Performance Evaluation | The increasing demand for wireless communication services has led to the
development of non-terrestrial networks, which enables various air and space
applications. Free-space optical (FSO) communication is considered one of the
essential technologies capable of connecting terrestrial and non-terrestrial
layers. In this article, we analyze considerations and challenges for FSO
communications between gateways and aircraft from a pointing-and-acquisition
perspective. Based on the analysis, we first develop a baseline method that
utilizes conventional devices and mechanisms. Furthermore, we propose an
algorithm that combines angle of arrival (AoA) estimation through supplementary
radio frequency (RF) links and beam tracking using retroreflectors. Through
extensive simulations, we demonstrate that the proposed method offers superior
performance in terms of link acquisition and maintenance. | Hyung-Joo Moon, Chan-Byoung Chae, Kai-Kit Wong, Mohamed-Slim Alouini | 2023-09-20T01:41:37Z | http://arxiv.org/abs/2309.10999v1 | # Pointing-and-Acquisition for Optical Wireless in 6G: From Algorithms to Performance Evaluation
###### Abstract
The increasing demand for wireless communication services has led to the development of non-terrestrial networks, which enables various air and space applications. Free-space optical (FSO) communication is considered one of the essential technologies capable of connecting terrestrial and non-terrestrial layers. In this article, we analyze considerations and challenges for FSO communications between gateways and aircraft from a pointing-and-acquisition perspective. Based on the analysis, we first develop a baseline method that utilizes conventional devices and mechanisms. Furthermore, we propose an algorithm that combines angle of arrival (AoA) estimation through supplementary radio frequency (RF) links and beam tracking using retroreflectors. Through extensive simulations, we demonstrate that the proposed method offers superior performance in terms of link acquisition and maintenance.
## I Introduction
As network users demand higher data rates and lower service latency, sixth-generation (6G) networks aim to address various applications and functional nodes with different critical constraints. Free-space optical (FSO) communications have emerged as promising solutions for future wireless networks due to their significant advantages, including wide bandwidth, immunity to eavesdropping, long link distances, and no interference with radio-frequency (RF)-based terrestrial networks [1]. Aerial FSO communications have the potential to become a key technology that integrates terrestrial and non-terrestrial networks (NTNs).
FSO backhaul link applications can overcome the installation cost and environmental constraints. As depicted in Fig. 1, the applications include data traffic offloading, coverage extension, and mission-critical services enabled by the flexible deployment of mobile base stations, creating a variety of on-demand cells that constitute heterogeneous networks [2]. In particular, low-altitude and high-altitude platforms could play a vital role in the 6G network, acting as on-demand agile base stations. While concerns may arise regarding the flexible deployment of the aerial platforms and potential interruptions to other radio resources, FSO communications address these issues, making it a promising enabler for future integrated network design.
The successful implementation of long-distance FSO communications largely depends on the performance of pointing, acquisition, and tracking (PAT) systems [3]. A PAT system is fundamental to ensuring the viability of FSO links in 6G networks, where network outages either are not permissible or must be anticipated in advance, yet its design remains challenging [1]. The design of the PAT system has evolved over decades of theoretical and experimental research. However, current technical standards and system-level designs mostly consider satellite communications or static aerial communications [4]. In order to contribute to the broader applicability of FSO communications within dynamic future wireless networks, we introduce a novel high-level approach for pointing-and-acquisition tailored to near-earth 6G NTNs. To the best of our knowledge, this is the first comprehensive study of design issues in bidirectional FSO communications between ground and air nodes from a PAT perspective.
Throughout this article, we thoroughly discuss the following topics.
1. We introduce a baseline PAT system for vertical FSO links equipped with conventional detectors and actuators. The mechanisms of link acquisition and link maintenance for bidirectional communications are defined based on clear foundations.
2. We outline various considerations for the PAT of the vertical FSO link based on a survey of experimental works. The external effects of the atmosphere, internal limitations of aircraft payload, and other factors impact the link-acquisition and link-maintenance processes.
3. We propose novel pointing-and-acquisition algorithms for aircraft communications in 6G. The considerations for bidirectional FSO communications are mitigated through the proposed techniques added to the baseline system. The simulation results show that our method surpasses the baseline method in terms of robustness and agility.
## II PAT for Vertical Aerial FSO Links
There are no specific standardizations for PAT system design, largely because link acquisition and link maintenance predominantly rely on local processing, and the system requirements vary greatly depending on the mission. Nevertheless, a range of experimental studies have effectively demonstrated a variety of promising system architectures. Based on this knowledge, this section presents the baseline PAT system for aerial FSO links, which is reasonably designed using conventional devices and mechanisms to facilitate bidirectional communications. Building on this foundational approach, we further develop a PAT system that ensures fast and robust link connections.
### _Baseline of the Pointing, Acquisition, and Tracking System_
The PAT system consists of open-loop coarse pointing (OLCP), closed-loop coarse pointing (CLCP), and fine tracking [3]. The OLCP is an initial link acquisition process that
relies on the prior positioning information of the terminals. In this stage, open-loop beam control is implemented to locate the other terminal and establish a beacon link connection. Then, the CLCP supports closed-loop link maintenance by utilizing feedback from the receiver image sensor. In order to mitigate pointing disturbances with higher frequencies, the fine-tracking system also employs closed-loop beam control to suppress pointing disturbance over a wider bandwidth than the CLCP. More specifically, we now introduce the baseline PAT algorithm of the ground-air bidirectional link, as depicted in Fig. 2.
#### II-A1 OLCP
When a vertical FSO link is scheduled for a gateway and aircraft, the aircraft first transmits its positioning information to the gateway via RF links. The gateway then sets an area where the aircraft can be located and scans the area with a beacon beam. The aircraft receives the beacon beam through a focal plane array (FPA) to estimate the incident direction of the beam. Generally, charge-coupled devices or complementary metal-oxide-semiconductor cameras are used as FPAs to offer a wide field of view (FoV). The aircraft controls the gimbal through this estimation to transmit the beacon laser to the gateway. By detecting the downlink beacon beam, the gateway also controls the gimbal to align the transmit and receive pointing directions.
#### II-A2 Fine Tracking
If the beam alignment is accurate enough to be within the FoV of the communication detector, both terminals transmit a communication beam. To maintain the link, a quadcell measures the pointing error of the received communication beam in the horizontal and vertical directions, using the difference in the received power in each quadrant. Using the feedback signal from the quadcell, a fast steering mirror (FSM) controller controls the FSM to compensate for the pointing error of both the transmitting and receiving beams. At the quadcell, detecting communication beams using a beam splitter [3] and detecting beacon beams [5] are both possible. As shown in Fig. 2, we adopt communication beam detection in the baseline system. This approach helps avoid wavelength-selective effects and extra calibration errors by implementing misalignment and communication detection within the same wavelength and optical system.
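As a rough illustration of the quadcell feedback described above, the sketch below (our own simplification, not taken from the article) shows the standard sum-difference estimate of the spot offset from the four quadrant powers; the mapping from this normalized offset to an angular error depends on spot size and optics and is only approximately linear near the detector center.

```python
def quadcell_error(p_a, p_b, p_c, p_d):
    """Normalized horizontal/vertical spot-offset estimates from the four
    quadrant powers (A: upper-left, B: upper-right, C: lower-left,
    D: lower-right) of a quadcell."""
    total = p_a + p_b + p_c + p_d
    if total <= 0.0:          # no usable signal -> declare loss of track
        return None
    ex = ((p_b + p_d) - (p_a + p_c)) / total   # horizontal offset
    ey = ((p_a + p_b) - (p_c + p_d)) / total   # vertical offset
    return ex, ey

# Example: a spot displaced toward the right half of the detector.
print(quadcell_error(0.8, 1.3, 0.7, 1.2))
```

In a closed-loop implementation, these two offsets would drive the FSM controller at the fine-tracking loop rate.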
#### II-A3 CLCP
Along with the fine-tracking process, the CLCP process manages the pointing direction of the entire PAT payload using the gimbals. It maintains the link within the FoV of both the beacon and communication receivers and protects the maximum dynamic range of the FSMs by initializing their tilting angles. The CLCP and fine-tracking processes involve trade-offs between tracking accuracy and dynamic range. Consequently, both closed-loop controls are crucial for enabling bidirectional communications, particularly for low-altitude mobile aircraft that require a broad operating angle range and tracking bandwidth due to the extreme transmission conditions [6].
### _Device Considerations_
Candidates such as modulating retroreflectors (MRRs) and liquid crystals are under research as tracking actuators for cost-efficient PAT payloads [3]. However, using these cost-efficient devices makes implementing bidirectional links challenging, as specified in Section III-B. Therefore, we adopt conventional spot position detection and mechanical beam steering methods as the baseline PAT system. The cooperative tracking of fine-tracking and CLCP systems compensates for both short- and long-term angular fluctuations caused by mobility, posture change, and mechanical jitters of the aircraft. Stable suppression of pointing error allows higher received signal power through using narrower communication beams and prevents link outages due to various factors during flight.
### _Hybrid RF/FSO Link_
Hybrid RF/FSO communications are one of the most active research areas in optical wireless communications. The integration of these two different systems offers robustness, flexibility, and extensive communication capacity [7]. Although many works highlight improvements in communication performance, the additional RF link also provides advantages in the PAT aspect. First, as discussed in Section II-A, the RF link allows the real-time exchange of aircraft positioning information, which is essential for link acquisition [3, 8]. Second, the exchange of network-level control information,
Fig. 1: The role of optical wireless backhaul links in 6G networks is illustrated. Low altitude platforms (LAPs), high altitude platforms (HAPs), and satellites can be supported by FSO links to provide connectivity to unserved areas.
such as link switching between gateways or aircraft, can be fully supported by RF links [2]. This ensures reliable and seamless transfer of control plane data between the nodes, while point-to-point FSO links support the exchange of user data at higher data rates. Lastly, RF links can serve as a supplementary data link parallel to FSO communications [7]. Due to these advantages and its necessity to enable flexible link scheduling in aerial networks, our baseline PAT system includes a supplementary RF link.
## III PAT Considerations for Aerial FSO Communications
### _Atmospheric Effects_
Atmospheric conditions significantly impact signal quality in aerial FSO communications. Among various atmospheric effects, attenuation, beam wander, and scintillation can pose major challenges in PAT systems.
#### III-A1 Attenuation
During the propagation of a beam, various molecules and small particles absorb and scatter the electromagnetic wave, leading to attenuation. Following the Beer-Lambert law, these atmospheric effects reduce the signal strength exponentially over distance [6]. Attenuation can vary significantly depending on weather conditions, such as rain, fog, and clear weather, ranging from \(0.5\) to over \(30\) dB/km. Consequently, the power loss of communication and beacon beams can dramatically increase due to weather changes or cloud movements. In such cases, a network management system should command the aircraft to move to a clearer site or schedule other gateways and aircraft to establish a new routing path [2].
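For orientation, the following sketch (ours; the attenuation values are illustrative points within the \(0.5\)-\(30\) dB/km range quoted above) evaluates the Beer-Lambert transmittance over a fixed path length.

```python
import numpy as np

def transmittance(alpha_db_per_km, distance_km):
    """Beer-Lambert transmittance of a homogeneous path; alpha is the
    specific attenuation in dB/km, converted to an extinction coefficient."""
    sigma = alpha_db_per_km * np.log(10.0) / 10.0   # dB/km -> 1/km
    return np.exp(-sigma * distance_km)

for label, alpha in [("clear air", 0.5), ("haze", 4.0), ("fog", 30.0)]:
    t = transmittance(alpha, 2.0)
    print(f"{label:9s}: transmittance {t:.4f} ({10*np.log10(t):6.1f} dB over 2 km)")
```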
#### III-A2 Beam Wander
Beam wander refers to the phenomenon where the beam path shifts due to an eddy larger than the beam size. This random movement of the beacon and communication beam footprints on the receiver plane leads to considerable power loss. Near the Earth's surface, the path alteration due to beam wander extends up to hundreds of microradians [9]. However, the impact of beam wander is less critical at high elevations, where the atmospheric density is significantly lower. Furthermore, the authors of [9] showed that the chromatic effect of beam wander is negligible. This demonstrates that the pointing errors of the beacon beam and communication beam are highly correlated, and the cooperation of coarse pointing and fine tracking can ensure the alignment of the two links concurrently.
#### III-A3 Scintillation
Scintillation is an intensity fluctuation from small eddies causing random intra-beam phase disturbance. It creates rapid and severe fluctuations in the received signal power and significantly affects the signal quality of both beacon and communication beams. Moreover, it is essential to consider that wider beams are less affected by scintillation when determining the beam divergence angle of FSO communication systems [10].
### _Payload Limit_
There are large differences in payload capacity depending on aircraft type and size, which significantly constrains available power and equipment. Thus, various PAT techniques utilizing lightweight and cost-effective devices have been proposed to support aircraft with limited payload capacity.
#### III-B1 Modulating Retroreflectors
Aerial FSO communications using an MRR present a viable solution for aircraft communications. This method, both cost- and energy-efficient, lightens the aircraft payload by eliminating the need for communication and beacon lasers and substituting them with an MRR [8]. When the uplink beam is reflected in the MRR installed on the aircraft, the reflected beam automatically aligns with the direction of the gateway. As a result, the downlink pointing direction is unaffected by the posture instability and jitter of the aircraft. However, this method only allows for unidirectional communication since the payload of the aircraft can only modulate and reflect the incoming beam. Furthermore, a significant power loss occurs during the round-trip propagation of the signal.
#### III-B2 Liquid-Crystal
Liquid-crystal based beam-steering methods utilize an arrangement of a transmissive liquid-crystal layer to modulate the beam direction. This non-mechanical approach offers high precision control and low cost, making it an attractive enabler for PAT systems [6]. However, a key drawback is the limited dynamic angle range of liquid-crystal based modulators, restricted to a few milliradians. It presents challenges in supporting the rapidly changing tracking environment of aircraft communications. Additionally, liquid
Fig. 2: The baseline PAT system for a vertical and bidirectional FSO link is described.
crystals have a relatively slow response time compared to electromechanical devices, which results in inadequate compensation for wide-bandwidth pointing disturbances of aircraft.
#### III-B3 Beaconless PAT
In environments with reliably precise initial positioning and rare link switching, the beaconless PAT system is often the preferred choice. For satellite missions, the Consultative Committee for Space Data Systems (CCSDS) advocates the beaconless PAT approach for energy conservation [4]. Utilizing appropriate detectors and actuators in PAT systems that depend exclusively on communication beams can significantly reduce the weight and operational complexity of the aircraft payload by eliminating the beacon transceivers. While this approach offers substantial cost benefits, the use of a beacon beam with an increased beam divergence is appealing for bidirectional aerial communications, especially when there is high angular movement and dynamic link scheduling.
### _Positioning_
During the OLCP process, a gateway requires precise positioning information of the aircraft. Therefore, it is generally assumed that the global navigation satellite system (GNSS) positioning information of the aircraft is first delivered to the gateway [8]. When the gateway initially transmits the beacon beam to illuminate the aircraft, the scanning area depends on the primary positioning information error and the open-loop pointing error of the gimbal. For rapid backhaul link scheduling and acquisition among multiple ground gateways and aircraft, having precise positioning information for all possible aerial nodes is desirable at potential supporting gateways, whether managed in a centralized or distributed process.
### _Mobility_
The pointing error caused by the movement of the aircraft is referred to as the point-ahead angle (PAA). It is common practice to compensate for the PAA through biased control of the transmit beam using a point-ahead mirror in satellite communications [13]. In aerial FSO communication systems, the PAA can be effectively disregarded since aircraft travel at much lower speeds than satellites. A factor that is more crucial but difficult to analyze is the angular fluctuation resulting from posture changes in the aircraft during flight. It becomes especially critical when the FoV of the optical system and the dynamic range of the tracking actuators are limited. In such situations, a robust CLCP process is essential for maintaining the communication beam within the FoV of the detector. Considering the severe changes in aircraft pointing direction, in Section IV, we propose and evaluate the pointing-and-acquisition algorithms that improve the outage performance and link-acquisition speed.
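For a rough sense of scale (our own illustrative figures, not taken from the article), the PAA of a bidirectional link is approximately twice the transverse velocity divided by the speed of light, so even for an aircraft moving at \(250\) m/s,

\[\theta_{\mathrm{PAA}}\approx\frac{2v_{\perp}}{c}=\frac{2\cdot 250\ \mathrm{m/s}}{3\times 10^{8}\ \mathrm{m/s}}\approx 1.7\ \mu\mathrm{rad},\]

which is far below the \(500\ \mu\mathrm{rad}\) communication beam divergence assumed in the simulations of Section IV; this supports treating the PAA as negligible for aircraft, in contrast to satellites.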
### _Others_
#### III-E1 During OLCP
During the OLCP process, the gateway controls the gimbal and transmits a beacon beam to scan the aircraft. At this stage, open-loop gimbal control introduces pointing errors due to calibration error, step size, mechanical jitter, and thermal deformation [5]. Factors related to the posture instability of the aircraft can be avoided during the OLCP process if the gateway initiates the beacon transmission and the receiver FoV of the aircraft is sufficiently broad.
#### III-E2 During CLCP
During the CLCP process, factors outside the closed-loop control no longer affect the PAT performance. Instead, the feedback accuracy of the FPA measurement results in pointing errors. To be specific, as the payload estimates the incident direction of the received beam, noisy reception at the FPA creates an estimation error called noise equivalent angle (NEA). Also, the closed-loop pointing error occurs when the gimbal controller actuates the gimbal toward the estimated direction. The body pointing error of the aircraft, determined by the accuracy of attitude sensing
and control within the navigation system [11], also impacts the pointing error during the CLCP process. Furthermore, misalignment between the attitude sensor and the beacon transmitter contributes to pointing errors.
#### III-E3 During Fine Tracking
Due to the signal noise at the quadcell, beam spot detection by the quadrants generates an NEA during the fine-tracking process. The control signal is calculated linearly from the output voltage of the quadcell, as illustrated in Fig. 2. However, the relationship between the actual incident direction and the output voltage is nonlinear, leading to a pointing error [9]. In addition, calibration residuals and control errors of the FSM, misalignment between the communication detector and the quadcell, and residual mechanical disturbances of the fine-tracking loop all contribute to the pointing error.
## IV Pointing-and-Acquisition Algorithms and Performance Evaluation
### _Proposed Pointing-and-Acquisition Algorithms_
We propose novel pointing-and-acquisition algorithms for bidirectional FSO communications between a gateway and an aircraft, as illustrated in Fig. 3. The gateway RF module is equipped with either a planar or lens antenna array, allowing for the angle-of-arrival (AoA) estimation using the received RF signals [12]. Moreover, in our proposed system, beacon laser at the aircraft is replaced with multiple passive corner-cube reflectors (CCRs), reflecting the uplink beacon beam back to the gateway [14]. In this system model, the CCRs selectively reflect the beacon beam using chromatic filters to avoid interference between the downlink communication beam and the reflected uplink communication beam. In the following paragraphs, we will describe the operation scheme of the ground and aerial payloads before the FSO communications, during the OLCP, and during the CLCP processes.
#### IV-A1 Before FSO Communications
Network management systems and aircraft decide whether to establish an FSO link between a particular gateway and aircraft based on link availability between the nodes and an efficient routing path for services. Especially when a gateway can serve a limited number of aircraft simultaneously, a link scheduling process by the network management system and rapid link acquisition are necessary. Aircraft can then be flexibly deployed as mobile base stations of the 6G network, supporting high data rates and massive connectivity in unserved, disaster, and temporarily crowded areas. For these applications, we assume that the aircraft continuously exchange control and user data with ground gateways via RF links. The network management system can request FSO connections immediately in this circumstance.
#### IV-A2 OLCP for Link-acquisition
When an FSO link is requested between an aircraft and a gateway, the aircraft transmits its GNSS information via the RF link. In the proposed algorithm, the gateway receives the GNSS information and estimates the AoA of the signal using the antenna array. By integrating the GNSS information with the estimated AoA via the maximum likelihood criterion, the gateway controls the gimbal [12]. It then transmits a beacon laser to scan the aircraft until the aircraft receives the beacon beam through the FPA and aligns the gimbal pointing. In the baseline algorithm, a beacon laser at the aircraft transmits a beacon beam back to the gateway to achieve an alignment of the gateway payload using the FPA detection of the beam. However, in the proposed method, the CCRs deployed around the payload reflect the uplink beacon beam back to the gateway. In other words, the payloads are aligned using FPA detection like the baseline algorithm, but CCRs replace the beacon laser at the aircraft. When the direction of the incident beam falls within the FoV of the communication detector, they initiate communication and start the fine-tracking and CLCP processes.
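The article relies on [12] for the maximum-likelihood combination of GNSS data and RF AoA estimates. As a minimal sketch of such a combination, the snippet below (our own; it assumes independent zero-mean Gaussian bearing errors, which is our simplification) fuses the two bearing estimates by inverse-variance weighting. The numbers in the example are illustrative, with the GNSS bearing error derived from a 5 m position error at a 2 km range.

```python
import numpy as np

def fuse_bearing(gnss_bearing, gnss_std, aoa_bearing, aoa_std):
    """Inverse-variance (maximum-likelihood under independent Gaussian
    errors) fusion of two noisy bearing estimates, in radians."""
    w_gnss, w_aoa = 1.0 / gnss_std**2, 1.0 / aoa_std**2
    fused = (w_gnss * gnss_bearing + w_aoa * aoa_bearing) / (w_gnss + w_aoa)
    fused_std = np.sqrt(1.0 / (w_gnss + w_aoa))
    return fused, fused_std

# 5 m GNSS error at 2 km ~ 2.5 mrad bearing error; assume a 1 mrad AoA error.
print(fuse_bearing(0.0, 2.5e-3, 0.4e-3, 1.0e-3))
```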
#### IV-A3 CLCP for Link-maintenance
The CLCP process compensates for large angular movements of entire payloads using
Fig. 3: The proposed pointing-and-acquisition algorithms for bidirectional FSO communications between the gateway and aircraft are depicted.
gimbals. The gateway transmits a beacon beam and captures the reflected beam, while the aircraft only receives the beam through the FPA. Both sides detect the misalignment and utilize closed-loop gimbal control to keep the received beam spot centered on the FPA. This process also protects the maximum dynamic range of the FSM by initializing the tilted angle of the FSM. When an outage occurs, the terminals can reconstruct the link through the OLCP process. For precise coordination between the RF module and gimbal controller when reconstructing the link, AoA estimation is continuously performed during the FSO communications to ensure accurate mapping between the RF AoA and gimbal control.
### _Advantages of the Proposed Algorithms_
As depicted in Fig. 3, our proposed algorithm introduces two methods. The first combines AoA estimation of downlink RF signals at the gateway with GNSS data, and the second involves passive CCRs placed around the FSO transceiver at the aircraft. These enhancements augment both the stability and the link-acquisition speed, as verified by our simulations.
#### IV-B1 RF AoA Estimation
When GNSS information of the aircraft is acquired by the gateway to point the aircraft during the initial link acquisition, the RF AoA estimation improves the pointing accuracy. The study in [12] reveals that a simple linear combination of GNSS data and AoA estimation can reduce the outage probability by factors ranging from several to hundreds, depending on channel conditions. In our algorithms, we also utilize this approach to restore the link during unexpected outages by continuously implementing the AoA estimation during FSO communications.
#### IV-B2 Deployment of the CCRs at the Aircraft
Deploying multiple passive CCRs around the aircraft transceiver offers numerous benefits. Given that a beacon link does not require high-frequency modulation, uplink beacon signals can be re-purposed for downlink via low-cost passive CCRs. Therefore, it can replace the role of the beacon laser in the aircraft. From an energy-saving perspective, this can achieve the same energy consumption at the aircraft as the recommended beaconless system [4]. In addition, the retroreflective nature of the CCR eliminates the need for downlink beacon beam pointing. In the process of the uplink beacon beam reflecting down, the diverse beams from these CCRs allow the beacon receiver at the gateway to acquire a diversity effect. The results presented in [14] show that this diversity yields great advantages against outages.
### _Performance Metrics_
We categorize outages into two types to evaluate the proposed algorithms from an outage perspective.
#### IV-C1 Fine-tracking Outage
This outage occurs when the misalignment surpasses the FoV of the quadcell, or when the signal power the quadcell receives is too weak due to deep fading or a sudden surge in pointing errors. When the outage occurs, the pointing loss of the communication beam increases as the jitter cannot be compensated by the fine-tracking system. However, the CLCP process can quickly restore the fine-tracking system by adjusting the gimbal pointing.
#### IV-C2 Link Outage
The link outage occurs when the beacon beam is no longer detected at the FPA due to power outages or drastic changes in payload pointing direction. In such cases, the system initiates the OLCP process to reestablish the link. If communication is unavailable due to weather conditions, the two terminals must rely on RF links or have the network management system route the data through other links.
Fig. 4: The real-time link status and pointing loss (PL) of the baseline and proposed algorithms are presented. The proposed method shows shorter link-acquisition time and enhanced outage performance compared to the baseline method.
### _Simulation Results_
During the simulation in MATLAB, we assume that the coherence time of the random channel is \(0.1\) s. For each channel slot, we operate the detectors and tracking actuators of our pointing-and-acquisition systems. We define five states: link request, OLCP process, well-connected, fine-tracking outage, and link-outage. Success in the OLCP process or an outage during the well-connected state leads to state changes for individual links, such as the uplink beacon, downlink beacon, uplink communication, and downlink communication links.
We set the link distance to \(2\) km and the standard deviation of GNSS error to \(5\) m [12]. The parameters related to the PAT devices, the quadcell FoV, FPA FoV, and CLCP loop frequency, are set to \(2\) mrad, \(40\) mrad, and \(1\) Hz, respectively. We assume the standard deviation of the residual errors for the open-loop gimbal control, closed-loop gimbal control, and FSM control to be \(3\) mrad, \(0.3\) mrad, and \(100\,\mu\mathrm{rad}\), respectively. The visibility range of the FSO channel is \(3\) km, and the gamma-gamma fading channel with strong turbulence is assumed [6]. The communication and beacon beam divergence angles are \(500\,\mu\mathrm{rad}\) and \(5\) mrad, respectively. In the retroreflective channel, reciprocity is considered by sampling the correlated uplink and downlink random channels with correlation coefficients of \(0.4\) and \(0.7\). The chosen correlation coefficients are based on wave optical calculations [15]. The positioning estimation for the aircraft is performed using the GNSS information and AoA-estimation results as proposed in [12], and 4 CCRs are circularly deployed around the aircraft payload.
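The article's simulator is implemented in MATLAB and models the full state machine described above. As a much-reduced illustration of how per-slot channel realizations can be drawn, the sketch below (our own; the gamma-gamma parameters and the \(-10\) dB margin are assumptions for illustration, while the \(100\,\mu\mathrm{rad}\) FSM residual and \(500\,\mu\mathrm{rad}\) divergence echo the parameters above) combines gamma-gamma turbulence fading with a Gaussian-beam pointing-loss factor.

```python
import numpy as np

rng = np.random.default_rng(0)

def gamma_gamma_samples(alpha, beta, n):
    """Unit-mean gamma-gamma irradiance samples I = X * Y."""
    x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)
    y = rng.gamma(shape=beta, scale=1.0 / beta, size=n)
    return x * y

def pointing_loss(theta_err, theta_div):
    """Approximate Gaussian-beam pointing loss for angular error theta_err
    and 1/e^2 divergence half-angle theta_div (radians)."""
    return np.exp(-2.0 * (theta_err / theta_div) ** 2)

irr = gamma_gamma_samples(alpha=4.0, beta=2.0, n=100_000)   # strong turbulence (assumed values)
jitter = rng.normal(0.0, 100e-6, size=irr.size)             # 100 urad residual FSM error
channel = irr * pointing_loss(jitter, theta_div=250e-6)     # half of the 500 urad full divergence
print("slots below a -10 dB margin:", np.mean(10 * np.log10(channel) < -10.0))
```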
The simulation results indicate that the proposed algorithm offers a reduced outage probability and a shorter average link-acquisition time. Fig. 4 depicts the link status and pointing loss over a 30-minute mission flight. The mission begins in the link-outage state, with the OLCP process attempting to establish a connection. During the communication phase, link outages mainly occur due to the posture instability of the aircraft. Therefore, the fixed beam pointing of the reflected beam and the spatial diversity effect achieved by multiple CCRs significantly contribute to the stability of the FSO link.
In Fig. 5, we contrast the performance of the proposed algorithm against the baseline algorithm, the baseline algorithm augmented with the AoA estimation technique, and the baseline algorithm enhanced with CCR deployment. The results show that the average number of link outages over a one-hour flight is considerably reduced for both cases, where the correlation coefficient of the retroreflective channel is set to \(0.4\) and \(0.7\), respectively. Moreover, when the AoA-estimation technique is incorporated, the average link-acquisition time significantly decreases due to enhanced positioning accuracy during the OLCP process.
Fig. 6 represents the pointing error distribution of the downlink communication beam during the simulation. When the AoA-estimation method is utilized, the link can quickly recover from a large pointing error. The deployment of CCRs on the aircraft enhances the robustness of the beacon tracking system, thereby minimizing the pointing error. Lastly, the results for our proposed algorithm show that we can significantly suppress the pointing error by combining both techniques.
## V Conclusion
In this article, we highlighted the challenges of vertical FSO communications from the perspective of a PAT. The bidirectional connectivity of vertical links is influenced by various factors, including payload constraints, positioning accuracy, mobility, and other physical limitations of aircraft. Based on the comprehensive investigation, we developed the baseline PAT algorithms and novel pointing-and-acquisition algorithms. The simulation results showed the enhanced performance of the proposed algorithms using AoA estimation and retroreflectors. In conclusion, this article provides valuable insights into aerial FSO communications as an enabler of the future integrated ground-air 6G networks.
|
2309.15611 | A common approach to singular perturbation and homogenization I:
Quasilinear ODE systems | We consider periodic homogenization of boundary value problems for
quasilinear second-order ODE systems in divergence form of the type
$a(x,x/\varepsilon,u(x),u'(x))'= f(x,x/\varepsilon,u(x),u'(x))$ for $x \in
[0,1]$. For small $\varepsilon>0$ we show existence of weak solutions
$u=u_\varepsilon$ as well as their local uniqueness for $\|u-u_0\|_\infty
\approx 0$, where $u_0$ is a given non-degenerate solution to the homogenized
boundary value problem, and we describe the rate of convergence to zero for
$\varepsilon \to 0$ of the homogenization error $\|u_\varepsilon-u_0\|_\infty$.
In particular, we show that this rate depends on the smoothness of the maps
$a(\cdot,y,u,u')$ and $f(\cdot,y,u,u')$.
Our assumptions are, roughly speaking, as follows: The maps
$a,f:[0,1]\times\mathbb{R}\times\mathbb{R}^n\times\mathbb{R}^n\to\mathbb{R}^n$
are continuous, the maps $a(x,y,\cdot,\cdot)$ and $f(x,y,\cdot,\cdot)$ are
$C^1$-smooth, the maps $a(x,\cdot,u,u')$ and $f(x,\cdot,u,u')$ are 1-periodic,
and the maps $a(x,y,u,\cdot)$ are strongly monotone and Lipschitz continuous
uniformly with respect to $x$, $y$ and bounded $u$. No global solution
uniqueness is supposed. Because $x$ is one-dimensional, no correctors and no
cell problems are needed. But, because the problem is nonlinear, we have to
care about commutability of homogenization and linearization.
The main tool of the proofs is an abstract result of implicit function
theorem type which in the past has been applied to singularly perturbed
nonlinear ODEs and elliptic and parabolic PDEs and, hence, which permits a
common approach to existence and local uniqueness results for singularly
perturbed problems and for homogenization problems. | Nikolai N. Nefedov, Lutz Recke | 2023-09-27T12:18:28Z | http://arxiv.org/abs/2309.15611v3 | ###### Abstract
We consider periodic homogenization of boundary value problems for quasilinear second-order ODE systems in divergence form of the type
\[a(x,x/\varepsilon,u(x),u^{\prime}(x))^{\prime}=f(x,x/\varepsilon,u(x),u^{ \prime}(x))\text{ for }x\in[0,1].\]
For small \(\varepsilon>0\) we show existence of weak solutions \(u=u_{\varepsilon}\) as well as their local uniqueness for \(\|u-u_{0}\|_{\infty}\approx 0\), where \(u_{0}\) is a given non-degenerate solution to the homogenized boundary value problem, and we describe the rate of convergence to zero for \(\varepsilon\to 0\) of the homogenization error \(\|u_{\varepsilon}-u_{0}\|_{\infty}\). In particular, we show that this rate depends on the smoothness of the maps \(a(\cdot,y,u,u^{\prime})\) and \(f(\cdot,y,u,u^{\prime})\).
Our assumptions are, roughly speaking, as follows: The maps \(a,f:[0,1]\times\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R} ^{n}\) are continuous, the maps \(a(x,y,\cdot,\cdot)\) and \(f(x,y,\cdot,\cdot)\) are \(C^{1}\)-smooth, the maps \(a(x,\cdot,u,u^{\prime})\) and \(f(x,\cdot,u,u^{\prime})\) are \(1\)-periodic, and the maps \(a(x,y,u,\cdot)\) are strongly monotone and Lipschitz continuous uniformly with respect to \(x\), \(y\) and bounded \(u\). Neither global solution uniqueness is supposed nor \(W^{2,2}\)-regularity of \(u_{0}\).
The main tool of the proofs is an abstract result of implicit function theorem type which in the past has been applied to singularly perturbed nonlinear ODEs and elliptic and parabolic PDEs and, hence, which permits a common approach to existence and local uniqueness results for singularly perturbed problems and for homogenization problems.
**A Common Approach to Singular Perturbation and Homogenization I: Quasilinear ODE Systems**
Nikolay N. Nefedov (Moscow) and Lutz Recke (Berlin)
## 1 Introduction
In this paper we present an abstract result of implicit function theorem type (see Section 2), which in the past has been applied in [5, 6, 7, 16, 20, 22, 23] to singularly perturbed nonlinear ODEs and PDEs and, in Part II [17], to periodic homogenization of semilinear elliptic PDE systems. In the present paper we apply it to describe periodic homogenization for systems of quasilinear second-order ODEs in divergence form of the type
\[a(x,x/\varepsilon,u(x),u^{\prime}(x))^{\prime}=f(x,x/\varepsilon,u(x),u^{ \prime}(x))\text{ for }x\in[0,1] \tag{1.1}\]
with one Dirichlet and one natural boundary condition
\[u(0)=a(1,1/\varepsilon,u(1),u^{\prime}(1))=0 \tag{1.2}\]
as well as with other boundary conditions (see Section 4). Here \(\varepsilon>0\) is the small homogenization parameter, and we look for vector valued solutions \(u:[0,1]\to\mathbb{R}^{n}\). The coefficient functions \(a,f:[0,1]\times\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n}\to\mathbb{R} ^{n}\) are supposed to be \(1\)-periodic with respect to the second argument, i.e.
\[a(x,y+1,u,u^{\prime})=a(x,y,u,u^{\prime})\text{ and }f(x,y+1,u,u^{\prime})=f(x,y,u,u^{ \prime}) \tag{1.3}\]
for all \(x\in[0,1]\), \(y\in\mathbb{R}\) and \(u,u^{\prime}\in\mathbb{R}^{n}\). Further, we suppose that the maps \(a\) and \(f\) are continuous and that their first partial derivatives with respect to the third and fourth arguments exist and are continuous, i.e.
\[a,\partial_{u}a,\partial_{u^{\prime}}a,f,\partial_{u}f,\partial_{u^{\prime}} f\text{ are continuous on }[0,1]\times\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n}. \tag{1.4}\]
Finally, we suppose that the maps \(a(x,y,u,\cdot)\) are strongly monotone and Lipschitz continuous uniformly with respect to \(x\), \(y\) and bounded \(u\), i.e. that there exist constants \(M\geq m>0\) such that for all \(x\in[0,1]\), \(y\in\mathbb{R}\) and \(u,u^{\prime}_{1},u^{\prime}_{2}\in\mathbb{R}^{n}\) with \(\|u\|\leq 1\) we have
\[\left.\begin{array}{l}\big{(}a(x,y,u,u^{\prime}_{1})-a(x,y,u,u^{\prime}_{2})\big{)}\cdot(u^{\prime}_{1}-u^{\prime}_{2})\geq m\|u^{\prime}_{1}-u^{\prime}_{2}\|^{2},\\ \|a(x,y,u,u^{\prime}_{1})-a(x,y,u,u^{\prime}_{2})\|\leq M\|u^{\prime}_{1}-u^{\prime}_{2}\|.\end{array}\right\} \tag{1.5}\]
Here and in what follows we denote by \(v\cdot w\) the Euclidean scalar product of vectors \(v,w\in\mathbb{R}^{n}\), and \(\|v\|:=\sqrt{v\cdot v}\) is the Euclidean norm of the vector \(v\in\mathbb{R}^{n}\).
From assumption (1.5) it follows that for all \(x\in[0,1]\), \(y\in\mathbb{R}\) and \(u\in\mathbb{R}^{n}\) with \(\|u\|\leq 1\) the maps \(a(x,y,u,\cdot)\) are bijective from \(\mathbb{R}^{n}\) onto \(\mathbb{R}^{n}\). We denote
\[b(x,y,u,\cdot):=a(x,y,u,\cdot)^{-1},\mbox{ i.e. }a(x,y,u,b(x,y,u,u^{\prime}))=b(x,y,u,a(x,y,u,u^{ \prime}))=u^{\prime} \tag{1.6}\]
and
\[b_{0}(x,u,u^{\prime}):=\int_{0}^{1}b(x,y,u,u^{\prime})dy \tag{1.7}\]
for all \(x\in[0,1]\), \(y\in\mathbb{R}\) and \(u,u^{\prime}\in\mathbb{R}^{n}\) with \(\|u\|\leq 1\). Also the maps \(b_{0}(x,u,\cdot)\) are strongly monotone and Lipschitz continuous and, hence, bijective from \(\mathbb{R}^{n}\) onto \(\mathbb{R}^{n}\), and we denote
\[a_{0}(x,u,\cdot):=b_{0}(x,u,\cdot)^{-1},\mbox{ i.e. }a_{0}(x,u,b_{0}(x,u,u^{\prime}))=b_{0}(x,u,a_{0}(x,u,u^{ \prime}))=u^{\prime} \tag{1.8}\]
and
\[f_{0}(x,u,u^{\prime}):=\int_{0}^{1}f(x,y,u,b(x,y,u,u^{\prime}))dy \tag{1.9}\]
for \(x\in[0,1]\) and \(u,u^{\prime}\in\mathbb{R}^{n}\) with \(\|u\|\leq 1\), and the boundary value problem
\[a_{0}(x,u(x),u^{\prime}(x))^{\prime}=f_{0}(x,u(x),u^{\prime}(x))\mbox{ for }x\in[0,1],\;u(0)=a_{0}(1,u(1),u^{\prime}(1))=0 \tag{1.10}\]
is the homogenized version of the boundary value problem (1.1)-(1.2).
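To make the definitions (1.6)-(1.9) concrete, the following numerical sketch (our own illustration for the scalar case \(n=1\), with an invented coefficient \(a\) satisfying (1.5); \(f_{0}\) would be obtained analogously via (1.9)) inverts \(a(x,y,u,\cdot)\) pointwise, averages the inverse over one period to obtain \(b_{0}\), and inverts once more to obtain \(a_{0}\). The \(x\)- and \(u\)-dependence is suppressed for brevity.

```python
import numpy as np

def a(y, v):
    # Illustrative coefficient: strongly monotone and Lipschitz in v.
    return (2.0 + np.sin(2.0 * np.pi * y)) * v + np.arctan(v)

def b(y, w, tol=1e-12):
    """Pointwise inverse of a(y, .), cf. (1.6), by Newton's method."""
    v = w / 2.0
    for _ in range(50):
        step = (a(y, v) - w) / ((2.0 + np.sin(2.0 * np.pi * y)) + 1.0 / (1.0 + v * v))
        v -= step
        if abs(step) < tol:
            break
    return v

ys = (np.arange(801) + 0.5) / 801.0           # midpoint rule over one period

def b0(w):
    """Cell average of b(., w), cf. (1.7)."""
    return float(np.mean([b(y, w) for y in ys]))

def a0(v):
    """a0 = b0^{-1}, cf. (1.8), by bisection (b0 is strictly increasing)."""
    lo, hi = -10.0, 10.0                       # bracket, adequate for moderate v
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if b0(mid) < v:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(b0(1.0))        # effective inverse flux at w = 1
print(a0(b0(1.0)))    # recovers approximately 1.0
```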
A vector function \(u\in C^{1}([0,1];\mathbb{R}^{n})\) is called weak solution to (1.1)-(1.2) if it satisfies the Dirichlet boundary condition \(u(0)=0\) and the variational equation
\[\left.\begin{array}{l}\int_{0}^{1}\Big{(}a(x,x/\varepsilon,u(x),u^{\prime} (x))\cdot\varphi^{\prime}(x)+f(x,x/\varepsilon,u(x),u^{\prime}(x))\cdot \varphi(x)\Big{)}dx=0\\ \mbox{ for all }\varphi\in C^{1}([0,1];\mathbb{R}^{n})\mbox{ with }\varphi(0)=0,\end{array}\right\} \tag{1.11}\]
and similarly for the homogenized boundary value problem (1.10) and its linearization (1.12). Weak solutions to (1.1)-(1.2) are not classical solutions, i.e. they are not \(C^{2}\)-smooth, in general, because the maps \(a(\cdot,\cdot,u,u^{\prime})\) are not \(C^{1}\)-smooth, in general.
Now we formulate our result about existence and local uniqueness of weak solutions \(u=u_{\varepsilon}\) to (1.1)-(1.2) with \(\varepsilon\approx 0\), which are close to a given non-degenerate solution \(u=u_{0}\) to (1.10), and about the rate of convergence to zero for \(\varepsilon\to 0\) of the homogenization error \(\|u_{\varepsilon}-u_{0}\|_{\infty}\). Here and in what follows we denote by
\[\|u\|_{\infty}:=\max_{x\in[0,1]}\|u(x)\|\]
the maximum norm in the function space \(C([0,1];\mathbb{R}^{n})\).
**Theorem 1.1**: _Suppose (1.3)-(1.5), and let \(u=u_{0}\) be a weak solution to (1.10) such that \(\|u_{0}\|_{\infty}<1\) and that the linearized boundary value problem_
\[\left.\begin{array}{l}\Big{(}\partial_{u}a_{0}(x,u_{0}(x),u^{\prime}_{0}(x))u(x)+\partial_{u^{\prime}}a_{0}(x,u_{0}(x),u^{\prime}_{0}(x))u^{\prime}(x)\Big{)}^{\prime}\\ =\partial_{u}f_{0}(x,u_{0}(x),u^{\prime}_{0}(x))u(x)+\partial_{u^{\prime}}f_{0}(x,u_{0}(x),u^{\prime}_{0}(x))u^{\prime}(x)\mbox{ for }x\in[0,1],\\ u(0)=\partial_{u}a_{0}(1,u_{0}(1),u^{\prime}_{0}(1))u(1)+\partial_{u^{\prime}}a_{0}(1,u_{0}(1),u^{\prime}_{0}(1))u^{\prime}(1)=0\end{array}\right\} \tag{1.12}\]
_does not have weak solutions \(u\neq 0\). Then the following is true:_
_(i) There exist \(\varepsilon_{0}>0\) and \(\delta>0\) such that for all \(\varepsilon\in(0,\varepsilon_{0}]\) there exists exactly one weak solution \(u=u_{\varepsilon}\) to (1.1)-(1.2) with \(\|u-u_{0}\|_{\infty}\leq\delta\). Moreover,_
\[\|u_{\varepsilon}-u_{0}\|_{\infty}\to 0\mbox{ for }\varepsilon\to 0. \tag{1.13}\]
_(ii) If also_
\[\partial_{x}a,\partial_{x}f\mbox{ exist and are continuous on }[0,1]\times\mathbb{R}\times \mathbb{R}^{n}\times\mathbb{R}^{n}, \tag{1.14}\]
_then_
\[\|u_{\varepsilon}-u_{0}\|_{\infty}=O(\varepsilon)\mbox{ for }\varepsilon \to 0. \tag{1.15}\]
**Remark 1.2**: _Our notion of weak solutions \(u\) to (1.1)-(1.2) does not include the requirement \(u\in W^{1,2}((0,1);\mathbb{R}^{n})\), as usual, but the stronger requirement \(u\in C^{1}([0,1];\mathbb{R}^{n})\). We do this in order to avoid to suppose growth restrictions for the reaction functions \(f(x,y,u,\cdot)\). If we would suppose that the functions \(f(x,y,u,\cdot)\) have linear growth, then any solution \(u\in W^{1,2}((0,1);\mathbb{R}^{n})\) to (1.11) with \(u(0)=0\) would be \(C^{1}\)-smooth and, hence, a weak solution to (1.1)-(1.2) in the sense introduced above._
**Remark 1.3**: _The homogenized version \(a_{0}\) of the map \(a\) depends on \(a\) only (cf. (1.6) and (1.8)), but the homogenized version \(f_{0}\) of the map \(f\) depends not only on \(f\), but also on \(a\) (cf. (1.9)), i.e. the homogenization of \(f\) is "relative to \(a\)". For linear problems this effect is well-known, cf. [2, Remark 1.13.1], [28] and [29, formula (3.9)]._
**Remark 1.4**: _It is easy to verify that the map \(b\), which is defined in (1.6), is continuous and its first partial derivatives with respect to the third and fourth arguments exist and are continuous, that the maps \(b(x,\cdot,u,u^{\prime})\) are 1-periodic, and that_
\[\big{(}b(x,y,u,u^{\prime}_{1})-b(x,y,u,u^{\prime}_{2})\big{)}\cdot(u^{\prime} _{1}-u^{\prime}_{2})\geq\frac{m}{M^{2}}\|u^{\prime}_{1}-u^{\prime}_{2}\|^{2}, \tag{1.16}\]
\[\|b(x,y,u,u^{\prime}_{1})-b(x,y,u,u^{\prime}_{2})\|\leq\frac{1}{m}\|u^{ \prime}_{1}-u^{\prime}_{2}\| \tag{1.17}\]
_for all \(x\in[0,1]\), \(y\in\mathbb{R}\) and \(u,u^{\prime}_{1},u^{\prime}_{2}\in\mathbb{R}^{n}\) with \(\|u\|\leq 1\). Similarly, also the map \(a_{0}\), which is defined in (1.8), is continuous and its first partial derivatives with respect to the third and fourth arguments exist and are continuous, and_
\[(a_{0}(x,u,u^{\prime}_{1})-a_{0}(x,u,u^{\prime}_{2}))\cdot(u^{\prime}_{1}-u^{\prime}_{2}) \geq \frac{m^{3}}{M^{2}}\|u^{\prime}_{1}-u^{\prime}_{2}\|^{2},\] \[\|a_{0}(x,u,u^{\prime}_{1})-a_{0}(x,u,u^{\prime}_{2})\| \leq \frac{M^{2}}{m}\|u^{\prime}_{1}-u^{\prime}_{2}\|\]
_for all \(x\in[0,1]\), \(u,u^{\prime}_{1},u^{\prime}_{2}\in\mathbb{R}^{n}\) with \(\|u\|\leq 1\)._
**Remark 1.5**: _In many applications the maps \(a(x,y,u,\cdot)\) and \(f(x,y,u,\cdot)\) are affine, i.e._
\[a(x,y,u,u^{\prime})=A(x,y,u)u^{\prime}+\bar{a}(x,y,u)\mbox{ and }f(x,y,u,u^{ \prime})=F(x,y,u)u^{\prime}+\bar{f}(x,y,u)\]
_with \(n\times n\)-matrices \(A(x,y,u)\) and \(F(x,y,u)\) and vectors \(\bar{a}(x,y,u)\) and \(\bar{f}(x,y,u)\). Then also the maps \(a_{0}(x,u,\cdot)\) and \(f_{0}(x,u,\cdot)\) are affine, i.e._
\[a_{0}(x,u,u^{\prime})=A_{0}(x,u)u^{\prime}+\bar{a}_{0}(x,u)\mbox{ and }f_{0}(x,u,u^{ \prime})=F_{0}(x,u)u^{\prime}+\bar{f}_{0}(x,u)\]
_with_
\[A_{0}(x,u) := \left(\int_{0}^{1}A(x,y,u)^{-1}dy\right)^{-1},\] \[\bar{a}_{0}(x,u) := \left(\int_{0}^{1}A(x,y,u)^{-1}dy\right)^{-1}\int_{0}^{1}A(x,y,u)^ {-1}\bar{a}(x,y,u)dy,\] \[F_{0}(x,u) := \int_{0}^{1}F(x,y,u)A(x,y,u)^{-1}dy,\] \[\bar{f}_{0}(x,u) := \int_{0}^{1}\left(\bar{f}(x,y,u)-F(x,y,u)A(x,y,u)^{-1}\bar{a}(x,y, u)\right)dy.\]
_Hence, \(A_{0}\) depends on \(A\) only, but \(\bar{a}_{0}\) depends on \(\bar{a}\) and \(A\), \(F_{0}\) depends on \(F\) and \(A\), and \(\bar{f}_{0}\) depends on \(\bar{f}\), \(A\), \(\bar{a}\) and \(F\)._
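As a small numerical illustration of these formulas (our own sketch for \(n=2\), with invented \(1\)-periodic coefficients and the \(x\)- and \(u\)-dependence suppressed), the homogenized coefficients can be computed by quadrature over one period:

```python
import numpy as np

def A(y):
    return np.array([[2.0 + np.cos(2 * np.pi * y), 0.3],
                     [0.3, 3.0 + np.sin(2 * np.pi * y)]])

def abar(y):
    return np.array([np.sin(2 * np.pi * y), 1.0])

def F(y):
    return np.array([[1.0, 0.2 * np.cos(2 * np.pi * y)],
                     [0.0, 1.0]])

def fbar(y):
    return np.array([0.5, np.cos(2 * np.pi * y)])

ys = (np.arange(4000) + 0.5) / 4000.0   # midpoint rule over one period
Ainv = [np.linalg.inv(A(y)) for y in ys]

A0    = np.linalg.inv(np.mean(Ainv, axis=0))
abar0 = A0 @ np.mean([Ai @ abar(y) for Ai, y in zip(Ainv, ys)], axis=0)
F0    = np.mean([F(y) @ Ai for Ai, y in zip(Ainv, ys)], axis=0)
fbar0 = np.mean([fbar(y) - F(y) @ Ai @ abar(y) for Ai, y in zip(Ainv, ys)], axis=0)

print(A0, abar0, F0, fbar0, sep="\n")
```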
**Remark 1.6**: _The assumption of Theorem 1.1, that there do not exist nontrivial weak solutions to (1.12), is rather implicit. But there exist simple explicit sufficient conditions for it. For example, if not only the matrices \(\partial_{u^{\prime}}a_{0}(x,u_{0}(x),u_{0}^{\prime}(x))\) are positive definite (this follows from assumption (1.5)), but also the matrices \(\partial_{u^{\prime}}f_{0}(x,u_{0}(x),u_{0}^{\prime}(x))\), and if the corresponding definiteness coefficients are sufficiently large in comparison with the matrix norms of \(\partial_{u}a_{0}(x,u_{0}(x),u_{0}^{\prime}(x))\) and \(\partial_{u}f_{0}(x,u_{0}(x),u_{0}^{\prime}(x))\), then there do not exist nontrivial weak solutions to (1.12). In order to verify this one can use the formulas_
\[\partial_{u^{\prime}}a_{0}(x,u,u^{\prime})=\left(\int_{0}^{1} \partial_{u^{\prime}}a(x,y,u,u^{\prime})^{-1}dy\right)^{-1},\] \[\partial_{u}a_{0}(x,u,u^{\prime})=\left(\int_{0}^{1}\partial_{u^ {\prime}}a(x,y,u,u^{\prime})^{-1}dy\right)^{-1}\int_{0}^{1}\partial_{u^{ \prime}}a(x,y,u,u^{\prime})^{-1}\partial_{u}a(x,y,u,u^{\prime})dy,\] \[\partial_{u^{\prime}}f_{0}(x,u,u^{\prime})=\int_{0}^{1}\partial_{ u^{\prime}}f(x,y,u,u^{\prime})\partial_{u^{\prime}}a(x,y,u,u^{\prime})^{-1}dy,\] \[\partial_{u}f_{0}(x,u,u^{\prime})=\int_{0}^{1}\left(\partial_{u} f(x,y,u,u^{\prime})-\partial_{u^{\prime}}f(x,y,u,u^{\prime})\partial_{u^{ \prime}}a(x,y,u,u^{\prime})^{-1}\partial_{u}a(x,y,u,u^{\prime})\right)dy.\]
**Remark 1.7**: _The assertions of Theorem 1.1 remain true also in cases where the maps \(a(x,\cdot,u,u^{\prime})\) or \(f(x,\cdot,u,u^{\prime})\) are allowed to be discontinuous, for example, if_
\[a(x,y,u,u^{\prime})=a_{1}(x,u,u^{\prime})a_{2}(y)\text{ and }f(x,y,u,u^{ \prime})=f_{1}(x,u,u^{\prime})f_{2}(y)\]
_with vector functions \(a_{1},f_{1}\in C^{1}([0,1]\times\mathbb{R}^{n}\times\mathbb{R}^{n};\mathbb{ R}^{n})\) and 1-periodic functions \(a_{2},f_{2}\in L^{\infty}(\mathbb{R})\), or, more general, if the maps \((x,u,u^{\prime})\mapsto a(x,\cdot,u,u^{\prime})\) and \((x,u,u^{\prime})\mapsto f(x,\cdot,u,u^{\prime})\) are continuous from \([0,1]\times\mathbb{R}^{n}\times\mathbb{R}^{n}\) into \(L^{\infty}(\mathbb{R})\) (cf. also Remark 3.3). For the case of linear scalar equations see [2, Theorems 6.1 and 6.3] and [30, Theorem 1.2]._
**Remark 1.8**: \(L^{\infty}\)_-estimates of the homogenization error \(u_{\varepsilon}-u_{0}\) exist, to the best of our knowledge, for linear homogenization problems only: For scalar ODEs of the type \(\left(a(x/\varepsilon)u^{\prime}(x)\right)^{\prime}=f(x)\) (with a smooth 1-periodic function \(a:\mathbb{R}\to\mathbb{R}\) and a smooth function \(f:[0,1]\to\mathbb{R}\)) in [18, Section 1], for scalar ODEs with stratified structure of the type \(\left(a(x,\rho(x)/\varepsilon)u^{\prime}(x)\right)^{\prime}=f(x)\) in [30, Theorem 1.2]. For \(L^{\infty}\) homogenization error estimates for scalar linear elliptic PDEs of the type \(\operatorname{div}a(x/\varepsilon)\nabla u(x)=f(x)\) see, e.g. [2, Chapter 2.4] and [15] and for linear elliptic systems [24, Theorem 7.5.1]._
_What concerns existence and local uniqueness for nonlinear homogenization problems (without assumption of global uniqueness) we know only the result [4] for scalar semilinear elliptic PDEs of the type \(\operatorname{div}a(x/\varepsilon)\nabla u(x)=f(x)g(u(x)),\) where the nonlinearity \(g\) is supposed to have a sufficiently small local Lipschitz constant (on an appropriate bounded interval). Let us mention also [13, 14], where existence and local uniqueness for a homogenization problem for the linear Poisson equation with periodic nonlinear Robin boundary conditions is shown. There the specific structure of the problem (no highly oscillating diffusion coefficients) allows to apply the classical implicit function theorem._
**Remark 1.9**: _Consider the quasilinear elliptic PDE \(\operatorname{div}a(x,x/\varepsilon,u(x),\nabla u(x))=f(x)\) with flux function \(a:\Omega\times\mathbb{R}^{d}\times\mathbb{R}\times\mathbb{R}^{d}\to\mathbb{R} ^{d}\) (with \(\Omega\subseteq\mathbb{R}^{d}\)) such that \(a(x,y+e_{j},u,v)=a(x,y,u,v)\) for all \(j=1,\ldots,d\) (\(e_{1}:=(1,0,\ldots,0,0),\ldots,e_{d}:=(0,0,\ldots,0,1)\) is the standard basis in \(\mathbb{R}^{d}\)) and that \(a(x,y,u,\cdot)\) is strongly monotone and Lipschitz continuous uniformly with respect to \(x\), \(y\) and bounded \(u\). The usual formula for the homogenized flux function is (cf., e.g. [8, 11, 21, 27])_
\[a_{0}(x,u,v):=\int_{[0,1]^{d}}a(x,y,u,v+\nabla_{y}w(x,y,u,v))dy, \tag{1.18}\]
_where \(w(x,\cdot,u,v)\) is the solution (which depends parametrically on \(x\), \(u\) and \(v\)) of the cell problem \(div_{y}\ a(x,y,u,v+\nabla_{y}w(x,y,u,v))=0\), \(w(x,y+e_{j},u,v)=w(x,y,u,v)\) and \(\int_{[0,1]^{d}}w(x,y,u,v)dy=0\). In space dimension one, i.e. \(d=1\), this looks as follows:_
\[a_{0}(x,u,v):=\int_{0}^{1}a(x,y,u,v+\partial_{y}w(x,y,u,v))dy \tag{1.19}\]
_and_
\[\left.\begin{array}{l}\frac{d}{dy}a(x,y,u,v+\partial_{y}w(x,y,u,v))=0,\\ w(x,y+1,u,v)=w(x,y,u,v),\;\int_{0}^{1}w(x,y,u,v)dy=0.\end{array}\right\} \tag{1.20}\]
_From (1.20) it follows that \(a(x,y,u,v+\partial_{y}w(x,y,u,v))\) is constant with respect to \(y\). Therefore (1.19) yields that \(a(x,y,u,v+\partial_{y}w(x,y,u,v))=a_{0}(x,u,v)\) and, hence, \(b(x,y,u,a_{0}(x,u,v))=v+\partial_{y}w(x,y,u,v),\) i.e._
\[v=\int_{0}^{1}(v+\partial_{y}w(x,y,u,v))dy=\int_{0}^{1}b(x,y,u,a_{0}(x,u,v)) dy=b_{0}(x,u,a_{0}(x,u,v)).\]
_On the other hand, the solution to (1.20) with \(v=b_{0}(x,u,\bar{v})\) (with arbitrary \(\bar{v}\in\mathbb{R}\)) is_
\[w(x,y,u,b_{0}(x,u,\bar{v}))=\int_{0}^{y}(b(x,z,u,\bar{v})-b_{0}(x,u,\bar{v}) )dz-\int_{0}^{1}\int_{0}^{z_{1}}(b(x,z_{2},u,\bar{v})-b_{0}(x,u,\bar{v}))dz_{ 2}dz_{1}.\]
_Therefore_
\[a_{0}(x,u,b_{0}(x,u,\bar{v})) = \int_{0}^{1}a(x,y,u,b_{0}(x,u,\bar{v})+\partial_{y}w(x,y,u,b_{0}(x,u,\bar{v})))dy\] \[= \int_{0}^{1}a(x,y,u,b(x,y,u,\bar{v}))dy=\bar{v}.\]
_It follows that \(a_{0}(x,u,\cdot)=b_{0}(x,u,\cdot)^{-1}\). In other words: Our definition (1.8) of the homogenized flux function \(a_{0}\) is the same as the usual one for PDEs, i.e. for (1.18), considered in the case \(d=1\). In the linear case this has been shown in [2, Remark 2.3] and [10, Proposition 6.16]. In [2, Remark 5.9] the formulas (1.8) and (1.19) are called dual formulas (there the linear case is considered, but with multidimensional space variable \(x\))._
Our paper is organized as follows: In Section 2 we consider abstract nonlinear parameter depending equations of the type
\[\mathcal{F}_{\varepsilon}(w)=0. \tag{1.21}\]
Here \(\varepsilon>0\) is the parameter. We prove a result on existence and local uniqueness of a family of solutions \(w=w_{\varepsilon}\approx w_{0}\) to (1.21) with \(\varepsilon\approx 0\), where \(w_{0}\) is an approximate solution to (1.21), i.e. an element with \(\mathcal{F}_{\varepsilon}(w_{0})\to 0\) for \(\varepsilon\to 0\), and we estimate the norm of the error \(w_{\varepsilon}-w_{0}\) by the norm of the discrepancy \(\mathcal{F}_{\varepsilon}(w_{0})\). This type of generalized implicit function theorems has been successfully applied to singularly perturbed ODEs and PDEs in [5, 6, 7, 16, 20, 22, 23]). Contrary to the classical implicit function theorem it is not supposed that the linearized operators \(\mathcal{F}^{\prime}_{\varepsilon}(u)\) converge for \(\varepsilon\to 0\) in the uniform operator norm. And, indeed, in the applications to singularly perturbed problems as well as to periodic homogenization problems they do not converge for \(\varepsilon\to 0\) in the uniform operator norm (cf. Remark 3.6 below). Hence, the present paper is a first step (on the ODE level) to create a common approach to existence, local uniqueness and error estimates for singularly perturbed problems and for homogenization problems. In Part II [17] we apply this approach to periodic homogenization for semilinear elliptic PDE systems.
In Section 3 we prove Theorem 1.1 by means of the results of Section 2. For that reason we transform the boundary value problem (1.1)-(1.2) into the system (3.3)-(3.4) of integral equations, and for that system of integral equations we introduce an abstract setting of the type (1.21). For that abstract setting we have to verify the key assumptions (2.1) and (2.3) of Theorem 2.1, and we do this in the Subsections 3.1 and 3.2, respectively.
Finally, in Section 4 we show that Theorem 1.1 remains true also for inhomogeneous natural boundary conditions, but not, in general, for inhomogeneous Neumann boundary conditions. Remark that the difficulties with inhomogeneous Neumann boundary conditions are well-known already for scalar linear problems (see, e.g. [2, Remark 1.2.10 and Section 1.7.1] and [26]). Further, we show how to prove that the assertions of Theorem 1.1 are true also for two Dirichlet boundary conditions.
## 2 An abstract result of implicit function theorem type
In this section we formulate and prove Theorem 2.1 below.
**Theorem 2.1**: _Let be given a Banach space \(W\) with norm \(\|\cdot\|_{W}\), an open set \(W_{0}\subseteq W\), an element \(w_{0}\in W_{0}\) and a family of \(C^{1}\)-maps \(\mathcal{F}_{\varepsilon}:W_{0}\to W\) with \(\varepsilon>0\) as family parameter. Suppose that_
\[\|\mathcal{F}_{\varepsilon}(w_{0})\|_{W}\to 0\text{ for }\varepsilon\to 0. \tag{2.1}\]
_Further, suppose that there exists \(\varepsilon_{0}>0\) such that_
\[\mathcal{F}^{\prime}_{\varepsilon}(w_{0})\text{ is Fredholm of index zero from }W\text{ into }W\text{ for all }\varepsilon\in(0,\varepsilon_{0}], \tag{2.2}\] \[\inf\{\|\mathcal{F}^{\prime}_{\varepsilon}(w_{0})w\|_{W}:\, \varepsilon\in(0,\varepsilon_{0}],\;\|w\|_{W}=1\}=:\alpha>0,\] (2.3) \[\sup_{\|w\|_{W}\leq 1}\|(\mathcal{F}^{\prime}_{\varepsilon}(w_{0} +w_{1})-\mathcal{F}^{\prime}_{\varepsilon}(w_{0}))w\|_{W}\to 0\text{ for } \varepsilon+\|w_{1}\|_{W}\to 0. \tag{2.4}\]
_Then there exist \(\varepsilon_{1}\in(0,\varepsilon_{0}]\) and \(\delta>0\) such that for all \(\varepsilon\in(0,\varepsilon_{1}]\) there exists exactly one \(w=w_{\varepsilon}\in W_{0}\) with \(\mathcal{F}_{\varepsilon}(w)=0\) and \(\|w-w_{0}\|_{W}\leq\delta\). Moreover,_
\[\|w_{\varepsilon}-w_{0}\|_{W}<\frac{2}{\alpha}\|\mathcal{F}_{\varepsilon}(w_{0} )\|_{W}. \tag{2.5}\]
**Proof** Assumptions (2.2) and (2.3) imply that for all \(\varepsilon\in(0,\varepsilon_{0}]\) the operator \(\mathcal{F}_{\varepsilon}^{\prime}(w_{0})\) is an isomorphism from \(W\) onto \(W\) and
\[\left\|\mathcal{F}_{\varepsilon}^{\prime}(w_{0})^{-1}w\right\|_{W}\leq\frac{1}{ \alpha}\|w\|_{W}\text{ for all }w\in W. \tag{2.6}\]
Hence, the map \(\mathcal{G}_{\varepsilon}:W_{0}\to W\),
\[\mathcal{G}_{\varepsilon}(w):=w-\mathcal{F}_{\varepsilon}^{\prime}(w_{0})^{-1 }\mathcal{F}_{\varepsilon}(w)\]
is well-defined. Obviously, \(w\) is a fixed point of \(\mathcal{G}_{\varepsilon}\) if and only if \(\mathcal{F}_{\varepsilon}(w)=0\).
For \(r>0\) denote \(\mathbb{B}_{r}:=\{w\in W:\;\|w-w_{0}\|_{W}\leq r\}.\) We are going to show that for sufficiently small \(\varepsilon>0\) and \(r>0\) the map \(\mathcal{G}_{\varepsilon}\) is strictly contractive from the closed ball \(\mathbb{B}_{r}\) into itself.
In order to verify the strict contractivity of \(\mathcal{G}_{\varepsilon}\) we take \(\varepsilon\in(0,\varepsilon_{0}]\) and \(v,w\in W_{0}\) and estimate as follows:
\[\|\mathcal{G}_{\varepsilon}(v)-\mathcal{G}_{\varepsilon}(w)\|_{W}=\left\|v-w-\mathcal{F}_{\varepsilon}^{\prime}(w_{0})^{-1}(\mathcal{F}_{\varepsilon}(v)-\mathcal{F}_{\varepsilon}(w))\right\|_{W}\] \[=\left\|\mathcal{F}_{\varepsilon}^{\prime}(w_{0})^{-1}\int_{0}^{1}\left(\mathcal{F}_{\varepsilon}^{\prime}(w_{0})-\mathcal{F}_{\varepsilon}^{\prime}(sv+(1-s)w)\right)ds(v-w)\right\|_{W}\] \[\leq\frac{1}{\alpha}\int_{0}^{1}\|\left(\mathcal{F}_{\varepsilon}^{\prime}(w_{0})-\mathcal{F}_{\varepsilon}^{\prime}(sv+(1-s)w)\right)(v-w)\|_{W}ds.\]
Here we used (2.6). Because of assumption (2.4) there exist \(\varepsilon_{1}\in(0,\varepsilon_{0}]\) and \(r_{0}>0\) such that \(\mathbb{B}_{r_{0}}\subset W_{0}\) and \(\|\left(\mathcal{F}_{\varepsilon}^{\prime}(w_{0})-\mathcal{F}_{\varepsilon}^{ \prime}(sv+(1-s)w)\right)(v-w)\|_{W}\leq\frac{\alpha}{2}\|v-w\|_{W}\) for all \(\varepsilon\in(0,\varepsilon_{1}]\), \(s\in[0,1]\) and \(v,w\in\mathbb{B}_{r_{0}}\). Hence,
\[\|\mathcal{G}_{\varepsilon}(v)-\mathcal{G}_{\varepsilon}(w)\|_{W}\leq\frac{1} {2}\|v-w\|_{W}\text{ for all }\varepsilon\in(0,\varepsilon_{1}]\text{ and }v,w\in\mathbb{B}_{r_{0}}. \tag{2.7}\]
Now, let us show that \(\mathcal{G}_{\varepsilon}\) maps \(\mathbb{B}_{r_{0}}\) into \(\mathbb{B}_{r_{0}}\) for all sufficiently small \(\varepsilon>0\). Take \(\varepsilon\in(0,\varepsilon_{1}]\) and \(w\in\mathbb{B}_{r_{0}}\). Then (2.6) and (2.7) imply
\[\left\|\mathcal{G}_{\varepsilon}(w)-w_{0}\right\|_{W}\leq\left\| \mathcal{G}_{\varepsilon}(w)-\mathcal{G}_{\varepsilon}(w_{0})\right\|_{W}+\left\| \mathcal{G}_{\varepsilon}(w_{0})-w_{0}\right\|_{W}\] \[\leq\frac{1}{2}\left\|w-w_{0}\right\|_{W}+\left\|\mathcal{F}_{ \varepsilon}^{\prime}(w_{0})^{-1}\mathcal{F}_{\varepsilon}(w_{0})\right\|_{W} \leq\frac{r_{0}}{2}+\frac{1}{\alpha}\left\|\mathcal{F}_{\varepsilon}(w_{0}) \right\|_{W}.\]
But assumption (2.1) yields that, if \(\varepsilon_{1}\) is taken sufficiently small, for all \(\varepsilon\in(0,\varepsilon_{1}]\) we have \(\|\mathcal{F}_{\varepsilon}(w_{0})\|_{W}\leq\alpha r_{0}/2\). Hence, for those \(\varepsilon\) we get \(\left\|\mathcal{G}_{\varepsilon}(w)-w_{0}\right\|_{W}\leq r_{0}\).
Therefore, Banach's fixed point principle yields the following: For all \(\varepsilon\in(0,\varepsilon_{1}]\) there exists exactly one \(w=w_{\varepsilon}\in\mathbb{B}_{r_{0}}\) with \(\mathcal{F}_{\varepsilon}(w)=0\).
Finally, let us prove (2.5). We take \(\varepsilon\in(0,\varepsilon_{1}]\) and estimate as above:
\[\|w_{\varepsilon}-w_{0}\|_{W}\leq\|\mathcal{G}_{\varepsilon}(w_{\varepsilon})-\mathcal{G}_{\varepsilon}(w_{0})\|_{W}+\|\mathcal{G}_{\varepsilon}(w_{0})-w_{0}\|_{W}\leq\frac{1}{2}\|w_{\varepsilon}-w_{0}\|_{W}+\frac{1}{\alpha}\|\mathcal{F}_{\varepsilon}(w_{0})\|_{W}.\]
Hence, (2.5) is true. \(\blacksquare\)
**Remark 2.2**: _In [5, 6, 7, 16, 20, 22, 23] slightly more general versions of Theorem 2.1 are used, i.e. those with \(\mathcal{F}_{\varepsilon}\) mapping one Banach space into another one, both with \(\varepsilon\)-depending norms. Moreover, there the approximate solutions are allowed to be \(\varepsilon\)-depending, i.e. to be a family of approximate solutions. Hence, these versions of Theorem 2.1 seem to be appropriate for applications to homogenization problems with approximate solutions defined by using correctors of first or higher order (see, e.g. [1, 9, 12])._
_For another result of the type of Theorem 2.1 and its applications to semilinear elliptic PDE systems with numerically determined approximate solutions see [3, Theorem 2.1]._
## 3 Proof of Theorem 1.1
In this section we will prove Theorem 1.1 by means of Theorem 2.1. Hence, all assumptions of Theorem 1.1 (i.e. (1.3)-(1.5), existence of the weak solution \(u=u_{0}\) to (1.10), non-existence of weak solutions \(u\neq 0\) to (1.12)) will be supposed to be satisfied (without mentioning their use). At the places where we use the additional assumption (1.14) of Theorem 1.1(ii), we will mention this.
In order to transform the problem of weak solutions \(u\approx u_{0}\) to the boundary value problem (1.1)-(1.2) into the problem of solutions \(w\approx w_{0}\) to an appropriate operator equation \({\cal F}_{\varepsilon}(w)=0\), we use the notation
\[\mathbb{B}:=\{u\in C([0,1];\mathbb{R}^{n}):\;\|u\|_{\infty}<1\} \tag{3.1}\]
and Lemmas 3.1, 3.2 and 3.4 below.
**Lemma 3.1**: _For all \(\varepsilon>0\) the following is true:_
_(i) If \(u\in\mathbb{B}\) is a weak solution to (1.1)-(1.2) and if \(v\in C([0,1];\mathbb{R}^{n})\) is defined by_
\[v(x):=a(x,x/\varepsilon,u(x),u^{\prime}(x))\mbox{ for }x\in[0,1], \tag{3.2}\]
_then_
\[u(x)=\int_{0}^{x}b(y,y/\varepsilon,u(y),v(y))dy\mbox{ for }x\in[0,1], \tag{3.3}\] \[v(x)=-\int_{x}^{1}f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y)))dy\mbox{ for }x\in[0,1]. \tag{3.4}\]
_(ii) If \((u,v)\in\mathbb{B}\times C([0,1];\mathbb{R}^{n})\) is a solution to (3.3)-(3.4), then \(u\) is a weak solution to (1.1)-(1.2)._
**Proof** (i) Let \(u\in\mathbb{B}\) be a weak solution to (1.1)-(1.2). Take an arbitrary test function \(\varphi\in C^{1}([0,1];\mathbb{R}^{n})\) with \(\varphi(0)=0\). Then (1.11) yields that
\[0 = \int_{0}^{1}\left(a(x,x/\varepsilon,u(x),u^{\prime}(x))\cdot \varphi^{\prime}(x)+f(x,x/\varepsilon,u(x),u^{\prime}(x))\cdot\int_{0}^{x} \varphi^{\prime}(y)dy\right)dx.\] \[= \int_{0}^{1}\left(a(x,x/\varepsilon,u(x),u^{\prime}(x))+\int_{x} ^{1}f(y,y/\varepsilon,u(y),u^{\prime}(y))dy\right)\cdot\varphi^{\prime}(x)dx.\]
Therefore \(a(x,x/\varepsilon,u(x),u^{\prime}(x))+\int_{x}^{1}f(y,y/\varepsilon,u(y),u^{ \prime}(y))dy\) is constant with respect to \(x\). In particular, the function \(x\mapsto a(x,x/\varepsilon,u(x),u^{\prime}(x))\) is \(C^{1}\)-smooth, and
\[a(x,x/\varepsilon,u(x),u^{\prime}(x))^{\prime}=f(x,x/\varepsilon,u(x),u^{ \prime}(x)). \tag{3.5}\]
If \(v\in C([0,1];\mathbb{R}^{n})\) is defined by (3.2), then
\[u^{\prime}(x)=b(x,x/\varepsilon,u(x),v(x)). \tag{3.6}\]
Inserting (3.2) and (3.6) into (3.5) we get (3.4). Moreover, the boundary condition \(u(0)=0\) and (3.6) yield (3.3).
(ii) Let \((u,v)\in\mathbb{B}\times C([0,1];\mathbb{R}^{n})\) be a solution to (3.3)-(3.4). From (3.3) follows \(u(0)=0\) and (3.6), and from (3.6) follows \(v(x)=a(x,x/\varepsilon,u(x),u^{\prime}(x))\). Therefore (3.4) implies \(v(1)=a(1,1/\varepsilon,u(1),u^{\prime}(1))=0\) and \(v^{\prime}(x)=a(x,x/\varepsilon,u(x),u^{\prime}(x))^{\prime}=f(x,x/\varepsilon, u(x),b(x,x/\varepsilon,u(x),v(x)))\). If we multiply this scalarly by an arbitrary test function \(\varphi\in C^{1}([0,1];\mathbb{R}^{n})\) with \(\varphi(0)=0\), integrate with respect to \(x\) and use the boundary condition \(v(1)=0\), then we get (1.11).
The following lemma is the only tool from classical homogenization theory which we are going to use. For related results see, e.g. [18, Lemma 1.1], [24, Proposition 2.2.2], [29, Lemma 3.1]. In Lemma 3.2 below we use the following notation for maps \(g\in C([0,1]\times\mathbb{R};\mathbb{R}^{n})\):
\[\omega_{g}(\varepsilon) := \sup\{\|g(x_{1},y)-g(x_{2},y)\|:\;x_{1},x_{2}\in[0,1],\;y\in \mathbb{R},\;|x_{1}-x_{2}|\leq\varepsilon\}\mbox{ for }\varepsilon>0,\] \[\|g\|_{*} := \sup\{\|g(x,y)\|:\;(x,y)\in[0,1]\times\mathbb{R}\}.\]
**Lemma 3.2**: _Let be given \(g\in C([0,1]\times\mathbb{R};\mathbb{R}^{n})\) such that \(g(x,y+1)=g(x,y)\) for all \(x\in[0,1]\) and \(y\in\mathbb{R}\). Then for all \(x\in[0,1]\) and \(\varepsilon>0\) we have_
\[\left\|\int_{0}^{x}\left(g(y,y/\varepsilon)-\int_{0}^{1}g(y,z)dz\right)dy\right\| \leq 2\left(\omega_{g}(\varepsilon)+\varepsilon\|g\|_{*}\right) \tag{3.7}\]
_and, if the partial derivative \(\partial_{x}g\) exists and is continuous,_
\[\left\|\int_{0}^{x}\left(g(y,y/\varepsilon)-\int_{0}^{1}g(y,z)dz\right)dy \right\|\leq 2\varepsilon\left(\|g\|_{*}+\|\partial_{x}g\|_{*}\right). \tag{3.8}\]
**Proof** Define
\[h(x,y):=g(x,y)-\int_{0}^{1}g(x,z)dz.\]
Then \(h(x,y+1)=h(x,y)\) and \(\int_{y}^{y+1}h(x,z)dz=0\) and \(\omega_{h}(\varepsilon)\leq 2\omega_{g}(\varepsilon)\). Therefore for \(x\in[0,1]\) and \(\varepsilon>0\) we have
\[\int_{0}^{x}\left(g(y,y/\varepsilon)-\int_{0}^{1}g(y,z)dz\right)dy=\int_{0}^{x}h(y,y/\varepsilon)dy=\varepsilon\int_{0}^{x/\varepsilon}h(\varepsilon y,y)dy\] \[=\varepsilon\left(\sum_{j=1}^{[x/\varepsilon]}\int_{j-1}^{j}h(\varepsilon y,y)dy+\int_{[x/\varepsilon]}^{x/\varepsilon}h(\varepsilon y,y)dy\right)\] \[=\varepsilon\left(\sum_{j=1}^{[x/\varepsilon]}\int_{j-1}^{j}\left(h(\varepsilon y,y)-h(\varepsilon j,y)\right)dy+\int_{[x/\varepsilon]}^{x/\varepsilon}h(\varepsilon y,y)dy\right),\]
where \([x/\varepsilon]\) is the integer part of \(x/\varepsilon\), i.e. the largest integer which is not larger than \(x/\varepsilon\). In particular, \(\varepsilon[x/\varepsilon]\leq x\leq 1\).
For \(y\in[j-1,j]\) we have that \(0\leq\varepsilon(j-y)\leq\varepsilon\) and, hence, that \(\|h(\varepsilon y,y)-h(\varepsilon j,y)\|\leq\omega_{h}(\varepsilon)\). Therefore
\[\left\|\int_{0}^{x}\left(g(y,y/\varepsilon)-\int_{0}^{1}g(y,z)dz\right)dy\right\|\leq\varepsilon\left([x/\varepsilon]\omega_{h}(\varepsilon)+\|h\|_{*}\right)\leq 2\left(\omega_{g}(\varepsilon)+\varepsilon\|g\|_{*}\right),\]
i.e. (3.7) is proved.
If \(\partial_{x}g\) exists and is continuous, then \(g(x_{1},y)-g(x_{2},y)=(x_{1}-x_{2})\int_{0}^{1}\partial_{x}g(sx_{1}+(1-s)x_{2},y)ds\), i.e. \(\omega_{g}(\varepsilon)\leq\varepsilon\|\partial_{x}g\|_{*}\). Hence, in that case (3.7) implies (3.8).
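As a simple illustration of Lemma 3.2 (not needed in the sequel), take \(n=1\) and \(g(x,y)=\sin(2\pi y)\). Then \(\int_{0}^{1}g(y,z)dz=0\) and
\[\int_{0}^{x}g(y,y/\varepsilon)dy=\int_{0}^{x}\sin(2\pi y/\varepsilon)dy=\frac{\varepsilon}{2\pi}\left(1-\cos(2\pi x/\varepsilon)\right),\]
which is \(O(\varepsilon)\) uniformly with respect to \(x\in[0,1]\), in accordance with (3.8).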
**Remark 3.3**: _If \(g(x,y)=g_{1}(x)g_{2}(y)\) with \(g_{1}\in C([0,1];\mathbb{R}^{n})\) and \(g_{2}\in L^{\infty}(\mathbb{R})\) or, more general, if the map \(x\mapsto g(x,\cdot)\) is continuous from \([0,1]\) into \(L^{\infty}(\mathbb{R})\), then the assertions of Lemma 3.2 remain true._
Similarly to Lemma 3.1 we get that the function \(u_{0}\), which is by assumption of Theorem 1.1 a weak solution to (1.10), and the function \(v_{0}\in C([0,1];\mathbb{R}^{n})\), which is defined by
\[v_{0}(x):=a_{0}(x,u_{0}(x),u_{0}^{\prime}(x))\mbox{ for }x\in[0,1], \tag{3.9}\]
satisfy
\[\left.\begin{array}{l}u_{0}(x)=\int_{0}^{x}b_{0}(y,u_{0}(y),v_{0}(y))dy,\\ v_{0}(x)=-\int_{x}^{1}f_{0}(y,u_{0}(y),b_{0}(y,u_{0}(y),v_{0}(y)))dy\end{array} \right\}\mbox{ for }x\in[0,1]. \tag{3.10}\]
**Lemma 3.4**: _For all \(\gamma>0\) there exists \(\delta>0\) such that for all \(\varepsilon>0\), \(u\in\mathbb{B}\) and \(v\in C([0,1];\mathbb{R}^{n})\) with (3.4) and \(\varepsilon+\|u-u_{0}\|_{\infty}\leq\delta\) we have \(\|v-v_{0}\|_{\infty}\leq\gamma\)._
**Proof** Take arbitrary \(\varepsilon>0\), \(u\in\mathbb{B}\) and \(v\in C([0,1];\mathbb{R}^{n})\). Because of (3.4) and (3.10) we have
\[v(x)-v_{0}(x)=\int_{x}^{1}\left(f_{0}(y,u_{0}(y),b_{0}(y,u_{0}(y),v_{0}(y)))-f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y)))\right)dy\] \[=\alpha_{\varepsilon}(x)+\alpha_{\varepsilon,u}(x)+\alpha_{\varepsilon,u,v}(x)\]
with
\[\alpha_{\varepsilon}(x):=\int_{x}^{1}\left(f_{0}(y,u_{0}(y),b_{0}(y,u_{0}(y),v_{0}(y)))-f(y,y/\varepsilon,u_{0}(y),b(y,y/\varepsilon,u_{0}(y),v_{0}(y)))\right)dy,\] \[\alpha_{\varepsilon,u}(x):=\int_{x}^{1}\left(f(y,y/\varepsilon,u_{0}(y),b(y,y/\varepsilon,u_{0}(y),v_{0}(y)))-f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v_{0}(y)))\right)dy,\] \[\alpha_{\varepsilon,u,v}(x):=\int_{x}^{1}\left(f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v_{0}(y)))-f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y)))\right)dy.\]
From (1.9) it follows that \(\alpha_{\varepsilon}(x)\) equals
\[\int_{x}^{1}\left(\int_{0}^{1}f(y,z,u_{0}(y),b(y,z,u_{0}(y),v_{0}(y)))dz-f(y, y/\varepsilon,u_{0}(y),b(y,y/\varepsilon,u_{0}(y),v_{0}(y))\right)dy,\]
and Lemma 3.2 with \(g(x,y)=f(x,y,u_{0}(x),b(x,y,u_{0}(x),v_{0}(x)))\) yields that \(\|\alpha_{\varepsilon}\|_{\infty}\to 0\) for \(\varepsilon\to 0\). Further, we have
\[\alpha_{\varepsilon,u}(x)=\int_{x}^{1}\int_{0}^{1}\Big{(}\partial_{u}f(y,y/\varepsilon,u_{s}(y),b(y,y/\varepsilon,u_{s}(y),v_{0}(y)))\] \[\quad+\partial_{u^{\prime}}f(y,y/\varepsilon,u_{s}(y),b(y,y/\varepsilon,u_{s}(y),v_{0}(y)))\,\partial_{u}b(y,y/\varepsilon,u_{s}(y),v_{0}(y))\Big{)}ds\,(u_{0}(y)-u(y))dy\]
with \(u_{s}(y):=su_{0}(y)+(1-s)u(y)\). Hence, \(\|\alpha_{\varepsilon,u}\|_{\infty}\leq\mbox{const}\|u-u_{0}\|_{\infty}\), where the constant does not depend on \(\varepsilon\) and \(u\). Finally, we have
\[\alpha_{\varepsilon,u,v}(x)=\int_{x}^{1}\int_{0}^{1}\partial_{u^{\prime}}f(y, y/\varepsilon,u(y),b_{s}(y))ds\,(b(y,y/\varepsilon,u(y),v_{0}(y))-b(y,y/ \varepsilon,u(y),v(y)))dy\]
with \(b_{s}(y):=sb(y,y/\varepsilon,u(y),v_{0}(y))+(1-s)b(y,y/\varepsilon,u(y),v(y))\). Hence, because of (1.17) there exists a constant \(c>0\), which does not depend on \(\varepsilon\), \(x\), \(u\) and \(v\), such that
\[\|\alpha_{\varepsilon,u,v}\|\leq c\int_{x}^{1}\|v(y)-v_{0}(y)\|dy.\]
It follows that \(\|v(x)-v_{0}(x)\|\leq\|\alpha_{\varepsilon}\|_{\infty}+\|\alpha_{\varepsilon,u}\|_{\infty}+c\int_{x}^{1}\|v(y)-v_{0}(y)\|dy\), and Gronwall's inequality yields that \(\|v(x)-v_{0}(x)\|\leq\left(\|\alpha_{\varepsilon}\|_{\infty}+\|\alpha_{\varepsilon,u}\|_{\infty}\right)\exp\left(c(1-x)\right),\) i.e.
\[\|v-v_{0}\|_{\infty}\to 0\text{ for }\varepsilon+\|u-u_{0}\|_{\infty}\to 0.\]
Now we are going to apply Theorem 2.1 in order to solve the boundary value problem (1.1)-(1.2) with \(\varepsilon\approx 0\) and \(\|u-u_{0}\|_{\infty}\approx 0\). We introduce the setting of Theorem 2.1 as follows:
\[W:=C([0,1];\mathbb{R}^{n})^{2},\ \|(u,v)\|_{W}:=\|u\|_{\infty}+\|v \|_{\infty},\ W_{0}:=\mathbb{B}\times C([0,1];\mathbb{R}^{n}),\ w_{0}:=(u_{0},v _{0}),\] \[\mathcal{F}_{\varepsilon}(u,v)=(\mathcal{U}_{\varepsilon}(u,v), \mathcal{V}_{\varepsilon}(u,v))\]
with
\[[\mathcal{U}_{\varepsilon}(u,v)](x) :=u(x)-\int_{0}^{x}b(y,y/\varepsilon,u(y),v(y))dy\] \[[\mathcal{V}_{\varepsilon}(u,v)](x) :=v(x)+\int_{x}^{1}f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y), v(y)))dy.\]
Here \(\mathbb{B}\) is the open ball in \(C([0,1];\mathbb{R}^{n})\) defined in (3.1), \(u_{0}\) is the weak solution to the homogenized boundary value problem (1.10), which is given by assumption of Theorem 1.1, and \(v_{0}\) is defined in (3.9).
Because of Lemmas 3.1, 3.2 and 3.4 we have the following: If \(u\in\mathbb{B}\) is a weak solution to (1.1)-(1.2) with \(\varepsilon\approx 0\) and \(\|u-u_{0}\|_{\infty}\approx 0\), then there exists \(v\in C([0,1];\mathbb{R}^{n})\) with \(\|v-v_{0}\|_{\infty}\approx 0\) such that \(\mathcal{F}_{\varepsilon}(u,v)=0\). And if \((u,v)\in W_{0}\) satisfies \(\mathcal{F}_{\varepsilon}(u,v)=0\) with \(\varepsilon\approx 0\) and \(\|u-u_{0}\|_{\infty}+\|v-v_{0}\|_{\infty}\approx 0\), then \(u\) is a weak solution to (1.1)-(1.2). Moreover, if all the assumptions of Theorem 2.1, i.e. (2.1)-(2.4), are satisfied in the setting introduced above, then Theorem 2.1 yields the assertions of Theorem 1.1(i), in particular (2.5) yields (1.13). If, moreover,
\[\text{assumption (1.14) implies that }\|\mathcal{F}_{\varepsilon}(u_{0},v_{0})\|_{W}=O(\varepsilon)\text{ for }\varepsilon\to 0 \tag{3.11}\]
in the setting introduced above, then the assertion (2.5) of Theorem 2.1 yields also assertion (1.15) of Theorem 1.1.
Hence, it remains to verify the assumptions (2.1)-(2.4) of Theorem 2.1 and the assertion (3.11) in the setting introduced above.
### Verification of (2.1) and (3.11)
Because of (1.7), (1.9) and (3.10) we have
\[[\mathcal{U}_{\varepsilon}(u_{0},v_{0})](x)=\int_{0}^{x}\left(\int_{0}^{1}b(y, z,u_{0}(y),v_{0}(y))dz-b(y,y/\varepsilon,u_{0}(y),v_{0}(y))\right)dy\]
and
\[[\mathcal{V}_{\varepsilon}(u_{0},v_{0})](x)\] \[=\int_{x}^{1}\Big{(}f(y,y/\varepsilon,u_{0}(y),b(y,y/\varepsilon,u_{0}(y),v_{0}(y)))-\int_{0}^{1}f(y,z,u_{0}(y),b(y,z,u_{0}(y),v_{0}(y)))dz\Big{)}dy.\]
Hence, Lemma 3.2 with
\[g(x,y)=b(x,y,u_{0}(x),v_{0}(x))\text{ and }g(x,y)=f(x,y,u_{0}(x),b(x,y,u_{0}(x),v_{0}(x))),\]
respectively, yields for \(\varepsilon\to 0\) that
\[\|\mathcal{U}_{\varepsilon}(u_{0},v_{0})\|_{\infty}+\|\mathcal{V}_{\varepsilon}(u_{0},v_{0})\|_{\infty}=\left\{\begin{array}{l}o(1),\\ O(\varepsilon),\text{ if (1.14) is satisfied}.\end{array}\right.\]
### Verification of (2.2)-(2.4)
We have
\[[\mathcal{F}^{\prime}_{\varepsilon}(u,v)](\bar{u},\bar{v})=(\partial_{u} \mathcal{U}_{\varepsilon}(u,v)\bar{u}+\partial_{v}\mathcal{U}_{\varepsilon}( u,v)\bar{v},\partial_{u}\mathcal{V}_{\varepsilon}(u,v)\bar{u}+\partial_{v} \mathcal{V}_{\varepsilon}(u,v)\bar{v})\]
with
\[[\partial_{u}\mathcal{U}_{\varepsilon}(u,v)\bar{u}](x)=\bar{u}(x)-\int_{0}^{x }\partial_{u}b(y,y/\varepsilon,u(y),v(y))\bar{u}(y)dy,\]
\[[\partial_{v}\mathcal{U}_{\varepsilon}(u,v)\bar{v}](x)=-\int_{0}^{x}\partial _{u^{\prime}}b(y,y/\varepsilon,u(y),v(y))\bar{v}(y)dy,\]
\[[\partial_{u}\mathcal{V}_{\varepsilon}(u,v)\bar{u}](x)=\int_{x}^{1}\Big{(}\partial_{u}f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y)))\]
\[+\partial_{u^{\prime}}f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y)))\,\partial_{u}b(y,y/\varepsilon,u(y),v(y))\Big{)}\bar{u}(y)dy,\]
\[[\partial_{v}\mathcal{V}_{\varepsilon}(u,v)\bar{v}](x)=\bar{v}(x)\]
\[+\int_{x}^{1}\partial_{u^{\prime}}f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y)))\,\partial_{u^{\prime}}b(y,y/\varepsilon,u(y),v(y))\bar{v}(y)dy.\]
In order to verify assumption (2.4) of Theorem 2.1 we calculate as follows:
\[[(\partial_{u}\mathcal{U}_{\varepsilon}(u_{0},v_{0})-\partial_{u}\mathcal{U}_{\varepsilon}(u_{1},v_{1}))\bar{u}](x)=\int_{0}^{x}\left(\partial_{u}b(y,y/\varepsilon,u_{1}(y),v_{1}(y))-\partial_{u}b(y,y/\varepsilon,u_{0}(y),v_{0}(y))\right)\bar{u}(y)dy.\]
But \(\partial_{u}b\) is uniformly continuous on \(\{(x,y,u,v)\in[0,1]\times\mathbb{R}\times\mathbb{R}^{n}\times\mathbb{R}^{n}:\ \|u\|\leq 1,\ \|v\|\leq R\}\) for all \(R>0\). Hence,
\[\left\|\left(\partial_{u}b(y,y/\varepsilon,u_{0}(y),v_{0}(y))-\partial_{u}b(y,y/\varepsilon,u_{1}(y),v_{1}(y))\right)\bar{u}(y)\right\|\to 0\text{ for }\|u_{0}-u_{1}\|_{\infty}+\|v_{0}-v_{1}\|_{\infty}\to 0\]
uniformly with respect to \(\varepsilon>0\), \(y\in[0,1]\) and \(\|\bar{u}\|_{\infty}\leq 1\). Similarly one can estimate the terms \((\partial_{v}\mathcal{U}_{\varepsilon}(u_{0},v_{0})-\partial_{v}\mathcal{U}_{\varepsilon}(u_{1},v_{1}))\bar{v}\), \((\partial_{u}\mathcal{V}_{\varepsilon}(u_{0},v_{0})-\partial_{u}\mathcal{V}_{\varepsilon}(u_{1},v_{1}))\bar{u}\) and \((\partial_{v}\mathcal{V}_{\varepsilon}(u_{0},v_{0})-\partial_{v}\mathcal{V}_{\varepsilon}(u_{1},v_{1}))\bar{v}\).
Further, for any \((u,v)\in W\) we have that \(\mathcal{F}^{\prime}_{\varepsilon}(u,v)-I\) (\(I\) is the identity in \(W\)) is a linear bounded operator from \(W\) into \(C^{1}([0,1];\mathbb{R}^{n})^{2}\), where \(C^{1}([0,1];\mathbb{R}^{n})\) is equipped with its usual norm \(\|u\|_{\infty}+\|u^{\prime}\|_{\infty}\). Hence, the Arzela-Ascoli Theorem yields that for any \((u,v)\in W\) the operator \(\mathcal{F}^{\prime}_{\varepsilon}(u,v)-I\) is compact from \(W\) into \(W\), and, therefore, the operator \(\mathcal{F}^{\prime}_{\varepsilon}(u,v)\) is Fredholm of index zero from \(W\) into \(W\), i.e. (2.2) is satisfied.
Now, let us verify (2.3).
Suppose that (2.3) is not true, i.e. that it is not true that there exists \(\varepsilon_{0}>0\) such that \(\inf\{\|\mathcal{F}^{\prime}_{\varepsilon}(w_{0})w\|_{W}:\ \varepsilon\in(0, \varepsilon_{0}],w\in W,\|w\|_{W}=1\}>0\). Then there exist sequences \(\varepsilon_{1},\varepsilon_{2},\ldots>0\) and \(u_{1},u_{2},\ldots\in C([0,1];\mathbb{R}^{n})\) and \(v_{1},v_{2},\ldots\in C([0,1];\mathbb{R}^{n})\) such that
\[\lim_{k\to\infty}\varepsilon_{k}=0, \tag{3.12}\] \[\lim_{k\to\infty}\|\partial_{u}\mathcal{U}_{\varepsilon_{k}}(u_{0},v_{0})u_{k}+\partial_{v}\mathcal{U}_{\varepsilon_{k}}(u_{0},v_{0})v_{k}\|_{\infty}=0,\] \[\lim_{k\to\infty}\|\partial_{u}\mathcal{V}_{\varepsilon_{k}}(u_{0},v_{0})u_{k}+\partial_{v}\mathcal{V}_{\varepsilon_{k}}(u_{0},v_{0})v_{k}\|_{\infty}=0,\]
but
\[\|u_{k}\|_{\infty}+\|v_{k}\|_{\infty}=1\text{ for all }k\in\mathbb{N}. \tag{3.13}\]
Denote
\[\bar{u}_{k}(x):=\int_{0}^{x}\Big{(}\partial_{u}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))u_{k}(y)+\partial_{u^{\prime}}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))v_{k}(y)\Big{)}dy, \tag{3.14}\] \[\bar{v}_{k}(x):=\int_{x}^{1}\Big{(}\big{(}\partial_{u}f(y,y/\varepsilon_{k},u_{0}(y),b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y)))\] \[\quad+\partial_{u^{\prime}}f(y,y/\varepsilon_{k},u_{0}(y),b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y)))\,\partial_{u}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))\big{)}u_{k}(y)\] \[\quad+\partial_{u^{\prime}}f(y,y/\varepsilon_{k},u_{0}(y),b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y)))\,\partial_{u^{\prime}}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))v_{k}(y)\Big{)}dy.\]
Then
\[\sup_{k\in\mathbb{N}}\big{(}\|\bar{u}^{\prime}_{k}\|_{\infty}+\|\bar{v}^{ \prime}_{k}\|_{\infty}\big{)}<\infty. \tag{3.15}\]
Hence, because of the Arzela-Ascoli Theorem without loss of generality we may assume that there exist \(\bar{u}_{0},\bar{v}_{0}\in C([0,1];\mathbb{R}^{n})\) such that
\[\lim_{k\to\infty}\big{(}\|\bar{u}_{k}-\bar{u}_{0}\|_{\infty}+\|\bar{v}_{k}- \bar{v}_{0}\|_{\infty}\big{)}=0. \tag{3.16}\]
But we have that
\[u_{k}=\bar{u}_{k}+\partial_{u}\mathcal{U}_{\varepsilon_{k}}(u_{0},v_{0})u_{k}+\partial_{v}\mathcal{U}_{\varepsilon_{k}}(u_{0},v_{0})v_{k},\] \[v_{k}=\bar{v}_{k}-\partial_{u}\mathcal{V}_{\varepsilon_{k}}(u_{0},v_{0})u_{k}+\partial_{v}\mathcal{V}_{\varepsilon_{k}}(u_{0},v_{0})v_{k}.\]
Hence, (3.12) yields that \(\lim_{k\to\infty}\big{(}\|u_{k}-\bar{u}_{0}\|_{\infty}+\|v_{k}-\bar{v}_{0}\|_ {\infty}\big{)}=0\), and (3.13) implies that
\[\|\bar{u}_{0}\|_{\infty}+\|\bar{v}_{0}\|_{\infty}=1. \tag{3.17}\]
We are going to show that (3.16) and (3.17) lead to a contradiction. Because of (1.7) we have for all \(x\in[0,1]\) that
\[\int_{0}^{x}\left(\partial_{u}b_{0}(y,u_{0}(y),v_{0}(y))\bar{u}_{0}(y)+\partial_{u^{\prime}}b_{0}(y,u_{0}(y),v_{0}(y))\bar{v}_{0}(y)\right)dy\] \[=\int_{0}^{x}\left(\partial_{u}\int_{0}^{1}b(y,z,u_{0}(y),v_{0}(y))dz\;\bar{u}_{0}(y)+\partial_{u^{\prime}}\int_{0}^{1}b(y,z,u_{0}(y),v_{0}(y))dz\;\bar{v}_{0}(y)\right)dy\] \[=\int_{0}^{x}\int_{0}^{1}\left(\partial_{u}b(y,z,u_{0}(y),v_{0}(y))\bar{u}_{0}(y)+\partial_{u^{\prime}}b(y,z,u_{0}(y),v_{0}(y))\bar{v}_{0}(y)\right)dz\,dy,\]
and because of Lemma 3.2 with \(g(x,y)=\partial_{u}b(x,y,u_{0}(x),v_{0}(x))\bar{u}_{0}(x)+\partial_{u^{\prime} }b(x,y,u_{0}(x),v_{0}(x))\bar{v}_{0}(x)\) this equals to
\[\lim_{k\to\infty}\int_{0}^{x}\left(\partial_{u}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))\bar{u}_{0}(y)+\partial_{u^{\prime}}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))\bar{v}_{0}(y)\right)dy,\]
and because of (3.16) this is
\[\lim_{k\to\infty}\int_{0}^{x}\left(\partial_{u}b(y,y/\varepsilon_{k},u_{0}(y), v_{0}(y))\bar{u}_{k}(y)+\partial_{u^{\prime}}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y)) \bar{v}_{k}(y)\right)dy,\]
and, finally, because of (3.14) and (3.16) this equals \(\lim_{k\to\infty}\bar{u}_{k}(x)=\bar{u}_{0}(x)\). We end up with
\[\bar{u}_{0}(x)=\int_{0}^{x}\Big{(}\partial_{u}b_{0}(y,u_{0}(y),v_{0}(y))\bar{u}_ {0}(y)+\partial_{u^{\prime}}b_{0}(y,u_{0}(y),v_{0}(y))\bar{v}_{0}(y)\Big{)}dy. \tag{3.18}\]
Similarly one shows that
\[\bar{v}_{0}(x) =-\int_{0}^{x}\Big{(}\big{(}\partial_{u}f_{0}(y,u_{0}(y),b_{0}(y, u_{0}(y),v_{0}(y))\] \[\qquad+\partial_{u^{\prime}}f_{0}(y,u_{0}(y),b_{0}(y,u_{0}(y),v_{ 0}(y))\partial_{u}b_{0}(y,u_{0}(y),v_{0}(y))\big{)}\bar{u}_{0}(y)\] \[\qquad+\partial_{u^{\prime}}f_{0}(y,u_{0}(y),b_{0}(y,u_{0}(y),v_{ 0}(y))\partial_{u^{\prime}}b_{0}(y,u_{0}(y),v_{0}(y))\bar{v}_{0}(y)\Big{)}dy\]
and similarly to Lemma 3.1 it follows that \(\bar{u}_{0}\) is a weak solution to the linearized boundary value problem (1.12). Therefore, the assumption of Theorem 1.1, that (1.12) has no nontrivial weak solutions, implies that \(\bar{u}_{0}=0\). Then (3.18) implies that \(\partial_{u^{\prime}}b_{0}(x,u_{0}(x),v_{0}(x))\bar{v}_{0}(x)=0\) for all \(x\in[0,1]\). But (1.16) yields that
\[\partial_{u^{\prime}}b_{0}(x,u_{0}(x),v_{0}(x))v\cdot v\geq\frac{m}{M^{2}} \|v\|^{2} \tag{3.19}\]
for all \(x\in[0,1]\) and \(v\in\mathbb{R}^{n}\). We get that \(\bar{v}_{0}=0\), which contradicts (3.17).
**Remark 3.5**: _In the proof above we used the well-known fact (cf., e.g. [19]) that the operations linearization and homogenization commute._
**Remark 3.6**: _It is easy to verify that the linear operators \(\mathcal{F}^{\prime}_{\varepsilon}(u,v)\) do not converge for \(\varepsilon\to 0\) in the uniform operator norm in \(\mathcal{L}(W)\), in general. For example, \(\int_{0}^{x}\partial_{u}b(y,y/\varepsilon,u(y),v(y))\bar{u}(y)dy\) (with fixed \(u,v\in C([0,1];\mathbb{R}^{n})\)) does not converge for \(\varepsilon\to 0\) uniformly with respect to \(x\in[0,1]\) and \(\bar{u}\in C([0,1];\mathbb{R}^{n})\) with \(\|\bar{u}\|_{\infty}\leq 1\), in general. But in the proof above a subsequence of the sequence \(\int_{0}^{x}\partial_{u}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))\bar{u}_{k}(y)dy\) converges for \(k\to\infty\) uniformly with respect to \(x\in[0,1]\), and this is because of (3.15)._
## 4 Other boundary conditions
### Inhomogeneous boundary conditions
If the homogeneous natural boundary condition in (1.2) is replaced by a corresponding inhomogeneous one, i.e.
\[u(0)=0,\;a(1,1/\varepsilon,u(1),u^{\prime}(1))=u^{1}, \tag{4.1}\]
then the weak formulation of the boundary value problem (1.1),(4.1) is as follows: Find \(u\in W^{1,2}((0,1);\mathbb{R}^{n})\) such that \(u(0)=0\) and
\[\int_{0}^{1}\Big{(}a(x,x/\varepsilon,u(x),u^{\prime}(x))\cdot \varphi^{\prime}(x)+f(x,x/\varepsilon,u(x),u^{\prime}(x))\cdot\varphi(x) \Big{)}dx=u^{1}\cdot\varphi(1)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\text{for all }\varphi\in C^{1}([0,1];\mathbb{R}^{n})\text{ with }\varphi(0)=0.\]
The system (3.3)-(3.4) of integral equations has to be replaced by
\[u(x) =\int_{0}^{x}b(y,y/\varepsilon,u(y),v(y))dy,\] \[v(x) =u^{1}-\int_{x}^{1}f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y), v(y)))dy,\]
and the results of Theorem 1.1 remain unchanged.
But if the inhomogeneous natural boundary condition in \(x=1\) in (4.1) is replaced by an inhomogeneous Neumann condition, i.e. \(u(0)=u^{0},\ u^{\prime}(1)=u^{1}\neq 0\), then, in general, there do not exist solution families \(u=u_{\varepsilon}\) which converge pointwise for \(\varepsilon\to 0\). For example, the solution to the scalar linear boundary value problem
\[\left(\frac{u^{\prime}(x)}{2+\sin(2\pi x/\varepsilon)}\right)^{\prime}=1\ \mbox{for}\ x\in[0,1],\ u(0)=0,u^{\prime}(1)=u^{1} \tag{4.2}\]
is
\[u_{\varepsilon}(x)=\int_{0}^{x}(2+\sin(2\pi y/\varepsilon))\left(\frac{u^{1}} {2+\sin(2\pi/\varepsilon)}+y-1\right)dy.\]
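For the reader's convenience we note how this formula arises: integrating (4.2) once and determining the constant of integration from \(u^{\prime}(1)=u^{1}\) gives
\[\frac{u_{\varepsilon}^{\prime}(x)}{2+\sin(2\pi x/\varepsilon)}=x-1+\frac{u^{1}}{2+\sin(2\pi/\varepsilon)},\]
and a second integration together with \(u(0)=0\) yields the expression above.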
Hence,
\[u_{\varepsilon}(1)=-1+\frac{2u^{1}}{2+\sin(2\pi/\varepsilon)}+O(\varepsilon) \ \mbox{for}\ \varepsilon\to 0,\]
i.e. \(u_{\varepsilon}(1)\) does not converge for \(\varepsilon\to 0\) if \(u^{1}\neq 0\).
**Remark 4.1**: _Consider the scalar linear boundary value problem (4.2) with \(u^{1}=0\). Then_
\[u_{\varepsilon}(1)=\int_{0}^{1}(2+\sin(2\pi y/\varepsilon))(y-1)dy=-1-\frac{ \varepsilon}{2\pi}+O(\varepsilon^{2})\ \mbox{for}\ \varepsilon\to 0,\]
_and the solution to the corresponding homogenized problem \(\frac{1}{2}u^{\prime\prime}(x)=1\) for \(x\in[0,1]\), \(u(0)=u^{\prime}(1)=0\) is \(u_{0}(x)=x(x-2)\). Hence,_
\[u_{\varepsilon}(1)-u_{0}(1)=-\frac{\varepsilon}{2\pi}+O(\varepsilon^{2})\ \mbox{for}\ \varepsilon\to 0,\]
_i.e. the asymptotic error estimate (1.15) of Theorem 1.1 is sharp._
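The \(\varepsilon/(2\pi)\)-term in Remark 4.1 comes from a single integration by parts: since \(\int_{0}^{1}2(y-1)dy=-1\), it remains to observe that
\[\int_{0}^{1}\sin(2\pi y/\varepsilon)(y-1)dy=\Big{[}-\frac{\varepsilon}{2\pi}\cos(2\pi y/\varepsilon)(y-1)\Big{]}_{0}^{1}+\frac{\varepsilon}{2\pi}\int_{0}^{1}\cos(2\pi y/\varepsilon)dy=-\frac{\varepsilon}{2\pi}+O(\varepsilon^{2}).\]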
### Two Dirichlet boundary conditions
If we consider (1.1) with two homogeneous Dirichlet boundary conditions, i.e. \(u(0)=u(1)=0\), then the weak formulation is as follows: Find \(u\in W^{1,2}((0,1);\mathbb{R}^{n})\) such that \(u(0)=u(1)=0\) and
\[\left.\begin{array}{c}\int_{0}^{1}\Big{(}a(x,x/\varepsilon,u(x),u^{\prime}( x))\cdot\varphi^{\prime}(x)+f(x,x/\varepsilon,u(x),u^{\prime}(x))\cdot\varphi(x) \Big{)}dx=0\\ \mbox{for all}\ \varphi\in C^{1}([0,1];\mathbb{R}^{n})\ \mbox{with}\ \varphi(0)= \varphi(1)=0,\end{array}\right\} \tag{4.3}\]
and the system (3.3)-(3.4) has to be changed to the system
\[u(x) = \int_{0}^{x}b(y,y/\varepsilon,u(y),v(y))dy,\] \[v(x) = w-\int_{x}^{1}f(y,y/\varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y )))dy,\] \[0 = \int_{0}^{1}b(x,x/\varepsilon,u(x),v(x))dx\]
with unknowns \((u,v,w)\in C([0,1];\mathbb{R}^{n})\times C([0,1];\mathbb{R}^{n})\times\mathbb{ R}^{n}\). This system of integral equations can be treated by Theorem 2.1 in the following setting:
\[W:=C([0,1];\mathbb{R}^{n})^{2}\times\mathbb{R}^{n},\ \|(u,v,w)\|_{W}:=\|u\|_{ \infty}+\|v\|_{\infty}+\|w\|,\ W_{0}:=\mathbb{B}\times C([0,1];\mathbb{R}^{n} )\times\mathbb{R}^{n}.\]
The approximate solution \(w_{0}\) of Theorem 2.1 is the triple \((u_{0},v_{0},w_{0})\in C([0,1];\mathbb{R}^{n})^{2}\times\mathbb{R}^{n}\), where \(u_{0}\) is the given weak solution of the homogenized problem, i.e. \(u_{0}\in W^{1,2}((0,1);\mathbb{R}^{n})\) with \(\|u_{0}\|_{\infty}<1\) and \(u_{0}(0)=u_{0}(1)=0\) and
\[\int_{0}^{1}\Big{(}a_{0}(x,u_{0}(x),u_{0}^{\prime}(x))\cdot\varphi ^{\prime}(x)+f_{0}(x,u_{0}(x),u_{0}^{\prime}(x))\cdot\varphi(x)\Big{)}dx=0\] \[\mbox{for all }\varphi\in C^{1}([0,1];\mathbb{R}^{n})\mbox{ with } \varphi(0)=\varphi(1)=0,\]
and \(v_{0}(x):=a_{0}(x,u_{0}(x),u_{0}^{\prime}(x))\), \(w_{0}:=v_{0}(1)\). The homogenized vector functions \(a_{0}\) and \(f_{0}\) are defined as in (1.8) and (1.9). Finally, the maps \(\mathcal{F}_{\varepsilon}:W\to W\) are defined by
\[\mathcal{F}_{\varepsilon}(u,v,w):=(\mathcal{U}_{\varepsilon}(u,v),\mathcal{V }_{\varepsilon}(u,v,w),\mathcal{W}_{\varepsilon}(u,v))\]
with
\[[\mathcal{U}_{\varepsilon}(u,v)](x):=u(x)-\int_{0}^{x}b(y,y/ \varepsilon,u(y),v(y))dy,\] \[[\mathcal{V}_{\varepsilon}(u,v,w)](x):=v(x)-w+\int_{x}^{1}f(y,y/ \varepsilon,u(y),b(y,y/\varepsilon,u(y),v(y)))dy,\] \[\mathcal{W}_{\varepsilon}(u,v):=\int_{0}^{1}b(x,x/\varepsilon,u( x),v(x))dx.\]
As in Subsection 3.1 it follows that
\[[\mathcal{U}_{\varepsilon}(u_{0},v_{0})](x)=\int_{0}^{x}\Big{(}b_{0}(y,u_{0}(y),v_{0}(y))-b(y,y/\varepsilon,u_{0}(y),v_{0}(y))\Big{)}dy,\] \[[\mathcal{V}_{\varepsilon}(u_{0},v_{0},w_{0})](x)\] \[=\int_{x}^{1}\Big{(}f(y,y/\varepsilon,u_{0}(y),b(y,y/\varepsilon,u_{0}(y),v_{0}(y)))-f_{0}(y,u_{0}(y),b_{0}(y,u_{0}(y),v_{0}(y)))\Big{)}dy,\]
and, hence, for \(\varepsilon\to 0\) follows that
\[\|\mathcal{U}_{\varepsilon}(u_{0},v_{0})\|_{\infty}+\|\mathcal{V}_{\varepsilon}(u_{0},v_{0},w_{0})\|_{\infty}=\left\{\begin{array}{ll}o(1),\\ O(\varepsilon),\mbox{ if (1.14) is satisfied}.\end{array}\right.\]
Further, we have
\[\mathcal{W}_{\varepsilon}(u_{0},v_{0})=\int_{0}^{1}b(x,x/\varepsilon,u_{0}(x),v_{0}(x))dx\] \[=\int_{0}^{1}\big{(}b(x,x/\varepsilon,u_{0}(x),v_{0}(x))-b_{0}(x,u_{0}(x),v_{0}(x))\big{)}dx\]
and, hence, for \(\varepsilon\to 0\) follows
\[\|\mathcal{W}_{\varepsilon}(u_{0},v_{0})\|=\left\{\begin{array}{ll}o(1),\\ O(\varepsilon),\mbox{ if (1.14) is satisfied}.\end{array}\right.\]
Therefore, the assumptions (2.1) and (3.11) are satisfied in the setting introduced above.
Finally, let us verify the assumption (2.3) of Theorem 2.1 in the setting introduced above. Suppose that (2.3) is not true. Then there exist sequences \(\varepsilon_{1},\varepsilon_{2},\ldots>0\) and \(u_{1},u_{2},\ldots\in C([0,1];\mathbb{R}^{n})\) and \(v_{1},v_{2},\ldots\in C([0,1];\mathbb{R}^{n})\) and \(w_{1},w_{2},\ldots\in\mathbb{R}^{n}\) such that \(\varepsilon_{k}\to 0\) for \(k\to\infty\) and
\[\lim_{k\to\infty}\|\partial_{u}\mathcal{U}_{\varepsilon_{k}}(u_{ 0},v_{0})u_{k}+\partial_{v}\mathcal{U}_{\varepsilon_{k}}(u_{0},v_{0})v_{k}\|_ {\infty} = 0, \tag{4.4}\] \[\lim_{k\to\infty}\|\partial_{u}\mathcal{V}_{\varepsilon_{k}}(u_{ 0},v_{0})u_{k}+\partial_{v}\mathcal{V}_{\varepsilon_{k}}(u_{0},v_{0})v_{k}+ \partial_{w}\mathcal{V}_{\varepsilon_{k}}(u_{0},v_{0})w_{k}\|_{\infty} = 0,\] (4.5) \[\lim_{k\to\infty}\|\partial_{u}\mathcal{W}_{\varepsilon_{k}}(u_{ 0},v_{0})u_{k}+\partial_{v}\mathcal{W}_{\varepsilon_{k}}(u_{0},v_{0})v_{k}\| = 0,\]
but
\[\|u_{k}\|_{\infty}+\|v_{k}\|_{\infty}+\|w_{k}\|=1\text{ for all }k\in\mathbb{N}. \tag{4.6}\]
As in Subsection 3.2 one can show that without loss of generality we can assume that there exist \(\bar{u}_{0},\bar{v}_{0}\in C([0,1];\mathbb{R}^{n})\) and \(\bar{w}_{0}\in\mathbb{R}^{n}\) such that
\[\lim_{k\to\infty}\left(\|u_{k}-\bar{u}_{0}\|_{\infty}+\|v_{k}-\bar{v}_{0}\|_{ \infty}+\|w_{k}-\bar{w}_{0}\|\right)=0\]
and that \(\bar{u}_{0}\) is a solution to the linearization (in \(u_{0}\)) of (4.3), i.e. that \(\bar{u}_{0}=0\). Hence, from (4.4) follows that for all \(x\in[0,1]\) we have
\[0=\lim_{k\to\infty}[\partial_{v}\mathcal{U}_{\varepsilon_{k}}(u_{0},v_{0})v_{k}](x)=-\lim_{k\to\infty}\int_{0}^{x}\partial_{u^{\prime}}b(y,y/\varepsilon_{k},u_{0}(y),v_{0}(y))v_{k}(y)dy\] \[=-\int_{0}^{x}\partial_{u^{\prime}}b_{0}(y,u_{0}(y),v_{0}(y))\bar{v}_{0}(y)dy.\]
It follows that \(\partial_{u^{\prime}}b_{0}(x,u_{0}(x),v_{0}(x))\bar{v}_{0}(x)=0\) for all \(x\in[0,1]\) and, hence, (3.19) yields that \(\bar{v}_{0}=0\). Therefore, (4.5) implies that
\[0=\lim_{k\to\infty}\|\partial_{w}\mathcal{V}_{\varepsilon_{k}}(u_{0},v_{0},w_{0})w_{k}\|_{\infty}=\lim_{k\to\infty}\|w_{k}\|=\|\bar{w}_{0}\|,\]
and we get a contradiction to (4.6).
|
2309.08126 | Analysis of a stochastic SIR model with media effects | In this study, we investigate a stochastic SIR model with media effects. The
uniqueness and the existence of a global positive solution are studied. The
sufficient conditions of extinction and persistence of the disease are
established. We obtain the basic reproduction number $R_0^S$ for stochastic
system, which can act as the threshold given small environmental noise. Note
that large noise can induce the disease extinction with probability of 1,
suggesting that environmental noises can not be ignored when investigating
threshold dynamics. Further, inclusion of media induced behaviour changes does
not affect the threshold itself, which is similar to the conclusion of the
deterministic models. However, numerical simulations suggest that media impacts
induce the disease infection decline. | Jiaxun Li, Yanni Xiao | 2023-09-15T03:30:25Z | http://arxiv.org/abs/2309.08126v1 | # Analysis of a stochastic SIR model with media effects
###### Abstract
In this study, we investigate a stochastic SIR model with media effects. The uniqueness and the existence of a global positive solution are studied. The sufficient conditions of extinction and persistence of the disease are established. We obtain the basic reproduction number \(R_{0}^{S}\) for stochastic system, which can act as the threshold given small environmental noise. Note that large noise can induce the disease extinction with probability of 1, suggesting that environmental noises can not be ignored when investigating threshold dynamics. Further, inclusion of media induced behaviour changes does not affect the threshold itself, which is similar to the conclusion of the deterministic models. However, numerical simulations suggest that media impacts induce the disease infection decline.
**Key words:** stochastic differential equations, Brownian motion, SIR model, extinction, persistence
## 1 Introduction
Since the pioneering work of Kermack and McKendrick[1], mathematical models have played an important role in investigating epidemics in the real world. In the classical endemic models, the incidence rate is assumed to be bilinear with the form \(\beta SI\), where \(\beta\) is a positive constant representing the probability of transmission per contact. However, when a disease appears and breaks out, people always take protective measures spontaneously, influenced by their surroundings or by the mass media, which may mitigate the spread of the disease. Examples of such media influence include the
spread of the 2003 SARS, the 2009 H1N1, and the recent COVID-19 epidemics [2-7]. Hence it is unreasonable to assume that \(\beta\) is a constant.
As a result, many models were proposed in which the impact of media coverage on disease spread is considered. Liu et al., in [8], described the media effect by multiplying the transmission coefficient \(\beta\) with \(exp(-a_{1}E-a_{2}I-a_{3}H)\), where \(E,I\) and \(H\) are the numbers of reported exposed, infectious and hospitalized individuals, respectively. Li et al., in [3], proposed an SIS model with incidence rate \((\beta_{1}-\beta_{2}\frac{I}{m+I})\frac{SI}{N}\) to reflect the reduction of contact rate through media coverage. Cui et al.[9], Wang and Xiao[10], Song and Xiao[11] used the incidence rate \(\beta exp(-\alpha I)SI\) to approximate the impact of media coverage and have proposed various models with different assumptions.
A common assumption for the models in [3, 8-11] is that the spread of disease is a deterministic process, while in the real world epidemics inevitably fluctuate due to environmental white noise. Hence adding stochastic factors to epidemic models is a meaningful approach. In fact, many stochastic models for epidemics have been developed. For example, Tornatore et al., in [15], discussed a stochastic SIR system with and without delays. Gray et al., in [16], developed a stochastic SIS system and established the conditions for extinction and persistence of \(I(t)\). Zhao, in [17], introduced a stochastic SIR model with saturated incidence and gave the threshold of the system. There are many other stochastic models under various assumptions, see [18-23]. However, little is known about the transmission dynamics of epidemic models with both media impact and environmental perturbations, and consequently it is essential to examine how environmental stochastic factors and media impact influence the transmission dynamics of infectious diseases.
We introduce random perturbations to the following SIR model with media effects[11].
\[\left\{\begin{array}{l}dS=(\Lambda-\beta e^{-\alpha I}SI-\mu S)dt,\\ dI=[\beta e^{-\alpha I}SI-(\mu+\gamma)I]dt,\\ dR=(\gamma I-\mu R)dt,\end{array}\right. \tag{1}\]
where \(S(t),I(t),R(t)\) represent the numbers of susceptible, infected and recovered individuals, respectively. \(\Lambda\) stands for the rate of flow into the population, \(\mu\) is the natural death rate, \(\beta\) denotes the transmission rate, \(\gamma\) represents the recovery rate and \(e^{-\alpha I}\) describes the reduction of the transmission rate caused by media effects. All the parameters here are positive. Note that the dynamic behavior of (1) has been analyzed in detail by Song and Xiao [11]. They found that the basic reproduction number \(R_{0}^{D}\) defined by
\[R_{0}^{D}=\frac{\Lambda\beta}{\mu(\mu+\gamma)},\]
is a threshold of the model (1), namely, the disease-free equilibrium is globally asymptotically stable if \(R_{0}^{D}\leq 1\), while the endemic equilibrium is feasible and globally asymptotically stable if \(R_{0}^{D}>1\).
We assume that noises in the environment will mainly affect the transmission coefficient \(\beta\), as in [15, 16, 17], so
\[\beta dt\rightarrow\beta dt+\sigma dB(t),\]
where \(B(t)\) is a Brownian motion and \(\sigma\) is a positive constant. Thus the deterministic model (1) is transformed into the following stochastic model:
\[\left\{\begin{array}{l}dS=(\Lambda-\beta e^{-\alpha I}SI-\mu S)dt-\sigma e^{ -\alpha I}SIdB(t)\\ dI=[\beta e^{-\alpha I}SI-(\mu+\gamma)I]dt+\sigma e^{-\alpha I}SIdB(t)\\ dR=(\gamma I-\mu R)dt\end{array}\right. \tag{2}\]
In this paper, we investigate the dynamics of system (2), and give the conditions to determine the extinction and persistence of the disease.
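Although the analysis below is purely analytical, sample paths of system (2) are easy to generate numerically with the standard Euler-Maruyama scheme. The following minimal Python sketch is included only for illustration; the parameter values are placeholders and are not those used in the paper.

```python
import numpy as np

def simulate_sir(T=200.0, dt=1e-3, Lam=0.8, mu=0.1, beta=0.5, gamma=0.3,
                 alpha=0.2, sigma=0.1, S0=5.0, I0=1.0, R0=0.0, seed=0):
    """Euler-Maruyama discretisation of the stochastic SIR model (2)."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    path = np.empty((n + 1, 3))
    S, I, R = S0, I0, R0
    path[0] = S, I, R
    for k in range(n):
        dB = rng.normal(0.0, np.sqrt(dt))      # Brownian increment over one step
        inc = np.exp(-alpha * I) * S * I       # media-modified incidence e^{-alpha I} S I
        dS = (Lam - beta * inc - mu * S) * dt - sigma * inc * dB
        dI = (beta * inc - (mu + gamma) * I) * dt + sigma * inc * dB
        dR = (gamma * I - mu * R) * dt
        # clipping at zero guards against discretisation overshoot of the positive solution
        S, I, R = max(S + dS, 0.0), max(I + dI, 0.0), max(R + dR, 0.0)
        path[k + 1] = S, I, R
    return path

# Example: path = simulate_sir(); the three columns of path are S(t), I(t), R(t).
```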
The rest of this paper is organized as follows. In Section 2, we study the dynamical behavior of system (2). In Section 3, we give some numerical examples to show the complicated stochastic dynamics of the model. We then conclude our work in Section 4.
## 2 Analysis for the stochastic model
In this paper, we let \((\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geq 0},\mathbb{P})\) be a complete probability space with a filtration \(\{\mathcal{F}_{t}\}_{t\geq 0}\) satisfying the usual conditions, namely, it is increasing and right continuous and \(\mathcal{F}_{0}\) contains all \(\mathbb{P}\)-null sets. Let \(B(t)\) be a 1-dimensional Brownian motion defined on \((\Omega,\mathcal{F},\{\mathcal{F}_{t}\}_{t\geq 0},\mathbb{P})\). We use \(a\wedge b\) to denote \(\min(a,b)\) and \(a\lor b\) to denote \(\max(a,b)\).
### Existence and uniqueness of global positive solution
For the deterministic model (1), we know that a solution \((S(t),I(t),R(t))\in\mathbb{R}_{+}^{3},\forall t\geq 0\) whenever \((S(0),I(0),R(0))\in\mathbb{R}_{+}^{3}\). In order for the stochastic differential equation (SDE) model (2) to make sense, we need to show that a solution of it satisfies this property as well.
**Theorem 2.1**.: _For any given initial value \((S(0),I(0),R(0))\in\mathbb{R}_{+}^{3}\), the SDE (2) has a unique global solution \((S(t),I(t),R(t))\in\mathbb{R}_{+}^{3}\) for all \(t\geq 0\) with probability one, namely,_
\[\mathbb{P}\{(S(t),I(t),R(t))\in\mathbb{R}_{+}^{3},\forall t\geq 0\}=1.\]
**Proof.** It is easy to show that the SDE (2) satisfies the local Lipschitz condition, so for any given initial value \((S(0),I(0),R(0))\in\mathbb{R}_{+}^{3}\), there is a unique maximal local solution \((S(t),I(t),R(t))\) on \(t\in[0,\tau_{e})\), where \(\tau_{e}\) is the explosion time (see Theorem 2.8 on p. 155 in [24]). Set
\[\varGamma_{k}=\{(x,y,z)\in\mathbb{R}_{+}^{3}|1/k<x,y,z<k\}.\]
Let \(k_{0}>0\) be sufficiently large so that \((S(0),I(0),R(0))\in\varGamma_{k_{0}}\). For each \(k\geq k_{0}\), let
\[\tau_{k}=\inf\{t\in[0,\tau_{e})|(S(t),I(t),R(t))\notin\varGamma_{k}\}.\]
where we set \(\inf\emptyset=\infty\). \(\tau_{k}\) is increasing as \(k\to\infty\). Let \(\tau_{\infty}=\lim_{k\to\infty}\tau_{k}\), whence \(\tau_{\infty}\leq\tau_{e}\) a.s. If we can show that \(\tau_{\infty}=\infty\) a.s., then \(\tau_{e}=\infty\) a.s. and \((S(t),I(t),R(t))\in\mathbb{R}_{+}^{3},\forall t\geq 0\) a.s. So to complete the proof all we need to show is that \(\tau_{\infty}=\infty\) a.s. If the statement is false, then there is a pair of constants \(T>0\) and \(\epsilon\in(0,1)\) such that
\[\mathbb{P}\{\tau_{\infty}\leq T\}>\epsilon.\]
Then there exists an integer \(k_{1}>k_{0}\), such that
\[\mathbb{P}\{\tau_{k}\leq T\}>\epsilon,\forall k\geq k_{1}. \tag{3}\]
Let \(N(t)=S(t)+I(t)+R(t)\). From (2), we know that
\[dN(t)=(\Lambda-\mu N)dt. \tag{4}\]
Solving equation (4), we get \(N(t)=\frac{\Lambda}{\mu}+ce^{-\mu t}\), where \(c\in\mathbb{R}\). Thus
\[N(t)\leq N(0)\vee\frac{\Lambda}{\mu}.\]
For all \(k\geq 0\) and \(t\in[0,\tau_{k})\), since \(S(t),I(t),R(t)>0\), we must have
\[S(t)\leq N(0)\vee\frac{\Lambda}{\mu},I(t)\leq N(0)\vee\frac{\Lambda}{\mu},R( t)\leq N(0)\vee\frac{\Lambda}{\mu},\quad\forall k\geq 0,t\in[0,\tau_{k}). \tag{5}\]
Define a function \(V:\mathbb{R}_{+}^{3}\to\mathbb{R}_{+}\) by
\[V(S,I,R)=\frac{1}{S}+\frac{1}{I}+\frac{1}{R}.\]
Making use of Itô's formula (see [24]), we have, for any \(t\in[0,T]\) and \(k\geq k_{1}\),
\[\mathbb{E}V(S(t\wedge\tau_{k}),I(t\wedge\tau_{k}),R(t\wedge\tau_{k}))=V(S(0),I(0),R(0))+\mathbb{E}\int_{0}^{t\wedge\tau_{k}}LV(S(s),I(s),R(s))ds, \tag{6}\]
where
\[LV(S,I,R)= -\frac{1}{S^{2}}(\Lambda-\beta e^{-\alpha I}SI-\mu S)-\frac{1}{I^{2}}[\beta e^{-\alpha I}SI-(\mu+\gamma)I]\] \[-\frac{1}{R^{2}}(\gamma I-\mu R)+\sigma^{2}e^{-2\alpha I}I^{2}S^{2}\left(\frac{1}{I^{3}}+\frac{1}{S^{3}}\right)\] \[= \frac{\beta e^{-\alpha I}I}{S}+\frac{\mu}{S}+\frac{\mu+\gamma}{I}+\frac{\mu}{R}+\frac{\sigma^{2}e^{-2\alpha I}I^{2}}{S}+\frac{\sigma^{2}e^{-2\alpha I}S^{2}}{I}\] \[-\left(\frac{\Lambda}{S^{2}}+\frac{\beta e^{-\alpha I}S}{I}+\frac{\gamma I}{R^{2}}\right)\] \[\leq \frac{\beta e^{-\alpha I}I}{S}+\frac{\mu}{S}+\frac{\mu+\gamma}{I}+\frac{\mu}{R}+\frac{\sigma^{2}e^{-2\alpha I}I^{2}}{S}+\frac{\sigma^{2}e^{-2\alpha I}S^{2}}{I}\] \[\leq \frac{\beta(\frac{\Lambda}{\mu}\lor N(0))}{S}+\frac{\mu}{S}+\frac{\mu+\gamma}{I}+\frac{\mu}{R}+\frac{\sigma^{2}(\frac{\Lambda}{\mu}\lor N(0))^{2}}{S}+\frac{\sigma^{2}(\frac{\Lambda}{\mu}\lor N(0))^{2}}{I}\] \[\leq CV(S,I,R),\]
where we used (5) in the penultimate inequality and \(C=\beta\left(\frac{\Lambda}{\mu}\lor N(0)\right)+2\mu+\gamma+\sigma^{2}(\frac {\Lambda}{\mu}\lor N(0))^{2}\). Substituting this into (6),
\[\mathbb{E}V(S(t\wedge\tau_{k}),I(t\wedge\tau_{k}),R(t\wedge\tau_{k}))\leq V(S (0),I(0),R(0))+C\int_{0}^{t\wedge\tau_{k}}\mathbb{E}V(S(s),I(s),R(s))ds.\]
By the Gronwall inequality,
\[\mathbb{E}V(S(T\wedge\tau_{k}),I(T\wedge\tau_{k}),R(T\wedge\tau_{k}))\leq V(S (0),I(0),R(0))e^{CT}. \tag{7}\]
Set \(\Omega_{k}=\{\tau_{k}\leq T\}\) for \(k\geq k_{1}\), by (3), we have \(\mathbb{P}(\Omega_{k})\geq\epsilon\). Note that when \(k\) is large enough, by (5), for every \(\omega\in\Omega_{k}\), at least one of \(S(\tau_{k},\omega),I(\tau_{k},\omega)\) and \(R(\tau_{k},\omega)\) equals \(1/k\), hence
\[V(S(\tau_{k},\omega),I(\tau_{k},\omega),R(\tau_{k},\omega))\geq k.\]
It then follows from (7) that
\[V(S(0),I(0),R(0))e^{CT}\geq\mathbb{E}\left[I_{\Omega_{k}}(\omega)V(S(\tau_{k}, \omega),I(\tau_{k},\omega),R(\tau_{k},\omega))\right]\geq k\mathbb{P}(\Omega_{ k})\geq\epsilon k.\]
Letting \(k\to\infty\) leads to the contradiction
\[\infty>V(S(0),I(0),R(0))e^{CT}=\infty,\]
so we must have \(\tau_{\infty}=\infty\) a.s., thus the proof is complete.
Let
\[\varGamma=\{(x,y,z)\in\mathbb{R}^{3}_{+}|x+y+z<\frac{\Lambda}{\mu}\}.\]
In the rest of this paper, we assume that \((S(0),I(0),R(0))\in\varGamma\). By Theorem 2.1 and (5), we have the following corollary:
**Corollary 2.2**.: _For any given initial value \((S(0),I(0),R(0))\in\varGamma\), the SDE(2) has a unique global solution \((S(t),I(t),R(t))\in\varGamma\) for all \(t\geq 0\) a.s._
### Extinction
In this section, we deduce the condition under which the disease dies out. Define
\[R^{S}_{0}=\frac{\beta\Lambda}{\mu(\mu+\gamma)}-\frac{\sigma^{2}\Lambda^{2}}{2 \mu^{2}(\mu+\gamma)}\]
to be the basic reproduction number for the SDE model (2). The next theorem shows that this parameter has a similar property to \(R^{D}_{0}\) for the deterministic model (1).
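Comparing with the deterministic threshold defined in the Introduction, note that
\[R_{0}^{S}=R_{0}^{D}-\frac{\sigma^{2}\Lambda^{2}}{2\mu^{2}(\mu+\gamma)}\leq R_{0}^{D},\]
so the environmental noise can only lower this threshold value.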
**Theorem 2.3**.: _If_
\[R^{S}_{0}<1\text{ and }\sigma^{2}<\frac{\mu\beta}{\Lambda}, \tag{8}\]
_then for any given initial value \((S(0),I(0),R(0))\in\varGamma\), the solution of SDE(2) obeys_
\[\limsup_{t\to\infty}\frac{1}{t}\ln(I(t))\leq(\mu+\gamma)(R^{S}_{0}-1)<0\qquad \text{a.s.,} \tag{9}\]
_namely, the disease \(I(t)\) will die out exponentially with probability one. Moreover, we have_
\[\limsup_{t\to\infty}\frac{1}{t}\ln(R(t))\leq\max\{(\mu+\gamma)(R^{ S}_{0}-1),-\mu\}<0\qquad\text{a.s.,} \tag{10}\] \[\limsup_{t\to\infty}\frac{1}{t}\ln(\frac{\Lambda}{\mu}-S(t))\leq \max\{(\mu+\gamma)(R^{S}_{0}-1),-\mu\}<0\qquad\text{a.s.,} \tag{11}\]
_which means \(R(t)\) and \(\frac{\Lambda}{\mu}-S(t)\) will tend to zero exponentially._
**Proof.** By Itô's formula, we have
\[\ln(I(t))=\ln(I(0))+\int_{0}^{t}f(S(s),I(s))ds+\int_{0}^{t}\sigma e^{-\alpha I}S( s)dB(s), \tag{12}\]
where
\[f(S,I)=\beta e^{-\alpha I}S-(\mu+\gamma)-\frac{1}{2}\sigma^{2}e^{-2\alpha I}S^ {2}. \tag{13}\]
Consider the quadratic function
\[g(x)=-\frac{1}{2}\sigma^{2}x^{2}+\beta x-\mu-\gamma. \tag{14}\]
Note that \(g(x)\) attains its maximum value at \(x_{0}=\frac{\beta}{\sigma^{2}}\). From (8), we have \(x_{0}>\frac{\Lambda}{\mu}\). So \(g(x)\) increases in \((0,\frac{\Lambda}{\mu})\). Thus by Corollary 2.2 and (8), we have
\[f(S,I)=g(e^{-\alpha I}S)\leq g(\frac{\Lambda}{\mu})=\frac{\beta\Lambda}{\mu}- \mu-\gamma-\frac{1}{2}\frac{\sigma^{2}\Lambda^{2}}{\mu^{2}}=(\mu+\gamma)(R_{0} ^{S}-1)\qquad a.s.\]
It follows from (12) that
\[\ln(I(t))\leq\ln(I(0))+(\mu+\gamma)(R_{0}^{S}-1)t+\int_{0}^{t}\sigma e^{- \alpha I(s)}S(s)dB(s)\qquad a.s.,\]
which implies
\[\limsup_{t\to\infty}\frac{1}{t}\ln(I(t))\leq(\mu+\gamma)(R_{0}^{S}-1)+\limsup _{t\to\infty}\frac{1}{t}\int_{0}^{t}\sigma e^{-\alpha I(s)}S(s)dB(s)\qquad a.s.\]
However, by the strong law of large numbers for martingales (see [24]), we have
\[\limsup_{t\to\infty}\frac{1}{t}\int_{0}^{t}\sigma e^{-\alpha I(s)}(S(s))dB(s)= 0\qquad a.s.\]
We therefore complete the proof of (9). Now set
\[m:=-(\mu+\gamma)(R_{0}^{S}-1).\]
From (9) we know that
\[\limsup_{t\to\infty}\frac{1}{t}\ln(I(t))\leq-m<0\qquad a.s.,\]
which means there exists a set \(\Omega_{I}\) such that \(\mathbb{P}(\Omega_{I})=1\) and for every sufficiently small \(\varepsilon>0\) and \(\omega\in\Omega_{I}\), there exists a \(T=T(\varepsilon,\omega)>0\), such that
\[I(t,\omega)<e^{(-m+\varepsilon)t}\qquad\forall t>T. \tag{15}\]
Substituting this into SDE(2) we have
\[dR(t,\omega)<(\gamma e^{(-m+\varepsilon)t}-\mu R(t,\omega))dt\qquad\forall t>T. \tag{16}\]
Consider the deterministic differential equation
\[dx=(\gamma e^{(-m+\varepsilon)t}-\mu x)dt,\]
a simple calculation shows its general solution is
\[x=\frac{\gamma}{-m+\varepsilon+\mu}e^{(-m+\epsilon)t}+ae^{-\mu t},\qquad a\in \mathbb{R}.\]
By the comparison principle, the solution of (16) satisfies
\[R(t,\omega)<\frac{\gamma}{-m+\varepsilon+\mu}e^{(-m+\epsilon)t}+ae^{-\mu t}, \qquad a\in\mathbb{R},t>T, \tag{17}\]
which means
\[\limsup_{t\to\infty}\frac{1}{t}\ln(R(t,\omega))\leq\max\{-m+\epsilon,-\mu\}.\]
Letting \(\epsilon\to 0\) leads to the assertion (10). Then (11) is an immediate consequence of (15), (17) and the fact that \(\lim_{t\to\infty}(R(t)+I(t)+S(t))=\frac{\Lambda}{\mu}\).
Theorem 2.3 shows that the disease will die out if \(R_{0}^{S}<1\) and the intensity of stochastic perturbation is relatively small. The next theorem covers the case when the intensity of stochastic perturbation is rather large.
**Theorem 2.4**.: _If_
\[\sigma^{2}>\frac{\beta^{2}}{2(\mu+\gamma)}, \tag{18}\]
_then for any given initial value \((S(0),I(0),R(0))\in\varGamma\), the solution of the SDE model (2) obeys_
\[\limsup_{t\to\infty}\frac{1}{t}\ln(I(t))\leq\frac{\beta^{2}}{2\sigma^{2}}-\mu- \gamma<0\qquad\text{a.s.}, \tag{19}\]
namely, \(I(t)\) will die out exponentially with probability one. Moreover,_
\[\limsup_{t\to\infty}\frac{1}{t}\ln(R(t))\leq\max\{\frac{\beta^{2}}{2 \sigma^{2}}-\mu-\gamma,-\mu\}<0\qquad\mbox{a.s.,} \tag{20}\] \[\limsup_{t\to\infty}\frac{1}{t}\ln(\frac{\Lambda}{\mu}-S(t))\leq \max\{\frac{\beta^{2}}{2\sigma^{2}}-\mu-\gamma,-\mu\}<0\qquad\mbox{a.s.,} \tag{21}\]
_which means \(R(t)\) and \(\frac{\Lambda}{\mu}-S(t)\) will tend to zero exponentially._
**Proof.** Since the proofs of (20) and (21) are very similar to those of (10) and (11), we only prove the inequality (19) here. We use the same notation as in the proof of Theorem 2.3. Note that
\[g(x_{0})=-\mu-\gamma+\frac{\beta^{2}}{2\sigma^{2}},\]
so by Theorem 2.1 and (18), we have
\[f(S,I)=g(e^{-\alpha I}S)<g(x_{0})=-\mu-\gamma+\frac{\beta^{2}}{2\sigma^{2}},\]
which is negative by condition (18). This implies, in the same way as in the proof of Theorem 2.3, that
\[\limsup_{t\to\infty}\frac{1}{t}\ln(I(t))\leq-\mu-\gamma+\frac{\beta^{2}}{2 \sigma^{2}}\qquad a.s.\]
as required.
_Remark_. Note that the condition (18) implies \(R_{0}^{S}<1\) since
\[R_{0}^{S}<\frac{\beta\Lambda}{\mu(\mu+\gamma)}-\frac{\beta^{2}\Lambda^{2}}{4 \mu^{2}(\mu+\gamma)^{2}}\leq 1.\]
Here the last inequality holds because \(t-t^{2}/4=1-(1-t/2)^{2}\leq 1\) for every real \(t\), applied with \(t=\frac{\beta\Lambda}{\mu(\mu+\gamma)}\). Thus when the stochastic perturbation is large enough, \(R_{0}^{S}\) will automatically fall below \(1\).
_Remark_. Theorems 2.3 and 2.4 remain true if we only suppose \((S(0),I(0),R(0))\in\mathbb{R}_{+}^{3}\). Indeed, if \(N(0)>\frac{\Lambda}{\mu}\), then \(N(t)\) decreases and \(N(t)\to\frac{\Lambda}{\mu}\) as \(t\to\infty\), so for every \(\epsilon>0\) we have \(N(t)<\frac{\Lambda}{\mu}+\epsilon\) for large enough \(t\). Therefore we can repeat the proofs of Theorems 2.3 and 2.4 by first replacing \(\frac{\Lambda}{\mu}\) with \(\frac{\Lambda}{\mu}+\epsilon\) and then letting \(\epsilon\to 0\).
### Persistence
In this section, we deduce the weak persistence and the mean persistence of \(I(t)\). We first pay attention to the variable \(Q(t):=\frac{\Lambda}{\mu}-S(t)\) and deduce its persistence. Note that from our model equations we have \(N(t)\to\frac{\Lambda}{\mu}\) as \(t\to\infty\), hence
\[\lim_{t\to\infty}\frac{Q(t)}{I(t)+R(t)}=1.\]
Using this fact, combined with the persistence of \(Q(t)\), we can derive the weak persistence of \(I(t)\). Recall the definitions of \(f\) in (13) and \(g\) in (14). Now we set
\[\tilde{f}(Q,I)=f(N-Q,I)=\beta e^{-\alpha I}(\frac{\Lambda}{\mu}-Q)-(\mu+\gamma )-\frac{1}{2}\sigma^{2}e^{-2\alpha I}(\frac{\Lambda}{\mu}-Q)^{2}. \tag{22}\]
Define \(h\) as
\[h(x,y):=(\frac{\Lambda}{\mu}-x)e^{-\alpha y}, \tag{23}\]
thus
\[\tilde{f}(Q,I)=g(h(Q,I)).\]
By Corollary 2.2, \((Q(t),I(t),R(t))\in\varGamma^{\prime}\) for all \(t\geq 0\) a.s. if \((Q(0),I(0),R(0))\in\varGamma^{\prime}\), where
\[\varGamma^{\prime}=\{(x,y,z)\in\mathbb{R}_{+}^{3}|y+z<x<\frac{\Lambda}{\mu}\}.\]
Hence we may assume the domain of \(\tilde{f}\) and \(h\) to be
\[\varGamma^{\prime}_{0}=\{(x,y)\in\mathbb{R}_{+}^{2}|y<x<\frac{\Lambda}{\mu}\}.\]
**Theorem 2.5**.: _If \(R_{0}^{S}>1\), then for any given initial value \((S(0),I(0),R(0))\in\varGamma\), the solution of the SDE (2) obeys_
\[\limsup_{t\to\infty}Q(t)\geq\xi\qquad\text{a.s.}, \tag{24}\]
_where \(\xi\) is the unique root in \((0,\frac{\Lambda}{\mu})\) of the equation_
\[\beta e^{-\alpha\xi}(\frac{\Lambda}{\mu}-\xi)-(\mu+\gamma)-\frac{1}{2}\sigma^ {2}e^{-2\alpha\xi}(\frac{\Lambda}{\mu}-\xi)^{2}=0. \tag{25}\]
_Furthermore, we have_
\[\limsup_{t\to\infty}I(t)\geq(1+\frac{\gamma}{\mu})^{-1}\xi\qquad\text{a.s.} \tag{26}\]
_Namely, the disease \(I(t)\) will have weak persistence with probability one._
To prove this, we need some properties of the functions \(\tilde{f}\) in (22) and \(h\) in (23), which are summarized in the lemma below. Fig. 1 illustrates the first two assertions of Lemma 2.6.
**Lemma 2.6**.: _If \(R_{0}^{S}>1\), functions \(\tilde{f}\) in (22) and \(h\) in (23) have the following properties:_
1. _There is a unique root_ \(\xi\) _in_ \((0,\frac{\Lambda}{\mu})\) _of the equation_ \(\tilde{f}(\xi,\xi)=0\)_._
2. _For each_ \(y\in[0,\xi]\)_, the equation_ \(\tilde{f}(\eta_{y},y)=0\) _has a unique root_ \(\eta_{y}\) _in_ \([y,\frac{\Lambda}{\mu})\)_._
3. _For each_ \((x,y)\in\Gamma^{\prime}_{0}\)_,_ \(\tilde{f}(x,y)>0\) _if_ \(y\in(0,\xi),x\in(y,\eta_{y})\)_._ \(\tilde{f}(x,y)<0\) _if_ \(y\in(0,\xi),x\in(\eta_{y},\frac{\Lambda}{\mu})\) _or_ \(y\in[\xi,\frac{\Lambda}{\mu})\)_._
4. _For each_ \((x,y)\in\Gamma^{\prime}_{0}\)_,_ \(h(x,x)<h(x,y)<h(x,0)<\frac{\Lambda}{\mu}\)_._
**Proof.** Consider the equation \(g(x)=0\), where \(g\) is defined in (14). If \(R_{0}^{S}>1\), we have
\[\beta>\frac{\sigma^{2}\Lambda}{2\mu}+\frac{\mu(\mu+\gamma)}{\Lambda}\geq\sqrt {2\sigma^{2}(\mu+\gamma)},\]
where the last inequality follows from the inequality of arithmetic and geometric means. Hence \(\beta^{2}>2\sigma^{2}(\mu+\gamma)\), so the equation \(g(x)=0\) has two roots \(x_{1},x_{2}\). Denote the smaller one by \(x_{1}\); then
\[0<x_{1}<\frac{\Lambda}{\mu}<x_{2}, \tag{27}\]
which means \(g(x)=0\) has a unique root \(x_{1}\) in \((0,\frac{\Lambda}{\mu})\). Consider the function
\[r(x):=h(x,x)=e^{-\alpha x}(\frac{\Lambda}{\mu}-x).\]
Note that \(r(x)\) is decreasing in \([0,\frac{\Lambda}{\mu}]\), \(r(0)=\frac{\Lambda}{\mu}\) and \(r(\frac{\Lambda}{\mu})=0\), thus \(r(x)=x_{1}\) has a unique root \(x=\xi\) and \(r(x)=x_{2}\) has no root in \([0,\frac{\Lambda}{\mu}]\), which indicates the following equation
\[\tilde{f}(\xi,\xi)=g(r(\xi))=0\]
has a unique root \(\xi\) in \((0,\frac{\Lambda}{\mu})\). Therefore the proof of 1) is complete. To prove 2), note that
\[\tilde{f}(x,y)=0\Leftrightarrow h(x,y)=x_{1}\text{ or }x_{2}.\]
On one hand, for each \((x,y)\in\overline{\Gamma_{0}^{\prime}}\), \(h(x,y)\in[0,\frac{\Lambda}{\mu}]\), thus \(h(x,y)<x_{2}\). On the other hand, it is easy to see that for each \(y\in[0,\frac{\Lambda}{\mu}]\), the equation \(h(x,y)=x_{1}\) has a unique root \(x=\eta_{y}\) in \([y,\frac{\Lambda}{\mu})\) if and only if \(y\in[0,\xi]\), thus the second assertion is true.
To prove 3), notice that for each fixed \(y\), \(\tilde{f}(x,y)\) is a quadratic function of \(x\). Then we only need to prove that \(\tilde{f}(y,y)>0\) for each \(y<\xi\) and \(\tilde{f}(y,y)<0\) for each \(y>\xi\). To show this, recall \(\tilde{f}(y,y)=g(r(y))\), \(r(0)=\frac{\Lambda}{\mu}\), \(r(\xi)=x_{1}\), \(r(\frac{\Lambda}{\mu})=0\). Then this proposition is an immediate result of (27) and the fact that \(r\) is a decreasing function. Assertion 4) is also an immediate result of the fact that for each \(x\), \(h(x,y)\) decreases as \(y\) increases.
**Proof of Theorem 2.5.** Note that in Lemma 2.6 we have already proved the existence and uniqueness of \(\xi\). We now begin to prove the weak persistence of the variable \(Q(t)\). If it is not true, then there is a sufficiently small \(\epsilon>0\) such that \(\mathbb{P}(\Omega_{1})>\epsilon\), where \(\Omega_{1}=\{\omega|\limsup_{t\to\infty}Q(t,\omega)\leq\xi-2\epsilon\}\). Let \(\epsilon\) be sufficiently small such that
\[g(r(\xi-\epsilon))<g(\frac{\Lambda}{\mu}). \tag{28}\]
Hence, for every \(\omega\in\Omega_{1}\), there is a \(T=T(\omega)>0\) such that
\[Q(t,\omega)\leq\xi-\epsilon,\qquad\text{whenever }t\geq T(\omega). \tag{29}\]
However, for each pair of \((Q,I)\in\Gamma_{0}^{\prime}\cap\{Q\leq\xi-\epsilon\}\), by 4) of Lemma 2.6 and the property of quadratic functions,
\[\tilde{f}(Q,I)=g(h(Q,I))\geq g(\frac{\Lambda}{\mu})\wedge g(r(Q)). \tag{30}\]
where, again by the property of quadratic functions,
\[g(r(Q))\geq g(r(\xi-\epsilon))\wedge g(r(0))=g(r(\xi-\epsilon))\wedge g(\frac{ \Lambda}{\mu}). \tag{31}\]
Therefore by (28), (29), (30) and (31), we have
\[\tilde{f}(Q(t,\omega),I(t,\omega))\geq g(r(\xi-\epsilon)),\qquad\mbox{whenever }t\geq T( \omega). \tag{32}\]
Moreover, by the strong law of large numbers for martingales, there is an \(\Omega_{2}\) with \(\mathbb{P}(\Omega_{2})=1\) such that for every \(\omega\in\Omega_{2}\),
\[\lim_{t\to\infty}\frac{1}{t}\int_{0}^{t}\sigma e^{-\alpha I(s,\omega)}(\frac{ \Lambda}{\mu}-Q(s,\omega))dB(s)(\omega)=0. \tag{33}\]
Now, fix any \(\omega\in\Omega_{1}\cap\Omega_{2}\). It then follows from (12) and (32), for \(t\geq T(\omega)\),
\[\ln(I(t,\omega))\geq \ln(I(0,\omega))+\int_{0}^{T(\omega)}\tilde{f}(Q(s,\omega),I(s, \omega))ds+(t-T(\omega))g(r(\xi-\epsilon))\] \[+\int_{0}^{t}\sigma e^{-\alpha I(s,\omega)}(\frac{\Lambda}{\mu}- Q(s,\omega))dB(s)(\omega). \tag{34}\]
Combining (34), (33) and 3) in Lemma 2.6 leads to
\[\liminf_{t\to\infty}\frac{1}{t}\ln(I(t,\omega))\geq g(r(\xi-\epsilon))>0,\]
thus
\[\lim_{t\to\infty}I(t,\omega)=\infty\qquad\omega\in\Omega_{1}\cap\Omega_{2}.\]
This contradicts (29). Therefore we must have the desired weak persistence of \(Q(t)\).
Now we can prove the weak persistence of the disease \(I(t)\). Set \(\xi^{\prime}=(1+\frac{\gamma}{\mu})^{-1}\xi\). If the assertion (26) is false, then there is a sufficiently small \(\epsilon>0\) such that \(\mathbb{P}(\Omega_{4})>\epsilon\), where \(\Omega_{4}=\{\omega|\limsup_{t\to\infty}I(t,\omega)\leq\xi^{\prime}-2\epsilon\}.\) Hence, by (2), for every \(\omega\in\Omega_{4}\), for large enough \(t\), we have
\[dR(t,\omega)\leq(\gamma(\xi^{\prime}-\epsilon)-\mu R(t,\omega))dt,\]
which yields
\[\limsup_{t\to\infty}R(t,\omega)\leq\frac{\gamma}{\mu}(\xi^{\prime}-\epsilon).\]
Combining this with the definition of \(\Omega_{4}\), we have
\[\limsup_{t\to\infty}(I(t,\omega)+R(t,\omega))\leq(1+\frac{\gamma}{\mu})(\xi^{ \prime}-\epsilon)<\xi,\omega\in\Omega_{4}. \tag{35}\]
However, by (24) and the fact that
\[\lim_{t\to\infty}(Q(t)-I(t)-R(t))=0,\]
we must have
\[\limsup_{t\to\infty}(I(t)+R(t))\geq\xi\qquad\mbox{a.s.} \tag{36}\]
This contradicts (35). Therefore we must have the desired weak persistence of \(I(t)\), and the proof is complete.
Now we discuss the mean persistence of \(I(t)\). For convenience, we introduce the following notation. For a continuous stochastic process \(y(t)\), let
\[\langle y(t)\rangle=\frac{1}{t}\int_{0}^{t}y(s)ds,\]
We have the following result for \(\langle I(t)\rangle\).
**Theorem 2.7**.: _If_
\[R_{0}^{S}>1+\frac{\alpha\beta\Lambda^{2}}{4\mu^{2}(\mu+\gamma)}, \tag{37}\]
_then for any given initial value \((S(0),I(0),R(0))\in\Gamma\), the solution of the SDE(2) obeys_
\[\liminf_{t\to\infty}\langle I(t)\rangle>\frac{\mu}{\beta}\left(R_{0}^{S}-1- \frac{\alpha\beta\Lambda^{2}}{4\mu^{2}(\mu+\gamma)}\right)>0. \tag{38}\]
**Proof.** By (12), we have
\[\frac{\ln I(t)-\ln I(0)}{t}-\frac{1}{t}\int_{0}^{t}\sigma e^{- \alpha I}SdB(s) =\beta\langle e^{-\alpha I}S\rangle-(\mu+\gamma)-\frac{1}{2} \sigma^{2}\langle e^{-2\alpha I}S^{2}\rangle\] \[\geq\beta\langle(1-\alpha I)S\rangle-(\mu+\gamma)-\frac{1}{2} \sigma^{2}\frac{\Lambda^{2}}{\mu^{2}}\] \[=\beta\langle S\rangle-\alpha\beta\langle IS\rangle-(\mu+\gamma )-\frac{1}{2}\sigma^{2}\frac{\Lambda^{2}}{\mu^{2}}\] \[\geq\beta\langle S\rangle-\frac{\alpha\beta\Lambda^{2}}{4\mu^{2 }}-(\mu+\gamma)-\frac{1}{2}\sigma^{2}\frac{\Lambda^{2}}{\mu^{2}}, \tag{39}\]
where the first inequality in (39) is due to \(e^{-x}\geq 1-x,\forall x\in\mathbb{R}\) and the last inequality is derived from \(I(t)+S(t)\leq\frac{\Lambda}{\mu}\). From (2) we have
\[\frac{S(t)-S(0)}{t}+\frac{I(t)-I(0)}{t}=\Lambda-\mu\langle S\rangle-(\mu+\gamma )\langle I\rangle. \tag{40}\]
Substituting (40) into (39), we get
\[\frac{\beta(\mu+\gamma)}{\mu}\langle I(t)\rangle\geq\frac{\beta\Lambda}{\mu}- \frac{\alpha\beta\Lambda^{2}}{4\mu^{2}}-(\mu+\gamma)-\frac{\sigma^{2}\Lambda^ {2}}{2\mu^{2}}-\frac{\ln I(t)}{t}+\phi(t), \tag{41}\]
where
\[\phi(t)=\frac{1}{t}\int_{0}^{t}\sigma e^{-\alpha I}SdB(s)-\frac{\beta(S(t)-S(0 )+I(t)-I(0))-\mu\ln I(0)}{\mu t}. \tag{42}\]
Clearly \(\lim_{t\to\infty}\phi(t)=0\) almost surely. Also since \(I(t)\leq\frac{\Lambda}{\mu}\), we have \(\limsup_{t\to\infty}\frac{\ln I(t)}{t}\leq 0\). Then (41) becomes
\[\frac{\beta}{\mu}\liminf_{t\to\infty}\langle I(t)\rangle\geq R_{0}^{S}-\frac{ \alpha\beta\Lambda^{2}}{4\mu^{2}(\mu+\gamma)}-1>0,\]
and the proof is complete.
We give the weak persistence and mean persistence of \(I(t)\) separately in Theorem 2.5 and Theorem 2.7. Note that the condition for the mean persistence of the disease is much stronger than the condition for its weak persistence: the latter only requires \(R_{0}^{S}>1\), while the former requires
\[R_{0}^{S}>1+\frac{\alpha\beta\Lambda^{2}}{4\mu^{2}(\mu+\gamma)}.\]
However, although these two conditions are different, they are both related to \(R_{0}^{S}\), which means \(R_{0}^{S}\) can be used as a major criterion for the persistence of the disease.
## 3 Numerical simulations
In this section, we use the stochastic Runge-Kutta method in [25] to simulate the stochastic model (2) and the corresponding deterministic model. We first verify our theoretical results, and then consider effects of the stochastic perturbation on infections that the theoretical results do not cover. To illustrate the impact of the stochastic perturbation, we perform simulations of the infectious individuals for both the stochastic model (2) and the corresponding deterministic model (1) with freely chosen parameter values.
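The paper integrates the model with the stochastic Runge-Kutta scheme of [25]; as a simpler, hedged illustration only, the sketch below uses an Euler-Maruyama step and assumes that the SDE (2) has the standard media-impact form \(dS=[\Lambda-\mu S-\beta e^{-\alpha I}SI]dt-\sigma e^{-\alpha I}SI\,dB\), \(dI=[\beta e^{-\alpha I}SI-(\mu+\gamma)I]dt+\sigma e^{-\alpha I}SI\,dB\), \(dR=[\gamma I-\mu R]dt\). All parameter values are placeholders rather than those used in Figs. 2-4.

```python
import numpy as np

# Euler-Maruyama sketch for the media-impact SIR SDE; the assumed form of model (2) is
#   dS = [Lam - mu*S - beta*exp(-alpha*I)*S*I] dt - sigma*exp(-alpha*I)*S*I dB
#   dI = [beta*exp(-alpha*I)*S*I - (mu+gamma)*I] dt + sigma*exp(-alpha*I)*S*I dB
#   dR = [gamma*I - mu*R] dt
# and every parameter value below is an illustrative placeholder.
def simulate(Lam=1.0, mu=0.5, beta=1.2, gamma=0.3, alpha=0.1, sigma=0.3,
             S0=1.5, I0=0.3, R0=0.0, T=100.0, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    S, I, R = np.empty(n + 1), np.empty(n + 1), np.empty(n + 1)
    S[0], I[0], R[0] = S0, I0, R0
    for k in range(n):
        inc = beta * np.exp(-alpha * I[k]) * S[k] * I[k]     # media-reduced incidence
        noise = sigma * np.exp(-alpha * I[k]) * S[k] * I[k]  # diffusion coefficient
        dB = rng.normal(0.0, np.sqrt(dt))                    # Brownian increment
        S[k + 1] = S[k] + (Lam - mu * S[k] - inc) * dt - noise * dB
        # truncate at zero so the path stays in the feasible region
        I[k + 1] = max(I[k] + (inc - (mu + gamma) * I[k]) * dt + noise * dB, 0.0)
        R[k + 1] = R[k] + (gamma * I[k] - mu * R[k]) * dt
    return S, I, R

S, I, R = simulate()
print("time average <I(t)> over the run:", I.mean())
```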
We first choose parameter values as in Fig.2a, where \(R_{0}^{S}=0.88<1\) and \(\sigma^{2}<\mu\beta/\Lambda\); according to Theorem 2.3, the infected individuals then go to extinction with probability one (shown in Fig.2a). It is worth noting that the disease persists in the deterministic model since \(R_{0}^{D}=1.36>1\). According to Theorem 2.5, small noise does not change the disease persistence (shown in Fig.2c), while \(I(t)\) goes to zero in the stochastic model with large noise (shown in Fig.2b). This illustrates that strong environmental noise can drive the disease to extinction. In particular, solutions of the stochastic model fluctuate around their counterparts in the deterministic model.
Note that when \(R_{0}^{S}<1\), Theorems 2.3 and 2.4 show that the disease will die out if \(\sigma^{2}<\frac{\mu\beta}{\Lambda}\) or \(\sigma^{2}>\frac{\beta^{2}}{2(\mu+\gamma)}\). There remain two situations, namely \(R_{0}^{S}<1\) with \(\frac{\mu\beta}{\Lambda}\leq\sigma^{2}\leq\frac{\beta^{2}}{2(\mu+\gamma)}\), and \(R_{0}^{S}=1\), for which we do not know what the solutions approach. Hence we focus on these two situations and present our results through numerical simulations.
To further examine the asymptotic behavior in the situation \(R_{0}^{S}<1\) and \(\frac{\mu\beta}{\Lambda}\leq\sigma^{2}\leq\frac{\beta^{2}}{2(\mu+\gamma)}\), we carry out 1000 simulations of the stochastic model (2). We choose the parameters as in Fig.3a, in which \(R_{0}^{S}=0.90\) and \(\frac{\mu\beta}{\Lambda}\leq\sigma^{2}\leq\frac{\beta^{2}}{2(\mu+\gamma)}\). Letting \(\epsilon=0.0001\), we found that in each simulation there exists a \(T\) such that \(I(t)<\epsilon,\forall t\geq T.\) It follows from Fig.3a that the mean value of \(I(t)\) tends to zero.
So we can draw the conclusion that the disease will die out almost surely if \(R_{0}^{S}<1\) and \(\frac{\mu\beta}{\Lambda}\leq\sigma^{2}\leq\frac{\beta^{2}}{2(\mu+\gamma)}\). For the scenario \(R_{0}^{S}=1\), we take the parameter values as in Fig.3b. We run 5000 simulations and show the mean value of \(I(t)\) in Fig.3b; we observe that in 3963 of these simulations the value of \(I(t)\) fell below \(\epsilon\) within the given time. Hence, for \(R_{0}^{S}=1\), the mean value of \(I(t)\) approaches zero with high probability (i.e., the disease has a strong tendency to die out).
To examine the effect of media impact on disease infection, we plot the variation of the mean value of \(I(t)\) with the parameter \(\alpha\) (shown in Fig.4a and Fig.4b). One can see that increasing \(\alpha\) leads to lower infection levels. This means that mass-media-induced behaviour changes (reduced incidence) play a vital role in the control of disease infection in the stochastic environment, which agrees well with conclusions drawn from the deterministic models [9, 11].
## 4 Discussion
In the real world, biological and epidemiological phenomena are inevitably affected by environmental noises. As a result, stochastic models may provide a more natural description and consequently produce more valuable results, compared to their deterministic counterparts [15, 16, 17, 18, 19, 20, 21, 22, 23]. In this study, taking environmental noises into account, we investigated the transmission dynamics of an epidemic model with media effects. We first established the existence and uniqueness of solutions of the stochastic system (2), then examined conditions under which the disease dies out or persists. In particular, we proved that the disease goes to extinction if \(R_{0}^{S}<1\) and \(\sigma^{2}<\frac{\mu\beta}{\Lambda}\), or if \(\sigma^{2}>\frac{\beta^{2}}{2(\mu+\gamma)}\) (in which case \(R_{0}^{S}<1\) by the remark below Theorem 2.4), while the system is weakly persistent for \(R_{0}^{S}>1\). This indicates that the basic reproduction number can act as the threshold value given small noise, which is similar to the threshold level of \(R_{0}^{D}\) in the deterministic system. By comparing \(R_{0}^{D}\) with \(R_{0}^{S}\) we know that \(R_{0}^{S}\leq R_{0}^{D}\), which means that with environmental noises the disease is more likely to go extinct. Furthermore, large noise can drive the disease to extinction with probability one, which implies that environmental noises cannot be ignored when investigating threshold dynamics and estimating the threshold level (the basic reproduction number).
In terms of media impacts under environmental noises, we found that the basic reproduction number \(R_{0}^{S}\) is independent of the media-related parameter \(\alpha\); that is, the inclusion of media-induced behaviour changes does not affect the threshold itself, which is similar to the conclusion for the deterministic models. However, numerical simulations suggest that media impacts reduce the level of disease infection, which is also supported by Theorem 2.7: due to the media impact, persistence in the mean of the stochastic system becomes less likely.
Finally, we would like to mention the limitations of our work. We only gave numerical results under the conditions
\[R_{0}^{S}<1,\frac{\mu\beta}{\Lambda}\leq\sigma^{2}\leq\frac{\beta^{2}}{2(\mu+ \gamma)}\quad\text{and}\quad R_{0}^{S}=1.\]
It would be interesting to know whether one can analytically prove that the disease goes extinct under these conditions; in fact, a similar question proposed in [15] is still open. Furthermore, in this study we only introduce white noises into the system (1). It would also be interesting to investigate the effects of impulsive perturbations on system (1). These problems will be the subjects of our future work.
**Acknowledgments** This work is supported by the National Natural Science Foundation of China (NSFC, 12220101001, 12031010).
|
2309.09238 | Reduced projection method for photonic moiré lattices | This paper presents a reduced projection method for the solution of
quasiperiodic Schr\"{o}dinger eigenvalue problems for photonic moir\'e
lattices. Using the properties of the Schr\"{o}dinger operator in
higher-dimensional space via a projection matrix, we rigorously prove that the
generalized Fourier coefficients of the eigenfunctions exhibit faster decay
rate along a fixed direction associated with the projection matrix. An
efficient reduction strategy of the basis space is then proposed to reduce the
degrees of freedom significantly. Rigorous error estimates of the proposed
reduced projection method are provided, indicating that a small portion of the
degrees of freedom is sufficient to achieve the same level of accuracy as the
classical projection method. We present numerical examples of photonic moir\'e
lattices in one and two dimensions to demonstrate the accuracy and efficiency
of our proposed method. | Zixuan Gao, Zhenli Xu, Zhiguo Yang | 2023-09-17T11:01:28Z | http://arxiv.org/abs/2309.09238v3 | # Reduced projection method for quasiperiodic Schrodinger eigenvalue problems
###### Abstract.
This paper presents a reduced algorithm to the classical projection method for the solution of \(d\)-dimensional quasiperiodic problems, particularly Schrodinger eigenvalue problems. Using the properties of the Schrodinger operator in higher-dimensional space via a projection matrix of size \(d\times n\), we rigorously prove that the generalized Fourier coefficients of the eigenfunctions decay exponentially along a fixed direction associated with the projection matrix. An efficient reduction strategy of the basis space is then proposed to reduce the degrees of freedom from \(O(N^{n})\) to \(O(N^{n-d}D^{d})\), where \(N\) is the number of Fourier grids in one dimension and the truncation coefficient \(D\) is much less than \(N\). Correspondingly, the computational complexity of the proposed algorithm for solving the first \(k\) eigenpairs using the Krylov subspace method decreases from \(O(kN^{2n})\) to \(O(kN^{2(n-d)}D^{2d})\). Rigorous error estimates of the proposed reduced projection method are provided, indicating that a small \(D\) is sufficient to achieve the same level of accuracy as the classical projection method. We present numerical examples of quasiperiodic Schrodinger eigenvalue problems in one and two dimensions to demonstrate the accuracy and efficiency of our proposed method.
**Key words.** Quasiperiodic problems, Schrodinger eigenvalue problems, projection method, spectral method, basis reduction.
**AMS subject classifications.** 65N35, 65N22, 65F05, 35J05
## 1. Introduction
The quasiperiodic problems emerge naturally in a great many physical systems such as quasicrystals, many-body problems, and low-dimensional materials [27, 43, 7, 4, 47, 17, 12, 32, 49], and have found numerous applications in the areas of mechanics, acoustics, electronics, solid-state physics, and physics of matter waves [2, 8, 14, 33, 16]. Efficient and accurate numerical simulations of quasiperiodic problems play a critical role in exploring and utilizing novel material properties.
Though quasiperiodic systems are ubiquitous in mathematics and physics, their numerical treatment is not as straightforward as that of periodic systems. Specifically, quasiperiodic structures are space-filling and ordered, yet exhibit neither decay nor translational invariance [19]. In recent years, there has been growing interest in the optical properties of moire lattices, a prototype of quasicrystals, as evidenced by notable studies [11, 22, 24, 38] that have brought about significant breakthroughs in optics. Remarkably, a localization-to-delocalization transition of eigenstates of two-dimensional moire lattices has been observed for the first time in both numerical simulations and experiments [43], which paves a new way of controlling light at will. However, the localization of eigenstates and the phase transition in higher dimensions remain largely unexplored, owing to the exceedingly large degrees of freedom and computational cost required.
Considerable efforts have been devoted to overcoming this difficulty. One of the widely used approaches is the periodic approximation method, also known as the crystalline approximant method [9, 13, 31], which approximates the quasiperiodic function via a periodic function in a certain supercell. Nevertheless, this approach inevitably introduces additional errors from the rational approximation of the irrational numbers involved and may require prohibitively large supercells.
The purpose of this paper is to propose an efficient reduced projection method (RPM) for solving the following quasiperiodic Schrodinger eigenvalue problem
\[\mathcal{L}[u]:=-\frac{1}{2}\Delta u(\mathbf{z})+v(\mathbf{z})u(\mathbf{z})=Eu(\mathbf{z}),\quad \mathbf{z}\in\mathbb{R}^{d}, \tag{1.1}\]
where \(\mathbf{z}=(z_{1},\cdots,z_{d})^{\intercal}\) denotes the physical coordinates in \(d\) dimensions, \(v(\mathbf{z})\) is a quasiperiodic potential function, and \(u(\mathbf{z})\) and \(E\) are respectively the eigenfunction and eigenvalue of the linear Schrodinger operator \(\mathcal{L}\). By a careful analysis of the decay property of the generalized Fourier series of the eigenfunctions of \(\mathcal{L}\) in the higher-dimensional space, an efficient dimension reduction strategy for the classical projection method is proposed. Furthermore, we conduct rigorous error estimates of the RPM, which validate that the reduction strategy can guarantee the same level of accuracy as the PM, with a significant decrease in the degrees of freedom and computational time. The proposed method provides an efficient numerical tool to explore the phase transition of eigenstates for high-dimensional moire lattices.
The rest of this paper is organized as follows. Section 2 presents the preliminaries of quasiperiodic functions, the theoretical foundation of the dimension reduction strategy, and the associated error estimates of the proposed reduced projection method. Section 3 demonstrates the numerical results of the proposed method for solving quasiperiodic Schrodinger eigenvalue problems in one and two dimensions. Several interesting physical observations regarding the localization of eigenstates are also presented. Section 4 then concludes the discussions with some closing remarks.
## 2. Reduced projection method for Schrodinger eigenvalue problems
We begin this section with some theoretical results for quasiperiodic functions, followed by an introduction to the projection method. The reduced projection method with delicate error analysis and the corresponding algorithm are then presented.
### Preliminaries
We start with a \(d\)-dimensional periodic function \(F(\mathbf{z})\in L^{2}([0,T]^{d})\) with period \(T\) in each dimension (dubbed as "\(T\)-periodic function"), equipped with the standard inner product and norm
\[(F,G)=\frac{1}{T^{d}}\int_{[0,T]^{d}}F\bar{G}\,d\mathbf{z},\quad\|F\|=\sqrt{(F,F)},\]
where \(\bar{G}\) is the complex conjugate of \(G\in L^{2}([0,T]^{d})\). The Fourier series of \(F(\mathbf{z})\) is defined by
\[F(\mathbf{z})=\sum_{\mathbf{k}\in\mathbb{Z}^{d}}F_{\mathbf{k}}e^{\mathrm{i}\langle\mathbf{k}, \mathbf{z}\rangle},\quad F_{\mathbf{k}}=\frac{1}{T^{d}}\int_{[0,T]^{d}}F(\mathbf{z})e^{- \mathrm{i}\langle\mathbf{k},\mathbf{z}\rangle}d\mathbf{z},\ \ \mathbf{k}\in\mathbb{Z}^{d}, \tag{2.1}\]
where \(\langle\cdot,\cdot\rangle\) is the dot product between two vectors and \(\mathbb{Z}\) is the set of integers. Here \(F_{\mathbf{k}}\) is the \(\mathbf{k}\)th Fourier coefficient of \(F(\mathbf{z})\).
One recalls the decay rate of Fourier coefficients with respect to the smoothness of a \(T\)-periodic function in the following lemma (see [15]).
**Lemma 2.1**.: _Let \(s\in\mathbb{N}^{+}\) and \(F(\mathbf{z})\) be a \(d\)-dimensional \(T\)-periodic function. Suppose that \(\partial^{\mathbf{\alpha}}F\) exists and is integrable for all \(\mathbf{\alpha}\in\mathbb{N}_{0}^{d}\) such that \(\|\mathbf{\alpha}\|_{\infty}\leq s\), that is, \(F(\mathbf{z})\in C_{p}^{s}([0,T]^{d})\), then_
\[\lim_{\|\mathbf{k}\|_{2}\to\infty}(1+\|\mathbf{k}\|_{2}^{2s})|F_{\mathbf{k}}|^{2}=0. \tag{2.2}\]
_Furthermore, given that \(F(\mathbf{z})\in C_{p}^{\infty}([0,T]^{d})\), the smooth \(T\)-periodic function space, its Fourier coefficients satisfy \(F_{\mathbf{k}}=o(\|\mathbf{k}\|_{2}^{-s})\) when \(\|\mathbf{k}\|_{2}\to\infty\) for any \(s\in\mathbb{N}^{+}\), i.e. the Fourier coefficients \(F_{\mathbf{k}}\) decay with \(\|\mathbf{k}\|_{2}\) exponentially._
Next, we describe the definition of quasiperiodic functions and their relevant properties (see [3, 29] for comprehensive discussions).
**Definition 2.1**.: _A \(d\)-dimensional function \(f(\mathbf{z})\) is quasiperiodic if there exists a \(d\times n\) projection matrix \(\mathbf{P}\) such that \(F(\mathbf{x})=F(\mathbf{P}^{\intercal}\mathbf{z})=f(\mathbf{z})\) is an \(n\)-dimensional periodic function, where all columns of matrix \(\mathbf{P}\) are linearly independent over rational numbers. Here, \(F(\mathbf{x})\) is called the parent function of \(f(\mathbf{z})\) with respect to \(\mathbf{P}\)._
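As a minimal illustration of this definition with \(d=1\) and \(n=2\) (the example and all names here are illustrative, not taken from the paper), the following snippet checks numerically that a one-dimensional quasiperiodic function is recovered from its two-dimensional periodic parent through the projection matrix.

```python
import numpy as np

# Illustration of Definition 2.1 with d = 1 and n = 2:
# f(z) = cos(z) + cos(sqrt(5) z) is recovered from the 2*pi-periodic parent
# F(x1, x2) = cos(x1) + cos(x2) through x = P^T z with P = [1, sqrt(5)].
P = np.array([[1.0, np.sqrt(5.0)]])              # projection matrix of size d x n = 1 x 2

def F(x):                                        # parent function, periodic in each variable
    return np.cos(x[0]) + np.cos(x[1])

def f(z):                                        # quasiperiodic function, f(z) = F(P^T z)
    return F(P.T @ np.atleast_1d(z))

z = np.linspace(0.0, 50.0, 7)
print(np.allclose([f(t) for t in z], np.cos(z) + np.cos(np.sqrt(5.0) * z)))  # True
```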
Define the mean value of a \(d\)-dimensional quasiperiodic function \(f(\mathbf{z})\) by
\[\mathcal{M}(f)=\lim_{L\to\infty}\frac{1}{|L|^{d}}\int_{K}f(\mathbf{z})d\mathbf{z}, \tag{2.3}\]
where \(K=\{\mathbf{z}\,|\,0\leq\mathbf{z}_{i}\leq L,i=1,\ldots,d\}\). Correspondingly, one can define the inner product and norm of quasiperiodic functions as
\[(f,g)_{\rm qp}=\mathcal{M}(f\bar{g}),\quad\|f\|_{\rm qp}=\sqrt{(f,f)_{\rm qp }}\,, \tag{2.4}\]
where \(f,g\) are required to be quasiperiodic under the same projection matrix. It is direct to verify that \(\big{\{}e^{\mathrm{i}(\mathbf{q},\mathbf{z})}\big{\}}_{\mathbf{q}\in\mathbb{R}^{d}}\) forms a normalized orthogonal system as
\[\big{(}e^{\mathrm{i}(\mathbf{q}_{1},\mathbf{z})},e^{\mathrm{i}(\mathbf{q}_{2},\mathbf{z})} \big{)}_{\rm qp}=\delta_{\mathbf{q}_{1},\mathbf{q}_{2}},\quad\mathbf{q}_{1},\mathbf{q}_{2}\in \mathbb{R}^{d}, \tag{2.5}\]
where \(\delta_{\mathbf{q}_{1},\mathbf{q}_{2}}\) is the Kronecker delta. In addition, the Fourier transform of a quasiperiodic function is
\[\mathcal{F}\{f\}(\mathbf{q})=\mathcal{M}\big{(}f(\mathbf{z})e^{-\mathrm{i}(\mathbf{q},\mathbf{ z})}\big{)}, \tag{2.6}\]
which is also called the Fourier-Bohr transformation [19]. The generalized Fourier series of a quasiperiodic function is then given in Lemma 2.2; we refer the readers to [3] for a detailed proof.
**Lemma 2.2**.: _For a \(d\)-dimensional quasiperiodic function \(f(\mathbf{z})\) with a projection matrix \(\mathbf{P}\), it has generalized Fourier series_
\[f(\mathbf{z})=\sum_{\mathbf{q}\in\Gamma}f_{\mathbf{q}}e^{\mathrm{i}(\mathbf{q},\mathbf{z})},\quad f _{\mathbf{q}}=\big{(}f,e^{\mathrm{i}(\mathbf{q},\mathbf{z})}\big{)}_{\rm qp},\quad\Gamma: =\Big{\{}\mathbf{q}\Big{|}\mathbf{q}=\sum_{i=1}^{n}c_{i}\mathbf{q}_{i},\ \ c_{i}\in\mathbb{Z},\ \mathbf{q}_{i}=\mathrm{ col}(\mathbf{P})_{i}\Big{\}}, \tag{2.7}\]
_where \(\mathrm{col}(\mathbf{P})_{i}\) denotes the \(i\)th column of \(\mathbf{P}\). In addition, if the series is absolutely convergent, then it is also uniformly convergent._
Similar to periodic functions, the generalized Fourier series of a quasiperiodic function has the Parseval's identity
\[(f,f)_{\rm qp}=\mathcal{M}(|f|^{2})=\sum_{\mathbf{q}\in\Gamma}|f_{\mathbf{q}}|^{2}. \tag{2.8}\]
By Definition 2.1, a \(d\)-dimensional quasiperiodic function \(f(\mathbf{z})\) can be transformed into an \(n\)-dimensional periodic function \(F(\mathbf{x})\) by the projection matrix \(\mathbf{P}\). Using the Fourier series of periodic functions, one can obtain
\[F(\mathbf{x})=\sum_{\mathbf{k}\in\mathbb{Z}^{n}}F_{\mathbf{k}}e^{\mathrm{i}\langle\mathbf{k}, \mathbf{x}\rangle}, \tag{2.9}\]
where \(F_{\mathbf{k}}\) is the Fourier coefficient of the parent function and Eq. (2.9) is the Fourier series in the raised-dimensional space.
In Lemma 2.3 we describe the consistency of the generalized Fourier coefficients and the raised Fourier coefficients of quasiperiodic functions by invoking Birkhoff's ergodic theorem [35, 42, 34] (see [19] for a detailed proof).
**Lemma 2.3**.: _For a \(d\)-dimensional quasiperiodic function \(f(\mathbf{z})\) with parent function \(F(\mathbf{x})\), it holds \(f_{\mathbf{q}}=F_{\mathbf{k}}\) when \(\mathbf{q}=\mathbf{P}\mathbf{k}\)._
The consistency between Fourier coefficients of parent function and generalized Fourier coefficients of the quasiperiodic function makes the projection a reliable method to solve quasiperiodic problems by its corresponding periodic problem in higher dimensions. Theorem 2.1 presents the decay rate of generalized Fourier coefficients by combining Lemmas 2.1 and 2.3.
**Theorem 2.1**.: _Let \(f(\mathbf{z})\) be a \(d\)-dimensional quasiperiodic function with parent function \(F(\mathbf{x})\in C^{s}_{p}([0,T]^{n})\). Then there exists a positive constant \(C\) such that,_
\[|f_{\mathbf{q}}|\leq C|\mathbf{q}|^{-s}. \tag{2.10}\]
### Reduced projection method
The PM, proposed by Jiang and Zhang [20], serves as an accurate way to solve quasiperiodic eigenvalue problems. For a \(d\)-dimensional quasiperiodic problem, the PM transforms it into its corresponding periodic problem in \(n\)-dimensional space. Specifically, let \(\mathbf{x}=\mathbf{P}^{\intercal}\mathbf{z}\), where \(\mathbf{x}=(x_{1},x_{2},\ldots,x_{n})^{\intercal}\), one can transform Eq. (1.1) into
\[EU(\mathbf{x})=-\frac{1}{2}\sum_{i=1}^{d}\sum_{j,l=1}^{n}P_{ij}P_{il}\frac{\partial ^{2}U(\mathbf{x})}{\partial x_{j}\partial x_{l}}+V(\mathbf{x})U(\mathbf{x}). \tag{2.11}\]
Here \(U(\mathbf{x})\) and \(V(\mathbf{x})\) are the parent functions of the eigenfunction \(u(\mathbf{z})\) and potential \(v(\mathbf{z})\), respectively. It should be noted that the choice of \(\mathbf{P}\) is not unique. In this paper, we select those projection matrices that make the parent functions be \(2\pi\)-periodic, and this choice is unique.
The PM discretizes Eq. (2.11) by seeking an approximate solution
\[U_{N}(\mathbf{x})=\sum_{\mathbf{k}\in\Omega}U_{N,\mathbf{k}}e^{\mathrm{i}\langle\mathbf{k}, \mathbf{x}\rangle},\quad\Omega=\big{\{}\mathbf{k}\in\mathbb{Z}^{n}\big{|}\|\mathbf{k}\|_{ \infty}\leq N,\ \mathbf{k}\in\mathbb{Z}^{n}\big{\}}. \tag{2.12}\]
This leads to a system of equations for each frequency mode \(U_{N,\mathbf{k}}\),
\[EU_{N,\mathbf{k}}=\frac{1}{2}\sum_{i=1}^{d}\sum_{j,l=1}^{n}k_{j}k_{l}P_{ij}P_{il}U_{N,\mathbf{k}}+\{V(\mathbf{x})U_{N}(\mathbf{x})\}_{\mathbf{k}}, \tag{2.13}\]
where \(\{V(\mathbf{x})U_{N}(\mathbf{x})\}_{\mathbf{k}}\) is the \(\mathbf{k}\)th Fourier coefficient as defined in Eq. (2.1) and \(k_{j}\) is the \(j\)th component of \(\mathbf{k}\).
Let \(\mathbf{\hat{U}}\) denote the column vector whose components are the Fourier coefficients \(U_{N,\mathbf{k}}\); the discrete eigenvalue problem Eq. (2.13) can then be rewritten as the matrix eigenvalue problem \(\mathbf{H}\mathbf{\hat{U}}=E\mathbf{\hat{U}}\). In real computations, due to the large size of the dense matrix \(\mathbf{H}\), it is not practical to store its elements, and the eigenvalue problem is solved in a matrix-free manner. That is, for the matrix \(\mathbf{H}\), one defines its matrix-vector product function
\[\mathbf{H}\mathbf{f}=\mathbf{\hat{D}}\mathbf{f}+\mathrm{FFT}\big{(}V(\mathbf{x})\cdot\mathrm{IFFT} (\mathbf{f})\big{)}, \tag{2.14}\]
where \(\mathrm{FFT}(\cdot)\) and \(\mathrm{IFFT}(\cdot)\) denote the \(n\)-dimensional fast Fourier transform (FFT) and inverse fast Fourier transform (IFFT). Here \(\mathbf{\hat{D}}\) is a diagonal matrix such that for \(\mathbf{\hat{U}}_{m}=U_{N,\mathbf{k}}\), the \(m\)th diagonal element of \(\mathbf{\hat{D}}\) is
\[\hat{D}_{mm}=\frac{1}{2}\sum_{i=1}^{d}\sum_{j,l=1}^{n}k_{j}k_{l}P_{ij}P_{il}. \tag{2.15}\]
Since only the operation \(\mathbf{H}\mathbf{f}\) is invoked during the generation of basis vectors in the Krylov subspace, there is no need to store the dense matrix \(\mathbf{H}\) itself, thus reducing the storage cost from \(O(N^{2n})\) to \(O(N^{n})\). Then the eigenvalue problem (2.13) can be solved via the Krylov subspace iterative method in a matrix-free manner [41, 45].
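The matrix-free product (2.14) can be prototyped in a few lines. The following Python sketch is purely illustrative (the paper's own implementation uses MATLAB's eigs [40]); the grid size, names, and the potential, namely the quasiperiodic example \(v(z)=1/\big{(}1+(\cos z+\cos(\sqrt{5}z))^{2}\big{)}\) with \(\mathbf{P}=[\sqrt{5}\ \ 1]\) used later for validation, are assumptions of the sketch. The product is applied in its physical-space form, i.e. conjugated by the FFT, which has the same eigenvalues and keeps the discrete operator real symmetric so that scipy's eigsh can be called directly.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

N = 32                                           # Fourier modes per raised dimension
P = np.array([[np.sqrt(5.0), 1.0]])
k = np.fft.fftfreq(N, d=1.0 / N)                 # integer frequencies in FFT ordering
K1, K2 = np.meshgrid(k, k, indexing="ij")
Dhat = 0.5 * (P[0, 0] * K1 + P[0, 1] * K2) ** 2  # diagonal (2.15): (1/2)|P k|^2

x = 2.0 * np.pi * np.arange(N) / N
X1, X2 = np.meshgrid(x, x, indexing="ij")
V = 1.0 / (1.0 + (np.cos(X1) + np.cos(X2)) ** 2) # parent potential V(x1, x2)

def matvec(u):
    # physical-space form of (2.14): kinetic part applied via FFT, potential pointwise
    U = u.reshape(N, N)
    kinetic = np.fft.ifftn(Dhat * np.fft.fftn(U)).real
    return (kinetic + V * U).ravel()

H = LinearOperator((N * N, N * N), matvec=matvec, dtype=float)
E = eigsh(H, k=5, which="SA", return_eigenvectors=False)
print(np.sort(E))                                # a few smallest eigenvalues of (2.13)
```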
Though the PM is a powerful and accurate numerical method for solving quasiperiodic Schrodinger eigenvalue problems, it suffers from the "curse of dimensionality". As the dimension is raised, the degrees of freedom (DOF) required may become extremely large, making it computationally prohibitive and memory-intensive to solve the quasiperiodic eigenvalue problem. For three-dimensional quasiperiodic problems with projection matrix of size \(3\times 6\), the DOF to deal with is \(O(N^{6})\). This poses significant challenges in solving high-dimensional problems.
The reduced projection method is based on the fact that the generalized Fourier coefficients of the eigenfunctions decay exponentially along a fixed direction of \(\mathbf{P}\mathbf{k}\). In fact, given some mild restrictions on the quasiperiodic potential function \(v(z)\) in Eq. (1.1), the index set \(\Omega\) of Fourier expansion in Eq. (2.12) can be reduced to
\[\Omega_{R}=\left\{\mathbf{k}\in\mathbb{Z}^{n}\,\big{|}\,\|\mathbf{P}\mathbf{k}\|_{2}\leq D,\ \|\mathbf{k}\|_{\infty}\leq N\right\} \tag{2.16}\]
without sacrificing the accuracy of the approximation. Here, parameter \(D<N\) is a prescribed truncation constant. The method based on the reduced index set \(\Omega_{R}\) is the RPM, that is, the RPM seeks an approximate solution with the form
\[U_{N}(\mathbf{x})=\sum_{\mathbf{k}\in\Omega_{R}}U_{N,\mathbf{k}}e^{i(\mathbf{k},\mathbf{x})}. \tag{2.17}\]
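For concreteness, the reduced index set \(\Omega_{R}\) can be assembled directly from \(\mathbf{P}\), \(N\) and \(D\). The sketch below (illustrative sizes, FFT-ordered frequency grid, and names; not the paper's code) uses the same \(1\times 2\) projection matrix as in the previous sketch.

```python
import numpy as np

# Illustrative construction of Omega_R in Eq. (2.16) for a 1 x 2 projection matrix.
N, D = 32, 10
P = np.array([[np.sqrt(5.0), 1.0]])
k = np.fft.fftfreq(N, d=1.0 / N)                       # frequencies with |k|_inf <= N/2
K = np.stack(np.meshgrid(k, k, indexing="ij"), axis=-1).reshape(-1, 2)
q = K @ P.T                                            # projected frequencies q = P k
mask = np.linalg.norm(q, axis=1) <= D                  # keep modes with ||P k||_2 <= D
print("full DOF:", K.shape[0], " reduced DOF:", int(mask.sum()))
```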
Let us denote the generalized Fourier coefficients and Fourier coefficients of parent function of \(u(\mathbf{z})\) as \(u_{\mathbf{q}}\) and \(U_{\mathbf{k}}\), respectively. By Lemma 2.3, one has
\[u_{\mathbf{q}}=U_{\mathbf{k}},\ \ \text{when}\ \ \mathbf{q}=\mathbf{P}\mathbf{k}.\]
In the following theorem 2.2, we show the decay rate of the generalized Fourier coefficients \(u_{\mathbf{q}}\) of the eigenfunction \(u\) of the quasiperiodic eigenvalue problem (1.1), which justifies our RPM and plays a crucial role for the error estimate.
**Theorem 2.2**.: _Let \(u\) be the eigenfunction corresponding to eigenvalue \(E\) of the \(d\)-dimensional quasiperiodic Schrodinger eigenvalue problem (1.1), \(u_{\mathbf{q}}\) be the \(\mathbf{q}\)th generalized Fourier coefficient of \(u\), and \(v\), \(V\) be the quasiperiodic potential function and its parent function. Given integer \(\alpha\geq 3\), \(\mathcal{F}\{v\}\in L^{\beta}(\mathbb{R}^{d})\), \(V\in C_{p}^{d+\alpha-2}([0,T]^{n})\) with \(1/\beta+1/d>1\) and \(|\mathbf{q}|^{2}>4E\), there exists a constant \(C_{\alpha}\) depending on \(\|v\|\), \(\|\mathcal{F}\{v\}\|_{\beta}\) and \(d\) such that_
\[|u_{\mathbf{q}}|\leq C_{\alpha}|\mathbf{q}|^{-\alpha}. \tag{2.18}\]
Proof.: Without loss of generality, we set \(\|u\|=1\). Inserting the generalized Fourier series of \(u\) into Eq. (1.1) leads to
\[Eu_{\mathbf{q}}=\frac{1}{2}|\mathbf{q}|^{2}u_{\mathbf{q}}+\{vu\}_{\mathbf{q}}\,. \tag{2.19}\]
According to the properties of the generalized Fourier transform,
\[\{vu\}_{\mathbf{q}}=\mathcal{F}\{vu\}(\mathbf{q})=\mathcal{F}\{v\}(\mathbf{q})*\mathcal{F} \{u\}(\mathbf{q}), \tag{2.20}\]
where \(*\) denotes the convolution between two functions. By Holder's inequality, Parseval's identity and the fact that \(||u||=1\), one has
\[\big{|}E-|\mathbf{q}|^{2}/2\big{|}\,|u_{\mathbf{q}}|\leq|\mathcal{F}\{v\}*\mathcal{F} \{u\}|\leq\|\mathcal{F}\{v\}\|\cdot\|\mathcal{F}\{u\}\|=\|v\|\cdot\|u\|=\|v\|. \tag{2.21}\]
Since \(|\mathbf{q}|^{2}>4E\), it is direct to verify \(|E-|\mathbf{q}|^{2}/2|>|\mathbf{q}|^{2}/4\). Therefore,
\[|u_{\mathbf{q}}|\leq\frac{\|v\|}{|E-|\mathbf{q}|^{2}/2|}\leq 4\|v\|\cdot|\mathbf{q}|^{-2}. \tag{2.22}\]
By Parseval's identity and \(||u||=1\), one can obtain that for any \(\mathbf{q}\), \(|u_{\mathbf{q}}|\leq 1\). Therefore, define
\[g_{-2}(\mathbf{q})=\min\{1,4||v||\cdot|\mathbf{q}|^{-2}\}, \tag{2.23}\]
then one has \(|u_{\mathbf{q}}|\leq g_{-2}(\mathbf{q})\). Considering the convolution term, one has
\[|\mathcal{F}\{v\}(\mathbf{q})*\mathcal{F}\{u\}(\mathbf{q})|\leq\int_{\mathbb{R}^{n}}| \mathcal{F}\{v\}(\mathbf{q}-\mathbf{t})||\mathcal{F}\{u\}(\mathbf{t})|d\mathbf{t}\leq\int_{ \mathbb{R}^{n}}|\mathcal{F}\{v\}(\mathbf{q}-\mathbf{t})|g_{-2}(\mathbf{t})d\mathbf{t}. \tag{2.24}\]
The integral is split into two parts, that is,
\[\int_{\mathbb{R}^{n}}|\mathcal{F}\{v\}(\mathbf{q}-\mathbf{t})|g_{-2}(\mathbf{t})d\mathbf{t} =\int_{|\mathbf{t}|\geq|\mathbf{q}|/2}|\mathcal{F}\{v\}(\mathbf{q}-\mathbf{t})|g_{-2}(\mathbf{t})d \mathbf{t}+\int_{|\mathbf{t}|<|\mathbf{q}|/2}|\mathcal{F}\{v\}(\mathbf{q}-\mathbf{t})|g_{-2}(\mathbf{t })d\mathbf{t} \tag{2.25}\]
\[:=I_{1}+I_{2},\]
By the Holder's inequality with \(1/\beta+1/\gamma=1\),
\[I_{1}\leq 4||v||\left(\int_{|\mathbf{t}|\geq|\mathbf{q}|/2}|\mathcal{F}\{v\}(\mathbf{q}-\mathbf{t })|^{\beta}d\mathbf{t}\right)^{1/\beta}\left(\int_{|\mathbf{t}|\geq|\mathbf{q}|/2}|\mathbf{t}| ^{-2\gamma}d\mathbf{t}\right)^{1/\gamma}, \tag{2.26}\]
where one uses \(g_{-2}(\mathbf{t})\leq 4||v||\cdot|\mathbf{t}|^{-2}\). By using polar coordinates in a \(d\)-dimensional space, it can be observed that
\[\left(\int_{|\mathbf{t}|\geq|\mathbf{q}|/2}|\mathbf{t}|^{-2\gamma}d\mathbf{t}\right)^{1/\gamma }=\left(dV_{d}\int_{|\mathbf{t}|\geq|\mathbf{q}|/2}|\mathbf{t}|^{-2\gamma+d-1}d|\mathbf{t}| \right)^{1/\gamma}=\left(dV_{d}\right)^{1/\gamma}|\mathbf{q}/2|^{-2+d/\gamma}, \tag{2.27}\]
where \(V_{d}\) is the volume of the \(d\)-dimensional unit ball. Assume that \(1/\beta+1/d>1\). One obtains that
\[I_{1}\leq 4||v||\cdot(dV_{d})^{1/\gamma}||\mathcal{F}\{v\}||_{\beta}\cdot|\bm {q}/2|^{-2+d/\gamma}\leq 2||v||\cdot(dV_{d})^{1/\gamma}||\mathcal{F}\{v\}||_{ \beta}\cdot|\mathbf{q}|^{-1}. \tag{2.28}\]
By Theorem 2.1 and the fact that \(V\in C^{d+\alpha-2}([0,T]^{n})\), there exists a constant \(C_{-r}\) such that
\[|\mathcal{F}\{v\}(\mathbf{q}-\mathbf{t})|\leq C_{-r}|\mathbf{q}-\mathbf{t}|^{-r}, \tag{2.29}\]
where \(r\leq d+\alpha-2\). Then the integral \(I_{2}\) can be bounded by
\[I_{2}\leq C_{-r}\int_{|\mathbf{t}|<|\mathbf{q}|/2}|\mathbf{q}-\mathbf{t}|^{-r}d\mathbf{t}\leq 2^{-r+d }C_{-r}V_{d}|\mathbf{q}|^{-r+d}, \tag{2.30}\]
where \(g_{-2}(\mathbf{t})\) is bounded by \(1\). Here we take \(r=d+1\), and
\[\left|E-|\mathbf{q}|^{2}/2\right|\left|u_{\mathbf{q}}\right|\leq S_{-3}|\mathbf{q}|^{-1}, \tag{2.31}\]
where \(S_{-3}=2||v||(dV_{d})^{1/\gamma}\cdot||\mathcal{F}\{v\}||_{\beta}+2^{-1}C_{-d -1}V_{d}\). Then combining \(|E-|\mathbf{q}|^{2}/2|>|\mathbf{q}|^{2}/4\), one has
\[|u_{\mathbf{q}}|\leq 4||v||\cdot S_{-3}|\mathbf{q}|^{-3}. \tag{2.32}\]
Following the same procedure, we can obtain that for any \(\alpha\geq 3\), there exists a constant \(S_{-\alpha}\) depending on \(||v||\), \(||\mathcal{F}\{v\}||_{\beta}\) and \(d\) such that
\[|u_{\mathbf{q}}|\leq 4||v||\cdot S_{-\alpha}|\mathbf{q}|^{-\alpha}. \tag{2.33}\]
This ends the proof.
We employ a 1D quasiperiodic problem of the form (1.1) as an example to validate the theoretical results of Theorem 2.2. Let \(v(z)=E_{0}/\big{(}1+(\cos(z)+\cos(\sqrt{5}z))^{2}\big{)}\) and take the projection matrix \(\mathbf{P}=[\sqrt{5}\ \ 1]\). The PM is employed to solve this problem and to depict the generalized Fourier coefficients of the 1st and 50th eigenfunctions in the raised frequency domain. As shown in Figure 2.1 (a), the generalized Fourier coefficients of both the first eigenfunction (the red line) and the 50th eigenfunction (the blue line) decay exponentially. We then adopt the RPM to solve the same problem. In order to quantify the truncation error between the PM and the RPM, we define
\[\mathrm{Err}(D)=\sum_{|\mathbf{k}|>D,\mathbf{k}\in\Omega}|U_{\mathbf{k}}|^{2}. \tag{2.34}\]
Figure 2.1 (b) depicts the exponential decay of \(\mathrm{Err}(D)\) with respect to \(D\). For \(D\approx 30\), the method reaches machine precision and the reduction error becomes negligible, which corresponds to a reduction of more than 80% of the DOF.
The results in Fig. 2.1 demonstrate that the RPM can effectively reduce the DOF without sacrificing the accuracy of the approximation. For an efficient implementation of the RPM, one first derives the index set \(\Omega_{R}\) from the prescribed parameters \(N\) and \(D\), as well as the projection matrix \(\mathbf{P}\) of the \(d\)-dimensional quasiperiodic Schrodinger eigenvalue problem. With the Fourier expansion (2.17) on the reduced basis set, the matrix-vector function \(\mathbf{Hf}\) is then straightforwardly implemented. When computing \(\mathrm{FFT}\big{(}V(\mathbf{x})\cdot\mathrm{IFFT}(\mathbf{f})\big{)}\), it is advisable to zero-fill the coefficients of \(\mathbf{f}\) removed by the RPM, thereby enabling the use of the FFT. One then takes a random starting vector \(\mathbf{b}\in\mathbb{R}^{|\Omega_{R}|}\) and generates the Krylov subspace \(K_{M}=\mathrm{span}\{\mathbf{b},\mathbf{Hb},\cdots,\mathbf{H}^{M-1}\mathbf{b}\}\). The orthonormal basis \(\mathbf{Q}_{M}=(\mathbf{q}_{1},\mathbf{q}_{2},\cdots,\mathbf{q}_{M})\) of \(K_{M}\) can be generated by the implicitly restarted Arnoldi method [25]. One can then form the Hessenberg matrix \(\mathbf{H}_{M}=\mathbf{Q}_{M}^{\intercal}\mathbf{HQ}_{M}=\mathbf{Q}_{M}^{\intercal}(\mathbf{H}\mathbf{q}_{1},\mathbf{H}\mathbf{q}_{2},\cdots,\mathbf{H}\mathbf{q}_{M})\) and solve for its eigenpairs \(\{(E_{m},u_{m})\}_{m=1}^{M}\) by the QR algorithm. Detailed procedures of the RPM are summarized in Algorithm 1.
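The zero-fill step can be sketched as follows (the helper name, array layout and arguments are illustrative, not part of the paper): the reduced coefficient vector is scattered into the full coefficient tensor, the FFT-based product (2.14) is applied, and the result is gathered back onto \(\Omega_{R}\).

```python
import numpy as np

# Sketch of the zero-fill step: `mask` is the flat Boolean selector of Omega_R built
# as in the previous sketch, Dhat is the diagonal (2.15), and V is the parent potential.
def apply_H_reduced(f_reduced, mask, Dhat, V, N):
    full = np.zeros(N * N, dtype=complex)
    full[mask] = f_reduced                     # zero-fill the modes removed by the RPM
    Fhat = full.reshape(N, N)
    u = np.fft.ifftn(Fhat)                     # back to physical space
    Hf = Dhat * Fhat + np.fft.fftn(V * u)      # Eq. (2.14) on the full grid
    return Hf.ravel()[mask]                    # restrict the result back to Omega_R
```

Wrapping such a function in a linear operator of size \(|\Omega_{R}|\times|\Omega_{R}|\) and handing it to an Arnoldi-type eigensolver then reproduces, in outline, the matrix-free workflow of Algorithm 1.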
By the RPM, the DOF and the complexity of the eigenvalue solver are significantly reduced. Specifically, the DOF for approximating a \(d\)-dimensional quasiperiodic system with a \(d\times n\) projection matrix using the PM is reduced from \(O(N^{n})\) to \(O(N^{n-d}D^{d})\). Correspondingly, the computational complexity of the proposed algorithm for solving the first \(k\) eigenpairs using the Krylov subspace method decreases from \(O(kN^{2n})\) to \(O(kN^{2(n-d)}D^{2d})\) [26, 46]. Due to the fast decay rate of the generalized Fourier coefficients along \(\mathbf{P}\mathbf{k}\), the RPM can obtain reliable numerical results with much fewer DOF, thereby mitigating the curse of dimensionality, especially for high-dimensional problems.
Figure 2.1. The generalized Fourier coefficient modulus of eigenfunctions for 1D quasiperiodic potential. (a) The generalized Fourier coefficient modulus of eigenfunctions as function of \(\mathbf{q}\). (b) The error \(\mathrm{Err}(D)\) as function of \(D\). In both panels, \(E_{0}=1\) and \(N=180\).
**Remark 2.1**.: _The computational complexity of computing FFT\((V(\mathbf{x})\cdot\text{IFFT}(\mathbf{f}))\) is \(O(N^{n}\log N)\), which is much less than that of solving the eigenpairs, \(O(kN^{2(n-d)}D^{2d})\). Therefore, the FFT and IFFT are not the primary contributors to the complexity of the RPM._
Moreover, some operators encountered in practical applications exhibit eigenvalues that are extremely sensitive to perturbations, which may lead to spurious eigenmodes in the numerical approximation [36]. The RPM can be combined with classical techniques such as the spectral indicator method [6, 5] to deal with challenging quasiperiodic eigenvalue problems with high-contrast quasiperiodic potentials and to suppress spurious eigenmodes. The details of the spectral indicator method are given in Appendix A.
### Error estimate
In what follows, we give a rigorous error estimate of the RPM for the quasiperiodic eigenvalue problem Eq. (1.1). Let \(K:X\to X\) denote a compact operator, where the space \(X\) is equipped with the inner product \((\cdot,\cdot)\) and its associated norm \(||\cdot||\). Let \(S_{N}\) be a subspace of \(X\) and let the approximate operator \(K_{N}\) map \(S_{N}\) to \(S_{N}\). We define the two errors as
\[\epsilon_{N}=\epsilon_{N}(E)=\sup_{u\in M(E)}\inf_{\chi\in S_{N}}||u-\chi||, \hskip 14.226378pt\epsilon_{N}^{*}=\epsilon_{N}^{*}(E)=\sup_{v\in M^{*}(E)}\inf _{\chi\in S_{N}}||v-\chi||,\]
where \(M(E)\) is the set of all normal eigenvectors of \(K\) corresponding to the eigenvalue \(E\) with the corresponding algebraic multiplicity being \(m\). Here \(K^{*}\) is the dual operator of \(K\) and \(M^{*}(E)\) is the set of all normal eigenvectors of \(K^{*}\) corresponding to approximating eigenvalues \(\mathcal{E}_{j},j=1,\ldots,m\). For any \(s\in\mathbb{N}^{*}\), let the \(s\)-derivative Sobolev space on \(T\) be \(H^{s}(T)=\{F\in L^{2}(T),||F||_{s}<\infty\}\), where \(T\) is the period of the \(d\)-dimensional period function \(F\). As usual, the norm and the semi-norm equipped on this space reads
\[||F||_{s}=\left(\sum_{\mathbf{k}\in\mathbb{Z}^{n}}(1+|\mathbf{k}|^{2s})|F_{\mathbf{k}}|^{2 }\right)^{1/2},\hskip 14.226378pt|F|_{s}=\left(\sum_{\mathbf{k}\in\mathbb{Z}^{n}}| \mathbf{k}|^{2s}|F_{\mathbf{k}}|^{2}\right)^{1/2}. \tag{2.35}\]
We further introduce the operator \(\varphi_{\mathbf{P}}\), which maps a quasiperiodic function \(f\) to its parent function \(F\), i.e. \(\varphi_{\mathbf{P}}f=F\). In fact, the mapping \(\varphi_{\mathbf{P}}\) is an isomorphism [19]. In addition, the operators \(\mathcal{P}\) and \(\mathcal{Q}\) acting on a periodic function \(F\) denote the partial sums
\[\mathcal{P}F=\sum_{\mathbf{k}\in\Omega}F_{\mathbf{k}}e^{{\rm i}\langle\mathbf{k},\mathbf{x} \rangle},\hskip 14.226378pt\mathcal{Q}F=\sum_{\mathbf{k}\in\Omega_{R}}F_{\mathbf{k}}e^{{ \rm i}\langle\mathbf{k},\mathbf{x}\rangle}, \tag{2.36}\]
where \(\Omega\) and \(\Omega_{R}\) are defined in Eqs. (2.12) and (2.16). In order to obtain the main result, one needs the following lemmas (see [1] for Lemma 2.4, and [39] for Lemma 2.5).
**Lemma 2.4**.: _There exists a constant \(C_{0}\) such that_
\[|E-\mathcal{E}_{j}(N)|\leq C_{0}(\epsilon_{N}\epsilon_{N}^{*})^{1/\alpha},j=1,2, \ldots,m. \tag{2.37}\]
_Here, \(E\) represents the eigenvalue of operator \(K\), and \(\mathcal{E}_{j}(N)\) denotes the corresponding eigenvalues of the approximate operator \(K_{N}\), and \(\alpha\) is the smallest nonnegative integer such that the null-spaces of \((E-K)^{\alpha}\) and \((E-K)^{\alpha+1}\) are equal._
**Lemma 2.5**.: _For any periodic function \(F\in H^{s}(T),s\in\mathbb{N}^{*}\), and \(0\leq\mu\leq s\), the following estimate for \(\mathcal{P}F\) holds_
\[||\mathcal{P}F-F||_{\mu}\leq N^{\mu-s}|F|_{s}. \tag{2.38}\]
_In addition, for any periodic function \(F\in H^{\nu}(T)\) with \(\nu>1/2\), there exists a constant \(C^{\prime}\) depending on \(\nu\) such that_
\[||\mathcal{P}F-F||_{\infty}\leq C^{\prime}N^{1/2-\nu}|F|_{\nu}. \tag{2.39}\]
The upper bound of the approximation under different norms of the operator \(\mathcal{Q}\) to the eigenspace of Eq. (1.1) is given in Theorem 2.3.
**Theorem 2.3**.: _Suppose that \(u\) is the solution of the quasiperiodic Schrodinger eigenvalue problem Eq. (1.1). Let \(U\) be the parent function of \(u\). If \(U\in H^{s}(T)\) with \(s\in\mathbb{N}^{+}\) and \(0\leq\mu\leq s\), there exists a constant \(C\) depending on \(\boldsymbol{P}\) such that_
\[||\mathcal{Q}(\varphi_{\boldsymbol{P}}u)-\varphi_{\boldsymbol{P}}u||_{\mu} \leq(N^{\mu-s}+CD^{\mu-s})|\varphi_{\boldsymbol{P}}u|_{s}. \tag{2.40}\]
_If \(U\in H^{\nu}(T)\) with \(\nu>1/2\), there exist constants \(C_{1}\) and \(C_{2}\) depending on \(\boldsymbol{P}\) and \(\nu\) such that_
\[||\mathcal{Q}(\varphi_{\boldsymbol{P}}u)-\varphi_{\boldsymbol{P}}u||_{\infty} \leq(C_{1}N^{1/2-\nu}+C_{2}D^{1/2-\nu})|\varphi_{\boldsymbol{P}}u|_{\nu}. \tag{2.41}\]
Proof.: By Lemma 2.5 and the isomorphism mapping between quasiperiodic function space and the higher dimensional periodic function space [19], we can obtain that for a quasiperiodic function \(u\) with the parent function \(U\), one has \(\varphi_{\boldsymbol{P}}u=U\). If \(U\in H^{s}(T)\) with \(s\in\mathbb{N}^{*}\) and \(0\leq\mu\leq s\), one has
\[||\mathcal{P}U-U||_{\mu}\leq N^{\mu-s}|U|_{s}. \tag{2.42}\]
In addition, if \(U\in H^{\nu}(T)\) with \(\nu>1/2\), there exists a constant \(C^{\prime}\) depending on \(\nu\) satisfying
\[||\mathcal{P}U-U||_{\infty}\leq C^{\prime}N^{1/2-\nu}|U|_{\nu}. \tag{2.43}\]
By a direct decomposition \(\mathcal{Q}U-U=(\mathcal{Q}U-\mathcal{P}U)+(\mathcal{P}U-U)\) and the triangle inequality, one has
\[||\mathcal{Q}(\varphi_{\boldsymbol{P}}u)-\varphi_{\boldsymbol{P}}u||_{\mu} \leq||\mathcal{Q}U-\mathcal{P}U||_{\mu}+||\mathcal{P}U-U||_{\mu}, \tag{2.44}\]
\[||\mathcal{Q}(\varphi_{\boldsymbol{P}}u)-\varphi_{\boldsymbol{P}}u||_{\infty} \leq||\mathcal{Q}U-\mathcal{P}U||_{\infty}+||\mathcal{P}U-U||_{\infty}. \tag{2.45}\]
The second terms in Eqs. (2.44) and (2.45) have been bounded by Eq. (2.42) and Eq. (2.43), so only the first terms remain to be estimated. Consider the \(\mu\)-norm case. For any \(\boldsymbol{k}\in\Omega/\Omega_{R}\), \(||\boldsymbol{P}\boldsymbol{k}||_{2}>D\) holds, so that
\[||\mathcal{Q}U-\mathcal{P}U||_{\mu}^{2}=\sum_{\boldsymbol{k}\in\Omega/\Omega _{R}}(1+|\boldsymbol{k}|^{2\mu})|U_{\boldsymbol{k}}|^{2}\leq D^{-2s+2\mu}\sum _{\boldsymbol{k}\in\Omega/\Omega_{R}}(1+|\boldsymbol{k}|^{2\mu})|U_{ \boldsymbol{k}}|^{2}|\boldsymbol{P}\boldsymbol{k}|^{2s-2\mu}, \tag{2.46}\]
where \(U_{\boldsymbol{k}}\) is the Fourier coefficient of \(U\) with frequency \(\boldsymbol{k}\). Then, one has,
\[\begin{split}||\mathcal{Q}U-\mathcal{P}U||_{\mu}^{2}& \leq D^{-2s+2\mu}|\boldsymbol{P}|^{2s-2\mu}\sum_{\boldsymbol{k}\in\Omega/ \Omega_{R}}(1+|\boldsymbol{k}|^{2\mu})|U_{\boldsymbol{k}}|^{2}|\boldsymbol{k}| ^{2s-2\mu}\\ &\lesssim D^{-2s+2\mu}|\boldsymbol{P}|^{2s-2\mu}|U|_{s}^{2},\end{split} \tag{2.47}\]
where \(A\lesssim B\) denotes that \(A\) is less than or similar to \(B\). Hence, there exists a constant \(C_{3}\) depending on \(\boldsymbol{P}\) such that
\[||\mathcal{Q}(\varphi_{\boldsymbol{P}}u)-\mathcal{P}(\varphi_{\boldsymbol{P}}u)||_{\mu}\leq C_{3}D^{\mu-s}|\varphi_{\boldsymbol{P}}u|_{s}, \tag{2.48}\]
which directly leads to Eq. (2.40) when combined with Eq. (2.44).
For error estimate under the infinite norm, one can use the Cauchy-Schwarz inequality to obtain
\[\begin{split}||\mathcal{Q}U-\mathcal{P}U||_{\infty}& \leq\sum_{\mathbf{k}\in\Omega/\Omega_{R}}|U_{\mathbf{k}}|\leq\left(\sum_{\mathbf{k}\in \Omega/\Omega_{R}}|\mathbf{k}|^{-2\nu}\right)^{1/2}\left(\sum_{\mathbf{k}\in\Omega/ \Omega_{R}}|\mathbf{k}|^{2\nu}|U_{\mathbf{k}}|^{2}\right)^{1/2}\\ &\leq\left(\sum_{\mathbf{k}\in\Omega/\Omega_{R}}|\mathbf{P}|^{2\nu}|\mathbf{ q}|^{-2\nu}\right)^{1/2}|U|_{\nu}.\end{split} \tag{2.49}\]
For \(\nu>1/2\),
\[\sum_{\mathbf{k}\in\Omega/\Omega_{R}}|\mathbf{q}|^{-2\nu}\leq\int_{D}^{\infty}x^{-2\nu }dx\leq\frac{1}{2\nu-1}D^{1-2\nu}. \tag{2.50}\]
Therefore,
\[\begin{split}||\mathcal{Q}(\varphi_{\mathbf{P}}u)-\mathcal{P}( \varphi_{\mathbf{P}}u)||_{\infty}&\leq\left(\int_{D}^{\infty}x^{-2 \nu}dx\right)^{1/2}|\mathbf{P}|^{\nu}|\varphi_{\mathbf{P}}u|_{\nu}\leq\sqrt{\frac{1}{2 \nu-1}}D^{1/2-\nu}|\mathbf{P}|^{\nu}|\varphi_{\mathbf{P}}u|_{\nu}\\ &:=C_{4}D^{1/2-\nu}|\varphi_{\mathbf{P}}u|_{\nu},\end{split} \tag{2.51}\]
where \(C_{4}\) is a constant depending on \(\nu\) and \(|\mathbf{P}|\). This ends the proof.
Combining Lemma 2.4 and Theorem 2.3, one obtains Corollary 2.1 for the error estimate of the RPM with the basis set \(\Omega_{R}\).
**Corollary 2.1**.: _Let \(E\) represent the eigenvalue of the Schrodinger operator, and let \(\mathcal{E}_{j}(N,D)\) denote the corresponding eigenvalues of the RPM. The error of \(E\) is bounded under the \(\mu\)-norm by_
\[|E-\mathcal{E}_{j}(N,D)|\leq C_{0}[(N^{\mu-s}+CD^{\mu-s})|\varphi_{\mathbf{P}}u|_{ s}]^{2/\alpha}, \tag{2.52}\]
_and under the infinite norm by_
\[|E-\mathcal{E}_{j}(N,D)|\leq C_{0}[(N^{1/2-\nu}+CD^{1/2-\nu})|\varphi_{\mathbf{P}} u|_{\nu}]^{2/\alpha}, \tag{2.53}\]
_where \(C\) and \(C_{0}\) are constants._
**Remark 2.2**.: _In Corollary 2.1, it can be observed that the eigenvalue error of RPM exhibits the same decay order with respect to both \(N\) and \(D\). This implies that, if the potential function \(v\) has sufficiently good regularity, the error of the RPM can achieve exponential decay with respect to both \(N\) and \(D\). Consequently, in practical applications, a smaller value of \(D\) can also ensure the accuracy of numerical results._
Corollary 2.1 establishes a rigorous theoretical foundation for solving the quasiperiodic Schrodinger eigenvalue problem using the RPM. Although increasing the dimension significantly enlarges the DOF, the distinctive properties of the Fourier coefficients enable the resolution of larger-scale problems with a noticeably reduced number of the DOF by the RPM.
## 3. Numerical examples
We present numerical results to demonstrate the effectiveness of the RPM. Specifically, we apply the algorithm to quasiperiodic Schrodinger eigenvalue problems in 1D and 2D spaces and assess the quality of the resulting eigenvalues and eigenfunctions, as well as the CPU time. A matrix-free preconditioned Krylov subspace method [30, 23, 37] is employed, which requires only the matrix-vector product to be stored at each iteration. Implementation of this method is facilitated by the function eigs in Matlab [40]. In these examples, we compare the RPM with the PM, which shows the accuracy and efficiency of the RPM. The calculations presented in this section are executed using Matlab code on an Intel\({}^{\rm TM}\) Core processor with a clock rate of 2.50 GHz and 32 GB of memory.
### 1D example
We first examine the performance of the RPM for the 1D case. To be specific, we adopt the potential function in Eq. (1.1) to be
\[v_{1}(z)=\frac{E_{0}}{\big{[}\cos\big{(}2\cos(\theta/2)z\big{)}+\cos\big{(}2\sin (\theta/2)z\big{)}\big{]}^{2}+1}, \tag{3.1}\]
with \(\theta=\pi/6\). The projection matrix is \(\mathbf{P}=[2\cos(\theta/2),\ \ 2\sin(\theta/2)]\).
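For reference, the corresponding \(2\pi\)-periodic parent of (3.1) on the raised torus can be tabulated in a few lines; \(\theta\) and \(E_{0}\) follow the text, while the grid size and names are illustrative. The resulting array can be dropped into the matrix-free sketch of Section 2.2 as the potential, together with this projection matrix.

```python
import numpy as np

# Parent potential of (3.1) on the 2*pi-periodic torus grid (grid size illustrative).
theta, E0, N = np.pi / 6.0, 1.0, 64
x = 2.0 * np.pi * np.arange(N) / N
X1, X2 = np.meshgrid(x, x, indexing="ij")
V1 = E0 / ((np.cos(X1) + np.cos(X2)) ** 2 + 1.0)
P = np.array([[2.0 * np.cos(theta / 2.0), 2.0 * np.sin(theta / 2.0)]])
print(V1.min(), V1.max())                        # range of the potential on the torus
```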
We take \(E_{0}=1\). We fix the number of Fourier expansions to be \(N=180\) and depict in Figure 3.1 the DOF and condition numbers of the RPM against the truncation parameter \(D\). It can be observed clearly that both the DOF and condition numbers decrease rapidly with the decrease of \(D\). For comparison, the DOF of the original PM is \(N^{2}=32400\), much bigger than that of the RPM for small \(D\). Thus, a small \(D\) not only leads to a matrix eigenvalue system of much smaller size, but also reduces the number of iterations to converge. These observations highlight the potential of using the RPM to solve high-dimensional quasiperiodic eigenvalue problems.
To demonstrate the exceptional accuracy and rapid convergence of the RPM approach, we present the error plots in Figure 3.2 for the potential function with \(E_{0}=1\). The "exact" eigenvalues and eigenfunctions are determined using the numerical results obtained from the PM when \(N=300\). The error of eigenvalues, \(\varepsilon\), is measured by the maximum error of the first five eigenvalues. The error of the first normalized eigenfunction is also measured by the \(L^{2}\)-norm on the interval \([0,1]\), which is denoted by \(\delta\). Fig. 3.2 illustrates the convergence with the increase of \(N\) for \(D=15,20\) and \(25\), and the convergence with the increase of \(D\) for \(N=20\) and \(30\), characterized by both \(\varepsilon\) and \(\delta\). Panels (a,b) illustrate that both \(\varepsilon\) and \(\delta\) exhibit an exponential decrease as \(N\) increases, eventually attaining a fixed value. It is notable that the magnitude of this fixed value diminishes with the increase of \(D\). For relatively small values of \(N\) and \(D\) (e.g. \(N=30\) and \(D=20\)), \(\varepsilon\) is already smaller than \(10^{-10}\) and \(\delta\) is smaller than \(10^{-8}\), demonstrating the high accuracy of the RPM. Panels (c,d) exhibit exponential decays with the increase of \(D\), which is in agreement with the error analysis. When \(D\) is small (\(D\leq 15\)), the error curves of \(N=20\) and \(30\) almost overlap, indicating that the error mainly comes from the basis reduction. When \(D\) is large, the error curves of the two cases have a significant difference, indicating that the error is mainly caused by the PM part. Overall, one can observe that the high accuracy of the results is retained in spite of a significant reduction in the number of basis functions.
Figure 3.1. The DOF (a) and the condition number (b) as function of \(D\) using the RPM in one dimension with \(N=180\). Correspondingly, the DOF of the PM is \(N^{2}=32400\).
In Table 3.1, we display the DOF in the RPM and the CPU time for the 1D system with \(E_{0}=1\) for \(N=50,100\) and \(150\) with \(D\) increasing from \(10\) to \(50\). The DOF increases linearly with \(D\). Theoretically, the RPM of 1D systems has complexity \(O(D^{2})\) for given \(N\), and \(O(N^{2})\) for given \(D\). Correspondingly, the complexity of the original PM is \(O(N^{4})\). Moreover, the condition number of the RPM is much smaller than the PM, as shown in Fig. 3.1. The results of the CPU time validate the complexity analysis. We have shown that a small \(D\) can achieve high accuracy. At \(N=50\), setting \(D=20\) has error as small as \(10^{-10}\). In this case, the CPU time for the RPM is \(1.46\) seconds, \(11.6\) times faster than that of the original PM. The reduction for large \(N\) is more significant. For \(N=150\), the speedup with \(D=20\) reaches \(317.0\) times. Correspondingly, with \(N=50,100\) and \(150\), the DOFs of the original PM for \(D=20\) are \(2.4,4.8\) and \(7.2\) times greater than those of the RPM, respectively. These results clearly demonstrate the attractive performance of the RPM.
Figure 3.3 depicts the error of the normalized first eigenfunction on the interval \(z\in[0,1]\). We take \(D=25\) for \(N=20,40\) and \(60\), and calculate the absolute error for different \(E_{0}=1,2,4\) and \(8\), where the "exact" eigenfunctions are generated using the numerical results of the PM with \(N=300\). One can observe that the error converges with the increase of \(N\). With the increase of \(E_{0}\), the error of the RPM increases. This is because \(E_{0}\) describes the optical response strength in the photorefractive crystal [43]. For a large \(E_{0}\), the eigenfunction can become localized, leading to an obvious singularity. The results in panels (c,d) illustrate that the RPM retains high accuracy with a small \(D\) when \(N=60\), demonstrating that the RPM is efficient for simulating challenging problems such as the localization-delocalization transition in photonic moire lattices.
Figure 3.2. Maximum error of eigenvalues \(\varepsilon\) and the \(L^{2}\)-error of the first eigenfunction \(\delta\). (a,b): Error as function of \(N\) for different \(D\). (c,d): Error as function of \(D\) for different \(N\).
### 2D example
Consider a 2D example with the potential function taking
\[v_{2}(z_{1},z_{2})=\frac{E_{0}}{(\cos z_{1}\cos z_{2}+\cos(\sqrt{5}z_{1})\cos( \sqrt{5}z_{2}))^{2}+1}. \tag{3.2}\]
This potential Eq. (3.2) possesses the same structure as 2D moire lattices [43, 12], making it applicable to simulations of photonic lattices. Correspondingly, the projection matrix is given by,
\[\boldsymbol{P}=\begin{bmatrix}1&0&\sqrt{5}&0\\ 0&1&0&\sqrt{5}\end{bmatrix}. \tag{3.3}\]
In the calculation, we take \(\theta=\pi/6\).
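Assuming the natural four-dimensional \(2\pi\)-periodic parent of (3.2), the relation \(v_{2}(\mathbf{z})=V_{2}(\mathbf{P}^{\intercal}\mathbf{z})\) can be checked numerically with a short sketch (the sample point, \(E_{0}\) value, and names are illustrative):

```python
import numpy as np

# The 2D moire-type potential (3.2) as the restriction of a 4D periodic parent
# along the projection matrix (3.3); E0 and the sample point are illustrative.
E0 = 1.0
P = np.array([[1.0, 0.0, np.sqrt(5.0), 0.0],
              [0.0, 1.0, 0.0, np.sqrt(5.0)]])

def V2(x):                                     # assumed parent potential in 4D
    return E0 / ((np.cos(x[0]) * np.cos(x[1]) + np.cos(x[2]) * np.cos(x[3])) ** 2 + 1.0)

def v2(z):                                     # quasiperiodic potential, v2(z) = V2(P^T z)
    return V2(P.T @ np.asarray(z))

z = np.array([0.7, -1.3])
direct = E0 / ((np.cos(z[0]) * np.cos(z[1])
                + np.cos(np.sqrt(5.0) * z[0]) * np.cos(np.sqrt(5.0) * z[1])) ** 2 + 1.0)
print(np.isclose(v2(z), direct))               # True
```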
We first calculate the generalized Fourier coefficients of the eigenfunctions of the system. We set \(E_{0}=1\) and \(N=30\). In Figure 3.4, we show the modulus of the coefficients for the 1st and 4th eigenfunctions in the \(\boldsymbol{q}\) space, calculated by the PM. The data are presented as base-10 logarithms. One can observe the exponential decay of the generalized Fourier coefficients for both eigenfunctions. In each panel, there is only one peak at the origin of the \(\boldsymbol{q}\) space, and far from the origin the contributions of the Fourier modes are insignificant. Table 3.2 presents the error, measured by \(\operatorname{Err}(D)\) in Eq. (2.34), of the first and fourth eigenfunctions as a function of the truncation constant \(D\) for \(N=30\). Again, one can observe rapid decays with respect to \(D\) for both cases. These results are similar to the 1D case and demonstrate that the approximation in the reduced space can be of high accuracy for the eigenproblem.
Figure 3.5 presents the DOF and condition number of the RPM as a function of \(D\) with the same setup: \(E_{0}=1\) and \(N=30\). Again, both the DOF and condition number increase rapidly with the increase of \(D\). Although \(N=30\) is not large, the DOF of the entire system in the raised 4D space, \(N^{4}=810000\), is very large. From the results, we can see that the use of a small \(D\) can significantly reduce the computational cost, reducing not only the size of the matrix eigenvalue problem but also the number of iterations of the implicitly restarted Arnoldi solver.
We then study the accuracy and convergence of the RPM with \(E_{0}=1\), and the results are presented in Figure 3.6. In the calculations, the "exact" eigenvalues and eigenfunctions are obtained from the numerical results of the PM with \(N=32\). Two errors are measured: \(\varepsilon\) represents the absolute error of the first eigenvalue, and \(\delta\) represents the error of the first eigenfunction in the \(L^{2}\) norm on the interval \([0,1]^{2}\).
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{\(N=50\)} & \multicolumn{2}{c}{\(N=100\)} & \multicolumn{2}{c}{\(N=150\)} \\ \cline{2-7} \(D\) & DOF & CPU time (s) & DOF & CPU time (s) & DOF & CPU time (s) \\ \hline
50 & 2373 & 9.382 & 5177 & 35.500 & 7766 & 123.172 \\
45 & 2236 & 8.389 & 4659 & 21.791 & 6990 & 85.054 \\
40 & 2048 & 5.690 & 4141 & 17.595 & 6210 & 61.924 \\
35 & 1811 & 4.433 & 3623 & 11.190 & 5434 & 42.748 \\
30 & 1552 & 3.110 & 3106 & 9.733 & 4658 & 30.751 \\
25 & 1295 & 2.266 & 2587 & 5.404 & 3881 & 18.599 \\
20 & 1034 & 1.460 & 2069 & 3.529 & 3106 & 10.029 \\
15 & 777 & 1.000 & 1551 & 2.456 & 2328 & 6.542 \\
10 & 517 & 0.452 & 1035 & 1.050 & 1552 & 2.800 \\ \hline PM & 2500 & 17.010 & 10000 & 320.604 & 22500 & 3179.439 \\ \hline \hline \end{tabular}
\end{table}
Table 3.1. The DOF and CPU time of the RPM for different \(N\) and \(D\)
Figure 3.3. Error of the normalized first eigenfunctions obtained by the RPM in interval \([0,1]\) for different \(E_{0}\). In each panel, \(D=25\) and three different \(N\) are calculated.
\begin{table}
\begin{tabular}{c c c c c c} \hline \hline \multirow{2}{*}{\(D\)} & \multicolumn{2}{c}{Err(\(D\))} & \multicolumn{2}{c}{Err(\(D\))} \\ \cline{2-6} & 1st & 4th & \(D\) & 1st & 4th \\ \hline
5 & 1.54E-05 & 2.19E-05 & 35 & 9.93E-15 & 1.58E-13 \\
10 & 1.19E-07 & 1.16E-07 & 40 & 6.36E-16 & 7.36E-14 \\
15 & 2.66E-09 & 2.50E-09 & 45 & 4.76E-17 & 3.40E-14 \\
20 & 6.40E-11 & 7.30E-11 & 50 & 3.79E-18 & 1.52E-14 \\
25 & 2.91E-12 & 3.20E-12 & 55 & 2.88E-19 & 2.68E-15 \\
30 & 1.61E-13 & 3.71E-13 & 60 & 2.23E-20 & 1.73E-16 \\ \hline \hline \end{tabular}
\end{table}
Table 3.2. Error of the 1st and 4th eigenfunctions as function of \(D\) in two dimensions
coefficient \(D=10,20\) and \(30\). Both \(\varepsilon\) and \(\delta\) decay exponentially with increasing \(N\), eventually converging to a fixed value which depends on \(D\). Similar to the 1D case, small values of \(D\) yield highly accurate results. For \(D=10\) (with a slightly bigger \(N\)), the RPM can achieve accuracy at the level of \(10^{-5}\) in both the eigenvalue and eigenfunction calculations. Panels (c,d) display the convergence with the increase of \(D\) for \(N=20\) and \(28\). One can observe the exponential decay with \(D\) at the beginning, as expected from the previous error analysis. For small \(D\), the error curves for \(N=20\) and \(28\) almost overlap, suggesting that the reduction of the basis space dominates the error. With a moderate \(D\), the two curves in both panels differ significantly, indicating that the error at this point mainly comes from the PM part. Overall, even though the error at larger \(D\) is much smaller than the error at \(D=10\),
Figure 3.4. The generalized Fourier coefficients (modulus) of eigenfunctions in the \(\boldsymbol{q}\) space for \(E_{0}=1\) and \(N=30\). Results are present by logarithms with base \(10\).
Figure 3.5. The DOF (a) and the change of condition number (b) with respect to \(D\) of the RPM in 2D. \(N=30\) and the maximum DOF is \(N^{4}=810000\).
the accuracy with small \(D\) (e.g., \(D=10\)) is good enough to provide accurate solutions. These findings demonstrate that high accuracy can be maintained despite a significant reduction in the number of basis functions.
We next study the DOF and CPU time required by the RPM for the 2D system with \(E_{0}=1\) for \(N=20,24\) and \(28\), with \(D\) increasing from \(10\) to \(30\); the results are summarized in Table 3.3. The DOF decreases quadratically with decreasing \(D\). Theoretically, the RPM for 2D systems has complexity \(O(D^{4})\) for given \(N\), and \(O(N^{4})\) for given \(D\), while the complexity of the original PM is \(O(N^{8})\). The numerical results of Table 3.3 are in agreement with this theoretical analysis. It can also be seen that a small \(D\) is able to reach high accuracy. For \(N=20\), the use of \(D=10\) achieves an error level of \(10^{-5}\). In this case, the CPU time for the RPM is \(148.50\) seconds, which is \(15.8\) times faster than that of the PM, and the DOF is \(4.9\) times smaller than that of the PM. The reduction for larger \(N\) is even more significant. When \(N=28\), the speedup with \(D=10\) becomes \(73.8\) times in CPU time, and the reduction in the DOF is \(9.7\) times, comparing the RPM with the PM. This speedup is even larger than in the 1D problems, thanks to the reduction technique introduced into the PM.
Finally, we investigate the performance of the RPM for varying \(E_{0}\). Figure 3.7 shows the profiles of the first eigenfunctions in 2D quasiperiodic systems for various \(E_{0}\) and \(N\). With the increase of \(E_{0}\), the
Figure 3.6. Errors of the first eigenvalue and eigenfunction. (a,b): Error as function of \(N\) for different \(D\). (c,d): Error as function of \(D\) for different \(N\).
eigenfunction becomes singular, leading to a localized eigenstate. This phenomenon is reminiscent of the localization-delocalization transition exhibited in experimental studies of 2D photonic moire lattices [43]. The moire lattices rely on flat-band structures for wave localization, as opposed to the disordered media used in other approaches based on light diffusion in photonic quasicrystals [10, 28]. The localization-delocalization transition of the eigenstates in 2D systems provides valuable insight into the exploration of quasicrystal structures. This transition process is displayed in Figure 3.7. The figure also illustrates that the results for different \(N\) are essentially the same for the four different \(E_{0}\), indicating that the RPM converges fast for both weak and strong optical response. Moreover, owing to the lower DOF of the RPM, more numerical nodes in each dimension can be afforded to reach higher approximation accuracy.
## 4. Conclusions
We proposed the RPM for accurate and fast calculations of eigenvalue problems of quasiperiodic Schrodinger equations. We show the exponential decay of the generalized Fourier coefficients of the eigenfunctions of the quasiperiodic problems, which justifies the efficiency of the RPM. An error bound for the approximation is provided, which demonstrates the high accuracy from a theoretical point of view. Compared to the original PM, the reduced method requires much less memory and significantly speeds up the calculation, making it possible to solve high-dimensional quasiperiodic eigenvalue problems. Numerical results for both 1D and 2D problems show the efficiency and accuracy of the algorithm, demonstrating attractive features for a broad range of practical applications. The RPM is potentially useful for solving 3D quasiperiodic problems, which will be reported in our future work.
## Appendix A The indicator method
The indicator method [6, 5] can be very useful to remove spurious eigenmodes during the numerical calculation of the RPM. Let \(\Theta\subset\mathbb{C}\) be a simply connected region in the complex plane with boundary \(\partial\Theta\). For any eigenvalue problem \(\mathbf{Hu}=E\mathbf{u}\), one can define a spectral projection operator
\[\mathbf{Q}=\frac{1}{2\pi\mathrm{i}}\int_{\partial\Theta}(\mathbf{H}-s\mathbf{I})^{-1}ds, \tag{1.1}\]
with \(\mathbf{I}\) being the identity matrix. For a random vector \(\mathbf{f}\neq\mathbf{0}\), one has \(||\mathbf{Qf}||=0\) if there is no eigenvalue within \(\Theta\). If there is at least one eigenvalue in \(\Theta\), then the probability of \(||\mathbf{Qf}||\neq 0\) is 1. By these properties, one can define an indicator, \(\mathrm{Ind}=\left\|\mathbf{Q}\left(\mathbf{Qf}/\|\mathbf{Qf}\|\right)\right\|\), to judge whether there is an eigenvalue in
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{2}{c}{\(N=20\)} & \multicolumn{2}{c}{\(N=24\)} & \multicolumn{2}{c}{\(N=28\)} \\ \cline{2-7} \(D\) & Size & CPU time (s) & Size & CPU time (s) & Size & CPU time (s) \\ \hline
30 & 156816 & 2089.2 & 291600 & 6681.5 & 459684 & 15591 \\
28 & 152100 & 1927.7 & 273529 & 5760.1 & 421201 & 11562 \\
26 & 145161 & 1803.2 & 252004 & 4807.6 & 379456 & 9889.5 \\
24 & 135424 & 1552.9 & 227529 & 4318.9 & 335241 & 7631.6 \\
22 & 123201 & 1284.0 & 200704 & 3144.2 & 291600 & 6118.3 \\
20 & 108900 & 1057.3 & 173889 & 2362.0 & 247009 & 4035.2 \\
18 & 94249 & 786.30 & 145924 & 1895.6 & 202500 & 3363.2 \\
16 & 78400 & 650.76 & 117649 & 1409.2 & 160801 & 1832.3 \\
14 & 62001 & 424.71 & 90601 & 821.09 & 123904 & 1341.1 \\
12 & 46225 & 319.53 & 66564 & 493.80 & 91204 & 763.4 \\
10 & 32400 & 148.50 & 46656 & 304.35 & 63504 & 446.09 \\ \hline PM & 160000 & 2343.2 & 331776 & 10305 & 614656 & 32901 \\ \hline \hline \end{tabular}
\end{table}
Table 3.3. The DOF and CPU time of the RPM for different \(N\) and \(D\) in two dimensions
a specified region [18]. To compute \(\mathbf{Qf}\), the line integral along \(\partial\Theta\) is obtained by numerical quadrature rule,
\[\mathbf{Qf}\approx\frac{1}{2\pi\mathrm{i}}\sum_{j=1}^{n_{0}}\omega_{j}\mathbf{r}_{j}, \tag{1.2}\]
where \(\{\omega_{j}\}\) are quadrature weights and \(\{\mathbf{r}_{j}\}\) are the solutions of linear systems
\[(\mathbf{H}-s_{j}\mathbf{I})\mathbf{r}_{j}=\mathbf{f},j=1,2,\ldots,n_{0}. \tag{1.3}\]
Here \(\{s_{j}\}\) are the quadrature nodes on \(\partial\Theta\). In practice, \(\Theta\) can be chosen as a small square, such that the trapezoidal rule with the four vertices of the square as quadrature points guarantees high accuracy. The linear systems (1.3) are usually solved by the generalized minimal residual method (GMRES) in a matrix-free manner. The region contains no eigenvalue if the indicator is less than a small criterion.
In our system, the matrix \(\mathbf{H}\) is generated by Eq. (2.13) and the indicator method is applied in the frequency space. The random vector \(\mathbf{f}\) is usually taken as the generalized Fourier coefficient vector of the potential function \(v\). One can validate each eigenvalue obtained by the RPM by taking a small region centered at that eigenvalue. The eigenvalue is considered spurious if the indicator of this region is less than the criterion.
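To illustrate how the indicator can be evaluated in practice, the following is a minimal matrix-free sketch of Eqs. (1.1)-(1.3) for a small square region, using the four vertices as trapezoidal quadrature nodes; the function names, GMRES tolerances, and contour weights are our own choices and may need tuning for a real computation.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def indicator(apply_H, f, center, half_width):
    """Indicator for a square region of half-width `half_width` centred at `center`:
    Qf is approximated by a trapezoidal rule with the four vertices as quadrature
    nodes, each solve (H - s_j I) r_j = f done matrix-free with GMRES."""
    dim = f.shape[0]
    nodes = center + half_width * np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j])
    # trapezoidal weights on the closed contour: w_j = (s_{j+1} - s_{j-1}) / 2
    weights = (np.roll(nodes, -1) - np.roll(nodes, 1)) / 2.0

    def apply_Q(vec):
        acc = np.zeros(dim, dtype=complex)
        for s, w in zip(nodes, weights):
            shifted = LinearOperator((dim, dim), dtype=complex,
                                     matvec=lambda v, s=s: apply_H(v) - s * v)
            r_j, _ = gmres(shifted, vec)      # solve (H - s_j I) r_j = vec, Eq. (1.3)
            acc += w * r_j
        return acc / (2j * np.pi)             # Eq. (1.2)

    qf = apply_Q(f)
    return np.linalg.norm(apply_Q(qf / np.linalg.norm(qf)))
```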
Figure 3.7. The normalized first eigenfunction \(|u|\) in 2D under different \(E_{0}\) and \(N\). \(D=30\) is taken and the results in area \([0,10]^{2}\). (a,b,c): \(N=24\), (d,e,f): \(N=26\) and (g,h,i): \(N=28\). (a,d,g): \(E_{0}=0.25\), (b,e,h): \(E_{0}=1\) and (c,f,i): \(E_{0}=4\).
## Acknowledgement
Z. G. and Z. X. are supported by the National Natural Science Foundation of China (NNSFC)(No. 12071288) and Science and Technology Commission of Shanghai Municipality (grant Nos. 20JC1414100 and 21JC1403700). Z. Y. is supported by the NNSFC (No. 12101399) and the Shanghai Sailing Program (No. 21YF1421000).
|
2309.16318 | DeepPCR: Parallelizing Sequential Operations in Neural Networks | Parallelization techniques have become ubiquitous for accelerating inference
and training of deep neural networks. Despite this, several operations are
still performed in a sequential manner. For instance, the forward and backward
passes are executed layer-by-layer, and the output of diffusion models is
produced by applying a sequence of denoising steps. This sequential approach
results in a computational cost proportional to the number of steps involved,
presenting a potential bottleneck as the number of steps increases. In this
work, we introduce DeepPCR, a novel algorithm which parallelizes typically
sequential operations in order to speed up inference and training of neural
networks. DeepPCR is based on interpreting a sequence of $L$ steps as the
solution of a specific system of equations, which we recover using the Parallel
Cyclic Reduction algorithm. This reduces the complexity of computing the
sequential operations from $\mathcal{O}(L)$ to $\mathcal{O}(\log_2L)$, thus
yielding a speedup for large $L$. To verify the theoretical lower complexity of
the algorithm, and to identify regimes for speedup, we test the effectiveness
of DeepPCR in parallelizing the forward and backward pass in multi-layer
perceptrons, and reach speedups of up to $30\times$ for the forward and
$200\times$ for the backward pass. We additionally showcase the flexibility of
DeepPCR by parallelizing training of ResNets with as many as 1024 layers, and
generation in diffusion models, enabling up to $7\times$ faster training and
$11\times$ faster generation, respectively, when compared to the sequential
approach. | Federico Danieli, Miguel Sarabia, Xavier Suau, Pau Rodríguez, Luca Zappella | 2023-09-28T10:15:30Z | http://arxiv.org/abs/2309.16318v2 | # DeepPCR: Parallelizing Sequential Operations in Neural Networks
###### Abstract
Parallelization techniques have become ubiquitous for accelerating inference and training of deep neural networks. Despite this, several operations are still performed in a sequential manner. For instance, the forward and backward passes are executed layer-by-layer, and the output of diffusion models is produced by applying a sequence of denoising steps. This sequential approach results in a computational cost proportional to the number of steps involved, presenting a potential bottleneck as the number of steps increases. In this work, we introduce DeepPCR, a novel algorithm which _parallelizes typically sequential operations_ in order to speed up inference and training of neural networks. DeepPCR is based on interpreting a sequence of \(L\) steps as the solution of a specific system of equations, which we recover using the _Parallel Cyclic Reduction_ algorithm. This reduces the complexity of computing the sequential operations from \(\mathcal{O}(L)\) to \(\mathcal{O}(\log_{2}L)\), thus yielding a speedup for large \(L\). To verify the theoretical lower complexity of the algorithm, and to identify regimes for speedup, we test the effectiveness of DeepPCR in parallelizing the forward and backward pass in multi-layer perceptrons, and reach speedups of up to \(30\times\) for the forward, and \(200\times\) for the backward pass. We additionally showcase the flexibility of DeepPCR by parallelizing training of ResNets with as many as 1024 layers, and generation in diffusion models, enabling up to \(7\times\) faster training and \(11\times\) faster generation, respectively, when compared to the sequential approach.
## 1 Introduction
Neural Networks (NNs) have proven very effective at solving complex tasks, such as classification [26; 14], segmentation [5; 30], and image or text generation [26]. Training NNs, however, is a computationally demanding task, often requiring wall-clock times in the order of days, or even weeks [35; 18], before attaining satisfactory results. Even inference in pre-trained models can be slow, particularly when complex architectures are involved [4]. To reduce training times, a great effort has been invested into speeding up inference, whether by developing dedicated software and hardware [7; 22; 23], or by investigating algorithmic techniques such as (early) pruning [28; 40; 20; 27; 43; 9].
Another possibility for reducing wall-clock time, and the one we focus on in this work, consists in parallelizing computations that would otherwise be performed sequentially. The most intuitive approach to parallelization involves identifying sets of operations which are (almost entirely) independent, and executing them concurrently. Two paradigms that follow this principle are _data-parallelization_, where multiple datapoints are processed simultaneously in batches; and _model-parallelization_, where the model is split among multiple computational units, which perform their evaluations in parallel [1]. Still, certain operations which are key for training and inference in NNs have a sequential structure. The forward and backward pass of a NN are examples of such operations, where activations
(or gradients) are computed sequentially, one layer at a time. Moreover, some generative models suffer from similar shortcomings: in diffusion models (DMs), for example, the output image is generated through a sequence of denoising steps [36]. Sequential operations such as these require a computational effort which grows linearly with the sequence length \(L\) (that is, with the number of layers, or denoising steps), which represents a bottleneck when \(L\) is large. Given the prevalence of these operations, any effort towards their acceleration can result in noticeable speed gains, by drastically reducing training and inference time. Further, faster computations may allow exploration of configurations which were previously unfeasible due to the excessive time required to perform these operations sequentially: for example, extremely deep NNs, or diffusion over tens of thousands of denoising steps.
In this work we introduce DeepPCR, a novel method which provides a flexible framework for turning such sequential operations into parallel ones, thus accelerating operations such as training, inference, and the denoising procedure in DMs.
The core idea behind DeepPCR lies in interpreting a sequential operation of \(L\) steps as the solution of a system of \(L\) equations, as illustrated in Sec. 2. DeepPCR assumes the output of each step only depends on that of the previous one, that is, the sequence satisfies the Markov property. If this holds, we can leverage the specific structure of the resulting system to tackle its solution in parallel, using the Parallel Cyclic Reduction algorithm (PCR) [10; 2]. This algorithm, described in Sec. 3, guarantees the recovery of the solution in \(\mathcal{O}(\log_{2}L)\) steps, rather than the \(\mathcal{O}(L)\) steps required for its sequential counterpart. In our test, this translates into inference speedups of up to \(30\times\) for the forward pass and \(200\times\) for the backward pass in certain regimes, and \(11.2\times\) speedup in image generation via diffusion, as shown in Fig. 1. The reduced computational complexity comes in exchange for higher memory and computational intensity. Therefore, in Sec. 4.1 we investigate in detail regimes for speedup, as well as the trade-off between our method and the sequential approach, considering as model problems the forward and backward passes through multi-layer perceptrons (MLPs) of various sizes. In Sec. 4.2 we then observe how this translates into speedups when training ResNet architectures. Finally, in Sec. 4.3 we showcase how DeepPCR can be applied to accelerate other types of sequential operations as well, choosing as example the denoising procedure in DMs.
**Previous Work.** The idea of parallelizing forward and backward passes through a DNN was spearheaded in [13; 32; 24; 31; 41], under the concept of _layer-parallelization_. For the most part, these approaches have been limited to accelerating the training of deep ResNets [15], since they rely on the interpretation of a ResNet as the discretization of a time-evolving differential equation [6], whose solution is then recovered in a time-parallel fashion [11].
More closely resembling our approach is the work in [39], where the authors start by interpreting a sequential operation as the solution of a large system of equations, which is then targeted using parallel solvers. They too focus on accelerating forward and backward passes on ResNets, but also consider some autoregressive generative models (specifically, MADE [12] and PixelCNN++ [38]), similarly to what is done in [44]. The main difference between our approach and the one in [39] lies in the solvers used for tackling the target system in parallel. They rely on variations of Jacobi iterations [34], which are very cost-efficient, but "fall short when the computational graph [of the sequential operation considered] is closer to a Markov chain" [39]: we can expect the convergence of Jacobi to fall to \(\mathcal{O}(L)\) in that case, thus providing no speedup over the sequential approach. By contrast, our method specifically targets Markov sequences, solving them with complexity \(\mathcal{O}(\log_{2}L)\), and is in this sense complementary to theirs. We point out that a similar theoretical foundation for our method was proposed in [33]; however, it was not verified experimentally, nor was it considered for applications other than accelerating forward and backward passes.
Figure 1: DeepPCR allows executing sequential operations, such as denoising in latent diffusion, in \(\mathcal{O}(\log_{2}L)\) time, as opposed to the \(\mathcal{O}(L)\) needed for the traditional approach (\(L\) being the number of steps). In our experiments, DeepPCR achieves a \(\mathbf{11.2\times}\)**speedup for image generation with latent diffusion** with respect to the sequential baseline, with comparable quality in the recovered result.
**Main Contributions.** The main contributions of this work can be summarized as follows:
1. We propose DeepPCR, a novel algorithm for parallelizing sequential operations in NN training and inference, reducing the complexity of these processes from \(\mathcal{O}(L)\) to \(\mathcal{O}(\log_{2}L)\), \(L\) being the sequence length.
2. We analyze DeepPCR speedup of forward and backward passes in MLPs, to identify high-performance regimes of the method in terms of simple architecture parameters, and we discuss the trade-offs between memory consumption, accuracy of the final solution, and speedup.
3. We showcase the flexibility of DeepPCR applying it to accelerate training of deep ResNet [15] on MNIST [8], and generation in Diffusion Models trained on MNIST, CIFAR-10 [25] and CelebA [29]. Results obtained with DeepPCR are comparable to the ones obtained sequentially, but are recovered up to \(7\times\) and \(11\times\) faster, respectively.
## 2 Turning sequential operations into systems of equations
Our approach is rooted in casting the application of a sequence of \(L\) steps as the solution of a system of \(L\) equations, which we then proceed to solve all at once, in parallel. In this section, we illustrate a general framework to perform this casting and recover the target system. Specific examples for the applications considered in our work (namely forward and backward passes, and generation in diffusion models) are described in appendix A. The algorithm for the parallel solution of the recovered system is outlined in Sec. 3.
Consider a generic sequence of steps in the form \(\mathbf{z}_{l}=f_{l}(\mathbf{z}_{l-1})\), for \(l=1,\ldots,L\), starting from \(\mathbf{z}_{0}=f_{0}(\mathbf{x})\). The various \(f_{l}\) could represent, for example, the application of layer \(l\) to the activations \(\mathbf{z}_{l-1}\) (if we are considering a forward pass), or the application of the \(l\)-th denoising step to the partially recovered image \(\mathbf{z}_{l-1}\) (if we are considering a diffusion mechanism). Notice we are assuming that the output of each step \(\mathbf{z}_{l}\) depends only on that of the previous step \(\mathbf{z}_{l-1}\) and no past ones: that is, we are considering sequences that satisfy the _Markov_ property (a discussion on the limitations related to this assumption, and possible workarounds to relax it, is provided in appendix B). We can collate this sequence of operations into a system of equations for the collated variable \(\mathbf{z}=[\mathbf{z}_{0}^{T},\ldots,\mathbf{z}_{L}^{T},]^{T}\), and obtain:
\[\mathcal{F}(\mathbf{z})=\left[\begin{array}{c}\mathbf{z}_{0}-f_{0}(\mathbf{x})\\ \mathbf{z}_{1}-f_{1}(\mathbf{z}_{0})\\ \vdots\\ \mathbf{z}_{L}-f_{L}(\mathbf{z}_{L-1})\end{array}\right]=\left[\begin{array}{cccc}I&&&\\ -f_{1}(\cdot)&I&&\\ &\ddots&\ddots&\\ &&-f_{L}(\cdot)&I\end{array}\right]\left[\begin{array}{c}\mathbf{z}_{0}\\ \mathbf{z}_{1}\\ \vdots\\ \mathbf{z}_{L}\end{array}\right]-\left[\begin{array}{c}f_{0}(\mathbf{x})\\ \mathbf{0}\\ \vdots\\ \mathbf{0}\end{array}\right]=\mathbf{0}. \tag{1}\]
Notice that, to better highlight the structure of the operator involved, we are abusing matrix notation and considering that the "multiplication" of \(f_{l}(\cdot)\) with \(z_{l-1}\) results in its application \(f_{l}(z_{l-1})\), although \(f_{l}\) is generally a nonlinear operator. To tackle the nonlinearity (when present), we use Newton's method [34]. In more detail, denoting with a superscript \(k\) the Newton iteration, we start from an initial guess for iteration \(k=0\), namely \(\mathbf{z}=\mathbf{z}^{0}\), and iteratively update the solution \(\mathbf{z}^{k+1}=\mathbf{z}^{k}+\delta\mathbf{z}^{k}\) by solving the linearized system
\[J_{\mathcal{F}}|_{\mathbf{z}^{k}}\,\delta\mathbf{z}^{k}=-\mathcal{F}(\mathbf{z}^{k}), \tag{2}\]
until we reach convergence. \(\left.J_{\mathcal{F}}\right|_{\mathbf{z}^{k}}\) denotes the Jacobian of the global sequential operation \(\mathcal{F}(\mathbf{z})\) evaluated at the current iteration \(\mathbf{z}^{k}\). This Jacobian defines the target system we need to solve, and obeys a very specific structure: taking the derivative of (1) with respect to \(\mathbf{z}\), and expanding (2), we see that
\[(2)\Longleftrightarrow\left[\begin{array}{cccc}I&&&\\ -\left.J_{f_{1}}\right|_{\mathbf{z}_{0}^{k}}&I&&\\ &\ddots&\ddots&\\ &&-\left.J_{f_{L}}\right|_{\mathbf{z}_{L-1}^{k}}&I\end{array}\right]\left[\begin{array}{c}\delta\mathbf{z}_{0}^{k}\\ \delta\mathbf{z}_{1}^{k}\\ \vdots\\ \delta\mathbf{z}_{L}^{k}\end{array}\right]=\left[\begin{array}{c}f_{0}(\mathbf{x})-\mathbf{z}_{0}^{k}\\ f_{1}(\mathbf{z}_{0}^{k})-\mathbf{z}_{1}^{k}\\ \vdots\\ f_{L}(\mathbf{z}_{L-1}^{k})-\mathbf{z}_{L}^{k}\end{array}\right], \tag{3}\]
that is, the system is _block bidiagonal_. This structure is a direct consequence of the Markovian nature of the sequential operation: since each step relates only two adjacent variables \(\mathbf{z}_{l-1}\) and \(\mathbf{z}_{l}\), only two diagonals appear. The core of DeepPCR lies in applying a specialized parallel algorithm for solving systems with this very structure, as described in Sec. 3.
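To make the construction concrete, the sketch below assembles the blockwise residual \(\mathcal{F}(\mathbf{z})\) of Eq. (1) and the sub-diagonal Jacobian blocks \(J_{f_{l}}\) appearing in Eq. (3) for a generic feed-forward stack of layers. This is an illustrative PyTorch sketch with our own function names and a flat, unbatched activation layout; it is not the authors' implementation.

```python
import torch

def collated_residual(layers, x, z):
    """Blockwise residual F(z) of Eq. (1); `layers` = [f_0, ..., f_L],
    z = [z_0, ..., z_L] is the current guess for all activations."""
    res = [z[0] - layers[0](x)]
    for l in range(1, len(layers)):
        res.append(z[l] - layers[l](z[l - 1]))
    return res

def jacobian_blocks(layers, z):
    """Sub-diagonal blocks J_{f_l}|_{z_{l-1}} of the system matrix in Eq. (3)."""
    return [torch.autograd.functional.jacobian(layers[l], z[l - 1])
            for l in range(1, len(layers))]
```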
## 3 Parallel Cyclic Reduction for NNs
The solution of a block bidiagonal system is usually obtained via forward substitution: once \(\mathbf{z}_{l}\) is known, it is used to recover \(\mathbf{z}_{l+1}\) and so on, in increasing order in \(l\). This procedures is efficient, but inherently sequential, and as such might represent a bottleneck for large \(L\). Interestingly, there exist alternative algorithms for the solution of such systems, which trade-off more complex instructions and extra memory consumption for a higher degree of parallelization. One such algorithm, and the one our method is based on, is Parallel Cyclic Reduction (PCR) [19]. Originally, PCR was devised to parallelize the solution of tridiagonal systems; in this work, we describe its adaptation for bidiagonal systems such as (3). In a nutshell, PCR works by combining the equations of a system to progressively reduce its dimension, until it becomes easily solvable. Pseudo-code for the adapted algorithm is reported in Alg. 1, and a schematic of how the reduction is performed is outlined in Fig. 2. More details on its functioning are provided next.
We start by noting that systems like (3) can be compactly represented as a set of equations involving only two _adjacent_ variables \(\delta\mathbf{z}_{l-1}\), \(\delta\mathbf{z}_{l}\):
\[\delta\mathbf{z}_{l}-\underbrace{J_{f_{l}}|_{\mathbf{z}_{l-1}}}_{=:A_{l}^{0}}\delta \mathbf{z}_{l-1}-(\underbrace{f_{l}(\mathbf{z}_{l-1})-\mathbf{z}_{l}}_{=:\mathbf{r}_{l}^{0}})=0,\qquad l=1,\dots,L, \tag{4}\]
with \(\delta\mathbf{z}_{0}=f_{0}(\mathbf{x})-\mathbf{z}_{0}^{k}\) known. The \(0\) superscripts in the operators \(A_{l}^{0}\) and vectors \(\mathbf{r}_{l}^{0}\) defined above refer to the current (0-th) PCR step. As a first step for PCR, we substitute the \((l-1)\)-th equation into the \(l\)-th, for each \(l\) in parallel, recovering
\[\delta\mathbf{z}_{l}-\underbrace{A_{l}^{0}A_{l-1}^{0}}_{=:A_{l}^{1}}\delta\mathbf{z}_ {l-2}-\underbrace{\left(\mathbf{r}_{l}^{0}+A_{l}^{0}\mathbf{r}_{l-1}^{0}\right)}_{=: \mathbf{r}_{l}^{1}}=0,\qquad l=2,\dots,L. \tag{5}\]
Notice that the original structure is still preserved, but now the equations relate variables \(l\) to \(l-2\). In other words, the even and the odd variables have become separated, and we have split the original system into two independent subsystems: one involving variables \(\delta\mathbf{z}_{0},\delta\mathbf{z}_{2},\dots\), the other \(\delta\mathbf{z}_{1},\delta\mathbf{z}_{3},\dots\). At the next step, we substitute equations \(l-2\) into \(l\), to recover:
\[\delta\mathbf{z}_{l}-\underbrace{A_{l}^{1}A_{l-2}^{1}}_{=:A_{l}^{2}}\delta\mathbf{z}_ {l-4}-\underbrace{\left(\mathbf{r}_{l}^{1}+A_{l}^{1}\mathbf{r}_{l-2}^{1}\right)}_{=: \mathbf{r}_{l}^{2}}=0,\qquad l=4,\dots,L, \tag{6}\]
so that now only variables at distance 4 are related. Ultimately, at each step of PCR, we are splitting each subsystem into two independent subsystems. If we iterate this procedure for \(\log_{2}L\) steps, we finally obtain \(L\) systems in one variable, which are trivially solvable, thus recovering the solution to the original system.
Figure 2: Left: pseudo-code for PCR algorithm. Right: schematic of row reductions in PCR: green rows are combined pairwise to obtain a system of equations in even unknowns; at the same time, blue rows are combined to obtain a system in odd unknowns only. The result is two independent systems with half the original number of unknowns. The procedure is then repeated for \(\log_{2}L\) steps.
### Limitations of DeepPCR
The main advantage of using DeepPCR for solving (1) lies in the fact that it requires only \(\mathcal{O}(\log_{2}L)\) sequential steps, as opposed to the \(\mathcal{O}(L)\) necessary for traditional forward substitution. However, some conditions must be verified for this procedure to be effective in achieving speedups. We discuss next some recommendations and limitations associated with DeepPCR.
**Effective speedup for deep models.** While PCR requires fewer sequential steps overall, each step is in principle more computationally intensive than its sequential counterpart, as it requires multiple matrix-matrix multiplications to be conducted concurrently (by comparison, one step of the sequential case requires applying the step function \(f_{l}(\mathbf{z})\)), as per line 6 in Alg. 1. If this cannot be done efficiently, for example because of hardware limitations, then we can expect performance degradation. Moreover, the difference between the linear and logarithmic regimes becomes useful only for large \(L\). Both these facts are investigated in Sec. 4.1.
**Controlling Newton iterations.** Whenever (1) is nonlinear, the complexity actually becomes \(\mathcal{O}(c_{N}\log_{2}L)\), where \(c_{N}\) identifies the number of Newton iterations necessary for convergence. On the one hand, it is important for \(c_{N}\) to remain (roughly) constant and small, particularly with respect to \(L\), for the logarithmic regime to be preserved and speedups to be attained; on the other hand, there is a positive correlation between \(c_{N}\) and the accuracy of the solution recovered by the Newton solver. Implications of this trade-off are discussed in Sec. 4.4. We also point out that, in general, Newton's method provides no guarantees on _global_ convergence (unlike Jacobi's in [39], which reduces to the sequential solution in the worst-case scenario). Even though in our experiments the method never fails to converge, it is worth keeping in mind that ultimately the solver performance is dependent both on the regularity of the target function (1), and on the initialization choice. In particular, the effect of the latter is investigated in appendix F, but already the simple heuristics employed in our experiments (such as using the average of the train set images as initialization for the output of our DMs) have proven to be effective in providing valid initial guesses for Newton.
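Putting the pieces together, a possible outer loop for the Newton iteration (2) is sketched below, reusing the `collated_residual`, `jacobian_blocks`, and `pcr_bidiagonal` sketches introduced earlier. The cap on the iteration count corresponds to bounding \(c_{N}\); the stopping rule, the equal-width assumption, and all names are illustrative rather than the authors' code.

```python
import torch

def deep_pcr_forward(layers, x, z_init, tol=1e-4, max_newton=6):
    """Newton loop of Eq. (2) with each linear system (3) solved by PCR.
    Assumes flat, unbatched activations of equal width."""
    z = [zi.clone() for zi in z_init]
    for _ in range(max_newton):                       # caps c_N
        res = collated_residual(layers, x, z)         # F(z^k), blockwise
        if max(r.norm() for r in res) < tol:
            break
        width = z[0].numel()
        A = torch.stack([torch.zeros(width, width)] + jacobian_blocks(layers, z))
        rhs = torch.stack([-r for r in res])          # right-hand side of (3)
        dz = pcr_bidiagonal(A, rhs)
        z = [z_l + dz_l for z_l, dz_l in zip(z, dz)]  # z^{k+1} = z^k + dz^k
    return z
```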
**Benefits from larger memory.** To apply DeepPCR, it is necessary to store the temporary results from the equation reductions (most noticeably, the operators \(A_{l}\) in line 6 in Alg. 1). The associated memory requirements scale linearly in the number of steps \(L\) and quadratically in the dimension of each step output \(\mathbf{z}\). This results in an increase in memory usage with respect to classical approaches (roughly \(2\times\) as much for forward passes in MLPs, as measured and reported in appendix C.2). We point out that the additional memory requirements of DeepPCR may limit its applications to some distributed training settings where memory is already a bottleneck. Moreover, one can expect additional communication overhead to arise in these settings.
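As a rough back-of-the-envelope illustration of this scaling (with sizes we chose for illustration, not figures from the paper), storing one dense \(d\times d\) Jacobian block per step in single precision already amounts to tens of megabytes per sample:

```python
# Storage for the PCR operators A_l: one d-by-d block per step, in float32.
L, d, bytes_per_float = 1024, 128, 4              # illustrative sizes
jacobian_bytes = L * d * d * bytes_per_float      # 67,108,864 bytes
print(f"{jacobian_bytes / 2**20:.0f} MiB per sample")   # -> 64 MiB
```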
## 4 Results
In this section, we set out to demonstrate the applicability of DeepPCR to a variety of scenarios. We start by investigating the performance characteristics of DeepPCR when applied to the forward and backward passes through a Multi-Layer Perceptron (MLP). Experimenting with this model problem is mostly aimed at identifying regimes where DeepPCR achieves speedup. Specifically, in Sec. 4.1 we show that, when applied to the forward pass, DeepPCR becomes effective in architectures with more than \(2^{7}\) layers. For the backward pass, this regime is reached earlier, in architectures with \(2^{5}\) layers. Next, we explore the effects of applying DeepPCR to speedup the whole training procedure, considering ResNets architectures: in Sec. 4.2 we verify not only that the speedups measured for the single forward and backward passes carry over to this scenario, achieving a \(7\times\) speedup over the sequential implementation, but also that training with DeepPCR results in equivalent models than using sequential passes. In Sec. 4.3, we showcase the flexibility of DeepPCR by using it to speedup another type of sequential operation: the denoising procedure employed by diffusion models in image generation. We consider applications to latent diffusion, and find speedups of up to \(11.2\times\), with negligible error with respect to the sequential counterpart. Lastly, in Sec. 4.4 we focus on the role of the Newton solver in the DeepPCR procedure, establishing that the method remains stable and recovers satisfactory results even by limiting the number of Newton iterations, thus allowing to trade-off additional speedup for an increased approximation error with respect to sequential solutions.
All the experiments in this section were conducted on a V100 GPU with 40GB of RAM; our models are built using the PyTorch framework, without any form of neural network compilation.
### Speeding up forward and backward passes in MLPs: identifying performance regimes
Our first goal is to identify under which regimes DeepPCR can effectively provide a speedup. To this end, we consider a single forward pass through a randomly initialized MLP with a constant number of hidden units (namely, its width \(w\)) at each layer, and profile our algorithm for varying \(w\) and NN depth, \(L\). Notice that these two parameters directly affect the size of (3): \(L\) determines the number of equations, while \(w\) the unknowns in each equation; as such, they can be used as indication of when to expect speedups for more complex problems.
Timing results for these experiments are reported in Fig. 3. The leftmost column refers to the sequential implementation of forward (top) and backward (bottom) pass, and clearly shows the linear complexity in \(L\) of such operations: the curves flatten on a line of inclination 1. Conversely, the graphs in the middle column illustrate DeepPCR's performance, and trace a logarithmic curve for the most part, confirming the theoretical expectations on its \(\mathcal{O}(\log_{2}L)\) complexity. Notice this reduces the wall-clock time for a single forward pass from \(0.55s\) to \(0.015s\), and for a backward pass from \(589ms\) to \(2.45ms\), corresponding to speedups of \(>30\times\) and \(200\times\), respectively, at least for the most favorable architectures - and this despite the fact that there has been more than 20 years of optimization into extracting the best performance from the current GPU hardware when running the sequential forward and backward pass. This result is encouraging as our proposed algorithm can gain from further optimization in each of its steps.
As the MLP grows in width, however, the logarithmic regime is abandoned in favour of a linear regime. This performance degradation is due to the fact that the reductions in line 6 necessary for PCR cannot be performed concurrently anymore. Notice that \(w\) relates directly to the size of the Jacobian blocks in (3), so we can expect similar problems whenever the Jacobian size grows past a given threshold. This issue is caused by hardware limitations, and can be addressed by using dedicated hardware or by optimizing the implementation: evidence of this claim is provided in appendix C.1, where we measure how the threshold for abandoning the logarithmic regime shifts as we use GPUs with different amounts of dedicated memory. Finally, the rightmost graphs in Fig. 3 show the ratio of timings for the sequential versus parallel implementation: any datapoint above 1 indicates effective speedup. The break-even point between the two methods lies around \(L\approx 2^{7}\) for the forward pass.
Figure 3: Time to complete a single forward pass (top) and backward pass (bottom), for MLPs of varying depths \(L\) and widths \(w\), with ReLU activation function. Each datapoint reports the minimum time over 100 runs. The left, center, and right columns refer to the sequential implementation, the DeepPCR implementation, and the ratio between the timings of the two, respectively.
Results for backward pass are qualitatively comparable, but achieve break-even at \(L\approx 2^{5}\): this gain is due to the fact that the backward pass is a linear operation, and as such does not require Newton iterations. For a more in-depth analysis of the role of the Newton solver, we refer to Sec. 4.4.
### Speeding up training of ResNets
The results in Sec. 4.1 identify regimes where one can expect to achieve speedup using DeepPCR, but they only refer to a single forward and backward pass through a freshly initialized model. The results in this section aim to verify that DeepPCR can be used to accelerate forward and backward passes for the whole training procedure, and that the speedup is maintained throughout. To this end, we train a deep ResNet model composed of only fully-connected layers. Each ResNet block consists of 4 layers of width \(2^{4}\) and the ReLU activation function. The models are trained on a classification task on MNIST [8], both using the sequential approach and DeepPCR. We train for 8 epochs using an SGD optimizer with a learning rate of \(10^{-3}\) without a scheduler. We perform training runs with various seeds but report results from only one for readability: the others are comparable, and we show their statistics in appendix D. In Fig. 4 we report the evolution of the wall-clock time measurements for the forward pass throughout the training procedure. We can notice these remain roughly constant, confirming that the speedup achieved by DeepPCR is preserved during training. Notice that using DeepPCR translates into a speedup of \(7\times\) over the sequential implementation: over the whole course of training, this entails a wall-clock time difference of \(3.2h\) versus \(30min\), even without including the gains from the backward pass.
As mentioned in Sec. 3.1, we remind the reader that DeepPCR uses Newton in order to solve (1). Since Newton is an approximate solver, one may wonder whether we are accumulating numerical errors with respect to the sequential solution, how they affect the evolution of the parameters, and what is
Figure 4: Time to complete forward pass during training, for sequential (left) and DeepPCR implementation (center), and ratio between the two (right), for ResNets of varying depths \(L\), with \(w=2^{4}\), skip connection of length \(4\), and ReLU activation function. Each datapoint is an average over 100 optimization steps, and the shaded area spans to \(\pm 1\) standard deviation.
Figure 5: Loss evolution during training with forward and backward passes computed sequentially (left), with DeepPCR (center), and difference between the two (right), for ResNets of varying depths \(L\), with \(w=2^{4}\), skip connection of length \(4\), and ReLU activation function. Each datapoint is an average over 100 optimization steps, and the shaded area spans \(\pm 1\) standard deviation.
the impact on the quality of the final trained model. In our experiments, we measure such impact by comparing the evolution of the loss curves for the models trained sequentially and in parallel with DeepPCR. These are reported in Fig. 5, which shows that, for our experiments, the evolutions are practically equivalent. To further confirm this, we report the accuracy evolution on the test set in appendix D: in both cases, it sits around \(94\%\) at the end of training. The effects of the Newton solver on performance are further discussed in Sec. 4.4.
### Speeding up image generation in Diffusion Models
The experiments in this section showcase the flexibility of DeepPCR in accelerating more general definitions of sequential operations. As an example, we apply DeepPCR to speedup image generation via latent-space diffusion models [37]. Note that we are interested in parallelizing the whole denoising procedure, rather than the single forward pass through the denoiser: we refer to appendix A.4 for the specifics on how this operation falls within the DeepPCR framework. We consider the size of the latent space and the number of denoising steps as the two main parameters which can impact the effectiveness of DeepPCR, and measure how the performance of our method varies according to them. Notice that, in determining the size of system (3), these two parameters cover the same role as \(w\) and \(L\) in Sec. 4.1, respectively, so we identify them using the same notation. Our latent diffusion model considers a simplification of the KL-AutoEncoder introduced by [37] as an encoder, and a custom MLP with residual connections as denoiser: see appendix E for details.
In Fig. 6 (left) we report the average time1 for completing the diffusion procedure, either sequentially or using DeepPCR, for 100 runs on architectures trained on MNIST with various values of \(w\) and \(L\). Notice how even in this case the time for the sequential approach grows linearly with respect to the number of denoising steps, while for DeepPCR the growth is logarithmic for the most part. Increasing \(w\) past \(\sim 2^{6}\), though, results in a speedup reduction for the largest \(L=2^{10}\), matching what is observed in Fig. 3: similarly, this is related to hardware limitations, and we refer again to appendix C.1 for an analysis of the phenomenon. The distributions of the associated speedups are also plotted in Fig. 6 (middle), where we can see that DeepPCR manages to generate images up to \(11\times\) faster, reducing the required time from \(1.3s\) to \(0.12s\) for certain configurations. To ensure the quality of the resulting images, we follow the FID score [16] and measure the Wasserstein-2 distance between the latent distribution of the original test set and the latent distribution of the images recovered, either sequentially or using DeepPCR. The difference of these distances is also reported in Fig. 6, and is consistently close to \(0\), hinting that using either method results in images of similar qualities. Some examples images generated sequentially or using DeepPCR can be seen in Fig. 18, to further confirm that they are hardly distinguishable. We also experimented with diffusion in pixel-space: the corresponding timings can be found in Tab. 2, and their behavior mimics what was observed for latent diffusion.
Footnote 1: We point out that the timings in Fig. 6 and 7 are a proxy, evaluated assuming perfect parallelizability of the Jacobian assembly operation necessary to initialize system (3). We could not measure exact wall-clock time due to incompatibilities between the vmap and autograd functionalities provided in PyTorch. Nonetheless, this proxy is reasonably accurate, as the time required to assemble the Jacobians is negligible with respect to that for the PCR reduction (see appendix E.2, and particularly Fig. 17 for details).
Figure 6: Results from applying DeepPCR to speedup image generation in latent diffusion trained on MNIST, for various latent space dimensions \(w\) and number of denoising steps \(L\). Left: timings using sequential and DeepPCR approaches (average over 100 runs). Middle: violin plots of speedups distribution (ratio of sequential/DeepPCR timings for 100 runs). Right: difference between Wasserstein-2 distances to test distribution of latents recovered sequentially and using DeepPCR.
Finally, in order to provide empirical evidence of the capability of DeepPCR to provide speedup also for other datasets, we experiment with latent diffusion on CIFAR-10 [25] and CelebA [29] as well. The corresponding timings results are reported in Fig. 7. We limit ourselves to \(w>2^{6}\) due to the difficulty of training VAEs for these datasets on smaller latent dimensions. Nonetheless, the timing results are comparable to the ones measured for MNIST in Fig. 6, and even in this case we manage to recover speedups of \(8\times\) and \(9\times\) for CIFAR-10 and CelebA, respectively. We can see that also for these more complex datasets the performance of DeepPCR starts degrading for \(w>2^{7}\), similarly to what is observed in Fig. 6. This observation further confirms that the speedup attained by DeepPCR is influenced by the problem parameters \(w\) and \(L\), but is otherwise dataset-independent.
### Accuracy/Speedup trade-off: analysis on Newton convergence
As outlined in Sec. 2, when system (1) is nonlinear, DeepPCR relies on a Newton solver. This is an iterative solver, which only recovers an _approximate_ solution, correct up to a fixed tolerance. The experiments in the previous sections were conducted with a tolerance of \(10^{-4}\), as we were interested in recovering a solution which would closely match the sequential one. The tolerance of the solver, however, grants us a degree of freedom in trading off accuracy for additional speedup. In this section we investigate in detail the properties of the Newton method when used for the solution of the problems considered in Sec. 4.1 and 4.2.
As a first result, we show that Newton can indeed recover high-quality solutions, within a number of iterations \(c_{N}\) which is small and roughly independent of the configuration considered. To this purpose, we report in Fig. 8 the values of \(c_{N}\) recorded for the experiments in Sec. 4.1 and 4.2. In all configurations considered, they remained bounded by \(c_{N}\leq 6\), and practically independent of the system configuration, particularly of \(L\). In Fig. 8 (first on the left), we see that the performance of the Newton solver is indeed impacted by the type of activation function used in the layers of the MLP: using ReLUs generally requires more iterations for convergence than using a smoother counterpart such as the sigmoid. This is in line with the properties of the Newton method, which assumes differentiability of the underlying function for fast convergence.
Additionally, for the same set-up, we show (second plot in Fig. 8) the error between the solution recovered via Newton with DeepPCR and the traditional solution, recovered sequentially. This error is expressed in terms of the \(L^{2}\) difference of the NN output (for the experiments in Sec. 4.1) and in terms of the \(L^{\infty}\) difference of the parameters evolution (for the experiments in Sec. 4.2), to better reflect the relevant metrics of the two experiments. The former sits almost always around machine precision, confirming that sequential and DeepPCR solutions are extremely close. For the latter, we see that small numerical errors eventually accumulate throughout the training procedure. Still, the discrepancies are bounded, and this does not affect the final performance of the trained model (as shown also in Fig. 5, and appendix D).
Finally, we conduct an ablation study on the effect of reducing the accuracy of the recovered solution. To this end, we consider again the framework in Sec. 4.2, but this time we fix the number of Newton iterations for solving the forward pass to increasingly small values, and check at which stage training
Figure 7: Results from applying DeepPCR to speedup image generation in latent diffusion, for various latent space dimensions \(w\) and number of denoising steps \(L\). The timings compare sequential (baseline) and DeepPCR approaches, reporting an average over 100 runs, for models trained over the CIFAR-10 (left) and CelebA (right) datasets.
of the ResNets fails. The results reported in appendix F.1 show that, for the problem considered, stopping Newton at \(c_{N}=3\) still results in successful training. This translates into an additional \(2\times\) speedup with respect to the ResNet times reported in Fig. 4, for a total of up to \(14\times\) speedup. For more general problems, we can expect that fine-tuning the Newton solver would play a relevant role in the final speedup attained, particularly in choosing the correct initial guess for the system and identifying the most apt tolerance level.
## 5 Conclusion, Limitations, and Future Work
We introduced DeepPCR, a method for parallelizing sequential operations which are relevant in NN training and inference. The method relies on the target sequence being Markovian: if this is satisfied, the sequential operation can be interpreted as the solution of a bidiagonal system of equations. The system is then tackled using Parallel Cyclic Reduction, combined with Newton's method. We investigated the effectiveness and flexibility of DeepPCR by applying it to accelerate: i) forward/backward passes in MLPs, ii) training of ResNets, and iii) image generation in diffusion models, attaining speedups of up to \(30\times\), \(7\times\), and \(11\times\) for the three problems, respectively. We identified regimes where the method is effective, and further analyzed trade-offs in terms of speedup, accuracy, and memory consumption.
The main bottleneck for our DeepPCR implementation is represented by the decay in performance associated with the growth in size of the Jacobian blocks in (3). While this can be curbed by using hardware with larger memory and/or better parallelization capabilities, investigating alternative ways to circumvent this issue would greatly benefit the applicability of DeepPCR. Another potential issue is related to the reliance of DeepPCR on a Newton solver for recovering the solution to the target system. While Newton proved to be reasonably robust for the target applications we investigated, in order to achieve best performance one might have to perform _ad-hoc_ adjustments to the solver, depending on the specific sequential operation considered.
Future work will focus on relaxing the limitations outlined above, but also on investigating the applicability of DeepPCR to speedup forward and backward passes through more complex architectures, as well as to speedup different types of sequential operations. In particular, text generation in large language models [4] could be a suitable candidate. Overall, DeepPCR represents a promising method for speeding up training and inference in applications where reducing wall-clock time is critical, and additional computational power is available for parallelization. Furthermore, DeepPCR has the potential to unlock architectures which were not previously experimented upon, due to the long computational time required to perform inference on them.
Figure 8: Newton solver analysis for forward pass through MLP (left), and ResNet training (right).
## Acknowledgements
The authors would like to thank Barry Theobald, David Grangier and Ronan Collobert for their effort and help in proofreading the paper, and Nicholas Apostoloff and Jerremy Holland for supporting this work. The work by Federico Danieli was conducted as part of the AI/ML Residency Program in MLR at Apple.
|
2308.16757 | Universal Approach to Critical Percolation | Percolation problems appear in a large variety of different contexts ranging
from the design of composite materials to vaccination strategies on community
networks. The key observable for many applications is the percolation
threshold. Unlike the universal critical exponents, the percolation threshold
depends explicitly on the specific system properties. As a consequence,
theoretical approaches to the percolation threshold are rare and generally
tailored to the specific application.
Yet, any percolating cluster forms a discrete network the emergence of which
can be cast as a graph problem and analyzed using branching processes. We
propose a general mapping of any kind of percolation problem onto a branching
process which provides rigorous lower bounds of the percolation threshold.
These bounds progressively tighten as we incorporate more information into the
theory. We showcase our approach for different continuum problems finding
accurate predictions with almost no effort. Our approach is based on first
principles and does not require fitting parameters. As such it offers an
important theoretical reference in a field that is dominated by simulation
studies and heuristic fit functions. | Fabian Coupette, Tanja Schilling | 2023-08-31T14:26:02Z | http://arxiv.org/abs/2308.16757v1 | # Universal Approach to Critical Percolation
###### Abstract
Percolation problems appear in a large variety of different contexts ranging from the design of composite materials to vaccination strategies on community networks. The key observable for many applications is the percolation threshold. Unlike the universal critical exponents, the percolation threshold depends explicitly on the specific system properties. As a consequence, theoretical approaches to the percolation threshold are rare and generally tailored to the specific application. Yet, any percolating cluster forms a discrete network the emergence of which can be cast as a graph problem and analyzed using branching processes. We propose a general mapping of any kind of percolation problem onto a branching process which provides rigorous lower bounds of the percolation threshold. These bounds progressively tighten as we incorporate more information into the theory. We showcase our approach for different continuum problems finding accurate predictions with almost no effort. Our approach is based on first principles and does not require fitting parameters. As such it offers an important theoretical reference in a field that is dominated by simulation studies and heuristic fit functions.
Percolation describes the formation of giant components in complex systems [1; 2]. Originally proposed to describe water permeating a rock through the emergence of a system spanning cavity network [3; 4; 5; 6], percolation theory has been applied in a broad variety of different contexts [7; 8] such as the design of composite materials [9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19], the analysis of complex networks [20; 21; 22; 23; 24], and transport through porous media [25; 26; 27; 28; 29]. The phenomenon attracted particular attention due to its resemblance of a thermodynamic phase transition [30; 31; 32; 33; 34], where the percolation probability acts as an order parameter and the mean cluster size can be interpreted as a susceptibility with characteristic power-law exponents describing their scaling behavior in the vicinity of the critical point, i.e., the percolation threshold. While those critical exponents coincide for all standard percolation problems set in the same spatial dimension [35; 36; 37; 38; 39; 40], the percolation threshold itself sensitively depends on the intricacies of the system. As a consequence, theoretical approaches are mostly tailored to specific applications [41; 42; 43; 44; 45; 11; 15; 46; 47] and straightforward simulation is the primary tool of choice for the accurate determination of critical parameters [48; 49; 50; 51; 52; 53; 54]. Yet, the predictive power of these approaches is limited.
Connectivity percolation has been studied with a multitude of different prefixes such as lattice, continuum, directed, dynamic, protected, explosive or bootstrap [8; 55]. In spite of this variety of flavors, percolation inherently is always a graph problem. Even if particles move continuously in space, the connectivity relationship between the particles in the system can be expressed as a graph with each vertex representing a particle and edges corresponding to a connection between the particles represented by the incident vertices. Thus, each configuration \(\gamma\) of the system translates into a network which we may independently analyze for the existence of a giant component. Therefore, all percolation problems have a unified foundation which we will exploit in the following.
For illustrative purposes consider an infinite connected graph \(G\). Assigning an arbitrary vertex of the graph as the origin, \(\mathcal{O}\), we define the \(k\)-neighborhood \(\mathcal{N}_{k}\) of \(\mathcal{O}\) as the sub-graph spanned by all random walks of length \(k\) starting at the origin. The shortest distance to the origin partitions the vertex set of the complete network into mutually disjoint vertex sets \(V_{k}=V(\mathcal{N}_{k})\setminus V(\mathcal{N}_{k-1})\) where \(V(\cdot)\) refers to the vertex set of the graph in brackets (cf. Fig. 1). We now define a percolation problem on \(G\) by assigning a degree of freedom to each edge, i.e., each edge is either open or closed. Two vertices are considered connected if there is a path of open edges linking them. We call a vertex open if it is connected to the origin. The percolation probability \(\Theta\) is defined as the probability that the origin is part of an infinite cluster of connected vertices. If the model is taking parameters \(\boldsymbol{p}\in\Lambda\) from a parameter space \(\Lambda\) then the critical manifold (the "percolation threshold") is defined as the boundary of the set
\[\{\boldsymbol{p}\in\Lambda:\Theta(\boldsymbol{p})=0\}. \tag{1}\]
Given a configuration of the system, define \(X_{k}\) as the number of open vertices in \(V_{k}\) on the sub-graph \(\mathcal{N}_{k}\). Accordingly, \(X_{1}\) is the number of direct neighbors of the origin, i.e., vertices connected to the origin through a single open edge. Direct neighbors of any of these \(X_{1}\) vertices that have not been visited before comprise \(X_{2}\). Continuing in this manner gives rise to a sequence \((X_{k})_{k\in\mathbb{N}}\) that we call surface activity sequence. Only the open vertices in \(V_{k}\) have the capacity to induce open vertices in \(V_{k+1}\) on \(\mathcal{N}_{k}\). Consequently, the sequence terminates once \(X_{k}=0\), i.e. there is a finite cluster around the origin. Notice that through the exclusion of previously visited vertices in the
exploration of neighborhoods the graph generated by the modified search algorithm is treelike. Thus, we can interpret the stochastic process as a branching process of the form
\[X_{k+1}=\sum_{i=1}^{X_{k}}\xi_{i}^{k}\quad\text{with}\quad X_{0}=1\;, \tag{2}\]
with \(\xi_{i}^{k}\) being a random variable describing the distribution of next-level neighbors generated by the \(i^{\text{th}}\) member of the \(k\)-neighborhood. The extinction probability \(Q\) of this branching process is \(Q=1-\Theta\). At this stage it is irrelevant whether the sequence \((X_{k})_{k\in\mathbb{N}}\) was generated by bond percolation on the square lattice, cavities in a porous medium, or carbon nano-tubes dispersed in a polymer matrix. We can carry out this mapping for any percolation problem, but naturally we only transfer the original complexity onto the definition of the random variables \(\xi_{i}^{k}\). So what is all this good for?
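The practical content of eq. (2) is that the percolation question reduces to whether this branching process survives. As a purely illustrative example (not taken from the paper), the following sketch estimates the survival probability \(\Theta=1-Q\) for the simplest offspring law, \(\xi\sim\mathrm{Binomial}(z-1,p)\), i.e., bond percolation on a Bethe lattice with coordination number \(z\), whose exact threshold is \(p_{c}=1/(z-1)\):

```python
# Illustrative sketch (not from the paper): survival probability of the branching
# process in eq. (2) with offspring law xi ~ Binomial(z - 1, p), i.e. bond
# percolation on a Bethe lattice with coordination number z (p_c = 1/(z - 1)).
# For simplicity the root also draws z - 1 offspring; this does not shift p_c.
import numpy as np

rng = np.random.default_rng(1)

def survival_probability(p, z=4, n_runs=20_000, max_generations=200, cap=10_000):
    """Fraction of runs whose surface activity X_k has not died out."""
    survived = 0
    for _ in range(n_runs):
        x = 1                                   # X_0 = 1: the origin itself
        for _ in range(max_generations):
            if x == 0:
                break
            # each active vertex spawns Binomial(z - 1, p) new active vertices
            x = int(rng.binomial(z - 1, p, size=x).sum())
            if x > cap:                         # clearly supercritical; count as surviving
                break
        survived += x > 0
    return survived / n_runs

z = 4
for p in (0.25, 1 / (z - 1), 0.40, 0.50):
    print(f"p = {p:.3f}   Theta ~ {survival_probability(p, z):.3f}")
# Theta stays (close to) zero up to p_c = 1/3 and grows beyond it.
```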
There are two different aspects contributing to the complexity of \(\xi_{i}^{k}\) on the graph level: vertex correlations and loop structure. The former can manifest, for instance, as the friendship paradox [56; 57] in social networks or the structure of a hard sphere fluid [58]. These correlations can be both positive and negative, and they tend to decay with the distance between layers of the construction. In simplified terms, vertex correlations account for the \(k\)-dependence of \(\xi_{i}^{k}\). Loops, on the other hand, exclusively induce negative correlations. The redundancy of multiple paths activating the same surface vertex leads to a reduced average surface activity compared to counting each path individually [4]. We are going to exploit this one-sided correlation in the following when constructing a lower bound for the percolation threshold.
Treelike networks are an important special case as they allow for exact calculation of the percolation threshold [59]. Without loops, the probability distributions corresponding to the individual \(\xi_{i}^{k}\) only depend on the distribution of vertex degrees across the system. If, furthermore, the network is asymptotically homogeneous, i.e., \(\xi^{k}\leadsto\xi\) converges in distribution to a common random variable \(\xi\) (sometimes involving a coarse-graining step cf. Fig. 1), the problem simplifies to a Galton-Watson branching process [60]. As a consequence, the percolation problem on a treelike network is critical if
\[\mathbb{E}[\xi]=1\;, \tag{3}\]
i.e., an active vertex induces on average another active vertex on the next level. The random variable \(\xi\) captures the recurring transition from one layer of the construction to the next. If the network features degree correlations (cf. right panel of Fig. 1) the definition of \(\xi\) may require an additional coarse-graining step but the criticality condition remains valid. As these degree correlations do not fundamentally change the problem we will ignore them for the moment. Loops are the actual problem.
When casting percolation on a network with cycles onto a branching process, we effectively replace loops by correlations between the random variables \(\xi_{i}^{k}\) that control the propagation of the surface activity. The distribution of \(X_{k+1}\) is not entirely defined by \(X_{k}\) but also depends on the position of the active surface vertices relative to each other.
If the length of loops is bounded throughout the system, we can coarse-grain to absorb the largest loop in a single branching step and solve the network as a decorated Bethe lattice. Indeed, the significance of loops of diverging size constitutes the universality class of the percolation problem. However, a loop comprises two independent paths leading to the same surface vertex. If we add the probability of each path to be open, we systematically overestimate the average surface activity by neglecting loops. For systems containing only loops of finite length we can apply our previous solution strategy, integrate out the largest loops and use eq. (3) to compute the percolation threshold. That means, without vertex correlations,
\[\mathbb{E}[X_{k}]=1\;, \tag{4}\]
always provides a lower bound to the percolation threshold which becomes progressively more precise as \(k\) is increased. Eq. (4) gives rise to a hierarchy of approximations. We call this hierarchy _areal expansion_ in analogy to the virial expansion, because in the continuum we will systematically integrate out larger volumes rather than expanding in the number of particles participating in an interaction.
For \(k=1\), eq. (4) assumes an entirely treelike network topology, which is equivalent to the second virial approximation and thus consistent with the exactly solved percolation models [59]. For \(k=2\), we correctly
Figure 1: Impact of vertex correlations: fragments of two treelike lattices with the same mean vertex degree \(\langle z\rangle=3\). Left: Globally homogeneous vertex degree, i.e., regular Bethe lattice, with critical edge probability \(p_{c}=\frac{1}{\langle z\rangle-1}=\frac{1}{2}\). Right: Vertex degree alternates between 2 and 4 on every path across the lattice. Coarse-graining two subsequent branching steps into one we recover a homogeneous Bethe lattice with \(p_{c}=\frac{1}{\sqrt{3}}>\frac{1}{2}\). Colors and vertex sizes distinguish the vertex sets \(V_{k}\). Edges illustrate a realization of a bond percolation model: thick bonds are open, dashed bonds are closed.
incorporate loops comprising up to four edges. As \(k\) goes to infinity we eventually integrate out the entire system and the lower bound necessarily becomes arbitrarily tight. Naturally, we cannot compute \(X_{k}\) analytically for large \(k\) in complicated systems so that, regarding the exact value of the percolation threshold, we do not gain anything compared to a straightforward simulation. The strength of our approach is that we can compute reliable lower bounds with decent precision with little effort. Moreover, analytical results for low orders of the construction enable us to directly characterize the impact of model parameters on the percolation threshold.
To demonstrate the virtues of our approach, consider one of the most fundamental continuum percolation problems: points randomly distributed in \(\mathbb{R}^{3}\) with a prescribed number density \(\rho\). Points are connected if their separation is smaller than a parameter \(d\). Above a critical dimensionless density \(\rho_{c}d^{3}\approx 0.6530\) [50], the system almost surely contains an infinite cluster of connected points. While established simulation techniques allow for an accurate determination of this value, theoretical approaches like connectedness percolation theory employing liquid state theory for connectivity are generally inconclusive. The virial series cannot be truncated at any accessible order [61, 62] and liquid state closures to the connectedness Ornstein-Zernike equation generate wrong diagrammatic expansions [63, 64, 65, 41, 43]. Other methods which provide accurate predictions are either heuristic in nature, based on unjustifiable assumptions, or tailored to specific systems [66, 67, 17]. With all these approaches, it is extremely hard to _a priori_ estimate the accuracy of the result, which renders their predictive power virtually nonexistent.
The areal approximation has a straightforward continuum formulation. We decompose the probability \(p(\mathbf{r}_{1},\mathbf{r}_{2})\) that particles positioned at \(\mathbf{r}_{1}\) and \(\mathbf{r}_{2}\) are connected into functions \(p_{k}\), which describe the connection probability with the additional constraint that the shortest path between them has length \(k\)
\[p(\mathbf{r}_{1},\mathbf{r}_{2})=\sum_{k=1}^{\infty}p_{k}(\mathbf{r}_{1},\mathbf{r}_{2})\;. \tag{5}\]
\(p_{k}(\mathcal{O},\mathbf{r})\) describes the probability density that a particle at \(\mathbf{r}\) is activated in the \(k^{\text{th}}\) branching step. Accordingly, the average number of particles activated in the \(k^{\text{th}}\) branching step, i.e., \(X_{k}\), is given by
\[\mathbb{E}[X_{k}]=\int\;\mathrm{d}\mathbf{r}\;\rho(\mathbf{r})p_{k}(\mathcal{O},\mathbf{r })\;. \tag{6}\]
Combining eq. (4) and eq. (6), we find a sequence \((\rho_{c}^{k})_{k\in\mathbb{N}}\) of rigorous lower bounds for the percolation threshold of a continuum percolation problem. However, the graph distance is not the natural length scale for a continuum system. Thus, we are going to improve the approach with two slight modifications.
Eq. (6) effectively approximates the cluster containing the origin as a tree of \(k\)-neighborhoods. We can switch to euclidean distance by adapting our notion of a \(k\)-neighborhood: instead of referring to the particles that can be reached by a random walk of \(k\)-steps from the origin on the connectivity graph, we include all particles that can be reached by a random walk that does not exit the ball \(\mathcal{B}_{k}(\mathcal{O})\) of radius \(k\) (euclidean distance) around the origin. The cluster containing the origin is almost surely finite if there is not on average at least one particle outside of the ball connected to the \(k\)-neighborhood. Now we define \(\mathbb{E}[X_{k+1}]\) as the average number of particles in direct contact with the \(k\)-neighborhood outside of the \(k\)-ball averaged over all system configurations.
Applying our approach to the sphere model (and generally to all non-interacting models) is particularly simple: Due to a homogeneous density distribution,
\[\mathbb{E}[X_{k+1}]=\rho\,\mathbb{E}[V_{A}(k)]\;, \tag{7}\]
with the active volume \(V_{A}(k)\subset\mathcal{B}_{k}(\mathcal{O})^{c}\) of a configuration defined as the volume outside of the \(k\)-ball in which a probe particle would intersect the \(k\)-neighborhood of the origin (see Fig. 2). The active volume comprises the set of positions which have the capacity to further extend the cluster that contains the origin. For spheres, \(V_{A}(0)\) is simply the excluded volume of the origin. This is equivalent to the second virial approximation as required for consistency with exactly solvable models. Yet, the third virial order accounts only for the addition of triangles which does not yield a real percolation threshold (see [62]). Conversely, the second areal approximation contains all configurations of the ball \(\mathcal{B}_{d}(\mathcal{O})^{c}\), e.g., complete graphs to arbitrary order. Moreover, the areal
Figure 2: Penetrable disks of diameter \(d\). Left: Areal order \(k=1\). The blue particle in the origin is isolated unless there is at least one particle centered in the active volume \(V_{A}\) (hatched). The average number of particles within this area becomes \(\mathbb{E}[X_{1}]\). Right: Areal order \(k=2\). We fix a configuration within the central disk of radius of \((k-1)d\). The corresponding cluster nucleus remains finite unless there is at least one particle in the new active volume \(V_{A}\) (hatched). The crossed area is already specified so that it does not contribute to \(V_{A}\). Averaging \(\rho V_{A}\) over all configurations in the crossed area (origin fixed) gives us \(\mathbb{E}[X_{2}]\).
approximation by design provides lower bounds of the percolation thresholds which tighten as we increase the order of the expansion. The results for the first orders are summarized in Tab. 1 - the first two areal orders are calculated analytically, for the rest we use small scale Monte Carlo integration.
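To make the procedure concrete, the following hedged sketch (an assumed implementation, not the authors' code; all lengths in units of \(d\)) estimates the second areal order for fully penetrable spheres along the lines of Fig. 2: the origin particle is fixed, the ball of radius \(d\) is filled with a Poisson number of particles, and the active volume outside the ball is sampled with probe points; the lower bound is the density at which \(\mathbb{E}[X_{2}]=\rho\,\mathbb{E}[V_{A}]\) crosses 1.

```python
# Hedged Monte Carlo sketch (assumed implementation, all lengths in units of d) of the
# second areal order for penetrable spheres: fix the origin particle, fill the ball of
# radius d with a Poisson number of particles, and probe the active volume outside the
# ball; eqs. (4) and (7) then give the bound as the root of E[X_2] = rho * E[V_A] = 1.
import numpy as np

rng = np.random.default_rng(2)

def uniform_in_shell(n, r_min, r_max):
    """n points uniformly distributed in the spherical shell r_min < r < r_max."""
    r = rng.uniform(r_min**3, r_max**3, size=n) ** (1.0 / 3.0)
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    return r[:, None] * v

def mean_X2(rho, n_config=600, n_probe=2000):
    ball_vol = 4.0 / 3.0 * np.pi                 # ball of radius 1 (= d)
    shell_vol = ball_vol * (2.0**3 - 1.0)        # only 1 < r < 2 can touch the nucleus
    va = np.zeros(n_config)
    for c in range(n_config):
        n_in = rng.poisson(rho * ball_vol)
        if n_in == 0:
            continue   # nucleus is just the origin, which reaches only to r = d: V_A = 0
        pts = uniform_in_shell(n_in, 0.0, 1.0)
        # every particle inside the ball of radius d is within d of the origin,
        # so the whole configuration belongs to the cluster nucleus
        probes = uniform_in_shell(n_probe, 1.0, 2.0)
        d_min = np.min(np.linalg.norm(probes[:, None, :] - pts[None, :, :], axis=2), axis=1)
        va[c] = shell_vol * np.mean(d_min <= 1.0)
    return rho * va.mean()

for rho in (0.25, 0.30, 0.35, 0.40):
    print(f"rho d^3 = {rho:.2f}   E[X_2] ~ {mean_X2(rho):.3f}")
# E[X_2] should cross 1 close to the k = 2 bound quoted in Tab. 1 (0.3468).
```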
However, despite a substantial improvement over the virial approximation, the lower bounds are still not particularly tight. This is because we have ignored vertex correlations. Here, our second improvement comes in. The origin has a full \(4\pi\) solid angle available to branch to, whereas a particle on the surface of the growing cluster after \(j\) branching iterations can only grow outward as we already integrated out a ball of radius \((j-1)d\). If \(j\) is large but finite we still find for any \(\rho>0\) an open particle \(J\) on the surface of the cluster with finite probability. Thus, the system percolates if a branching process initiated at \(J\) does not eventually terminate with probability 1. Yet, if \(j\) is sufficiently large, the branching process starting at \(J\) is effectively constrained to a half-space. By switching from the origin to \(J\), we remove the vertex degree correlations caused by the artificial symmetry of the origin, getting rid of the explicit \(k\)-dependence of \(\xi_{i}^{k}\) in eq. (2) by "fast-forwarding to infinity". Fig. 3 depicts the average surface activity for the constrained branching process. The resultant lower bounds to the percolation threshold \(\tilde{\rho}_{c}^{k}\) have significantly tightened (see Tab. 1). Moreover, the mean surface activities for different orders \(k\) mutually intersect each other in close proximity, at a density likewise close to the literature critical density. This pattern is familiar from finite size scaling analysis and exact for treelike networks (see e.g. ref. [59]). For general systems we expect the intersection point to drift slightly towards the percolation threshold.
Finally, we use our method to study the impact of particle anisotropy on the percolation threshold. Consider a penetrable spherocylinder of thickness \(d\) and length \(l+d\), \(l\) being the length of the linear segment, and define its aspect ratio \(\zeta:=1+\frac{l}{d}\). In the Onsager limit \(\zeta\rightarrow\infty\), the second virial approximation becomes exact [11], but for \(\zeta<100\) the approximation is inaccurate as simulation studies have shown [68; 53; 69]. Heuristic corrections have been proposed by subjecting the critical number of nearest-neighbors to a power-law fit in the aspect ratio [68; 69]. But the functional form of this fit is unjustified and does not agree with later simulation studies [53]. We can improve on the second virial approximation using the areal framework: we integrate out a ball of radius \(\zeta\) and average the number of next-nearest neighbors. We place the center of the first spherocylinder at the origin with a random orientation, fill the \(\zeta\)-ball with a fixed number of randomly placed and randomly oriented cylinders, and average the active volume of the resulting arrangement over many different configurations. Convolving the result with a Poisson distribution with mean \(\rho V_{\zeta}\), where \(V_{\zeta}\) is the volume of the specified \(\zeta\)-ball, yields \(\mathbb{E}[X_{2}]\) as a function of the density. The solutions to \(\mathbb{E}[X_{2}]=1\) for different aspect ratios lead to the lower bounds \(\rho_{c}^{2}\) illustrated in Fig. 4. We observe only a slight improvement over the second virial approximation (\(\rho_{c}^{1}\)). But, once more, we can restrict the branching process to a half-space to find a massive tightening of the lower bound. The relative deviation to the simulation results is most pronounced for \(\zeta=2\) (\(\lesssim 20\%\)) and decreases monotonically with \(\zeta\) (\(\approx 10\%\) at \(\zeta=21\)). This is expected as the importance of loops in the formation of the percolating cluster diminishes with the aspect ratio ultimately approaching treelike topology in the Onsager limit. It should be emphasized that we find this accuracy for the lowest order above the second virial approximation - the sampling simulation requires at most a few tens of cylinders. This demonstrates that the severe shortcoming of the second virial approximation is not primarily the neglect of loops but rather the omission of vertex degree correlations.
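The final step described above, convolving the sampled active volumes with a Poisson distribution for the number of cylinders in the \(\zeta\)-ball and solving \(\mathbb{E}[X_{2}]=1\), can be packaged in a few lines. The sketch below is an assumed implementation with hypothetical inputs: `mean_VA[N]` is the active volume averaged over sampled configurations containing exactly \(N\) spherocylinders, and `V_zeta` is the volume of the \(\zeta\)-ball.

```python
# Hedged sketch (assumed implementation, hypothetical inputs) of the Poisson
# convolution step: mean_VA[N] is the sampled mean active volume for exactly N
# spherocylinders inside the zeta-ball of volume V_zeta; the table must extend to
# N values well beyond rho * V_zeta for the truncation to be harmless.
import numpy as np
from scipy.stats import poisson
from scipy.optimize import brentq

def mean_X2(rho, mean_VA, V_zeta):
    n = np.arange(len(mean_VA))
    weights = poisson.pmf(n, rho * V_zeta)       # probability of finding exactly N cylinders
    return rho * np.sum(weights * mean_VA)       # eq. (4): E[X_2] = rho * E[V_A]

def areal_bound(mean_VA, V_zeta, bracket=(1e-4, 5.0)):
    """Lower bound rho_c^2 as the root of E[X_2](rho) = 1."""
    return brentq(lambda r: mean_X2(r, mean_VA, V_zeta) - 1.0, *bracket)
```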
In summary, we introduced a general mapping of percolation problems onto branching processes. It is easily applicable to any type of percolation problem: pick an origin and define a notion of \(k\)-neighborhood, construct the corresponding random variables \(X_{k}\) and solve
\begin{table}
\begin{tabular}{c c c c}
**order \(k\)** & **areal \(\rho_{c}^{k}d^{3}\)** & **ar. half-space \(\tilde{\rho}_{c}^{k}d^{3}\)** & **virial** \\
1 & 0.2387 & 0.4775 & — \\
2 & 0.3468 & 0.5502 & 0.2387 \\
3 & 0.4375 & 0.5959 & — \\
4 & 0.4874 & 0.6099 & 0.3001 \\ \hline Simulation & 0.6530 & 0.6530 & 0.6530 \\ \end{tabular}
\end{table}
Table 1: Lower bounds for the percolation threshold calculated with areal and virial approximation.
Figure 3: Mean surface activity \(\mathbb{E}[X_{k}]\) of fully penetrable spheres as a branching process constrained to a half-space. For \(k\leq 2\) the \(\mathbb{E}[X_{k}]\) can be computed analytically, for \(k>2\) we use small-scale simulations. The circles mark the roots of \(\mathbb{E}[X_{k}]=1\) corresponding to the lower bounds listed in Tab. 1. The red circle contains all mutual intersections between the surface activities at different orders. The vertical dashed line demarcates the critical density determined with high-precision simulations [50]. |
2309.08898 | Investigation of the Anomalous and Topological Hall Effects in Layered
Monoclinic Ferromagnet Cr$_{2.76}$Te$_4$ | We studied the electrical transport, Hall effect, and magnetic properties of
monoclinic layered ferromagnet Cr$_{2.76}$Te$_4$. Our studies demonstrate
Cr$_{2.76}$Te$_4$ to be a soft ferromagnet with strong magnetocrystalline
anisotropy. Below 50 K, the system shows an antiferromagnetic-like transition.
Interestingly, between 50 and 150 K, we observe fluctuating magnetic moments
between in-plane and out-of-plane orientations, leading to non-coplanar spin
structure. On the other hand, the electrical resistivity data suggest it to be
metallic throughout the measured temperature range, except a $kink$ at around
50 K due to AFM ordering. The Rhodes-Wohlfarth ratio
$\frac{\mu_{eff}}{\mu_{s}}=1.89 (>1)$ calculated from our magnetic studies
confirms that Cr$_{2.76}$Te$_4$ is an itinerant ferromagnet. Large anomalous
Hall effect has been observed due to the skew-scattering of impurities and the
topological Hall effect has been observed due to non-coplanar spin-structure in
the presence of strong magnetocrystalline anisotropy. We examined the mechanism
of anomalous Hall effect by employing the first principles calculations. | Shubham Purwar, Achintya Low, Anumita Bose, Awadhesh Narayan, S. Thirupathaiah | 2023-09-16T06:45:46Z | http://arxiv.org/abs/2309.08898v1 | Investigation of the Anomalous and Topological Hall Effects in Layered Monoclinic Ferromagnet Cr\({}_{2.76}\)Te\({}_{4}\)
###### Abstract
We studied the electrical transport, Hall effect, and magnetic properties of the monoclinic layered ferromagnet Cr\({}_{2.76}\)Te\({}_{4}\). Our studies demonstrate Cr\({}_{2.76}\)Te\({}_{4}\) to be a soft ferromagnet with strong magnetocrystalline anisotropy. Below 50 K, the system shows an antiferromagnetic-like transition. Interestingly, between 50 and 150 K, we observe fluctuating magnetic moments between in-plane and out-of-plane orientations, leading to a non-coplanar spin structure. On the other hand, the electrical resistivity data suggest it to be metallic throughout the measured temperature range, except for a kink at around 50 K due to AFM ordering. The Rhodes-Wohlfarth ratio \(\frac{\mu_{\mathrm{eff}}}{\mu_{\mathrm{s}}}=1.89(>1)\) calculated from our magnetic studies confirms that Cr\({}_{2.76}\)Te\({}_{4}\) is an itinerant ferromagnet. A large anomalous Hall effect has been observed due to the skew-scattering of impurities, and the topological Hall effect has been observed due to the non-coplanar spin-structure in the presence of strong magnetocrystalline anisotropy. We examined the mechanism of the anomalous Hall effect by employing first-principles calculations.
## I Introduction
Two-dimensional (2D) magnetic materials with topological properties [1; 2; 3] have sparked significant research attention recently due to their potential applications in spintronics and magnetic storage devices [4; 5; 6]. Importantly, these are van der Waals (vdW) magnets possessing peculiar magnetic properties with strong magnetocrystalline anisotropy (MCA) [7; 8; 9]. In general, a Heisenberg-type ferromagnet cannot sustain long-range magnetic ordering at a finite temperature in the 2D limit due to dominant thermal fluctuations [10]. However, the strong magnetic anisotropy that is usually present in low-dimensional materials can stabilize the long-range magnetic ordering, resulting in a 2D Ising-type ferromagnet [11]. To date, many 2D ferromagnets have been discovered experimentally [12; 13], but only a few of them show topological signatures such as the topological Hall effect (THE) or a skyrmion lattice. For instance, recent microscopic studies on Cr\({}_{2}\)Ge\({}_{2}\)Te\({}_{6}\)[14], Fe\({}_{3}\)GeTe\({}_{2}\)[15], and Fe\({}_{5}\)GeTe\({}_{2}\)[16; 17] demonstrated topological magnetic structures in the form of skyrmion bubbles in their low-dimensional form.
On the other hand, soon after predicting the layered Cr\({}_{\mathrm{x}}\)Te\({}_{\mathrm{y}}\)-type systems as potential candidates to realize 2D ferromagnetism in their bulk form [18; 19], a variety of Cr\({}_{\mathrm{x}}\)Te\({}_{\mathrm{y}}\) compounds were grown, including CrTe [20], Cr\({}_{2}\)Te\({}_{3}\)[21], Cr\({}_{3}\)Te\({}_{4}\)[22], and Cr\({}_{5}\)Te\({}_{8}\)[23]. Interestingly, all these systems are formed by the alternating stacking of Cr-full (CrTe\({}_{2}\)-layer) and Cr-vacant (intercalated Cr-layer) layers along either the \(a\)-axis or the \(c\)-axis [24]. Thus, the Cr concentration plays a critical role in determining the crystal structure, magnetic, and transport properties. Compounds like Cr\({}_{5}\)Te\({}_{8}\), Cr\({}_{2}\)Te\({}_{3}\), and Cr\({}_{3}\)Te\({}_{4}\) are reported to crystallize in monoclinic or trigonal structures, whereas Cr\({}_{1-\mathrm{x}}\)Te (x \(<0.1\)) crystallizes in the hexagonal NiAs-type structure [25]. The electronic band structure calculations performed on CrTe, Cr\({}_{2}\)Te\({}_{3}\), and Cr\({}_{3}\)Te\({}_{4}\) suggest a strong out-of-plane overlap of the Cr \(3\mathrm{d}\) e\({}_{\mathrm{g}}\) orbitals, d\({}_{z^{2}}\)\(-\)d\({}_{z^{2}}\), along the \(c\)-axis owing to the relatively smaller nearest-neighbor Cr\(-\)Cr distance [24]. In addition, Cr\({}_{5}\)Te\({}_{8}\)[23], Cr\({}_{1.2}\)Te\({}_{2}\)[26], and Cr\({}_{0.87}\)Te [27] are known to show topological properties in the hexagonal phase.
In this work, we systematically investigate the electrical transport, Hall effect, and magnetic properties of monoclinic Cr\({}_{2.76}\)Te\({}_{4}\), which is very close to the stoichiometric composition of Cr\({}_{3}\)Te\({}_{4}\). We observe that the easy-axis of magnetization is parallel to the \(bc\)-plane, leading to strong magnetocrystalline anisotropy. Below 50 K, the system shows an antiferromagnetic-like transition. In addition, we find fluctuating Cr magnetic moments between in-plane and out-of-plane directions within the temperature range of 50 to 150 K. Electrical resistivity data suggest Cr\({}_{2.76}\)Te\({}_{4}\) to be metallic throughout the measured temperature range, with a kink at around 50 K due to AFM ordering. Our studies clearly identify Cr\({}_{2.76}\)Te\({}_{4}\) as an itinerant ferromagnet. Magnetotransport measurements demonstrate a large anomalous Hall effect (AHE) and topological Hall effect (THE) in this system. First-principles calculations point to an intrinsic AHE due to non-zero Berry curvature near the Fermi level, while experimentally it is found to be an extrinsic AHE due to skew-scattering [28].
## II Experimental Details
High quality single crystals of Cr\({}_{2.76}\)Te\({}_{4}\) were grown by the chemical vapor transport (CVT) technique with iodine as a transport agent as per the procedure described earlier [29]. Excess iodine present on the crystals was removed by washing with ethanol several times, and the crystals were then dried under vacuum. The as-grown single crystals were large in size (\(3\times 2\) mm\({}^{2}\)), shiny in appearance, and easily cleavable in the \(bc\)-plane. A photographic image of typical single crystals is shown in the inset of Fig. 1(a). The crystal structure and phase purity of the
single crystals were identified by the X-ray diffraction (XRD) technique using Rigaku X-ray diffractometer (SmartLab, 9kW) with Cu K\({}_{\alpha}\) radiation of wavelength 1.54059 A. Compositional analysis of the single crystals was done using the energy dispersive X-ray spectroscopy (EDS of EDAX). Magnetic and transport studies were carried out on the physical property measurement system (9 Tesla-PPMS, DynaCool, Quantum Design). See Supplemental Material for more discussion on the chemical composition of the studied system [30]. Electrical resistivity and Hall measurements were performed in the standard four-probe method. To eliminate the longitudinal magnetoresistance contribution due to voltage probe misalignment, the Hall resistance was calculated as \(\rho_{yz}\)(H)=[\(\rho_{yz}(+\mathrm{H})-\rho_{yz}(-\mathrm{H})]/2\).
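As a small illustration of the antisymmetrization step above, the following sketch (hypothetical data arrays; not the authors' analysis code) interpolates a measured field sweep onto a symmetric grid and returns the antisymmetric Hall component.

```python
# Minimal sketch (hypothetical arrays) of the antisymmetrization
# rho_yz(H) = [rho_yz(+H) - rho_yz(-H)] / 2, which removes the symmetric
# longitudinal pick-up caused by voltage-probe misalignment.
import numpy as np

def antisymmetrize(H, rho_raw, n_grid=201):
    """H, rho_raw: one full field sweep covering negative and positive fields."""
    H_max = min(H.max(), -H.min())
    H_grid = np.linspace(0.0, H_max, n_grid)
    order = np.argsort(H)
    rho_pos = np.interp(H_grid, H[order], rho_raw[order])                 # rho(+H)
    rho_neg = np.interp(-H_grid[::-1], H[order], rho_raw[order])[::-1]    # rho(-H)
    return H_grid, 0.5 * (rho_pos - rho_neg)
```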
## III Density functional theory calculations
We have performed density functional theory (DFT) calculations using the Quantum Espresso package [31; 32]. We used fully relativistic pseudopotentials in order to include the spin-orbit interaction. The generalized gradient approximation was considered based on the Perdew-Burke-Ernzerhof implementation [33] within the projector augmented wave (PAW) basis [34]. For the wave function and charge density expansions, cutoff values of 50 Ry and 300 Ry were chosen, respectively. For the self-consistent calculation, a 7\(\times\)7\(\times\)7 Monkhorst-Pack grid was used [35]. In order to account for the van der Waals forces, the semi-empirical Grimme DFT-D2 correction [36] was included. We further constructed a tight-binding model based on the maximally localized Wannier functions using the wannier90 code [37], with Cr 3\(\mathrm{d}\), Cr 3\(\mathrm{s}\), Te 5p, and Te 5\(\mathrm{s}\) orbitals as the basis. Then, utilizing the obtained tight-binding model, we calculated the Berry curvature along the high symmetry directions using the Kubo formula [38] encoded in the wannier90 code [37]. We have calculated the intrinsic anomalous Hall conductivity (AHC) by integrating the \(x\)-component of the Berry curvature over the entire BZ using the WannierTools code [39].
## IV Results and discussion
Fig. 1(a) shows the XRD pattern of Cr\({}_{2.76}\)Te\({}_{4}\) single crystal with intensity peaks of \((1\;0\;0)\) Bragg plane, indicating that the crystal growth plane is along the \(a\)-axis. Inset in Fig. 1(a) shows the photographic image of Cr\({}_{2.76}\)Te\({}_{4}\) single crystal. Fig. 1(b) shows XRD pattern of crushed Cr\({}_{2.76}\)Te\({}_{4}\) single crystals measured at room temperature. All peaks in the XRD pattern can be attributed to the monoclinic crystal structure of \(\mathrm{C12/m1}\) space group (No.12) without any impurity phases, consistent with the crystal phase of Cr\({}_{3}\)Te\({}_{4}\)[40]. Rietveld refinement confirms the monoclinic structure with lattice parameters \(a\)=13.9655(2) A, \(b\)=3.9354(4)A, \(c\)=6.8651(7) A, \(\alpha\)=\(\beta\)=90\({}^{\circ}\), and \(\gamma\)=118.326(7)\({}^{\circ}\). These values are in good agreement with previous reports on similar systems [41; 42]. Fig. 1(c) shows schematic crystal structure of Cr\({}_{2.76}\)Te\({}_{4}\) projected onto the \(ac\)-plane (top panel) and \(ab\)-plane (bottom panel). Cr1 atoms are located in the Cr-vacant layer with an occupancy of 0.189/u.c, whereas Cr2 atoms are located in the
Figure 1: (a) XRD pattern from Cr\({}_{2.76}\)Te\({}_{4}\) single crystals. Inset in (a) shows the photographic image of the single crystals. (b) X-ray diffraction pattern from the crushed Cr\({}_{2.76}\)Te\({}_{4}\) single crystals, overlapped with Rietveld refinement. (c) Schematic crystal structure of Cr\({}_{2.76}\)Te\({}_{4}\) obtained from the Rietveld refinement.
Figure 2: (a) Temperature dependent magnetization \(\mathrm{M(T)}\) measured under ZFC and FC modes with a magnetic field H=1000 Oe for H \(\parallel a\) and H \(\parallel bc\). (b) Variation of magnetization \(\Delta\)M=(\(\mathrm{M_{FC}}\)-\(\mathrm{M_{ZFC}}\)) plotted as a function of temperature. Inset in (b) shows first derivative of magnetization with respect to the temperature (dM/dT) of the data shown in (a) for H \(\parallel a\). (c) and (d) Field dependent magnetization \(\mathrm{M(H)}\) measured at different temperatures for H \(\parallel a\) and H \(\parallel bc\), respectively.
Cr-full layer with an occupancy of 0.5/u.c. The intercalated Cr atoms (Cr1) are sandwiched within the van der Waals gap created by the two CrTe\({}_{2}\) layers, as shown in Fig. 1(c).
The EDS measurements suggest an actual chemical composition of Cr\({}_{2.76}\)Te\({}_{4}\).
To explore the magnetic properties of Cr\({}_{2.76}\)Te\({}_{4}\), magnetization as a function of temperature [M(T)] was measured as shown in Fig. 2(a) at a field of 1000 Oe applied parallel to the \(bc\)-plane (H \(\parallel bc\)) and \(a\)-axis (H \(\parallel a\)) for both zero-field-cooled (ZFC) and field-cooled (FC) modes. We observe that Cr\({}_{2.76}\)Te\({}_{4}\) exhibits a paramagnetic (PM) to ferromagnetic (FM) transition at a Curie temperature (T\({}_{\rm C}\)) of 310 K, which is close to the Curie temperature of Cr\({}_{3}\)Te\({}_{4}\) (T\({}_{\rm C}\)=316 K). A decrease in the sample temperature results in a decrease in the magnetization for both H \(\parallel bc\) and H \(\parallel a\) at around 50 K, possibly due to spin-canting emerging from the coupling between the in-plane (\(bc\)-plane) AFM order and the out-of-plane (\(a\)-axis) FM order [43; 44]. Also, the in-plane saturated magnetic moment of 1.78 \(\mu_{\rm B}\)/Cr is almost 4 times higher than the out-of-plane saturated magnetic moment of 0.43 \(\mu_{\rm B}\)/Cr at 2 K with an applied field of 1000 Oe, clearly demonstrating strong magnetocrystalline anisotropy in Cr\({}_{2.76}\)Te\({}_{4}\). From the magnetization difference between ZFC and FC, \(\Delta{\rm M}={\rm M}_{\rm FC}-{\rm M}_{\rm ZFC}\), shown in Fig. 2(b), we notice significant magnetization fluctuations, as the maximum of \(\Delta\)M varies between in-plane and out-of-plane directions in going from 50 K to 150 K [45].
The magnetic state of Cr\({}_{2.76}\)Te\({}_{4}\) is further explored by measuring the magnetization isotherms [M(H)] for H \(\parallel a\) and H \(\parallel bc\) at various sample temperatures as shown in Figs. 2(c) and 2(d), respectively. Consistent with M(T) data, the magnetization saturation occurs at an applied field of 0.7
Figure 4: Temperature dependence of (a) Normal Hall coefficient R\({}_{0}\) (left axis) and charge carrier density n (right axis). (b) Anomalous Hall scaling coefficient S\({}_{\rm H}\) plotted as a function of temperature. (c) Anomalous Hall resistivity (\(\rho_{yz}^{\rm A}\)) and Hall conductivity (\(\sigma_{yz}^{\rm A}\)). (d) Plot of \(\rho_{yz}^{\rm A}\) vs. \(\rho_{zz}\). Dashed line in (d) is linear fitting with equation shown on the figure.
Figure 3: (a) Temperature dependent longitudinal resistivity \(\rho_{zz}\)(T). Top-inset in (a) shows low temperature resistivity fitted by \(\rho\)(T) = \(\rho_{0}\) + bT\({}^{2}\) and bottom-inset in (a) shows schematic diagram of linear-four-probe contacts. (b) Hall measuring geometry is shown schematically. (c) Hall resistivity (\(\rho_{yz}\)) plotted as a function of magnetic field measured at various temperatures. In (c), Red curves represent the experimental data of total Hall resistivity (\(\rho_{yz}\)), black curves represent the contributions from the normal and anomalous Hall resistivities (\(\rho^{\rm N}+\rho^{\rm A}\)), and Blue curves represent the topological Hall resistivity (\(\rho_{yz}^{\rm T}\)). See the text for more details.
T and 1.4 T for \(\rm H\parallel\)\(bc\) and \(\rm H\parallel\)\(a\), respectively, suggesting the \(bc\)-plane to be the easy-magnetization plane. Also, \(\rm Cr_{2.76}Te_{4}\) is a soft ferromagnet as it has a negligible coercivity [see Fig. S1(a) in the Supplemental Material [30]]. The observed saturation magnetization (\(\rm M_{s}\)) values of 2.586 \(\rm\mu_{B}\)/Cr and 2.55 \(\rm\mu_{B}\)/Cr for \(\rm H\parallel\)\(\rm a\) and \(\rm H\parallel\)\(\rm bc\), respectively, are smaller than that of a standalone Cr atom (3 \(\mu_{B}\)), indicating correlated magnetic states in \(\rm Cr_{2.76}Te_{4}\). These observations are in good agreement with a previous report on \(\rm Cr_{2.76}Te_{4}\) [46].
Next, coming to the main results of this contribution, Fig. 3(a) exhibits temperature dependent longitudinal electrical resistivity (\(\mathbf{\rho}_{zz}\)) of \(\rm Cr_{2.76}Te_{4}\). \(\mathbf{\rho}_{zz}(\rm T)\) suggests metallic nature throughout the measured temperature range [47]. However, a _kink_ at around 50 K is noticed in the resistivity, related to the AFM ordering [see Fig. 2(a)]. Bottom inset of Fig. 3(a) depicts schematic diagram of linear-four-probe measuring geometry and the top inset elucidates the quadratic nature of low temperature resistivity up to 50 K as it can be explained well by the Fermi liquid (FL) theory, \(\mathbf{\rho}(\rm T)=\mathbf{\rho}_{0}+aT^{2}\) where \(\mathbf{\rho}_{0}\) is the temperature independent residual resistivity. Schematic diagram of Hall measuring geometry is shown in Fig. 3(b). The Hall resistivity, \(\mathbf{\rho}_{yz}\), is measured with current along the \(y\)-axis and magnetic field applied along the \(x\)-axis to get the Hall voltage along the \(z\)-axis. Thus, Fig. 3(c) shows field dependent Hall resistivity \(\mathbf{\rho}_{yz}\) (black curve) measured at various sample temperatures. The total Hall resistivity (\(\mathbf{\rho}_{yz}\)) may have contributions from the normal Hall effect (\(\mathbf{\rho}^{\rm N}\)) and the anomalous Hall effect (\(\mathbf{\rho}^{\rm A}\)). Thus, the total Hall resistivity can be expressed by the empirical formula, \(\mathbf{\rho}_{yz}(\rm H)=\mathbf{\rho}^{\rm N}(\rm H)+\mathbf{\rho}^{\rm A}(\rm H)=\mathbf{ \mu}_{0}R_{0}H+\mathbf{\mu}_{0}R_{\rm S}M\), where \(\rm R_{0}\) and \(\rm R_{S}\) are the normal and anomalous Hall coefficients, respectively. These coefficients can be obtained by performing a linear fit using the relation \(\frac{\rho_{yz}}{\mu_{0}H}=\rm R_{0}+R_{S}\frac{M}{H}\) as shown in Fig. S1(c) of the Supplemental Material [30]. Having obtained the normal and anomalous Hall coefficients, we can now fit the total Hall resistivity (red curves) using the equation, \(\mathbf{\rho}_{yz}(\rm H)=\mathbf{\mu}_{0}R_{0}H+\mathbf{\mu}_{0}R_{\rm S}M\). The fitting should be nearly perfect if there is no topological Hall contribution. However, from Fig. 3(c) we can clearly notice that the fitting (red curve) is not perfect. Therefore, the topological Hall resistivity also contributes to the total Hall resistivity which can be expressed by \(\mathbf{\rho}_{yz}(\rm H)=\mathbf{\rho}^{\rm N}(\rm H)+\mathbf{\rho}^{\rm A}(\rm H)+\mathbf{ \rho}^{\rm T}(\rm H)\). Thus, the topological Hall contribution (blue curve) is extracted using the relation, \(\mathbf{\rho}^{\rm T}(\rm H)=\mathbf{\rho}_{yz}(\rm H)-[\mathbf{\rho}^{\rm N}(\rm H)+\mathbf{ \rho}^{\rm A}(\rm H)]\)[48, 49, 50]. See the Supplemental Material for more details [30].
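The decomposition just described can be expressed compactly; the following hedged sketch (hypothetical measured arrays \(H\), \(M\), and \(\rho_{yz}\) at a single temperature; not the authors' analysis code) fits \(\rm R_{0}\) and \(\rm R_{S}\) from the saturated high-field region and isolates the topological contribution.

```python
# Hedged sketch (hypothetical arrays H, M, rho_yz at one temperature): fit
# rho_yz/(mu0*H) = R_0 + R_S*(M/H) in the saturated high-field region, then
# subtract the normal and anomalous parts to isolate the topological term.
# Units must be made consistent with the measured data before use.
import numpy as np

mu0 = 4 * np.pi * 1e-7  # vacuum permeability in SI units

def decompose_hall(H, M, rho_yz, H_sat):
    high = H > H_sat                              # use only fields above saturation
    y = rho_yz[high] / (mu0 * H[high])
    x = M[high] / H[high]
    R_S, R_0 = np.polyfit(x, y, 1)                # slope = R_S, intercept = R_0
    rho_N = mu0 * R_0 * H                         # normal Hall contribution
    rho_A = mu0 * R_S * M                         # anomalous Hall contribution
    rho_T = rho_yz - rho_N - rho_A                # topological Hall contribution
    return R_0, R_S, rho_N, rho_A, rho_T
```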
Fig. 4(a) depicts the normal Hall coefficient (\(\rm R_{0}\)) and the charge carrier density (\(\rm n\)) derived using the formula \(\rm R_{0}=1/n|e|\), plotted as a function of temperature. We clearly notice from Fig. 4(a) that, as the temperature decreases, the carrier density increases down to 110 K. Below 110 K, however, the carrier density decreases with decreasing temperature and saturates below 50 K. Fig. 4(c) (left axis) presents the anomalous Hall resistivity (\(\rho_{yz}^{\rm A}\)) at zero field, obtained from the zero-field intercept of the linear high-field extrapolation [see Fig. 3(c)]. The maximum anomalous Hall resistivity is noticed at around 110 K. The anomalous Hall conductivity, \(\sigma_{yz}^{\rm A}\), derived using the formula \(\sigma_{yz}^{\rm A}=\frac{\rho_{yz}^{\rm A}}{\rho_{yz}^{2}+\rho_{zz}^{2}}\), is shown on the right axis of Fig. 4(c); it also reaches its maximum of \(\sigma_{yz}^{\rm A}\)=27 \(\rm\Omega^{-1}cm^{-1}\) at around 110 K. Fig. 4(b) shows the anomalous Hall scaling coefficient \(\rm S_{H}=\frac{\rho_{yz}^{\rm A}}{M\rho_{zz}^{2}}\), plotted as a function of temperature. The values of \(\rm S_{H}\) are in line with those of itinerant ferromagnetic systems, \(\rm S_{H}\)=0.01-0.2 V\({}^{-1}\) [51]. In general, the anomalous Hall effect can occur in solids either intrinsically, originating from the nonzero Berry curvature in momentum space [52, 53], or extrinsically, due to side-jump/skew-scattering mechanisms [54, 28, 52]. Therefore, to elucidate the mechanism of the AHE in \(\rm Cr_{2.76}Te_{4}\), we plotted \(\rho_{yz}^{\rm A}\) vs. \(\rho_{zz}\) as shown in Fig. 4(d). From Fig. 4(d) it is evident that \(\rho_{yz}^{\rm A}\) changes linearly with \(\rho_{zz}\) [\(\rho_{yz}=\alpha\rho_{zz}^{\rm m}\) for \(\rm m=1.03\pm 0.02\)] up to 110 K and then deviates on further increasing the sample temperature [55]. In the case of an itinerant ferromagnetic system, the Hall resistivity can be expressed by the relation \(\rho_{yz}=\alpha\rho_{zz}+\beta\rho_{zz}^{2}\), where \(\alpha\) and \(\beta\) are the skew-scattering and side-jump terms, respectively [56, 57, 47]. That is, in the case of skew-scattering \(\rho_{yz}^{\rm A}\) depends linearly on \(\rho_{zz}\), while in the case of side-jump \(\rho_{yz}^{\rm A}\) depends quadratically on \(\rho_{zz}\). Since the anomalous Hall resistivity depends linearly on \(\rho_{zz}\), skew-scattering could be the most suitable mechanism of the AHE observed in this system [58, 59]. Note that the intrinsic Berry curvature contribution to the AHR also depends quadratically on \(\rho_{zz}\) [55]. It thus appears that 110 K is the critical temperature of the properties discussed in Figs. 4(a)-(d). However, as per the \(\rm dM/dT\) data shown in the inset of Fig. 2(b), we think that the critical temperature could be \(\approx\) 125 K instead of 110 K, as we can see a significant change in magnetization across \(\approx\) 125 K.
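The scaling check above amounts to a straight-line fit in log-log space; a minimal sketch (hypothetical arrays collected over the temperature points below \(\sim\)110 K) is:

```python
# Minimal sketch (hypothetical arrays over temperatures below ~110 K) of the scaling
# check rho_A = alpha * rho_zz**m via a log-log fit; m close to 1 points to skew
# scattering, m close to 2 to side-jump or intrinsic contributions.
import numpy as np

def hall_scaling_exponent(rho_zz, rho_A):
    m, log_alpha = np.polyfit(np.log(rho_zz), np.log(rho_A), 1)
    return m, np.exp(log_alpha)
```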
Our experimental findings on the AHE have been examined using density functional theory calculations, as presented in Fig. 5. For the calculations, we considered the primitive unit cell of \(\rm Cr_{3}Te_{4}\), consisting of three Cr (one Cr1 and two Cr2 type) and four Te atoms. The magnetic spins of the Cr atoms were considered along the \(x\)-direction. Our calculations suggest a ferromagnetic ground state with average magnetic moments of 3.23\(\rm\mu_{B}\) and 3.15\(\rm\mu_{B}\) per Cr1 and Cr2 atom, respectively, slightly higher than the experimental average value of 2.568 \(\rm\mu_{B}/Cr\). In Fig. 5(b), we present the bulk electronic band structure of \(\rm Cr_{3}Te_{4}\). The system is found to be metallic with several bands crossing the Fermi level (\(\rm E_{F}\)). Several band crossing points are found near \(\rm E_{F}\) along the \(\rm C_{2}-Y_{2}\), \(\rm D_{2}-A\), \(\rm A-\Gamma\), and \(\rm L_{2}-\Gamma\) \(k\)-paths, but slightly away from the high symmetry points. Next, we explore the Berry curvature (\(\rm\Omega\)) calculated using the formula \(\Omega(\mathbf{k})=\nabla_{\mathbf{k}}\times A(\mathbf{k})\) (where \(A(\mathbf{k})\) is the Berry connection). The variation of the \(x\)-component of the Berry curvature (\(\rm\Omega^{x}\)) at \(\rm E_{F}\) along the high symmetry \(k\)-path is shown in Fig. 5(c). We can notice from Fig. 5(c) that \(\rm\Omega^{x}\) is strongly enhanced along the \(\rm A-\Gamma\) (\(k_{z}\)) direction. Further, Fig. 5(d) shows the anomalous Hall conductivity (AHC), \(\sigma_{yz}\), of \(\rm Cr_{3}Te_{4}\) calculated as a function of energy using Eq. (1).
\[\sigma_{yz}=\frac{e^{2}}{\hbar}\sum_{n}\int f(\varepsilon_{n}(\mathbf{k}))\,\Omega_{n}^{x}(\mathbf{k})\,\frac{\mathrm{d}\mathbf{k}}{(2\pi)^{3}} \tag{1}\]
where \(f(\varepsilon_{n}(\mathbf{k}))\) is the Fermi-Dirac distribution function and the sum runs over the bands \(n\).
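In practice, Eq. (1) is evaluated as a discrete sum over a dense \(k\)-grid. The following hedged sketch (hypothetical arrays of Wannier-interpolated band energies and Berry curvatures; not the WannierTools implementation) illustrates the bookkeeping, including the unit conversion to \(\Omega^{-1}\mathrm{cm^{-1}}\); the overall sign convention may differ between codes.

```python
# Hedged sketch of Eq. (1) as a discrete Brillouin-zone sum.  Assumes hypothetical
# arrays eps[n, k] (band energies, eV) and omega_x[n, k] (Berry curvature, Angstrom^2)
# on a uniform k-grid, e.g. from a Wannier interpolation; not the WannierTools code.
import numpy as np
from scipy.special import expit

e = 1.602176634e-19      # C
hbar = 1.054571817e-34   # J s
kB = 8.617333262e-5      # eV / K

def ahc_sigma_yz(eps, omega_x, cell_volume_A3, E_F=0.0, T=10.0):
    """Intrinsic sigma_yz in Ohm^-1 cm^-1; cell_volume_A3 is the unit-cell volume in A^3."""
    n_k = eps.shape[1]
    f = expit(-(eps - E_F) / (kB * T))            # Fermi-Dirac occupations
    # (1/N_k) * sum_k / V_cell plays the role of  d^3k / (2*pi)^3  over the BZ
    bz_sum = np.sum(f * omega_x) / (n_k * cell_volume_A3)   # units: 1 / Angstrom
    sigma_SI = (e**2 / hbar) * bz_sum * 1e10                 # 1/Angstrom -> 1/m, gives S/m
    return sigma_SI / 100.0                                  # S/m -> Ohm^-1 cm^-1
```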
Our calculations suggest an intrinsic AHC of \(\sigma_{yz}\)\(\approx\)260 \(\Omega^{-1}\)\(\mathrm{cm^{-1}}\) near \(\mathrm{E_{F}}\), much larger than the experimental value of \(\sigma_{yz}^{\mathrm{A}}\)=27 \(\Omega^{-1}\)\(\mathrm{cm^{-1}}\). Such a small experimental AHC suggests a dominant impurity-scattering contribution to the total anomalous Hall effect of \(\mathrm{Cr_{3}Te_{4}}\)[60], i.e., the skew-scattering contribution in the dirty limit [61]. However, even though the system is in the dirty limit, the total AHC should be at least comparable to the value of the intrinsic AHC (\(\approx\)260 \(\Omega^{-1}\)\(\mathrm{cm^{-1}}\)), as it has contributions from both the intrinsic Berry curvature and extrinsic skew-scattering. In contrast, experimentally, we find a much smaller AHC compared to the theoretical calculations. Note here that the calculations performed on the stoichiometric \(\mathrm{Cr_{3}Te_{4}}\) predict a large intrinsic AHC near \(\mathrm{E_{F}}\) due to the presence of several band crossing points (Weyl points). But the intrinsic AHC rapidly decreases as we move away from \(\mathrm{E_{F}}\). This is particularly true when \(\mathrm{E_{F}}\) is shifted to lower binding energies [see Fig. 5(d)]. Therefore, a genuine reason behind the discrepancy in AHC between experiment and theory could be that the experiments are performed on a slightly off-stoichiometric composition of \(\mathrm{Cr_{2.76}Te_{4}}\) with 8% Cr deficiency per formula unit, shifting the Fermi level towards the lower binding energy. The maximum topological Hall resistivity (\(\rho_{\mathrm{max}}^{\mathrm{T}}\)) is plotted as a function of temperature in Fig. 6(a) by the black-colored data points. Also, the green-colored data points in Fig. 6(a) represent the magnetocrystalline anisotropy constant \(\mathrm{K_{u}}\) calculated using Eq. (2).
\[\mathrm{K_{u}}=\mu_{0}\int_{0}^{\mathrm{M_{s}}}[\mathrm{H_{bc}(M)-H_{a}(M)}]\ \mathrm{dM} \tag{2}\]
Here, \(\mathrm{M_{s}}\) represents saturation magnetization. \(\mathrm{H_{bc}}\) and \(\mathrm{H_{a}}\) represent \(\mathrm{H\parallel}\)\(bc\) and \(\mathrm{H\parallel}\)\(a\), respectively.
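Numerically, Eq. (2) amounts to converting the two \(\mathrm{M(H)}\) branches to \(\mathrm{H(M)}\) on a common magnetization grid and integrating the field difference; a minimal sketch (hypothetical, monotonic \(\mathrm{M(H)}\) arrays; units must be made consistent beforehand) is given below. Depending on the sign convention, the magnitude is reported so that \(\mathrm{K_{u}}>0\) for an easy \(bc\)-plane.

```python
# Hedged sketch of Eq. (2) (hypothetical, monotonically increasing M(H) arrays):
# invert both branches to H(M), interpolate onto a common grid up to M_s, and
# integrate the field difference with the trapezoidal rule.  With H and M in A/m,
# K_u comes out in J/m^3; take the magnitude per the chosen sign convention.
import numpy as np

def anisotropy_constant(H_a, M_a, H_bc, M_bc, n_grid=400):
    mu0 = 4 * np.pi * 1e-7
    M_s = min(M_a.max(), M_bc.max())
    M_grid = np.linspace(0.0, M_s, n_grid)
    H_of_M_a = np.interp(M_grid, M_a, H_a)       # hard-axis branch, H || a
    H_of_M_bc = np.interp(M_grid, M_bc, H_bc)    # easy-plane branch, H || bc
    return mu0 * np.trapz(H_of_M_bc - H_of_M_a, M_grid)
```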
From Fig. 6(a), we can notice that the topological Hall resistivity is highest within the temperature range of 50 to 150 K. Also, most importantly, the temperature dependence of \(\mathrm{K_{u}}\) resembles that of \(\rho_{\mathrm{max}}^{\mathrm{T}}\). In Fig. 6(b), the black-colored data points illustrate the temperature-dependent saturation field (\(\mathrm{H_{s}}\)) beyond which the system becomes ferromagnetic [extracted from Fig. 2(c)] and the red-colored data points illustrate the temperature-dependent field (\(\mathrm{H_{T}}\)) beyond which \(\rho_{yz}^{\mathrm{T}}\) becomes zero [extracted from Fig. 3(c)]. Interestingly, both fields \(\mathrm{H_{T}}\) and \(\mathrm{H_{s}}\) perfectly overlap at all measured temperatures. This can be understood using the schematics shown in Fig. 6(c). That is, at a given temperature, up to the saturation field (\(\mathrm{H}<\mathrm{H_{s}}\)), the system possesses a non-coplanar spin structure and hence shows the topological Hall effect [62]. However, for applied fields beyond magnetic saturation (\(\mathrm{H}>\mathrm{H_{s}}\)), the system becomes ferromagnetic and thus the THE disappears.
Further, the maximum topological Hall resistivity \(\rho_{\mathrm{max}}^{\mathrm{T}}\approx 1.1~\mu\Omega\,\mathrm{cm}\) over a broad temperature range (\(50~\mathrm{K}<\mathrm{T}<150~\mathrm{K}\)) observed in this vdW ferromagnetic \(\mathrm{Cr_{2.76}Te_{4}}\) is of the
Figure 5: (a) Schematic monoclinic unit cell of \(\mathrm{Cr_{2.76}Te_{4}}\). (b) Electronic band structure of \(\mathrm{Cr_{2.76}Te_{4}}\) calculated with inclusion of spin-orbit coupling (SOC). In (b) the Weyl points near the Fermi level are encircled. (c) \(x\)-component of the Berry curvature, \(\Omega^{x}\), calculated at the Fermi level. Inset in (c) shows zoomed-in view of \(\Omega^{x}\) for the A - \(\Gamma\) segment. (d) Anomalous Hall conductivity, \(\sigma_{yz}\), plotted as a function of energy. (e) High symmetry points defined on the Brillouin zone of monoclinic primitive unit cell.
same order of magnitude as those found in chiral semimetals such as Mn\({}_{2}\)PtSn [63], Gd\({}_{3}\)PdSi\({}_{3}\)[64], LaMn\({}_{2}\)Ge\({}_{2}\)[65], and in other Cr\({}_{\rm x}\)Te\({}_{\rm y}\)-based systems [23; 26]. Interestingly, so far THE has been observed in Cr\({}_{\rm x}\)Te\({}_{\rm y}\) systems only in their hexagonal (trigonal) crystal structure [23; 26; 66], but not yet in the monoclinic structure investigated in the present study. As discussed above, the easy-axis of magnetization in Cr\({}_{2.76}\)Te\({}_{4}\) is found to be in the \(bc\)-plane, and thus for small applied fields with \(\rm H\,\parallel\,a\) the Cr spins are canted out of the \(bc\)-plane towards the \(a\)-axis. In this way, a non-coplanar spin-structure is generated for sufficiently small fields [23; 26; 65; 66; 67; 68].
Several mechanisms have been proposed to understand the topological Hall effect, such as the antisymmetric exchange or Dzyaloshinskii-Moriya (DM) interaction in noncentrosymmetric systems [69; 70; 71] and the uniaxial magnetocrystalline anisotropy in centrosymmetric systems [72; 73; 74; 75]. In the case of monoclinic Cr\({}_{3}\)Te\({}_{4}\) (\(\rm C12/m1\)), which is a centrosymmetric crystal, the chiral-spin structure could be stabilized by the strong MCA. This analogy is completely supported by our experimentally estimated MCA values at various temperatures, as shown in Fig. 6(a). Most importantly, the highest \(\rho_{\rm max}^{\rm T}\) value has been obtained at the highest magnetocrystalline anisotropy of K\({}_{\rm u}\)=165 \(\rm kJ/m^{3}\). This is because, in the presence of the chiral-spin structure, the itinerant electrons acquire a real-space Berry curvature associated with the finite scalar-spin chirality \(\chi_{\rm ijk}=\mathbf{S}_{\rm i}\cdot(\mathbf{S}_{\rm j}\times\mathbf{S}_{\rm k})\), which serves as a fictitious magnetic field to generate the topological Hall signal [76; 77; 78].
## V Conclusions
To summarize, we have grown high quality single crystals of layered ferromagnetic Cr\({}_{2.76}\)Te\({}_{4}\) in the monoclinic phase to study the electrical transport, Hall effect, and magnetic properties. Our studies suggest Cr\({}_{2.76}\)Te\({}_{4}\) to be a soft ferromagnet with a negligible coercivity. The easy-axis of magnetization is found to be parallel to the \(\rm bc\)-plane, leading to strong magnetocrystalline anisotropy. Below 50 K, an antiferromagnetic-like transition is noticed. Interestingly, in going from 50 K to 150 K the strength of the magnetic moments switches between out-of-plane and in-plane orientations, suggesting fluctuating Cr spins. From the electrical resistivity measurements, the system is found to be metallic throughout the measured temperature range. Also, a \(\rm kink\) at around 50 K due to AFM ordering is noticed. Magnetotransport measurements demonstrate a large anomalous Hall effect (AHE) and a topological Hall effect (THE) in this system. First-principles calculations point to an intrinsic AHE due to non-zero Berry curvature near the Fermi level, while experimentally it is found to be an extrinsic AHE due to skew-scattering. The topological Hall effect has been observed due to the non-coplanar spin-structure in the presence of strong magnetocrystalline anisotropy.
## VI Acknowledgements
A.B. acknowledges support from Prime Minister's Research Fellowship (PMRF). A.N. thanks startup grant of the Indian Institute of Science (SG/MHRD-19-0001) and DST-SERB (SRG/2020/000153). S.T. thanks the Science and Engineering Research Board (SERB), Department of Science and Technology (DST), India for the financial support (Grant No.SRG/2020/000393). This research has made use of the Technical Research Centre (TRC) Instrument Facilities of S. N. Bose National Centre for Basic Sciences, established under the TRC project of Department of Science and Technology, Govt. of India.
|
2306.01787 | Power Control with QoS Guarantees: A Differentiable Projection-based
Unsupervised Learning Framework | Deep neural networks (DNNs) are emerging as a potential solution to solve
NP-hard wireless resource allocation problems. However, in the presence of
intricate constraints, e.g., users' quality-of-service (QoS) constraints,
guaranteeing constraint satisfaction becomes a fundamental challenge. In this
paper, we propose a novel unsupervised learning framework to solve the
classical power control problem in a multi-user interference channel, where the
objective is to maximize the network sumrate under users' minimum data rate or
QoS requirements and power budget constraints. Utilizing a differentiable
projection function, two novel deep learning (DL) solutions are pursued. The
first is called Deep Implicit Projection Network (DIPNet), and the second is
called Deep Explicit Projection Network (DEPNet). DIPNet utilizes a
differentiable convex optimization layer to implicitly define a projection
function. On the other hand, DEPNet uses an explicitly-defined projection
function, which has an iterative nature and relies on a differentiable
correction process. DIPNet requires convex constraints; whereas, the DEPNet
does not require convexity and has a reduced computational complexity. To
enhance the sum-rate performance of the proposed models even further,
Frank-Wolfe algorithm (FW) has been applied to the output of the proposed
models. Extensive simulations depict that the proposed DNN solutions not only
improve the achievable data rate but also achieve zero constraint violation
probability, compared to the existing DNNs. The proposed solutions outperform
the classic optimization methods in terms of computation time complexity. | Mehrazin Alizadeh, Hina Tabassum | 2023-05-31T14:11:51Z | http://arxiv.org/abs/2306.01787v1 | Power Control with QoS Guarantees: A Differentiable Projection-based Unsupervised Learning Framework
###### Abstract
Deep neural networks (DNNs) are emerging as a potential solution to solve NP-hard wireless resource allocation problems. However, in the presence of intricate constraints, e.g., users' quality-of-service (QoS) constraints, guaranteeing constraint satisfaction becomes a fundamental challenge. In this paper, we propose a novel unsupervised learning framework to solve the classical power control problem in a multi-user interference channel, where the objective is to maximize the network sum-rate under users' minimum data rate or QoS requirements and power budget constraints. Utilizing a differentiable projection function, two novel deep learning (DL) solutions are pursued. The first is called Deep Implicit Projection Network (DIPNet), and the second is called Deep Explicit Projection Network (DEPNet). DIPNet utilizes a differentiable convex optimization layer to implicitly define a projection function. On the other hand, DEPNet uses an explicitly-defined projection function, which has an iterative nature and relies on a differentiable correction process. DIPNet requires convex constraints; whereas, the DEPNet does not require convexity and has a reduced computational complexity. To enhance the sum-rate performance of the proposed models even further, Frank-Wolfe algorithm (FW) has been applied to the output of the proposed models. Extensive simulations depict that the proposed DNN solutions not only improve the achievable data rate but also achieve zero constraint violation probability, compared to the existing DNNs. The proposed solutions outperform the classic optimization methods in terms of computation time complexity.
Power control, learning to optimize (L2O), deep learning (DL), unsupervised learning, differentiable projection, multi-user, interference, and resource allocation.
## I Introduction
The problem of sum-rate maximization (SRM) in a multi-user interference channel through optimized power control has been explored for decades using standard optimization tools. However, due to the non-convex and NP-hard nature of the power control problem and the lack of analytical solutions, a majority of the existing algorithms rely on either an exhaustive search (explicit or implicit) [1] or iterative optimization of some approximate sub-problems [2]. The convergence and computational complexity typically hinder the practicality of the optimal or near-optimal solutions [3]. One way to mitigate the computational complexity of solving NP-hard optimization problems is to view them as a mapping from the state of the environment to the decision variables. This mapping can be learned efficiently by deep neural networks (DNNs) via offline training. Since the inference time of DNNs is far less than the run-time of iterative algorithms, the online computational complexity is reduced significantly.
While DNNs can minimize the time complexity, handling sophisticated problem constraints is a fundamental challenge regardless of the supervised or unsupervised training method. Note that simple constraints (e.g., the power budget constraint in a power control problem [3], the base station (BS) quota constraint in a user assignment problem [4], etc.) can be satisfied using standard activation functions (like Rectified Linear Unit (ReLU), Sigmoid, etc.). Nevertheless, sophisticated quality-of-service (QoS)1 constraints cannot be incorporated using well-known activation functions. To date, such constraints have either been incorporated by adding a penalty for constraint violation to the loss function, which encourages the DNN output to meet the constraints [3, 4, 5], or by taking the Lagrangian of the original problem as the loss function, where learnable dual variables penalize the violation of constraints [6, 7]. These approaches, however, do not guarantee that the results are always feasible and satisfy the constraints. Given the infancy of this line of research, this paper aims to address the following fundamental questions: _(1) How to systematically incorporate convex and/or non-convex constraints into the DNN architecture instead of incorporating them in the loss function? (2) How to ensure a zero constraint violation probability?_
Footnote 1: In this paper, the term QoS refers to users’ minimum data rate requirements which can be different due to distinct services required by those users.
### _Background Work_
To date, most deep learning (DL)-based studies have addressed SRM via power control [2, 8, 9, 11] under only a simple BS power budget constraint. Recently, some research works considered the problem of SRM via power control with power budget and minimum rate constraints of users. The authors in [3, 5] applied unsupervised training of DNNs and incorporated the penalty of violating the QoS constraint in the loss function. In [15], the authors proposed a hybrid resource allocation scheme for multi-channel underlay device-to-device (D2D) communications. In particular, transmit power control is considered for SRM under interference and minimum rate constraints. The authors considered a heuristic equally reduced power (ERP) scheme together with a DNN-based scheme to avoid violation of QoS constraints. However, resorting to these heuristics (ERP in [15] or minimum power [3]) when the DNN's output violates the QoS constraints can compromise the quality of the power allocation solutions, thereby impacting the maximum achievable sum-rate.
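For concreteness, the penalty-based baseline just described typically trains the DNN with a loss of the following form; the sketch below is a hedged PyTorch illustration with hypothetical tensor shapes and variable names (single-channel interference network), not the exact loss used in the cited works.

```python
# Hedged PyTorch sketch (hypothetical shapes/names) of a penalty-based unsupervised
# loss: negative sum-rate plus a weighted hinge penalty on violated minimum-rate
# constraints.  This encourages, but does not guarantee, QoS satisfaction.
import torch

def rates(p, G, noise):
    """p: (B, K) powers; G: (B, K, K) gains with G[b, i, j] = gain from Tx j to Rx i."""
    signal = torch.diagonal(G, dim1=1, dim2=2) * p                  # (B, K)
    interference = torch.einsum('bij,bj->bi', G, p) - signal        # (B, K)
    return torch.log2(1.0 + signal / (interference + noise))

def penalty_loss(p, G, noise, r_min, lam=10.0):
    r = rates(p, G, noise)
    violation = torch.clamp(r_min - r, min=0.0)                     # hinge on unmet QoS
    return -r.sum(dim=1).mean() + lam * violation.sum(dim=1).mean()
```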
Based on the duality theory, [6, 7, 12] applied the Lagrangian loss function to train the DNN and parameterize both the primal and dual variables. However, due to the residual error of DNN for parameterizing the dual variables, the methods cannot guarantee that the constraints are always satisfied. In [13], the authors introduced a slack variable that relaxes the minimum rate constraints. The objective changed to maximizing the sum-rate while minimizing the slack variable. This new method called counterfactual optimization sacrifices the sum-rate to provide QoS of the cell-edge users. Very recently, [14] studied the SRM via power control with minimum rate and power budget constraints for an ad-hoc setup. To address the constraints, an approximate closed-form projection is used that takes the output of the DNN and projects it into the feasible set. However, the approach is limited to linear constraints. **Table I** summarizes the existing state-of-the-art and clarifies the novelty of this article.
### _Motivation and Contributions_
Most of the aforementioned works handled the power budget constraint systematically via using a sigmoid (in case of a single channel) or softmax (in case of multiple channels) at the output layer of the DNN [11]. The QoS constraints, on the other hand, were handled either by adding a penalty term to the loss function, indicating the violation of these constraints, or by transforming the problem into its Lagrangian and performing the learning in the dual domain [16, 17]. Neither of those guarantees zero violation of constraints at the test time; therefore heuristic algorithms are generally applied to allocate feasible powers in the instance of QoS violation. In this paper, our contributions can be summarized as follows:
* We propose two novel DL solutions for the classical power control problem in a multi-user interference channel with QoS and power budget constraints. The first is called **D**eep **I**mplicit **P**rojection **N**etwork (DIPNet), and the second is called **D**eep **E**xplicit **P**rojection **N**etwork (DEPNet). The former requires convex constraints; whereas, the latter does not.
* DIPNet utilizes differentiable convex optimization layer [18, 19], a type of implicit layers in DNNs [20], to implicitly define a projection function. The projection function projects the neural network's output to the feasible set defined by the QoS and power budget constraints. Thus, the DNN's output always satisfies the constraints.
* DEPNet is inspired by [21], where a process called correction is applied iteratively on the output of the DNN to make it fall onto the feasible set of inequality constraints. The iterative process can be perceived as a differentiable and explicitly-defined projection function that projects the output of the DNN to the feasible set of the problem with reduced computational complexity. The process is compatible with GPU-based training.
* To improve the sum-rate performance of the proposed models even further, the Frank-Wolfe algorithm (FW), an iterative algorithm for constrained optimization problems [22], has been applied. This algorithm takes the output of the proposed models as its initial point and searches within the feasible set to find a better solution.
* The proposed models are trained in an unsupervised manner and compared with (i) the enhanced version of PCNet [3], called PNet, which works in the multi-channel scenario, is considered as the DNN-based benchmark, (ii) Geometric Program (GP) [23] and genetic algorithm are used as the optimization-based benchmark. The network sum-rate, constraint violation probability, and online test time are the performance metrics.
* Numerical results demonstrate that the proposed DIPNet and DEPNet guarantee zero constraint violation probability while outperforming PNet in terms of network sum-rate and constraint violation probability. In addition, the proposed models outperform GP and genetic algorithm in terms of computation time complexity.
Note that [20] and [21] have proposed the fundamental techniques, i.e., deep implicit layers and iterative gradient-descent-based projection, respectively. In this paper, we have demonstrated their applicability to handle both the convex and non-convex constraints in wireless resource allocation problems. The proposed differentiable projection functions are compatible with any DNN architecture. Thus, considering sophisticated DNN architectures will not impact the functionality and applicability of the proposed projection methods.
### _Paper Organization and Notations_
The remainder of this paper is organized as follows. Section II details the system model, assumptions, and problem statement. Section III depicts the problem transformation and introduces the proposed differentiable projection framework. Section IV and Section V detail the implicit and explicit projection framework. Section VI proposes the Frank-Wolfe enhancement to the proposed DNN architectures. Section VII details the experimental set-up, dataset generation procedure, and considered benchmarks for performance comparison. Section VIII presents selected numerical results, followed by the conclusion in Section IX.
| Ref. | QoS Violation | Constraint type | Constraints | Method |
|---|---|---|---|---|
| [8, 9] | N/A | Linear | Power budget | ReLU activation |
| [10, 11] | N/A | Linear | Power budget | Sigmoid activation |
| [3, 5] | Yes | Non-convex | Power budget, QoS | Customized loss function |
| [6, 7, 12] | Yes | Non-convex | Power budget, QoS | Primal-dual training |
| [13] | Yes | Non-convex | Power budget, QoS | Counterfactual primal-dual learning |
| [14] | No | Linear | Power budget, QoS | Heuristic closed-form projection |
| [15] | No | Non-convex | Power budget, QoS | Customized loss function |
| This paper | No | Non-convex | Power budget, QoS | Differentiable implicit and explicit projection |

Table I: A comparative analysis of the existing machine learning frameworks for power control.
Throughout the paper, we use the following notations: bold lower-case letters denote vectors, bold upper-case letters denote matrices, and calligraphic upper-case letters denote sets. \(\mathbb{R}\) and \(\mathbb{C}\) denote the sets of real and complex numbers.
## II System Model and Problem Statement
We consider a downlink wireless network composed of \(B\) single-antenna BSs where each BS can serve at most \(Q\) users over \(Q\) orthogonal frequency channels. Due to orthogonal channel allocation at each BS, the users that are being served by the same BS do not interfere with each other, i.e., no intra-cell interference exists. However, the BSs share the same frequency spectrum and equally divide their bandwidth among their users, so that each channel has bandwidth \(W\). Thus, inter-cell interference on each channel of bandwidth \(W\) exists from the neighboring BSs. Without loss of generality, we consider a total of \(U\) single-antenna users in the system, where \(U=BQ\). The achievable rate of the user associated with channel \(q\) of BS \(b\) can thus be modeled as follows:
\[R_{b,q}(\textbf{P},\textbf{H})=W\mathrm{log}_{2}\left(1+\gamma_{b,q}(\textbf{ P},\textbf{H})\right),\]
\[\gamma_{b,q}(\textbf{P},\textbf{H})=\frac{H_{b,q,b}P_{b,q}}{\sum_{\hat{b}=1,\hat{b}\neq b}^{B}H_{b,q,\hat{b}}P_{\hat{b},q}+\sigma^{2}} \tag{1}\]
where \(H_{b,q,\hat{b}}\) denotes the channel power gain between BS \(\hat{b}\) and the user assigned to channel \(q\) of BS \(b\), \(P_{b,q}\) denotes the transmit power allocated to the user scheduled on channel \(q\) of BS \(b\), and \(\gamma_{b,q}\) denotes the received signal-to-interference-plus-noise ratio (SINR) of the user scheduled on channel \(q\) of BS \(b\). Note that \(\textbf{P}\in\mathbb{R}^{B\times Q}\) and \(\textbf{H}\in\mathbb{R}^{B\times Q\times B}\) denote the matrix and tensor containing all values of the transmit powers and channel power gains composed of distance-based path-loss, shadowing, and fading, respectively. We assume that perfect channel state information (CSI) is available at the BS side. Also, \(\sigma^{2}\) refers to the thermal noise power at the users' receivers, which is the same for all the users. We denote the set of all BSs and channels as \(\mathcal{B}=\{1,\cdots,B\}\) and \(\mathcal{Q}=\{1,\cdots,Q\}\), respectively.
The SRM problem with QoS constraints can then be formulated as follows:
\[\underset{\textbf{P}}{\text{maximize}} R(\textbf{P},\textbf{H})=\sum_{b=1}^{B}\sum_{q=1}^{Q}R_{b,q}(\textbf{P}, \textbf{H})\] (2) subject to \[P_{b,q}\geq 0,\quad\forall b\in\mathcal{B},\forall q\in\mathcal{Q}\] \[\sum_{q=1}^{Q}P_{b,q}\leq P_{\max},\quad\forall b\in\mathcal{B}\] \[R_{b,q}(\textbf{P},\textbf{H})\geq\alpha_{b,q},\quad\forall b\in \mathcal{B},\forall q\in\mathcal{Q}\]
where the first constraint ensures non-negative power allocations and the second constraint refers to the transmit power budget of each BS. We assume the same maximum power budget \(P_{\max}\) for each BS. The third constraint refers to the minimum rate requirement \(\alpha_{b,q}\) of the user scheduled on channel \(q\) of BS \(b\). The problem in (2) is NP-hard and non-convex both in its objective and its constraint set; thus, finding an optimal solution is challenging.
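For concreteness, the objective of (2) can be evaluated directly from (1); the following sketch does so for a given power matrix **P** and channel tensor **H**, where the bandwidth and noise power values are placeholder examples rather than values prescribed by the formulation.

```python
import numpy as np

def sum_rate(P, H, W=5e6, sigma2=1e-8):
    # Network sum-rate of (2); per-user rates follow (1).
    # P: (B, Q) transmit powers, H: (B, Q, B) channel power gains.
    # W (Hz) and sigma2 (W) are example values, not prescribed constants.
    B, Q = P.shape
    total = 0.0
    for b in range(B):
        for q in range(Q):
            signal = H[b, q, b] * P[b, q]
            interference = sum(H[b, q, bh] * P[bh, q] for bh in range(B) if bh != b)
            total += W * np.log2(1.0 + signal / (interference + sigma2))
    return total
```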
## III Problem Transformation and Differentiable Projection Framework
Since the implicit projection method, introduced in Section IV, requires the convexity of the constraints and the explicit projection, introduced in Section V, provides improved results when the feasible set is convex, we first transform the problem to its equivalent form with convex constraints. However, the non-convexity still arises from the non-convex objective function.
### _Problem Transformation_
The considered power control problem in (2) can be reformulated in two different ways, i.e., either using the matrix version of power or using the vector form of the power allocations. The matrix form of (2) is shown below:
\[\underset{\textbf{P}}{\text{maximize}} R(\textbf{P},\textbf{H})\] (3) subject to \[\textbf{P}\geq\textbf{0},\quad\textbf{P}.\textbf{1}\leq P_{\max} \textbf{1},\quad\gamma_{b,q}(\textbf{P},\textbf{H})\geq\beta_{b,q}\]
where \(\textbf{1}=[1,1,\cdots,1]^{T}\) is an all-ones vector of appropriate dimension and \(\beta_{b,q}=2^{\alpha_{b,q}/W}-1\) is the minimum SINR required to meet the minimum rate requirement. To reformulate the problem in vector form, we convert **P** into a vector. The vector form, i.e., \(\textbf{p}\in\mathbb{R}^{BQ\times 1}\), can be derived by stacking the \(Q\) columns of matrix **P**, i.e., \(P_{b,q}=p_{(q-1)B+b}\). The vector form of (3) is shown below:
\[\underset{\textbf{p}}{\text{maximize}} R(\textbf{p},\textbf{H})\] (4) subject to \[\textbf{p}\geq\textbf{0},\quad\textbf{Ap}\leq P_{\max}\textbf{1}, \quad\textbf{Cp}\geq\textbf{d}\]
where \(\textbf{A}\in\mathbb{R}^{B\times BQ}\) is defined as follows:
\[A_{i,j}=\left\{\begin{array}{ll}1&\text{if }i\equiv j\pmod{B}\\ 0&\text{otherwise}\end{array}\right. \tag{5}\]
The problem in (4) offers a linear (thus convex) formulation of the third constraint. Since each BS can serve \(Q\) users at a time and has a certain predefined users' quota, the matrix \(\textbf{C}\in\mathbb{R}^{U\times U}\) becomes a block-diagonal matrix, where each block is related to one of the \(Q\) channels. The \(q\)-th block denoted by \(\textbf{M}^{q}\in\mathbb{R}^{B\times B}\) is defined as:
\[M^{q}_{b,\hat{b}}=\left\{\begin{array}{ll}H_{b,q,b}&\text{if }b=\hat{b}\\ -\beta_{b,q}H_{b,q,\hat{b}}&\text{if }b\neq\hat{b}\end{array}\right., \tag{6}\]
where \(\textbf{C}=\mathrm{diag}(\textbf{M}^{1},...,\textbf{M}^{Q})\), and \(\textbf{d}\in\mathbb{R}^{U\times 1}\) is also derived by stacking the columns of matrix \(\textbf{D}\in\mathbb{R}^{B\times Q}\), defined
Figure 1: A graphical illustration of the considered system model.
below, on top of each other, i.e., \(D_{b,q}=\beta_{b,q}\sigma^{2},\quad\textbf{d}=[\textbf{D}_{\cdot,1}^{T},...,\textbf {D}_{\cdot,Q}^{T}]^{T}\). The aforementioned transformations also show that the third constraint can be expressed as either a non-convex, non-linear, or linear constraint in (2), (3), and (4), respectively [23]. In the following, we will leverage the aforementioned transformations in Section IV and Section V.
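To illustrate the transformation, the sketch below assembles **A**, **C**, and **d** of (4)-(6) from a channel realization **H** and the SINR targets \(\beta_{b,q}\); the function name and the 0-indexed stacking convention are our own illustrative choices rather than part of the original formulation.

```python
import numpy as np

def build_constraint_matrices(H, beta, sigma2):
    # H: (B, Q, B) channel power gains, beta: (B, Q) minimum SINR targets beta_{b,q}.
    # Returns A (B x U) of (5), block-diagonal C (U x U) of (6), and d (U,) with U = B*Q.
    B, Q, _ = H.shape
    U = B * Q
    # A p <= P_max * 1: A[i, j] = 1 iff i == j (mod B), matching p[q*B + b] = P[b, q].
    A = np.zeros((B, U))
    for j in range(U):
        A[j % B, j] = 1.0
    # C p >= d: the q-th B x B block is M^q of (6), and d stacks D[b, q] = beta[b, q] * sigma^2.
    C = np.zeros((U, U))
    d = np.zeros(U)
    for q in range(Q):
        M = np.zeros((B, B))
        for b in range(B):
            for bh in range(B):
                M[b, bh] = H[b, q, b] if b == bh else -beta[b, q] * H[b, q, bh]
        C[q * B:(q + 1) * B, q * B:(q + 1) * B] = M
        d[q * B:(q + 1) * B] = beta[:, q] * sigma2
    return A, C, d
```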
### _Functional Optimization Form_
Despite the linearity of the constraints in (4), this problem is NP-hard due to the non-convex objective function. Thus, finding an optimal solution is challenging. Traditionally, (4) is solved for each channel realization, i.e., for each realization of **H**, we solve (4) to get **p** which dictates an implicit mapping between **H** and **p**. Although effective, this variable optimization approach yields high computational complexity. To overcome this problem, we can approximate the implicit mapping between **H** and **p** with an explicit function. This will significantly improve the computational complexity as long as the explicit function has an efficient implementation, e.g., using neural networks. The equivalent functional optimization form of (4) is:
\[\underset{F(\textbf{H})}{\text{maximize}} \mathbb{E}_{\textbf{H}\sim p(\textbf{H})}[R(F(\textbf{H}), \textbf{H})]\] (7) subject to \[F(\textbf{H})\geq\textbf{0},\quad\forall\textbf{H}\in\mathbb{R }^{B\times Q\times B}\] \[\textbf{A}F(\textbf{H})\leq P_{\max}\textbf{1},\quad\forall \textbf{H}\in\mathbb{R}^{B\times Q\times B}\] \[\textbf{C}F(\textbf{H})\geq\textbf{d},\quad\forall\textbf{H} \in\mathbb{R}^{B\times Q\times B}\]
where \(F(\cdot)\) represents the functionality that maps CSI to a power allocation. It has been proven in [24] that the solution of (7) is also the optimal solution of (4). The same transition can be written for (3). The output of \(F(\cdot)\) is a matrix for (3) and a vector for (4).
DNNs have been shown to be a very rich family of parametric functions, in a sense that even a DNN with fully-connected layers (FCNN) has universal function approximation property [25], and has shown success in approximating the aforementioned mapping in supervised and unsupervised ways. Thus, we consider them for approximating \(F\), i.e., \(F(\textbf{H})=\mathcal{N}_{p}(\textbf{H};\textbf{w}_{p})\) where \(\mathcal{N}_{p}\) is a DNN, and \(\textbf{w}_{p}\in\mathbb{R}^{D_{w_{p}}}\) is a \(D_{w_{p}}\) dimensional vector containing all trainable parameters, i.e., weights and biases, of the DNN. The output of the DNN appears as a subscript to the DNN and its parameters. For example, if the output of the DNN is variable **y**, the DNN and its parameters are denoted by \(\mathcal{N}_{y}\) and \(\textbf{w}_{y}\), respectively. As a result, the problem of finding \(\textbf{w}_{p}\) via learning can thus be formulated as:
\[\underset{\textbf{w}_{p}}{\text{minimize}} \mathbb{E}_{\textbf{H}\sim p(\textbf{H})}[l(\mathcal{N}_{p}( \textbf{H};\textbf{w}_{p}),\textbf{H})]\] (8) subject to \[\textbf{w}_{p}\in\mathbb{R}^{D_{w_{p}}}\]
where \(l\) is the loss function and measures how good the output of the neural network is for a given data **H**. Importantly, the design of the loss function is critical to solving the constrained optimization problem. Most of the current research typically incorporates the power budget constraint into the output of the DNN by using bounded activation functions like Sigmoid [3]. Other constraints are generally incorporated by customizing the loss function using the dual problem formulation. The downside of this choice is that there is no easy way to make sure the output of the neural network always meets the constraints and lies in the feasible set of problem (4). To overcome this issue, in what follows, we present a differentiable projection-based framework that projects the DNN's output into the feasible set of (7).
### _Differentiable Projection Framework_
In this paper, we focus on designing a projection framework where we can add a special layer to the DNN, which projects the output of the DNN to the feasible set of (4) or (7). We refer to this transformation as _Projection_, i.e.,
\[\textbf{p}=\mathcal{N}_{p}(\textbf{H};\textbf{w}_{p})=\mathrm{Proj}(\textbf{r} );\quad\textbf{r}=\mathcal{N}_{r}(\textbf{H};\textbf{w}_{r}), \tag{9}\]
where **r** is the output of the backbone DNN without the projection layer, and \(\mathrm{Proj}:\mathbb{R}^{U}\longrightarrow\mathbb{R}^{U}\) is a projection function unto the feasible set of (7). Since we do not consider parametric projection functions, the parameters of \(\mathcal{N}_{p}\) and \(\mathcal{N}_{r}\) are considered as the same, i.e., \(\textbf{w}_{p}=\textbf{w}_{r}\). Fig. 2 shows the graphical illustration of this architecture and the projection function is defined as follows.
**Definition 1**.: _A function \(\mathrm{Proj}:\mathbb{R}^{U}\longrightarrow\mathbb{R}^{U}\) is a differentiable projection function w.r.t. (7) if its Jacobian \(\frac{\partial\textbf{p}}{\partial\textbf{r}}\) can be evaluated and **p** meets the constraints of (7). Being differentiable is critical to train the DNN end-to-end using gradient-based methods [26]._
The projection function can be defined implicitly or explicitly as in the following, respectively.
* **Differentiable Implicit Projection:** in which a differentiable convex optimization (DCO) layer, a type of implicit layer in DNN, is applied to project the output of the
Figure 3: An illustration of the proposed projection methods: Implicit projection via mathematical optimization (left) - Explicit projection via an iterative process (right).
Figure 2: A graphical illustration of the proposed differentiable projection framework.
\(\mathcal{N}_{r}\) to the feasible set. By doing this, we make sure that the output of \(\mathcal{N}_{p}\) always satisfies the constraints. The detailed explanation of DNN architecture is in Section IV. Now that the output of DNN always satisfies the constraints, \(l(\mathcal{N}_{p}(\mathbf{H};\mathbf{w}_{p}),\mathbf{H})\) can take the form of \(-R(\mathcal{N}_{p}(\mathbf{H};\mathbf{w}_{p}),\mathbf{H})\) to directly optimize (7).
* **Differentiable Explicit Projection:** uses a differentiable iterative process to realize the projection function (Proj) and moves the output of \(\mathcal{N}_{r}\) closer to the feasible set. Each iteration uses a process, called the correction process [21], which _corrects_ the previous output towards lower constraint violation. This approach uses a soft loss during training and is faster than the first approach, at the expense of lacking provable feasibility of the results. Experimental evaluations, however, confirm zero constraint violation probability, as detailed in Section VIII.
**Remark 1**.: _Let \(f:\mathbb{R}^{m}\longrightarrow\mathbb{R}^{n}\) be a function such that \(\textbf{y}=f(\textbf{x})\). If the process of evaluating the output **y** from an input **x** is known, we refer to the function as **explicit**, whereas the **implicit** function means that the output evaluation process from the input is unknown. To put it differently, implicit definition separates "what" to compute from "how" to compute it [27]._
It is noteworthy that the differentiable projection layer needs to consider both the power budget constraint and QoS constraints together, i.e., satisfy all constraints jointly.
## IV Differentiable Implicit Projection Framework
In this section, we describe the systematic incorporation of the QoS constraint into the DNN architecture. To be more specific, we utilize the newly introduced DCO layer [19] to project the output of the neural network to the feasible set defined by the QoS or minimum rate constraints. We call this layer a projection layer. DCO and other types of layers which describe an implicit functionality between input and output space lie under the umbrella of implicit layers [20].
### _Projection Layer_
We define the projection function implicitly using the concept of DCO which requires reformulating the constraints of the original problem as convex constraints and choosing a convex objective function for this layer. Using the affine formulation of the constraints of (2), as defined in (4) or (7), the optimization problem that characterizes the projection layer is formulated as:
\[\begin{split}\textbf{p}=\underset{\hat{\textbf{p}}}{\text{argmin}} &\frac{1}{2}||\hat{\textbf{p}}-\textbf{r}||_{2}^{2}\\ \text{subject to}&\hat{\textbf{p}}\geq\textbf{0}, \qquad\textbf{A}\hat{\textbf{p}}\leq P_{\max}\textbf{1},\qquad\textbf{C}\hat{ \textbf{p}}\geq\textbf{d}\end{split} \tag{10}\]
where (10) implicitly defines the projection function (\(\textbf{p}=\text{Proj}(\textbf{r})\)). This projection takes the form of the Euclidean projection onto the set defined by the constraints of (7). Given the implicit function theorem [19, 20, 27], the Jacobian of the output w.r.t. the input, i.e., \(\frac{\partial\textbf{p}}{\partial\textbf{r}}\), can be computed, regardless of how the solution is derived. Moreover, since the constraints of (10) and (7) are the same, they have the same feasible set, i.e., the solution of (10) satisfies the constraints of (7). Note that, as long as the convexity is preserved, one can choose other objective functions for (10) as well. The main role of this objective is to formalize the similarity between **r** and the points in the feasible set of (7). Once the formalization is there, the output of the projection function will be the point with the highest similarity. In (10), the distance, measured by the Euclidean norm, is chosen to measure the similarity, i.e., the lower the distance, the higher the similarity. The upside of this choice is that (10) becomes a quadratic program, which can be solved efficiently [28]; thus offering a reasonable evaluation complexity once composed with \(\mathcal{N}_{r}\). Fig. 3 (left) illustrates how (10) works. The projection function (10) is an instance of the DCO layer. The implementation details of this function and its integration with automatic-differentiation frameworks like PyTorch are available in [19, 29].
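As an illustration of how such a layer can be instantiated with the CVXPYlayer package [19], consider the sketch below; the helper name, the treatment of **A** as a fixed constant, and the batching convention are our own assumptions and not the reference implementation.

```python
import cvxpy as cp
from cvxpylayers.torch import CvxpyLayer

def make_projection_layer(A_np, P_max, U):
    # Differentiable layer realizing the projection of (10).
    # A_np is the fixed matrix of (5); C and d depend on the channel realization,
    # so they enter as parameters supplied at every forward call.
    p_hat = cp.Variable(U)
    r = cp.Parameter(U)
    C = cp.Parameter((U, U))
    d = cp.Parameter(U)
    objective = cp.Minimize(0.5 * cp.sum_squares(p_hat - r))
    constraints = [p_hat >= 0, A_np @ p_hat <= P_max, C @ p_hat >= d]
    return CvxpyLayer(cp.Problem(objective, constraints),
                      parameters=[r, C, d], variables=[p_hat])

# Inside the forward pass of DIPNet (r is the backbone output, tensors may be batched):
# p, = projection_layer(r, C_batch, d_batch)   # p is differentiable w.r.t. r
```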
### _Neural Network Architecture_
As depicted in (9), the overall neural network (\(\mathcal{N}_{p}\)) is the composition of the projection function (Proj) and a backbone neural network (\(\mathcal{N}_{r}\)) as presented in Fig. 2. Since the focus of this work is on the design of the projection function, we consider a neural network with fully connected layers as the backbone. The architecture is composed of fully connected layers with the ReLU activation function at the hidden layers. The input to the neural network is the vector form of tensor **H**, i.e., \(\textbf{h}\in\mathbb{R}^{BU\times 1}\), where we have \(H_{i,j,k}=h_{(j-1)B^{2}+(k-1)B+i}\).
Subsequently, the input dimension is \(BU\). The output will have the same dimension as the power vector, i.e., \(U\). The final layer's activation function is sigmoid to bound the output of \(\mathcal{N}_{r}\) between zero and one. Experiments showed that doing this improves the computation time of the optimization problem of the projection layer (10). Since the final neural network (\(\mathcal{N}_{p}\)) uses an implicit projection, we refer to it as **D**ifferentiable **I**mplicit **P**rojection **NET**work, or for short **DIPNet**. The proposed framework is agnostic to the DNN architecture; thus, one can extend to other DNN architectures to enhance the performance even further.
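For illustration, a PyTorch sketch of the fully-connected backbone \(\mathcal{N}_{r}\) assumed above is given below; the three 200-dimensional hidden layers, ReLU activations, sigmoid output, batch normalization, and dropout follow the experimental setup described later, while the dropout rate and class name are illustrative assumptions.

```python
import torch.nn as nn

class Backbone(nn.Module):
    # Fully-connected backbone N_r: input is the vectorized channel tensor h (dimension B*U),
    # output is a length-U vector squashed to (0, 1) by a sigmoid, i.e., r in (9).
    def __init__(self, in_dim, out_dim, hidden=200, p_drop=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.BatchNorm1d(hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.BatchNorm1d(hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, out_dim), nn.Sigmoid(),
        )

    def forward(self, h):
        return self.net(h)  # a projection layer then maps r to a feasible power vector p
```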
Considering a backbone DNN (with fully-connected layers) along with proposed projection methods enables us to provide a fair comparison with conventional PNet. The conventional PNet uses the same backbone DNN with fully connected layers along with a softmax layer to handle the maximum transmit power constraint, but the QoS constraint was added as an extra term to the loss function along with a tuning parameter. This model is widely used in the literature (PCNet for example). Subsequently, we are able to highlight the gains of having a powerful differentiable projection layer solely. It can be seen in the experimental evaluation part that having this layer will provide a 100% guarantee of constraint satisfaction as compared to conventional PNet.
## V Differentiable Explicit Projection Framework
In this section, we describe another way of incorporating constraints at the DNN's output. Specifically, we design a projection function via an iterative process. This idea is inspired by [21], where a process called correction is applied
iteratively to the output of the DNN to make it fall into the feasible set of inequality constraints.
### _Differentiable Iterative Projection_
Consider the following formulation of (4) or (7), where all the constraints are concatenated to form a single vector and are denoted as \(g(\textbf{p},\textbf{H})\leq\textbf{0}\), i.e.,
\[\underset{\textbf{p}}{\text{maximize}} R(\textbf{p},\textbf{H})\] (11) subject to \[g(\textbf{p},\textbf{H})\leq\textbf{0}\]
where \(g:\mathbb{R}^{U}\times\mathbb{R}^{BU}\longrightarrow\mathbb{R}^{G}\) contains all the inequality constraints of the main problem (\(G=U+B+U=2U+B\)). Based on (4), \(g\) is an affine transformation w.r.t. **p**. Let us define \(V_{H}:\mathbb{R}^{U}\longrightarrow\mathbb{R}\) as a measure of the constraint violation, i.e.,
\[V_{H}(\textbf{p})=||\mathrm{max}(g(\textbf{p},\textbf{H}),0)||_{2}^{2} \tag{12}\]
This means that given two arbitrary points \(\textbf{x}_{1},\textbf{x}_{2}\in\mathbb{R}^{U}\) if \(V_{H}(\textbf{x}_{1})<V_{H}(\textbf{x}_{2})\), \(\textbf{x}_{2}\) violates the constraints of (11) more than \(\textbf{x}_{1}\). In what follows, we define the correction process.
**Definition 2**.: _An explicitly defined function \(\rho:\mathbb{R}^{U}\longrightarrow\mathbb{R}^{U}\) that has the following properties is called correction process: if \(\textbf{y}=\rho(\textbf{x})\), then \(V_{H}(\textbf{y})<V_{H}(\textbf{x})\), and \(\frac{\partial\textbf{y}}{\partial\textbf{x}}\) is calculable. The former condition makes sure that the output of the correction process is closer to the feasible set than its input, and the latter guarantees the differentiability of the correction process._
We also denote \(\rho^{t}\) as applying \(\rho\) for \(t\) times. Based on the first condition in Definition 2, by applying \(\rho\) iteratively, we will end up with a point that meets the constraints i.e.,
\[\textbf{p}=\mathrm{Proj}(\textbf{r})=\lim_{t\longrightarrow\infty}\rho^{t}( \textbf{r}). \tag{13}\]
Since \(\rho\) is explicitly defined and differentiable, \(\rho^{t}\) and \(\mathrm{Proj}\) are also explicitly defined and differentiable, and their Jacobian can be derived using the chain rule. Thus, the resulting projection function in (13) is explicitly defined and follows the definition of the differentiable projection function. In other words, given an input \(\textbf{r}\in\mathbb{R}^{U}\) and output \(\textbf{p}\in\mathbb{R}^{U}\) of the projection function, i.e., \(\textbf{p}=\mathrm{Proj}(\textbf{r})\), \(V_{H}(\textbf{p})=0\) and the Jacobian of **p** w.r.t. **r**, i.e., \(\frac{\partial\textbf{p}}{\partial\textbf{r}}\), is derivable. The latter condition enables end-to-end training of the whole system, i.e., letting the gradients of the loss function w.r.t. the DNN's parameters be calculated via backpropagation.
Due to the infinite limit, the projection function as defined in (13) cannot be realized in practice. Hence, similar to [21], we use the truncated version of it, i.e. \(\mathrm{Proj}(.)=\rho^{t}(.)\) for some finite \(t\). Due to truncation, the output of the projection function may not lie on the feasible set. However, we can speed up the convergence rate of \(\rho^{t}\) by carefully designing \(\rho\), discussed later, and choosing the initial point **r**. The latter can be handled by making the output of the backbone neural network \(\mathcal{N}_{r}\) closer to the feasible set. To do this, we use the following loss function, called soft-loss, during training, i.e.,
\[l_{\mathrm{soft}}(\textbf{p},\textbf{H})=-R(\textbf{p},\textbf{H})+\lambda V_{ H}(\textbf{p}), \tag{14}\]
where \(\textbf{p}=\rho^{t}(\mathcal{N}_{r}(\textbf{H};\textbf{w}_{r}))\), \(t\) is a finite number, and \(\lambda\) is a hyperparameter controlling the constraint satisfaction relative to objective optimization. The second term is added to the loss function to penalize the points that violate the constraints, which will make \(\mathcal{N}_{r}\) to output a good initial point to \(\rho^{t}\). Moreover, since it is not necessary to fully satisfy the constraints during training, fewer iterations can be used to speed up the training process. During the test time, however, the number of iterations is increased to output a feasible point [21].
### _Design of the Correction Process (\(\rho\))_
Following [21], we use gradient-descent-based methods to realize the correction process \(\rho\). Let \(\nabla^{V_{H}}(\textbf{x})\in\mathbb{R}^{U}\) and \(\mathcal{H}^{V_{H}}(\textbf{x})\in\mathbb{R}^{U\times U}\) denote the gradient and Hessian of \(V_{H}\) w.r.t. **x** and defined, respectively, as follows:
\[\nabla^{V_{H}}(\textbf{x}) =\nabla_{\textbf{x}}||\mathrm{max}(g(\textbf{x},\textbf{H}),0)|| _{2}^{2}\] \[=2J^{g}(\textbf{x},\textbf{H})^{\mathrm{T}}\mathrm{max}(g( \textbf{x},\textbf{H}),0). \tag{15}\] \[\mathcal{H}^{V_{H}}(\textbf{x}) =2I(\mathrm{max}(g(\textbf{x},\textbf{H}),0))^{\mathrm{T}}\textbf{ K}^{g}(\textbf{x},\textbf{H})+\] \[2\mathrm{max}(g(\textbf{x},\textbf{H}),0)^{\mathrm{T}}\textbf{T}^{ g}(\textbf{x},\textbf{H}). \tag{16}\]
where \(J^{g}(\textbf{x},\textbf{H})\in\mathbb{R}^{G\times U}\) is the Jacobian of \(g\) w.r.t. **x**. Moreover, \(I:\mathbb{R}\longrightarrow\mathbb{R}\), \(\textbf{K}\in\mathbb{R}^{G\times U\times U}\), and \(\textbf{T}\in\mathbb{R}^{G\times U\times U}\) are defined as shown in (17), where \(I\) is an indicator function that is applied element-wise to \(\mathrm{max}(g(\textbf{x},\textbf{H}),0)\), and **K** and **T** are rank-3 tensors. The definition of the Hessian in (16) contains a vector-to-tensor dot product, resulting in a \(U\times U\) matrix. The product is computed as follows:
\[\textbf{C}=\textbf{a}^{\mathrm{T}}\textbf{Z}\longrightarrow C_{i,j}=\sum_{k=1 }^{G}a_{k}Z_{k,i,j}, \tag{18}\]
where \(\textbf{Z}\in\mathbb{R}^{G\times U\times U}\), \(\textbf{a}\in\mathbb{R}^{G\times 1}\), and \(\textbf{C}\in\mathbb{R}^{U\times U}\).
The feasible set of (2) can be presented with linear constraints, i.e., a polyhedron, as can be seen in (4). Thus, in the following, we derive the above-mentioned formulas in case \(g\) is an affine function, i.e., \(g(\textbf{x},\textbf{H})=\textbf{M}\textbf{x}+\textbf{n}\), where \(\textbf{M}\in\mathbb{R}^{G\times U}\) and \(\textbf{n}\in\mathbb{R}^{G\times 1}\) are functions of **H**.
\[g(\textbf{x},\textbf{H})=\textbf{M}\textbf{x}+\textbf{n}\longrightarrow\left\{ \begin{array}{l}J^{g}(\textbf{x},\textbf{H})=\textbf{M}\\ K^{g}(\textbf{x},\textbf{H})_{k,i,j}=M_{k,i}M_{k,j}\\ T^{g}(\textbf{x},\textbf{H})_{k,i,j}=0\end{array}\right. \tag{19}\]
Unlike DIPNet, DEPNet does not have a convexity requirement.
Now that we have access to the first and second-order information of \(V_{H}\) w.r.t. **x**, the correction process can be formulated as follows:
\[\rho(\textbf{x})=\textbf{x}+\Delta\textbf{x}, \tag{20}\]
where \(\Delta\textbf{x}\) can take the form of variations of descent methods. Examples of which include vanilla gradient descent (\(\Delta\textbf{x}=-\gamma\nabla^{V_{H}}(\textbf{x})\)) or Newton method (\(\Delta\textbf{x}=-\mathcal{H}^{V_{H}}(\textbf{x})^{-1}\nabla^{V_{H}}(\textbf{x})\)). Here, we used gradient-descent with momentum [30] during training and the Newton method during test for faster training and lower violation probability at test. In the following, the correction process formulation for these choices is provided.
Let \(\Delta\textbf{x}^{t}\) be the step of the correction process that is applied at the iteration \(t\) of the projection function. The update rule for gradient descent with momentum can be given as follows:
\[\Delta\textbf{x}^{t}=-\gamma\nabla^{V_{H}}(\textbf{x})+\mu\Delta\textbf{x}^{t-1}, \tag{21}\]
where \(\gamma\) is called step-size or learning rate and \(\mu\) is called the momentum. To avoid confusion with the learning rate involved in training DNNs, we refer to \(\gamma\) as the step-size. It should be noted that \(\gamma\) is chosen prior to training and is fixed during the training. This is to make the execution time of the correction process faster and the calculation of its derivatives easier during the training. In this case, having momentum will help the convergence of gradient descent by making it less vulnerable to the oscillations of noisy gradients [30].
One can use alternatives like exact or backtracking line-search [28] for choosing \(\gamma\) in each iteration. Although it helps with convergence, it introduces more computational complexity during the training, and the calculation of the derivative of the correction process will not be as straightforward as using a fixed value for the step-size. At the test time, however, we can use these techniques to speed up the convergence rate of the correction process and get feasible solutions from the projection function. Inspired by this idea, instead of using gradient descent at the test time, we use the Newton method, which has a faster convergence rate [28]. Since \(\max(\cdot,0)\) is used in the definition of \(V_{H}(\cdot)\) and its second derivative is zero, the Hessian matrix is poorly conditioned and contains many zero entries, making it non-invertible. To deal with this issue, we regularize the Hessian matrix by adding a small value \(\alpha\) to its diagonal. Thus, the update rule becomes:
\[\Delta\mathbf{x}^{t}=-(\mathcal{H}^{V_{H}}(\mathbf{x})+\alpha\mathbf{I})^{-1}\nabla^{V_{H}}(\mathbf{x}) \tag{22}\]
where \(\mathbf{I}\in\mathbb{R}^{U\times U}\) is the identity matrix. In the following, for simplicity, we refer to (22) as the Newton method update rule. (22) uses the second-order information of the objective at hand. As proven in [28] and shown in our experiments, the Newton method has a faster convergence rate than gradient descent. The only downside of it is that the calculation of the derivative of its steps is not straightforward due to the matrix inversion in (22). Thus, we only used it at test time, and utilized gradient descent during training. Moreover, to make sure that the values of \(\mathbf{x}\) are always non-negative, which is very important for non-convex cases, we apply \(\max(\mathbf{x},0)\) in an element-wise manner on \(\mathbf{x}\) after each update of the correction process.
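A minimal sketch of the correction process for the affine case of (19) is given below; it assumes \(g(\textbf{x},\textbf{H})=\textbf{M}\textbf{x}+\textbf{n}\), uses the step size, momentum, regularization value, and iteration counts reported in the experimental section as defaults, and is an illustrative implementation rather than the exact code used in the paper.

```python
import torch

def grad_violation(x, M, n):
    # eq. (15) for affine g: 2 * M^T max(Mx + n, 0)
    return 2.0 * M.t() @ torch.clamp(M @ x + n, min=0.0)

def hessian_violation(x, M, n, alpha=1e-8):
    # eqs. (16), (19) for affine g: 2 * M^T diag(1[Mx + n > 0]) M, regularized by alpha * I
    active = (M @ x + n > 0).float()
    return 2.0 * M.t() @ (active.unsqueeze(1) * M) + alpha * torch.eye(M.shape[1])

def correct_gd(x, M, n, step=0.007, momentum=0.5, iters=5):
    # training-time correction: gradient descent with momentum, eq. (21)
    delta = torch.zeros_like(x)
    for _ in range(iters):
        delta = momentum * delta - step * grad_violation(x, M, n)
        x = torch.clamp(x + delta, min=0.0)   # keep powers non-negative after each update
    return x

def correct_newton(x, M, n, iters=100):
    # test-time correction: regularized Newton steps, eq. (22)
    for _ in range(iters):
        delta = -torch.linalg.solve(hessian_violation(x, M, n), grad_violation(x, M, n))
        x = torch.clamp(x + delta, min=0.0)
        if torch.clamp(M @ x + n, min=0.0).pow(2).sum() == 0:   # V_H(x) = 0, eq. (12)
            break
    return x
```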
### _Neural Network Architecture_
For consistency and the sake of fair comparison, we use the same DNN architecture, as described in Section IV, with minor modifications. As depicted in Fig. 2, after the last affine transformation, there is a sigmoid non-linearity followed by a projection function. Using sigmoid at the final layer of \(\mathcal{N}_{r}\) showed faster convergence of this projection method in the experiments. Since this model uses an explicitly defined differentiable transformation to realize the projection, we call it **D**ifferentiable **E**xplicit **P**rojection **NET**work (DEPNet).
## VI An enhancement: Frank-Wolfe Algorithm
The optimality of the output of the projection function is dependent on the quality of the initial point (\(\mathbf{r}\)) and the projection method. Although the quality of the initial point will improve during the training, it might show sub-optimality. This means that the final power profile is feasible, but might not be optimal. A remedy is to apply constrained optimization algorithms, where the output of the projection function is passed as the initial point of the chosen algorithm (\(\mathbf{p}^{0}\)). The algorithm, then, outputs an enhanced power profile that achieves a higher sum-rate than \(\mathbf{p}^{0}\).
Among several variants of general constraint optimization algorithms, we select Frank-Wolfe [22], a.k.a conditional gradient descent, due to its low computational cost in each iteration of the algorithm. Compared to other alternatives like projected gradient descent which requires solving a quadratic program in each iteration, Frank-Wolfe only solves a linear program in each iteration [22]. In the following, the description of the Frank-Wolfe algorithm is provided.
As mentioned earlier, the initial point of this algorithm will be the output of the projection function, i.e., \(\mathbf{p}^{0}=\mathrm{Proj}(\mathbf{r})\). The algorithm is agnostic to the projection method and only requires a feasible initial point. Thus, it can be used with both DIPNet and DEPNet. The final power profile will be the output of this algorithm. Let us denote the output of the Frank-Wolfe algorithm after \(t\) and \(t+1\) iterations as \(\mathbf{p}^{t}\) and \(\mathbf{p}^{t+1}\in\mathbb{R}^{U}\), respectively. They belong to the feasible set of (4) and the relation between them is:
\[\mathbf{p}^{t+1}=\mathbf{p}^{t}+\lambda\left(\mathbf{p}^{t+\frac{1}{2}}-\mathbf{p}^{t}\right),\quad\lambda\in[0,1] \tag{23}\]
where \(\mathbf{p}^{t+\frac{1}{2}}\) is an intermediate variable derived as follows:
\[\begin{split}\mathbf{p}^{t+\frac{1}{2}}=\underset{\hat{\mathbf{p}}}{\text{argmin}}&\quad-\nabla_{\mathbf{p}}R(\mathbf{p}^{t},\textbf{H})^{T}\hat{\mathbf{p}}\\ \text{subject to}&\quad\hat{\mathbf{p}}\geq\mathbf{0},\quad\textbf{A}\hat{\mathbf{p}}\leq P_{\max}\textbf{1},\quad\textbf{C}\hat{\mathbf{p}}\geq\textbf{d}\end{split} \tag{24}\]
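A possible realization of this enhancement is sketched below, with the linear subproblem of (24) solved through CVXPY; the iteration cap and stopping threshold match the experimental setup, whereas the step-size schedule and the gradient callback are illustrative assumptions.

```python
import cvxpy as cp

def frank_wolfe(p0, grad_R, A, C, d, P_max, max_iter=50, tol=1e-3):
    # p0: feasible initial point (the output of the projection function).
    # grad_R: callable returning the gradient of the sum-rate R(p, H) w.r.t. p (numpy array).
    p = p0.copy()
    for t in range(max_iter):
        g = grad_R(p)
        s = cp.Variable(p.size)
        # linear subproblem of (24) over the feasible set of (4)
        prob = cp.Problem(cp.Maximize(g @ s),
                          [s >= 0, A @ s <= P_max, C @ s >= d])
        prob.solve()
        direction = s.value - p                # move towards the feasible point p^{t+1/2}
        if abs(g @ direction) < tol:           # duality-gap style stopping criterion
            break
        p = p + (2.0 / (t + 2.0)) * direction  # standard diminishing step size (illustrative)
    return p
```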
noise power is assumed to be 0.01 \(\mu W\) [31]. This model is widely adopted in the literature to evaluate resource allocation algorithms [3, 8, 31]. The second dataset, called _Path-loss Dataset_, has channel coefficients that are composed of large-scale path-loss, shadowing, and fading. The dataset generation has the following steps. First, the locations of the BSs and users are determined by random sampling from a 500 m \(\times\) 500 m area. We require that the distance between the two closest BSs, between a BS and a user, and between the two closest users is at least 100 m, 5 m, and 2 m, respectively. The carrier frequency is 2.4 GHz, the transmission bandwidth is 5 MHz, and the noise spectral density is set to -169 dBm. The remaining details of the channel model can be found in [32].
For both datasets, once a datapoint or channel realization is generated, we associate the best \(Q\) users to the first BS in terms of their channels. These users are then excluded from the user set. Then, the users of the next BS are determined by the same process. This process is continued until all the users are associated with a BS. After that, for each BS, the allocated set of users is sorted based on their channel gains and assigned to the channels such that the first channel of each BS carries the strongest user of that BS.
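As an illustration, the association and sorting step described above can be sketched as follows; the function name and the use of each user's direct-link gain to the candidate BS as the ranking criterion are assumptions based on the description, not the exact generation script.

```python
import numpy as np

def associate_and_sort(gains):
    # gains: (num_users, B) direct-link channel gains between every user and every BS.
    # The first BS greedily takes its Q best users, those users are removed, and the
    # process repeats; each BS's users are then sorted so that the first channel of a BS
    # carries its strongest user.
    num_users, B = gains.shape
    Q = num_users // B
    remaining = list(range(num_users))
    assignment = {}
    for b in range(B):
        order = sorted(remaining, key=lambda u: gains[u, b], reverse=True)
        chosen = order[:Q]
        assignment[b] = sorted(chosen, key=lambda u: gains[u, b], reverse=True)
        remaining = [u for u in remaining if u not in chosen]
    return assignment   # assignment[b][q] is the user on channel q of BS b
```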
Different datasets are generated under different configurations. For each configuration, 100,000 data samples are generated and divided into training, validation, and test sets with ratios of 0.9, 0.05, and 0.05, respectively. Unless stated otherwise, the number of BSs is set to \(B=4\). Also, we set the users' minimum rate requirement to \(\alpha=\) 2.5 Mbps unless stated otherwise. To provide a comprehensive quantitative analysis, additional datasets are generated under different configurations listed in Table II. Dataset ID is used to refer to different configurations, where the quota of each BS is given as \(Q=U/B\). All datasets in Table II follow the Path-loss model. The number of BSs, users, and the minimum rate differ among them.
#### VII-B2 Feasibility Check
In the dataset preparation process, we need to make sure that all data points are feasible, i.e., there exists a power profile that meets the constraints of problem (2). For that, we use the approach mentioned in [23] and used in [3, 31]. The approach provides feasible transmit powers as well as minimum transmit powers that can fulfill the minimum rate constraints. Originally, this approach is not designed for the case where there is more than one channel, i.e., \(Q>1\). However, since channels on a given BS are orthogonal, we can break (2) into \(Q\) sub-problems, and apply the feasibility check approach with slight modifications. The approach is described next. Let us define matrix \(\textbf{B}^{q}\) as follows:
\[B^{q}_{b,\hat{b}}=\left\{\begin{array}{ll}0&b=\hat{b}\\ \frac{\beta_{b,q}H_{b,q,\hat{b}}}{H_{b,q,b}}&b\neq\hat{b}\end{array}\right., \tag{25}\]
If the maximum eigenvalue of \(\textbf{B}^{q}\) is larger than 1, there is no feasible solution. That is, we cannot find a power vector for channel \(q\) that can satisfy the minimum rate requirement of the users associated with this channel. Otherwise, we can find a feasible power allocation profile as:
\[\textbf{P}_{:,q}=(\textbf{I}-\textbf{B}^{q})^{-1}\textbf{u}^{q}, \tag{26}\]
where \(\textbf{P}_{:,q}\) is the transmit power vector over channel \(q\), **I** is a \(B\times B\) identity matrix, and \(\textbf{u}^{q}\) is a \(B\times 1\) vector with \(b\)-th element given as \(u^{q}_{b}=\frac{\beta_{b,q}\sigma^{2}}{H_{b,q,b}}\). Once all power vectors are calculated for \(q\in\mathcal{Q}\), we create the final power matrix in the following manner. If **P** meets the first and second constraints of (2), i.e., all elements of **P** are greater than zero and the sum of each row is at most \(P_{\max}\), **P** is a feasible solution of (2). Otherwise, (2) is not feasible.
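The per-channel feasibility test of (25)-(26) can be sketched as follows; the noise power is only an example value, and returning `None` for infeasible realizations is an illustrative convention.

```python
import numpy as np

def feasibility_check(H, beta, P_max, sigma2=1e-8):
    # H: (B, Q, B) channel power gains, beta: (B, Q) minimum SINR targets.
    # Returns a feasible power matrix P if one exists, otherwise None.
    B, Q, _ = H.shape
    P = np.zeros((B, Q))
    for q in range(Q):
        Bq = np.zeros((B, B))
        u = np.zeros(B)
        for b in range(B):
            u[b] = beta[b, q] * sigma2 / H[b, q, b]
            for bh in range(B):
                if bh != b:
                    Bq[b, bh] = beta[b, q] * H[b, q, bh] / H[b, q, b]   # eq. (25)
        if np.max(np.abs(np.linalg.eigvals(Bq))) > 1:
            return None                                   # channel q cannot meet its targets
        P[:, q] = np.linalg.solve(np.eye(B) - Bq, u)      # eq. (26)
    if np.all(P >= 0) and np.all(P.sum(axis=1) <= P_max):
        return P
    return None
```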
#### VII-B3 Benchmarks
We consider three main benchmarks3, i.e.,
Footnote 3: Traditionally greedy algorithms are considered as a common benchmark. However, in the presence of QoS constraint, the greedy algorithms are not suitable and can not offer us a fair comparison with the proposed scheme. Thus, similar to the relevant research works, such as [3], geometric programming (GP) [23] is considered as the main optimization-based benchmark.
* **PNet:** is a neural network model exactly like DIPNet and DEPNet, but without the projection layer, i.e., an FCNN. This model is an extension of PCNet [3] that works for the multi-channel scenario. The power budget constraint is handled by having softmax as the activation function of the final layer. The violation of the QoS constraint is added as a penalty term to the loss function, similar to the soft loss in (14). This DNN-based benchmark does not utilize the proposed projection methods.
* **GP:** is an optimization-based solution that uses the high-SINR assumption to transform the main problem (2) to a geometric program [23].
* **Genetic Algorithm:** is a well-known global optimization algorithm with high complexity.
## VIII Numerical Results and Discussions
In this section, we present the performance of the proposed methods (DIPNet and DEPNet) with the conventional benchmarks. The considered performance metrics include network sum-rate, QoS violation probability, and per sample test time. To get a good estimate of the computation time, the overall
| Dataset ID | BSs | Users | Quota | Target Rate (Mbps) | DIPNet Sum-rate | DIPNet Time | DEPNet Sum-rate | DEPNet Time | GP Sum-rate | GP Time |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | 5 | 5 | 1 | 10 | 104.15 | 2.91 | 107.35 | 0.69 | 119.6 | 478.47 |
| 2 | 2 | 6 | 3 | 2.5 | 243.35 | 2.70 | 244.6 | 0.46 | 244 | 224.22 |
| 3 | 4 | 20 | 5 | 5 | 510.05 | 4.06 | 515.2 | 3.06 | 569.6 | 7852.19 |
| 4 | 4 | 20 | 5 | 2.5 | 456.1 | 4.02 | 458.95 | 3.05 | 512.9 | 7864.58 |
| 5 | 6 | 24 | 4 | 5 | 441.25 | 5.85 | 444.2 | 4.99 | 515.4 | 35607.32 |
| 6 | 6 | 24 | 4 | 2.5 | 397.7 | 6.05 | 399.85 | 5.81 | 466.55 | 36338.41 |
| 7 | 5 | 5 | 1 | 2.5 | 55.7 | 2.92 | 57.4 | 0.66 | 68.35 | 530.52 |
| 8 | 4 | 12 | 3 | 2.5 | 258.7 | 3.82 | 261.9 | 1.73 | 297 | 2244.52 |

Table II: A comparison of DIPNet, DEPNet, and the optimization benchmark (GP) in terms of sum-rate (in Mbps) and computation time (in msec).
computation time of each model is measured over the test set. Per-sample computation time is then calculated by averaging over the data points. For the calculation of the sum rate, if the output of a scheme violates the QoS constraint, we set the sum rate of that datapoint to be zero. In this way, the effect of constraint violation is highlighted in the performance evaluation. This method is adopted from [15].
In all the experiments, an FCNN with three 200-dimensional hidden layers is used as the backbone of DIPNet and DEPNet (\(\mathcal{N}_{r}\)). Batch normalization [33] and Dropout [34] are used to accelerate the training and prevent over-fitting. The same neural network is used for the PNet as well. For the correction process of DEPNet, we used gradient descent with momentum for the training and Newton method for the testing. The momentum is set to 0.5 for all the datasets, and the step size is fine-tuned for each dataset individually from the interval of 0.5 to 0.0005. Similarly, the parameter \(\lambda\) in soft-loss (14) is chosen from the interval of 10 to 10000 for each dataset individually. The number of iterations in the iterative projection is set to 5 and 100 for the training (with Gradient Descent method) and testing (with Newton method), respectively.
For all DNNs, we used a learning rate of 0.001, batch size of 10, and learning rate decay rate of 0.99. ADAM is also used for training the DNNs. All DNNs are trained for 20 epochs, and early stopping is used to pick the model with the best performance. That is, after each epoch, we check the performance of the model over the validation set based on a metric like network sum-rate. Then, we pick the model that has the best performance among all epochs. The performance metric for DIPNet and DEPNet is the network sum-rate. For the PNet, we pick the model that has the minimum QoS violation probability. This is because choosing network sum-rate as the selection criterion will result in a model that has a significant QoS violation probability, which will not be comparable with DIPNet and DEPNet. Moreover, based on the experiments, \(\lambda\) in the soft-loss of PNet is set to 1000 for Gaussian datasets and 10000 for path-loss datasets. This results in a model that has comparable performance in terms of network sum-rate and QoS violation with DIPNet and DEPNet.
For the implementation, we used PyTorch [35] as the automatic differentiation engine. For implementing the implicit projection in DIPNet, we used the CVXPYlayer package [19]. ECOS [36] is chosen as the optimization solver of this layer among the available options. To make the implementation consistent, GP is implemented in Python using CVXPY package [37]. MOSEK [38], a commercial solver, is used as the backbone solver of CVXPY for both GP and Frank-Wolfe. The maximum number of iterations for Frank-Wolfe is set to 50 and the threshold is set to 0.001. Since the output of DIPNet is always feasible, we only applied Frank-Wolfe to the output of DIPNet, referred to as DIPNet+FW. Moreover, we used the genetic algorithm's implementation on MATLAB with 5000 iterations for all the datasets. The experiments are done on a desktop computer with Intel Core i7-8700 CPU 3.20GHz and 8GB of RAM.
### _Performance of DIPNet and DEPNet_
#### VIII-A1 Optimization-based Benchmark
Starting with the Gaussian datasets, Fig. 4 (left) demonstrates the performance of DIPNet and DEPNet in terms of the achievable aggregate network sum-rate as compared to the conventional GP-based optimization solution. Both DIPNet and DEPNet demonstrate a close sum-rate to GP while showing zero QoS violation probability (as shown in Fig. 5 (left)). By increasing the problem size, the required computation time of GP increases drastically (as shown in Fig. 6 (left)). DIPNet and DEPNet, on the other hand, have much reduced computational complexity.
Considering the Path-loss datasets, Fig. 4 (right) shows that GP outperforms both DIPNet and DEPNet in terms of network sum-rate. The difference in network sum-rates starts to grow as the problem size increases. The solutions of GP are always feasible. The same is true for DIPNet and DEPNet (as shown in Fig. 5 (right)). The time complexity of GP increases exponentially when the problem size increases (as shown in Fig. 6 (right)). Once the Frank-Wolfe algorithm is applied to DIPNet results, we can observe an apparent boost in the network sum-rate for both Gaussian and Path-loss datasets (as can be seen in Fig. 4). DIPNet with the Frank-Wolfe enhancement surpasses GP across all the datasets. The cost of this boost is an increase in computation time compared to DIPNet (as can be seen in Fig. 6). Although the computation time of Frank-Wolfe-based DIPNet is higher than that of other DNN-based methods, it is still lower than that of GP, especially when the number of users increases (as in Fig. 6).
Compared with the Genetic algorithm, we can observe that the results of the GP are very close to the true optimal (Fig. 4), which makes it a good benchmark for comparison. Also, we can see that the results of DIPNet+FW are almost similar to the genetic algorithm, with noticeably lower computational complexity (Fig. 6). Finally, Table II provides more experimental results of the DIPNet, DEPNet, and GP across various network configurations. Similar to Fig 4, GP also, outperforms DEPNet and DIPNet in terms of network sum-rate. The difference becomes less significant when the minimum rate and the problem size is small. The main reason behind the better performance of GP is that the proposed projection methods tend to find points at the boundary of the feasible set. GP, on the other hand, can search within the feasible set to find solutions with a higher network sum-rate.
#### VIII-A2 A Comparison to Conventional PNet (Enhanced PCNet)
As shown in Fig. 4, PNet always outputs a solution that achieves a lower sum-rate than DIPNet and DEPNet for both Gaussian and Path-loss datasets. As pointed out in [3], reducing the \(\lambda\) in the soft-loss results in an increase in the constraint violation which will also reduce the resulting network sum rate of the PNet. In Fig 5, we note that the violation probability of PNet increases with the dimension of the problem (number of users). Comparing Path-loss and Gaussian datasets, we can see that the QoS violation probability of PNet increases dramatically while working on a more realistic dataset, i.e. Path-loss dataset. This is due to the fact that there is no mechanism in the architecture of PNet to satisfy the QoS constraints. Since PNet doesn't utilize any projection functionality, the
computation time of PNet is lower than that of DIPNet and DEPNet (as can be seen in Fig. 6). Considering all these factors, we can conclude that PNet is the best option when there is no complex hard constraint like QoS. However, once the QoS constraint is introduced, a projection function has to be utilized to satisfy the constraints with zero violation probability.
#### VIII-A3 DIPNet vs DEPNet
The main difference between DIPNet and DEPNet is the projection function used in them to satisfy the QoS constraint. DIPNet uses an optimization solver to incorporate the constraint, and it is easier to implement and does not require tuning some hyperparameters for the projection function. DEPNet, however, uses an iterative process to realize the projection functionality. We used gradient descent with momentum during the training with five iterations. This choice is computationally efficient and leads to fast training. However, the step-size of gradient-descent needs to be tuned for each dataset; thus requiring experimentation. At the test time, we used Newton method, which has a faster convergence rate than gradient descent, with 100 iterations to make sure the output is always feasible. As shown in Fig. 5, both DIPNet and DEPNet achieve zero violation probability, but DEPNet is more computationally efficient than DIPNet (as can be noted from Fig. 6).
When it comes to sum-rate, the performance of DIPNet is slightly better than DEPNet in Gaussian datasets but is worse for the Path-loss dataset. This is because the iterative process used to define the projection function in DEPNet provides better gradients for the backbone neural network. The same trend can be observed in Table II. Although tempting, the better performance of DEPNet comes at the cost of the careful configuration of the parameters of its projection function.
The computation time of DEPNet is consistently lower than that of DIPNet in all the experiments (Fig. 6). This is because DIPNet requires solving a quadratic program in each step. Moreover, since the iterative process used in DEPNet is based on the Newton method and only requires gradient, Hessian, matrix inversion, and matrix multiplication, DEPNet can be run on GPU as well, which can significantly improve the computation time of DEPNet. DIPNet, however, uses an optimization solver for the projection function, which is not GPU-friendly. Hence, DEPNet outperforms DIPNet in terms of time complexity, but DIPNet is relatively easier to implement. Fig. 9 and Fig. 10 show the performance of the proposed models against the benchmarks on different values of the users' minimum rate requirement and number of BSs, respectively. The general trends are observed to be the same, i.e., DIPNet and DEPNet reach zero constraint violation probability while having lower computational complexity than the other benchmarks.
### _DIPNet and DEPNet: Tuning_
#### VIII-B1 Effect of the Activation function of the last Layer of \(\mathcal{N}_{r}\)
Here, we examine the effect of using different activation functions for the last layer of \(\mathcal{N}_{r}\) before the projection layer for DIPNet and DEPNet. We can think of the output of \(\mathcal{N}_{r}\) as the initial point for the projection layer; thus, using a proper activation function can benefit the convergence behaviour of the projection function, especially the iterative projection of DEPNet. The experiments are conducted on dataset ID 3 with \(U=12\) that follows the Path-loss model. For each activation, we record the network sum-rate and QoS violation probability of the model on the test dataset after each epoch. We test the following activations: affine, i.e., not using any activation, ReLU, Sigmoid, and Softmax, where it is applied in a way to satisfy the second constraint of (4) (\(\textbf{Ar}\leq P_{\max}\textbf{1}\)). Starting with the network sum-rate, we observe that Sigmoid and Softmax outperform other activation functions for both DIPNet and DEPNet (as shown in Fig. 7). We can see that Sigmoid reaches the highest network sum-rate and the lowest QoS violation for DIPNet and DEPNet. Thus, we conclude that Sigmoid has the best performance overall.
#### VIII-B2 Tuning DEPNet: Gradient Descent vs Newton
We compare now the convergence rate of gradient-descent (GD) with different step-sizes and Newton method in a DEPNet. The input to the algorithms (**r**) is the output of the backbone neural network (\(\mathcal{N}_{r}\)) before training and after training. The training is conducted with the step-size of \(0.007\) as it has the best
Figure 4: Sum-rate for GP, PNet, DIPNet, DIPNet+FW, DEPNet, and Genetic (Left: Gaussian - Right: Path-loss).
Figure 5: QoS violation for GP, PNet, DIPNet, DIPNet+FW, DEPNet, and Genetic (Left: Gaussian Dataset - Right: Pathloss Dataset).
convergence rate among the others. Moreover, the momentum is set to 0.5 for all the gradient-descent configurations, and \(10^{-8}\) is used to regularize the Hessian.
Fig. 11 shows the dynamics of the projection function before and after training considering Newton and GD (with different step sizes). It is observed that all configurations exhibit an improvement in convergence after training, thus indicating that the backbone neural network learns to generate an initial point for the projection function that is already in proximity to the feasible set. However, the fluctuations in Fig. 11 (especially in gradient descent method after training) are attributed to the fixed step size and momentum employed in each iteration of the gradient descent. The rationale behind this choice is to ensure computational efficiency and ease of differentiability for the training of the backbone neural network through back-propagation. To minimize fluctuations, one solution is to adopt an adaptive step-size by performing a line search [28]. However, this approach increases the computational complexity of each gradient descent update and complicates differentiation, rendering its application challenging during training.
As we can see, the Newton method is the only correction process that achieves zero violation probability. Among the different step-sizes for GD, the step-sizes 0.01 and 0.007 achieve lower violation probability than the others, while very large and very small step-sizes (0.1 and 0.0001) make no progress. Thus, the step-size should be chosen experimentally to find the range that helps the projection function converge.
After training is completed, gradient descent with step-sizes 0.01, 0.007, and 0.001 achieves better results than before training. The Newton method remains the best performer, achieving zero violation probability. Its convergence also improves after training: it converges after roughly 10 iterations, compared to about 20 iterations before training. Moreover, after training, we no longer observe
Figure 8: The comparison of constraint violation probability during training using different activation functions for the last layer of \(\mathcal{N}_{r}\) (Left: DIPNet - Right: DEPNet).
Figure 6: Computation time for GP, PNet, DIPNet, DIPNet+FW, DEPNet, and Genetic (Left: Gaussian Dataset - Right: Pathloss Dataset).
Figure 7: The comparison of sum-rate during training using different activation functions for the last layer of \(\mathcal{N}_{r}\) (Left: DIPNet - Right: DEPNet).
any oscillations before convergence. This implies that, through training, the backbone network learns to output points that are already very close to the feasible set, so that a few Newton steps suffice to land them on the feasible set.
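A minimal NumPy sketch of the two correction processes compared above is given below, assuming for simplicity that the quantity driven to zero is a squared-hinge penalty over the linear constraints \(\textbf{Cp}\geq\textbf{d}\) and \(\textbf{Ar}\leq P_{\max}\textbf{1}\); the penalty, the fixed iteration counts, and the function names are illustrative choices of ours rather than the exact implementation.

```python
import numpy as np

def hinge_residuals(r, C, d, A, p_max):
    # Constraint violations: QoS shortfall (want C r >= d) and power excess (want A r <= p_max).
    return np.maximum(d - C @ r, 0.0), np.maximum(A @ r - p_max, 0.0)

def grad_and_hessian(r, C, d, A, p_max, reg=1e-8):
    # Gradient and regularized Hessian of phi(r) = ||g1||^2 + ||g2||^2.
    g1, g2 = hinge_residuals(r, C, d, A, p_max)
    grad = -2.0 * C.T @ g1 + 2.0 * A.T @ g2
    C_act, A_act = C[g1 > 0], A[g2 > 0]          # rows of currently violated constraints
    hess = 2.0 * (C_act.T @ C_act + A_act.T @ A_act) + reg * np.eye(r.size)
    return grad, hess

def newton_correction(r, C, d, A, p_max, iters=100):
    # Newton steps: only gradients, Hessians, inversions and multiplications are needed.
    for _ in range(iters):
        g1, g2 = hinge_residuals(r, C, d, A, p_max)
        if g1.sum() == 0.0 and g2.sum() == 0.0:  # already feasible
            break
        grad, hess = grad_and_hessian(r, C, d, A, p_max)
        r = r - np.linalg.solve(hess, grad)
    return r

def gd_correction(r, C, d, A, p_max, step=0.007, momentum=0.5, iters=300):
    # Fixed-step gradient descent with momentum (cheap and easy to back-propagate through).
    v = np.zeros_like(r)
    for _ in range(iters):
        grad, _ = grad_and_hessian(r, C, d, A, p_max)
        v = momentum * v + grad
        r = r - step * v
    return r
```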
### _Consideration of Non-Convex Constraints_
We would like to emphasize that DIPNet requires convex constraints, whereas DEPNet does not pose any such restriction. That is, DEPNet can handle more sophisticated non-convex constraints such as energy efficiency constraints. To demonstrate this capability, we tested DEPNet on three network sum-rate maximization problems: 1) with a non-linear data rate constraint formulation, 2) with a (matrix-based) linear data rate constraint formulation, and 3) with an energy efficiency (EE) constraint. Since the objective function and power budget constraint are the same among these problems, in the following we only show the QoS or EE constraint: **(i)** Data Rate (non-linear): \(R_{b,q}(\textbf{P},\textbf{H})\geq\alpha_{b,q},\forall b\in\mathcal{B},\forall q\in\mathcal{Q}\); **(ii)** Data Rate (linear): \(\textbf{Cp}\geq\textbf{d}\); **(iii)** Energy Efficiency: \(EE_{b,q}=\frac{R_{b,q}(\textbf{P},\textbf{H})}{P_{b,q}}\geq ee_{b,q},\ \forall b\in\mathcal{B},\forall q\in\mathcal{Q}\), where \(ee_{b,q}\) is the minimum energy efficiency required by the user associated with channel \(q\) of BS \(b\). The general procedure of using DEPNet remains the same as in Section V. As we can see in Fig. 12 (right), DEPNet works successfully for all of the cases and reaches zero violation for the data rate constraint with both the linear and non-linear formulations. The number of Newton iterations is set to 100 for the linear formulation and 300 for the other two. The reason behind this difference is that the other two constraints involve non-convexity, which requires the Newton method to take more steps to satisfy them. Finally, as expected, the data rate and energy efficiency continue to decrease as the data rate requirements increase. The reason is that the gains from power allocations to strong-channel users continue to diminish as the power demands of weak-channel users continue to increase; thus the overall data rate decreases.
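As a small illustration of how the three constraint families enter the correction process, the residuals that must be driven to non-positive values can be written as follows; this is a sketch of ours, and the rate values \(R_{b,q}\) are assumed to be supplied by the system model rather than computed here.

```python
import numpy as np

def qos_residual_nonlinear(R, alpha):
    # Per-(BS, channel) achieved rates R and minimum rate requirements alpha.
    return alpha - R               # want <= 0, i.e. R_{b,q} >= alpha_{b,q}

def qos_residual_linear(C, p, d):
    # Matrix-based linear reformulation of the rate constraints.
    return d - C @ p               # want <= 0, i.e. C p >= d

def ee_residual(R, P, ee_min):
    # Energy-efficiency constraint: rate per unit of transmit power.
    return ee_min - R / P          # want <= 0, i.e. R_{b,q} / P_{b,q} >= ee_{b,q}
```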
### _Consideration of Imperfect CSI_
Following the CSI estimation error model in [39], we have \(H_{b,q,\hat{b}}=\hat{H}_{b,q,\hat{b}}+\Delta H_{b,q,\hat{b}}\), where \(\hat{H}_{b,q,\hat{b}}\) is the estimated imperfect CSI that the DNN has access to (both at training and test time) and \(\Delta H_{b,q,\hat{b}}\sim\mathcal{U}[-\sigma H_{b,q,\hat{b}},\sigma H_{b,q,\hat{b}}]\) is the estimation error following a uniform distribution. The uncertainty increases with \(\sigma\), which widens the support of the uniform distribution. To
Figure 11: Convergence of DEPNet considering GD with different step sizes and Newton method (Left: before training- Right: after training).
Figure 9: Average network sum rate (left), QoS violation probability (middle), and computation time (right) as a function of \(\alpha_{b,q}\) considering the path-loss dataset, \(B=4\), \(U=12\).
overcome the estimation error in the CSI, [39] proposed to generate an artificially distorted CSI which has the same statistical characteristics as \(\Delta H_{b,q,\hat{b}}\), and to use it as the input to the loss function of the DNN. In other words, \(\tilde{H}_{b,q,\hat{b}}=\hat{H}_{b,q,\hat{b}}+\Delta\hat{H}_{b,q,\hat{b}}\), where \(\Delta\hat{H}_{b,q,\hat{b}}\sim\mathcal{U}[-\sigma\hat{H}_{b,q,\hat{b}},\sigma\hat{H}_{b,q,\hat{b}}]\), and we denote the tensors of imperfect, artificially distorted, and perfect CSIs by \(\hat{\mathbf{H}}\), \(\tilde{\mathbf{H}}\), and \(\mathbf{H}\), respectively. As shown in [39], using the artificially distorted CSI during training makes the output of the DNN robust to the estimation error. In Fig. 13, we note that the quality of the CSI influences the achievable sum-rate. Moreover, as the estimation error increases, the QoS violation increases. The reason is that the proposed projection methods are designed to find a point in the feasible set that fulfills the constraints with zero violation, and the geometry of the feasible set is a function of the CSI given as input to the DNN; thus, considering imperfect CSI results in constraint violations.
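A minimal sketch of the artificial distortion used as the loss input, following the uniform error model above, is given below; the function name and the use of NumPy broadcasting are illustrative assumptions of ours.

```python
import numpy as np

def artificially_distort(H_hat, sigma, rng=None):
    # Draws Delta_H_hat ~ U[-sigma*H_hat, sigma*H_hat] entrywise and returns the
    # artificially distorted CSI fed to the training loss, as proposed in [39].
    rng = rng or np.random.default_rng()
    delta = rng.uniform(-1.0, 1.0, size=H_hat.shape) * sigma * H_hat
    return H_hat + delta
```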
To make the proposed projection methods robust w.r.t. the estimation error in the CSI, we propose a heuristic to tackle the CSI imperfection (as detailed in the appendix). To test the approach, we generate \(\hat{\mathbf{H}}\) and \(\mathbf{H}\) for each value of \(\sigma\) and apply the feasibility check to make sure all the data points are feasible. The hyperparameters for training DIPNet and DEPNet are the same as for the other results of the paper. To find the worst case, we generate multiple CSIs with \(\chi\) ranging from zero to \(\sigma\) with a step size of 0.01, apply the feasibility check on all of them, and pick the one with the largest value of \(\chi\). This provides us with an approximation of \(\mathbf{H}_{\mathrm{min}}\). As shown in Fig. 13, this approach works well for small values of \(\sigma\) for both DIPNet and DEPNet and results in zero constraint violation probability. As \(\sigma\) increases, however, we observe some violations, which are still significantly lower than those of the method in [39]. The reason is the use of the same value of \(\chi\) for all the CSIs to approximate the worst-case scenario, which yields only an approximation of \(\mathbf{H}_{\mathrm{min}}\). One can find the exact value of \(\mathbf{H}_{\mathrm{min}}\) by following sophisticated techniques from robust optimization in [40].
## IX Conclusion
In this paper, to achieve zero constraint violation probability, a differentiable projection framework is developed, which uses a projection function to project the output of the backbone neural network onto the feasible set of the problem. The projection function is defined implicitly using convex optimization and explicitly using an iterative process. The resulting DIPNet and DEPNet are tested against optimization-based and neural-based benchmarks. Numerical experiments confirmed the zero violation probability of the outputs of the proposed models, which outperform the DNN-based benchmark in terms of sum-rate and GP in terms of computation time. With the proposed framework, one can handle more sophisticated differentiable constraints that are a function of the problem data and/or constraints that cannot be handled with standard off-the-shelf projection functions. To incorporate non-differentiable constraints in our framework, one can apply continuous approximations to provide differentiability. This is analogous to the work in [41], where the categorical distribution is approximated with the Gumbel-Softmax distribution, and the
Figure 12: Average network sum-rate (left) and QoS violation (right) for DEPNet with convex and non-convex constraints, \(B=4\), \(U=8\).
Figure 13: Average network sum-rate (left) and QoS violation probability (right) for DIPNet and DEPNet in the presence of imperfect CSI considering path-loss dataset, \(B\)=4, \(U\)=8, \(\alpha_{b,q}=2.5\) Mbps.
quantization function is approximated during training with a smooth function to make the constraints differentiable, thereby making the training of the DNN possible. Another way to extend the proposed framework to non-differentiable constraints is to design another differentiable and iterative process, specific to those constraints, to satisfy them. One example of such work is [42], where the authors used an iterative process called Sinkhorn normalization to project the output of the neural network onto the space of doubly-stochastic matrices, i.e., positive-valued square matrices whose rows and columns each sum to one. Moreover, imperfect CSI is inevitable due to practical channel estimation procedures, so this remains an important problem for further investigation. Furthermore, there are several other important QoS performance metrics, such as energy efficiency, fairness, reliability, latency, and jitter, that can be investigated with the proposed framework.
## Appendix

Start from the linear formulation of the constraints in (4). As we can see, the matrix \(\mathbf{C}\) is derived directly from the CSIs (we denote it by \(\mathbf{C}(\mathbf{H})\)). Thus, if we design the projection w.r.t. \(\hat{\mathbf{H}}\), there is a certain probability that it will not work for \(\mathbf{H}\). To overcome this issue, given the estimated CSI and the distribution of the estimation error, we can generate the worst-case CSI (denoted by \(\mathbf{H}_{\min}\)) that is feasible w.r.t. the minimum rate requirement of the users. Thus, once we perform the projection w.r.t. \(\mathbf{H}_{\min}\), the constraints will be satisfied for the perfect CSI as well. In other words, given \(\hat{\mathbf{H}}\), we want to find \(\mathbf{H}_{\min}\) such that \(\mathbf{C}(\mathbf{H}_{\min})\mathbf{p}\geq\mathbf{d}\Rightarrow\mathbf{C}(\mathbf{H})\mathbf{p}\geq\mathbf{d}\). In the following, we generate an approximation of \(\mathbf{H}_{\min}\), i.e., \(\mathbf{H}^{\prime}\), by making the direct channel of each user weaker and the interfering channels stronger. Mathematically:
\[H^{\prime}_{b,q,\hat{b}}=\left\{\begin{array}{ll}\hat{H}_{b,q,\hat{b}}-\chi\hat{H}_{b,q,\hat{b}}&\text{if }\hat{b}=b\\ \hat{H}_{b,q,\hat{b}}+\chi\hat{H}_{b,q,\hat{b}}&\text{otherwise}\end{array}\right. \tag{27}\]
where \(0<\chi\leq\sigma\).
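A sketch of this heuristic is given below, scanning \(\chi\) from zero to \(\sigma\) with step 0.01 and keeping the feasible distortion with the largest \(\chi\); `is_feasible` stands in for the feasibility check of Section V and `direct_mask` marks the entries with \(\hat{b}=b\) (both are assumptions introduced for illustration).

```python
import numpy as np

def distort_csi(H_hat, chi, direct_mask):
    # Eq. (27): weaken direct channels and strengthen interfering ones by a factor chi.
    return np.where(direct_mask, H_hat * (1.0 - chi), H_hat * (1.0 + chi))

def approx_worst_case_csi(H_hat, direct_mask, sigma, is_feasible, step=0.01):
    # Scan chi in [0, sigma] and keep the feasible distortion with the largest chi;
    # projecting w.r.t. this CSI is meant to keep the constraints satisfied for the true CSI.
    best = H_hat
    for chi in np.arange(0.0, sigma + 1e-12, step):
        H_prime = distort_csi(H_hat, chi, direct_mask)
        if is_feasible(H_prime):
            best = H_prime
    return best
```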
|
2309.07573 | Two remarks on the set of recurrent vectors | We solve in the negative two open problems, related to the linear and topological structure of the set of recurrent vectors, asked by Sophie Grivaux, Alfred Peris and the first author of this paper. Firstly, we show that there exist recurrent operators whose set of recurrent vectors is not dense lineable; and secondly, we construct operators which are reiteratively recurrent and cyclic, but whose set of reiteratively recurrent vectors is meager. | Antoni López-Martínez, Quentin Menet | 2023-09-14T10:06:58Z | http://arxiv.org/abs/2309.07573v1 |

# Two remarks on the set of recurrent vectors
###### Abstract
We solve in the negative two open problems, related to the linear and topological structure of the set of recurrent vectors, asked by Sophie Grivaux, Alfred Peris and the first author of this paper. Firstly, we show that there exist recurrent operators whose set of recurrent vectors is not dense lineable; and secondly, we construct operators which are reiteratively recurrent and cyclic, but whose set of reiteratively recurrent vectors is meager.
Footnote †: **Key words and phrases**: Linear dynamics, hypercyclicity, recurrence, cyclicity, reiterative recurrence.
## 1 Introduction
Given a _continuous linear operator_\(T:X\longrightarrow X\) on a _separable infinite-dimensional Banach space_\(X\), a vector \(x\in X\) is called _hypercyclic for_\(T\) if its orbit,
\[\mathcal{O}_{T}(x):=\{T^{n}x\ ;\ n\geq 0\},\]
is a dense set in \(X\); and \(T\) is said to be a _hypercyclic operator_ whenever it admits a hypercyclic vector. In _Linear Dynamics_ this property has been investigated in many different directions, one of them being to study the structure of the set \(\mathrm{HC}(T)\), that is, the set of hypercyclic vectors for \(T\). For instance: it is well-known (due to Birkhoff) that the set \(\mathrm{HC}(T)\) is always a _dense \(G_{\delta}\)-set_ for any hypercyclic operator \(T\) (see [17, Theorem 2.19]); and we also know (due to Herrero and Bourdon) that such a set is always _dense lineable_, that is, every hypercyclic operator \(T\) admits a dense vector subspace that consists (except for the zero-vector) entirely of hypercyclic vectors (see [17, Theorem 2.55]).
Another property that has appeared in the last years in the context of Linear Dynamics, and the one in which we are interested here, is that of recurrence: a vector \(x\in X\) is called _recurrent for_\(T\) if it belongs to the closure of its forward orbit, that is, if
\[x\in\overline{\mathcal{O}_{T}(Tx)}=\overline{\{T^{n}x\ ;\ n\geq 1\}}.\]
This definition is equivalent, in our "_Banach space setting_", to either of the following two facts:
* there exists an increasing sequence of positive integers \((n_{k})_{k\in\mathbb{N}}\) such that \(T^{n_{k}}x\to x\) as \(k\to\infty\);
* for every neighbourhood \(U\) of \(x\) there exists a positive integer \(n\geq 1\) such that \(T^{n}x\in U\).
Moreover, an operator \(T\) is said to be a _recurrent operator_ if the set of recurrent vectors for \(T\), which will be denoted by \(\mathrm{Rec}(T)\), is a dense set in \(X\). Recurrence has a very long history in the context of _non-linear dynamical systems_ (see [11] and [12]), while its study in Linear Dynamics dates back only to 2014, when the work of Costakis, Manoussos and Parissis [10] was published. This paper was later followed by the very recent works [6], [7, 8, 9] and [14, 15, 19] among others, and the "_novelty_" of this property in the linear setting explains, to some extent, the many open problems that currently exist in _linear recurrence_.
About the structure of the set of recurrent vectors many things are known by now: when \(T\) is recurrent then the set \(\mathrm{Rec}(T)\) is always a _dense \(G_{\delta}\)-set_ by [10, Proposition 2.1] and a _lineable_ set, i.e. every recurrent operator \(T\) admits an infinite-dimensional vector subspace that consists entirely of recurrent vectors (see [15, Section 5]). Also the _spaceability_ of such a set has been studied and, even though hypercyclicity is a much stronger notion than recurrence, the following curious result was proved in [19]: _a weakly-mixing operator acting on a Banach space admits a closed infinite-dimensional subspace of recurrent vectors if and only if it admits a closed infinite-dimensional subspace that consists (except for the zero-vector) of hypercyclic vectors_. In this paper we will focus on the _dense lineability_ property for the set \(\mathrm{Rec}(T)\), but the reader interested in the notions of lineability and spaceability in a more general context can refer to the book [1].
Given an operator \(T:X\longrightarrow X\) we say that the set of recurrent vectors \(\mathrm{Rec}(T)\) is _dense lineable_ if it contains a dense vector subspace. This notion has been studied by Grivaux, Peris and the first author of this paper, and it was established in [15] that a sufficient condition for \(\mathrm{Rec}(T)\) to be dense lineable is that of quasi-rigidity: an operator \(T\) is called _quasi-rigid_ if the \(N\)-fold direct sum operator
\[\underbrace{T\oplus\cdots\oplus T}_{N}:\underbrace{X\oplus\cdots\oplus X}_{N} \longrightarrow\underbrace{X\oplus\cdots\oplus X}_{N}\;,\]
acting as \((x_{1},x_{2},...,x_{N})\longmapsto(Tx_{1},Tx_{2},...,Tx_{N})\), is again a recurrent operator for every \(N\in\mathbb{N}\). It is also shown in [15, Section 3] that there exist recurrent operators which are not quasi-rigid, but for the examples constructed there every vector is recurrent, so that they trivially have a dense lineable set of recurrent vectors. One can thus wonder if (just the notion of) recurrence is enough to imply the mentioned dense lineability, as it was asked in [15, Section 6], and as it is the case for hypercyclicity. This is the first open problem that we solve here in the negative (see Section 2 below):
**Question 1.1** ([15]).: Let \(T:X\longrightarrow X\) be a recurrent operator. Is the set \(\mathrm{Rec}(T)\) dense lineable?
In order to state the second problem that we are about to solve, let us introduce a strengthened notion of recurrence called _reiterative recurrence_, which appeared in the context of Linear Dynamics for the first time in the recent 2022 paper [6]: a vector \(x\in X\) is called _reiteratively recurrent for \(T\)_ if the return set from \(x\) to any neighbourhood \(U\) of \(x\), that is, the set
\[\mathcal{N}_{T}(x,U):=\{n\geq 1\;;\;T^{n}x\in U\},\]
has positive upper Banach density, which means that
\[\overline{\mathrm{Bd}}(\mathcal{N}_{T}(x,U)):=\lim_{N\to\infty}\left(\max_{m \geq 0}\frac{\#\left(\mathcal{N}_{T}(x,U)\cap[m+1,m+N]\right)}{N}\right)>0. \tag{1}\]
We will denote by \(\mathrm{RRec}(T)\) the _set of reiteratively recurrent vectors for \(T\)_, and \(T\) is said to be a _reiteratively recurrent operator_ whenever \(\mathrm{RRec}(T)\) is a dense set. This notion presents a nice relation: _an operator is reiteratively recurrent and hypercyclic if and only if it is reiteratively hypercyclic_, which is a strong version of hypercyclicity introduced in [4] and deeply studied in [5] and [6]. Note also that
\[\mathrm{RRec}(T)\subset\mathrm{Rec}(T) \tag{2}\]
since a vector \(x\) belongs to \(\mathrm{Rec}(T)\) if and only if the return set \(\mathcal{N}_{T}(x,U)\) is non-empty for every neighbourhood \(U\) of \(x\). Moreover, the usual formula to compute the _upper Banach density_ for a set of positive integers \(J\subset\mathbb{N}\) is written with a superior limit, that is,
\[\overline{\mathrm{Bd}}(J):=\limsup_{N\to\infty}\left(\max_{m\geq 0}\frac{\# \left(J\cap[m+1,m+N]\right)}{N}\right),\]
but the limit is known to exist (see for instance [13]), so that we can use the formula stated in (1).
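To get a feeling for this notion, consider the following toy example (ours, just to fix ideas):
\[J:=\bigcup_{k\geq 1}\big([2^{k},2^{k}+k]\cap\mathbb{N}\big)\quad\text{satisfies}\quad\overline{\mathrm{Bd}}(J)=1\quad\text{while}\quad\lim_{N\to\infty}\frac{\#(J\cap[1,N])}{N}=0,\]
since for every \(N\in\mathbb{N}\) one can find a window of \(N\) consecutive integers entirely contained in \(J\) (take any block \([2^{k},2^{k}+k]\) with \(k\geq N\)), while \(\#(J\cap[1,N])\) grows at most like \((\log_{2}N)^{2}\). Sets of positive upper Banach density can thus be very "thin" in terms of natural density.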
In view of the inclusion (2) and since \(\mathrm{Rec}(T)\) is always a dense \(G_{\delta}\)-set, it is natural to ask if \(\mathrm{RRec}(T)\) is also co-meager for every reiteratively recurrent operator. It was proved in [6, Theorem 2.1] that this is the case whenever \(T\) is also hypercyclic:
- [6, Theorem 2.1]: _If \(T\) is reiteratively recurrent and hypercyclic, then \(\mathrm{RRec}(T)\) is co-meager._
However, it was also shown in [6, Example 2.4] that there exist operators \(T\) for which the set \(\mathrm{RRec}(T)\) can be dense and meager at the same time. By [6, Theorem 2.1] it is clear that the mentioned examples are non-hypercyclic, but it can be checked that they are even non-cyclic. The next question, which we also solve here in the negative (see Section 3 below), was then posed in [15, Problem 5.14]:
**Question 1.2** ([15]).: Let \(T:X\longrightarrow X\) be reiteratively recurrent and cyclic. Is \(\mathrm{RRec}(T)\) co-meager?
We solve Questions 1.1 and 1.2 in the next sections by constructing counterexamples in every separable infinite-dimensional Banach space. In order to construct such examples we will use the notion of _biorthogonal sequence_ (see Subsection 1.1 below).
The paper is organized as follows: in Section 2 we present a modification for the construction of "_recurrent but not quasi-rigid operators_" shown in [15], to exhibit recurrent operators whose set of recurrent vectors is not dense lineable in every separable infinite-dimensional Banach space. This solves Question 1.1 in the negative. In Section 3 we construct reiteratively recurrent and cyclic operators whose set of reiteratively recurrent vectors is meager, which solves Question 1.2 in the negative.
We refer the reader to the textbooks [3] and [17] for any unexplained notion in Linear Dynamics.
### Notation for a general separable infinite-dimensional Banach space
We will denote by \(\mathbb{K}\) the field of either real or complex numbers \(\mathbb{R}\) or \(\mathbb{C}\). Given any (real or complex) separable infinite-dimensional Banach space \(X\), we will denote by \(X^{*}\) its _topological dual space_, and given any pair \((x,x^{*})\in X\times X^{*}\) we will denote by \(\langle x^{*},x\rangle:=x^{*}(x)\) the standard dual evaluation.
In the next sections our operators will be built using a bounded biorthogonal sequence. In fact, by a classical result proved in [21], given any separable infinite-dimensional Banach space \(X\) we can consider a sequence \((e_{k},e_{k}^{*})_{k\in\mathbb{N}}\subset X\times X^{*}\) with the following properties:
* \(\mathrm{span}\{e_{k}\ ;\ k\in\mathbb{N}\}\) is dense in \(X\);
* \(\langle e_{k}^{*},e_{j}\rangle=\delta_{k,j}\) where \(\delta_{k,j}=0\) if \(k\neq j\) and \(1\) if \(k=j\);
* for each \(k\in\mathbb{N}\) we have that \(\|e_{k}\|=1\) and \(K:=\sup_{k\in\mathbb{N}}\|e_{k}^{*}\|^{*}<\infty\).
We will repeatedly use the fact that given any \(x\in X\)
\[|\langle e_{k}^{*},x\rangle|\leq K\|x\|\quad\text{ for each }k\in\mathbb{N}. \tag{3}\]
We will always write \(c_{00}:=\mathrm{span}\{e_{k}\ ;\ k\in\mathbb{N}\}\). Note that for any vector \(x\in c_{00}\) we have the following equalities
\[x=\sum_{k\in\mathbb{N}}\langle e_{k}^{*},x\rangle e_{k}=\sum_{k=1}^{k_{x}} \langle e_{k}^{*},x\rangle e_{k}\quad\text{ for some }k_{x}\in\mathbb{N}.\]
Note also that, in general, the first equality is false for arbitrary vectors unless the sequence \((e_{k})_{k\in\mathbb{N}}\) is a Schauder basis of the Banach space \(X\).
## 2 Dense but not dense lineable sets of recurrent vectors
This section is devoted to show the following result, which solves Question 1.1:
**Theorem 2.1**.: _Let \(X\) be any separable infinite-dimensional Banach space. There exists a recurrent operator \(T:X\longrightarrow X\) whose set of recurrent vectors \(\mathrm{Rec}(T)\) is not dense lineable._
We first assume that \(X\) is a **complex** space and we modify the construction given in [15, Section 3], which was originally based on [2]. Fix any complex separable infinite-dimensional Banach space \(X\) and let \((e_{k},e_{k}^{*})_{k\in\mathbb{N}}\subset X\times X^{*}\) be a biorthogonal sequence with the properties stated in Subsection 1.1. Grivaux, Peris and the first author of this paper consider in [15] an operator \(T\) given by
\[Tx:=Rx+\sum_{k\geq 3}\frac{1}{m_{k-1}}\langle w_{k}^{*},Px\rangle e_{k}\]
depending on an operator \(R\), a sequence of integers \((m_{k})_{k\in\mathbb{N}}\), a projection \(P\) and a sequence of functionals \((w_{k}^{*})_{k\geq 3}\) with bounded norms. Our main modification is to let \((w_{k}^{*})_{k\geq 3}\) be unbounded.
### Constructing the operator \(T\)
As in [15] we let \(P:X\longrightarrow\mathrm{span}\{e_{1},e_{2}\}\) be the projection of \(X\) onto the span of \(e_{1}\) and \(e_{2}\) given by
\[Px:=\langle e_{1}^{*},x\rangle e_{1}+\langle e_{2}^{*},x\rangle e_{2}\quad \text{ for every }x\in X.\]
Note that \(\|P\|\leq 2K\) so that \(P\) is continuous. Set \(E^{*}:=\mathrm{span}\{e_{1}^{*},e_{2}^{*}\}\), endowed with the norm \(\|\cdot\|^{*}\) of the dual space \(X^{*}\), and denote by \(S_{E^{*}}:=\{w^{*}\in E^{*}\;;\;\|w^{*}\|^{*}=1\}\) the sphere of the \(2\)-dimensional space \(E^{*}\). In [15] the authors consider for \((w_{k}^{*})_{k\geq 3}\) a dense sequence in \(S_{E^{*}}\), which exists since \(S_{E^{*}}\) is a compact metrizable space, but this choice results in all vectors becoming recurrent for \(T\). Since we want the set of recurrent vectors \(\mathrm{Rec}(T)\) not to be dense lineable, we select here the functionals \(w_{k}^{*}\) in such a way to get non-recurrent vectors. To this end, we first consider a dense sequence \((\widetilde{w}_{k}^{*})_{k\geq 3}\) in \(S_{E^{*}}\) and a vector \(z\in\mathrm{span}\{e_{1},e_{2}\}\) such that
\[\langle\widetilde{w}_{k}^{*},z\rangle\neq 0\quad\text{ for all }k\geq 3. \tag{4}\]
Such a vector \(z\) exists since the family \(\{\widetilde{w}_{k}^{*}\;;\;k\geq 3\}\) is countable. Given a partition \((A_{n})_{n\geq 3}\) of the set \(\{k\in\mathbb{N}\;;\;k\geq 3\}\) with \(\#A_{n}=\infty\) for all \(n\geq 3\), we then set
\[w_{k}^{*}:=\frac{1}{|\langle\widetilde{w}_{n}^{*},z\rangle|}\widetilde{w}_{n}^ {*}\quad\text{ for each }k\in A_{n}.\]
In this way, we have that
\[\|w_{k}^{*}\|^{*}\geq\frac{1}{\|z\|}\quad\text{ and }\quad|\langle w_{k}^{*},z \rangle|=1\quad\text{ for every }k\geq 3. \tag{5}\]
We now consider a sequence \((m_{k})_{k\in\mathbb{N}}\in\mathbb{N}^{\mathbb{N}}\) of positive integers with the following properties (one admissible choice is sketched right after the list):
1. \(m_{k}\mid m_{k+1}\) for each \(k\geq 1\);
2. \(m_{1}=1=m_{2}\);
and starting from \(k=3\), the sequence \((m_{k})_{k\geq 3}\) grows fast enough to satisfy:
3. \(\lim_{j\to\infty}\left(m_{j-1}\cdot\sum_{k>j}\frac{\|w_{k}^{*}\|^{*}}{m_{k-1}} \right)=0.\)
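For instance (this is just one admissible choice of ours, not the one from [15]), keeping \(m_{1}=1=m_{2}\) and defining recursively
\[m_{k}:=m_{k-1}\cdot\left\lceil 2^{k+1}\big(1+\|w_{k+1}^{*}\|^{*}\big)\right\rceil\qquad(k\geq 3)\]
fulfills (a), (b) and (c): indeed, \(\|w_{k}^{*}\|^{*}/m_{k-1}\leq 2^{-k}/m_{k-2}\) for every \(k\geq 4\), so that \(m_{j-1}\cdot\sum_{k>j}\|w_{k}^{*}\|^{*}/m_{k-1}\leq 2^{-j}\) for every \(j\geq 3\).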
These properties are comparable to the properties required in [15]. The only difference lies in the last condition, where we need to take into account the norm of \(w_{k}^{*}\). The rest of the construction is similar: for each \(x\in c_{00}=\mathrm{span}\{e_{k}\ ;\ k\in\mathbb{N}\}\) we set
\[Rx:=\sum_{k\in\mathbb{N}}\lambda_{k}\langle e_{k}^{*},x\rangle e_{k},\]
where \(\lambda_{k}:=\exp(2\pi i\frac{1}{m_{k}})\) for each \(k\in\mathbb{N}\). By (3) we have for every \(x\in c_{00}\) that
\[\|Rx\|\leq\|Rx-x\|+\|x\|\leq\sum_{k\in\mathbb{N}}|\lambda_{k}-1|\cdot|\langle e _{k}^{*},x\rangle|+\|x\|\leq\left(K\sum_{k\in\mathbb{N}}|\lambda_{k}-1|+1 \right)\|x\|\]
and using that \(|\exp(i\theta)-1|\leq|\theta|\) for every \(\theta\in\mathbb{R}\) we have that
\[\sum_{k\in\mathbb{N}}|\lambda_{k}-1|\leq 2\pi\sum_{k\in\mathbb{N}}\frac{1}{m_{k} }<\infty,\]
since (5) together with condition (c) on the sequence \((m_{k})_{k\in\mathbb{N}}\) show that the series \(\sum_{k\in\mathbb{N}}\frac{1}{m_{k}}\) is indeed convergent. Then, by the density of \(c_{00}\) in the space \(X\), the previous inequality also implies that the map \(R:c_{00}\longrightarrow c_{00}\) extends to a bounded operator on \(X\) still denoted by \(R\). Finally, assumption (c) on the sequence \((m_{k})_{k\in\mathbb{N}}\) and the fact that \(\|P\|\leq 2K\) imply that
\[\left\|\sum_{k\geq 3}\frac{1}{m_{k-1}}\langle w_{k}^{*},Px\rangle e_{k}\right\| \leq\sum_{k\geq 3}\frac{\|w_{k}^{*}\|^{*}\cdot\|Px\|}{m_{k-1}}\leq\left(2K \sum_{k\geq 3}\frac{\|w_{k}^{*}\|^{*}}{m_{k-1}}\right)\|x\|,\]
so we can define the operator \(T\) on \(X\) by setting
\[Tx:=Rx+\sum_{k\geq 3}\frac{1}{m_{k-1}}\langle w_{k}^{*},Px\rangle e_{k}\quad \text{ for every }x\in X. \tag{6}\]
It follows that the \(n\)_-th power_ of \(T\) can be computed exactly as in [15, Fact 3.3.1]:
**Fact 2.1.1** ([15, Fact 3.3.1]).: _For every \(x\in X\) and \(n\geq 1\) we have that_
\[T^{n}x=R^{n}x+\sum_{k\geq 3}\frac{\lambda_{k,n}}{m_{k-1}}\langle w_{k}^{*},Px \rangle e_{k},\]
_where \(\lambda_{k,n}:=\sum_{j=0}^{n-1}\lambda_{k}^{j}=\frac{\lambda_{k}^{n}-1}{ \lambda_{k}-1}\) for each \(k\geq 3\)._
We will also need the following properties regarding the numbers \(\lambda_{k,n}\):
**Fact 2.1.2** ([15, Fact 3.3.2]).: _Let \(n\geq 1\). Then:_
1. \(|\lambda_{k,n}|\leq n\) _for all_ \(k\geq 3\)_;_
2. \(\lambda_{k,m_{n}}=0\) _whenever_ \(n\geq k\geq 3\)_;_
3. \(|\lambda_{k,n}|\geq\frac{2}{\pi}n>\frac{m_{k-1}}{\pi}\) _whenever_ \(k=\min\{j\geq 3\ ;\ 2n\leq m_{j}\}\)_._
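These estimates follow from a short computation, which we sketch here for convenience (our own sketch; the details are not reproduced from [15]): writing, for \(k\geq 3\),
\[|\lambda_{k,n}|=\frac{|\lambda_{k}^{n}-1|}{|\lambda_{k}-1|}=\frac{\left|\sin\left(\frac{\pi n}{m_{k}}\right)\right|}{\left|\sin\left(\frac{\pi}{m_{k}}\right)\right|},\]
item (i) follows by bounding each of the \(n\) unimodular summands \(\lambda_{k}^{j}\) of \(\lambda_{k,n}\) by \(1\); item (ii) holds because \(m_{k}\mid m_{n}\) for \(n\geq k\) by condition (a), so \(\lambda_{k}^{m_{n}}=1\); and for item (iii), if \(k=\min\{j\geq 3\ ;\ 2n\leq m_{j}\}\) then \(\frac{\pi n}{m_{k}}\leq\frac{\pi}{2}\), hence \(\sin(\frac{\pi n}{m_{k}})\geq\frac{2n}{m_{k}}\) while \(\sin(\frac{\pi}{m_{k}})\leq\frac{\pi}{m_{k}}\), so that \(|\lambda_{k,n}|\geq\frac{2}{\pi}n>\frac{m_{k-1}}{\pi}\) since \(m_{k-1}<2n\) (by minimality of \(k\) when \(k>3\), and because \(m_{2}=1\) when \(k=3\)).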
Let us now check that \(T\) is a recurrent operator but that \(\mathrm{Rec}(T)\) is not dense lineable.
### Recurrence properties of \(T\)
**Proposition 2.2**.: _The operator \(T:X\longrightarrow X\) is recurrent._
Proof.: Since \(\{\widetilde{w}_{n}^{*}\ ;\ n\geq 3\}\) is dense in \(S_{E^{*}}\), the union of the kernels \(\bigcup_{n\geq 3}\mathrm{Ker}(\widetilde{w}_{n}^{*})\) is dense in the \(2\)-dimensional space \(\mathrm{span}\{e_{1},e_{2}\}\). Then the set
\[X_{0}:=\left\{x\in c_{00}\ ;\ Px\in\bigcup_{n\geq 3}\mathrm{Ker}(\widetilde{w}_{ n}^{*})\right\}\]
is dense in \(X\) since \(X_{0}\) is dense in \(c_{00}\), which is dense in \(X\). We claim that \(X_{0}\subset\mathrm{Rec}(T)\). Indeed, given any \(x=\sum_{k=1}^{k_{x}}\langle e_{k}^{*},x\rangle e_{k}\in X_{0}\) pick \(n\geq 3\) such that \(\langle\widetilde{w}_{n}^{*},Px\rangle=0\) and let \((k_{j})_{j\in\mathbb{N}}\) be the increasing sequence of integers forming the set \(A_{n}\). By Fact 2.1.1 we have that
\[T^{m_{k_{j}-1}}x-x=\left(R^{m_{k_{j}-1}}x-x\right)+\sum_{k\geq 3}\frac{ \lambda_{k,m_{k_{j}-1}}}{m_{k-1}}\langle w_{k}^{*},Px\rangle e_{k},\]
and we will show that this is a \(0\)-convergent sequence as \(j\to\infty\). We start by noticing that, by condition (a) on the sequence \((m_{k})_{k\in\mathbb{N}}\), we have the equality
\[R^{m_{k_{j}-1}}x=\sum_{k=1}^{k_{x}}\lambda_{k}^{m_{k_{j}-1}}\langle e_{k}^{*}, x\rangle e_{k}=\sum_{k=1}^{k_{x}}\langle e_{k}^{*},x\rangle e_{k}=x\qquad\text{ as soon as }k_{j}-1\geq k_{x}. \tag{7}\]
Thus, using that \(\lambda_{k,m_{k_{j}-1}}=0\) for \(k_{j}-1\geq k\) by (ii) of Fact 2.1.2 and also the equality \(\langle w_{k_{j}}^{*},Px\rangle=0\) for every \(k_{j}\in A_{n}\), we deduce that
\[\left\|T^{m_{k_{j}-1}}x-x\right\|=\left\|\sum_{k>k_{j}}\frac{\lambda_{k,m_{k_{ j}-1}}}{m_{k-1}}\langle w_{k}^{*},Px\rangle e_{k}\right\|\qquad\text{ as soon as }k_{j}-1\geq k_{x}. \tag{8}\]
Moreover, by (i) of Fact 2.1.2 we have that \(|\lambda_{k,m_{k_{j}-1}}|\leq m_{k_{j}-1}\) for every \(k\in\mathbb{N}\), and using also the technical condition (c) on the sequence \((m_{k})_{k\in\mathbb{N}}\) we finally get (as soon as \(k_{j}-1\geq k_{x}\)) that
\[\left\|T^{m_{k_{j}-1}}x-x\right\|\leq\sum_{k>k_{j}}\frac{|\lambda_{k,m_{k_{j}- 1}}|}{m_{k-1}}\|w_{k}^{*}\|^{*}\cdot\|Px\|\leq 2K\|x\|\left(m_{k_{j}-1}\cdot \sum_{k>k_{j}}\frac{\|w_{k}^{*}\|^{*}}{m_{k-1}}\right)\underset{j\to\infty}{ \longrightarrow}0,\]
which implies that \(x\in\mathrm{Rec}(T)\). The density of \(X_{0}\) shows that \(T\) is recurrent.
**Proposition 2.3**.: _The set \(\mathrm{Rec}(T)\) is not dense lineable._
Proof.: Let \(z\) be the vector considered in (4). We start by showing that \(P^{-1}(\{z\})\cap\mathrm{Rec}(T)=\emptyset\). Indeed, let \(x\in P^{-1}(\{z\})\), \(n\geq 1\) and \(k_{n}:=\min\{j\geq 3\ ;\ 2n\leq m_{j}\}\). Note that by (iii) of Fact 2.1.2
\[|\lambda_{k_{n},n}|>\tfrac{m_{k_{n}-1}}{\pi}. \tag{9}\]
Then
\[\begin{split}\|T^{n}x-x\|&\geq\frac{1}{K}\left|\langle e_{k_{n}}^{*},T^{n}x-x\rangle\right|\qquad\text{by (3)},\\ &=\frac{1}{K}\left|\langle e_{k_{n}}^{*},R^{n}x-x\rangle+\frac{\lambda_{k_{n},n}}{m_{k_{n}-1}}\langle w_{k_{n}}^{*},z\rangle\right|\qquad\text{by Fact 2.1.1},\\ &>\frac{1}{K}\left(\frac{1}{\pi}-\left|\langle e_{k_{n}}^{*},R^{n}x-x\rangle\right|\right)\qquad\text{by (5) and (9)}.\end{split}\]
Since \(c_{00}\) is dense in \(X\) and \(e_{k_{n}}^{*}\) is continuous we will have, by definition of \(R\) on \(c_{00}\), that
\[\langle e_{k_{n}}^{*},R^{n}x-x\rangle=(\lambda_{k_{n}}^{n}-1)\langle e_{k_{n}}^{ *},x\rangle.\]
Moreover, \(\langle e_{k_{n}}^{*},x\rangle\) tends to \(0\) as \(k_{n}\) tends to infinity: given \(y\in c_{00}\) with \(\|x-y\|\) small, we have \(\langle e_{k}^{*},x\rangle=\langle e_{k}^{*},x-y\rangle\) for every \(k\) beyond the support of \(y\), and \(|\langle e_{k}^{*},x-y\rangle|\leq K\|x-y\|\) by (3). We deduce that
\[\liminf_{n\to\infty}\|T^{n}x-x\|\geq\frac{1}{K\pi},\]
and we conclude that \(x\notin\operatorname{Rec}(T)\). Finally, if \(\operatorname{Rec}(T)\) contained a dense subspace we would have the equality \(P(\operatorname{Rec}(T))=\operatorname{span}\{e_{1},e_{2}\}\). This contradicts the fact that \(z\notin P(\operatorname{Rec}(T))\).
The **complex** version of Theorem 2.1 is now proved, but the construction can be easily adapted to the **real** case using the same arguments as in [2, Section 3.2]. It follows that every (real or complex) separable infinite-dimensional Banach space supports a recurrent operator whose set of recurrent vectors is not dense lineable, and Question 1.1 is now solved.
**Remark 2.4**.: The operators constructed in this section fulfill a stronger recurrence notion than the usual one, namely \(\mathcal{AP}\)_-recurrence_:
* A vector \(x\in X\) is called \(\mathcal{AP}\)_-recurrent_ for an operator \(T:X\longrightarrow X\) if for every neighbourhood \(U\) of \(x\) the return set \(\mathcal{N}_{T}(x,U)=\{n\geq 1\ ;\ T^{n}x\in U\}\) contains arbitrarily long arithmetic progressions; and \(T\) is called an \(\mathcal{AP}\)_-recurrent operator_ if its set of \(\mathcal{AP}\)-recurrent vectors, \(\mathcal{AP}\operatorname{Rec}(T)\), is dense.
In [8] it is shown that the inclusions \(\operatorname{RRec}(T)\subset\mathcal{AP}\operatorname{Rec}(T)\subset \operatorname{Rec}(T)\) hold for every operator \(T\) acting on a Banach space \(X\). Moreover, it is also shown that the set \(\mathcal{AP}\operatorname{Rec}(T)\) is dense in \(X\) if and only if \(T\) is _topologically multiply recurrent_ (see [8, Proposition 2.2]).
For the operators \(T\) constructed in this section, the set \(X_{0}\) considered in Proposition 2.2 is easily checked to be included in \(\mathcal{AP}\operatorname{Rec}(T)\) thanks to condition (c) on the sequence \((m_{k})_{k\in\mathbb{N}}\). Let us quickly argue this fact: fix \(\varepsilon>0\) and any length \(L\in\mathbb{N}\), pick any \(x=\sum_{k=1}^{k_{x}}\langle e_{k}^{*},x\rangle e_{k}\in X_{0}\backslash\{0\}\) and \(n\geq 3\) such that \(\langle\widetilde{w}_{n}^{*},Px\rangle=0\), and let \((k_{j})_{j\in\mathbb{N}}\) be the increasing sequence of integers forming the set \(A_{n}\). Using condition (c) we can choose \(j\in\mathbb{N}\) fulfilling that \(k_{j}-1\geq k_{x}\) and
\[m_{k_{j}-1}\cdot\sum_{k>k_{j}}\frac{\|w_{k}^{*}\|^{*}}{m_{k-1}}<\frac{ \varepsilon}{2K\|x\|L}.\]
Thus, for each \(1\leq\ell\leq L\) we have that \(R^{\ell\cdot m_{k_{j}-1}}x=x\) just as in (7) and arguing as in (8) we get that
\[\left\|T^{\ell\cdot m_{k_{j}-1}}x-x\right\|=\left\|\sum_{k>k_{j}}\frac{ \lambda_{k,\ell\cdot m_{k_{j}-1}}}{m_{k-1}}\langle w_{k}^{*},Px\rangle e_{k} \right\|\leq 2K\|x\|\ell\left(m_{k_{j}-1}\cdot\sum_{k>k_{j}}\frac{\|w_{k}^{*}\| ^{*}}{m_{k-1}}\right)<\varepsilon.\]
This means that the return set \(\mathcal{N}_{T}(x,U)\) from \(x\) to \(U:=\{y\in X\ ;\ \|x-y\|<\varepsilon\}\) contains an arithmetic progression of length \(L\), namely \(\{\ell\cdot m_{k_{j}-1}\ ;\ 1\leq\ell\leq L\}\). The arbitrariness of \(\varepsilon\) and \(L\) implies that \(X_{0}\subset\mathcal{AP}\operatorname{Rec}(T)\) and we have even proved the following result, stronger than Theorem 2.1:
* _Every (real or complex) separable infinite-dimensional Banach space_ \(X\) _supports an_ \(\mathcal{AP}\)_-recurrent operator_ \(T:X\longrightarrow X\) _for which the set of recurrent vectors_ \(\operatorname{Rec}(T)\)_, and thus also the set of_ \(\mathcal{AP}\)_-recurrent vectors_ \(\mathcal{AP}\operatorname{Rec}(T)\)_, are not dense lineable_._
To link to the next section note that, by [18, Lemma 4.8], the set \(\mathcal{AP}\operatorname{Rec}(T)\) is always a \(G_{\delta}\)-set and hence a co-meager set when \(T\) is \(\mathcal{AP}\)-recurrent. This is however not always the case for the set of reiteratively recurrent vectors \(\operatorname{RRec}(T)\) even if \(T\) is cyclic as we show below.
## 3 Dense but not co-meager sets of reiteratively recurrent vectors
In this section we prove the following result, which solves Question 1.2:
**Theorem 3.1**.: _Let \(X\) be any separable infinite-dimensional Banach space. There exists a reiteratively recurrent cyclic operator \(T:X\longrightarrow X\) whose set of reiteratively recurrent vectors \(\mathrm{RRec}(T)\) is meager._
Recall that a vector \(x\in X\) is called _cyclic_ for an operator \(T:X\longrightarrow X\) on a (real or complex) Banach space \(X\) if for every non-empty open subset \(U\subset X\) there exists a (real or complex) polynomial \(p\) such that \(p(T)x\in U\); and that \(T\) is called a _cyclic operator_ if it admits a cyclic vector.
**Remark 3.2**.: The following is a well-known sufficient condition for \(T:X\longrightarrow X\) to be cyclic:
* _For every pair of non-empty open subsets_ \(U,V\subset X\) _there exists a (real or complex) polynomial_ \(p\) _such that_ \(p(T)(U)\cap V\neq\emptyset\)_._
Indeed, if this holds and we select a countable base \((U_{n})_{n\in\mathbb{N}}\) of non-empty open sets for \(X\), then
\[\bigcap_{n\in\mathbb{N}}\bigcup_{p\text{ polynomial}}p(T)^{-1}(U_{n}),\]
is clearly a dense \(G_{\delta}\)-set of cyclic vectors for \(T\).
### The family of operators \(T_{\boldsymbol{\lambda},\boldsymbol{\omega}}\)
In order to prove Theorem 3.1 we will use the operators \(T_{\boldsymbol{\lambda},\boldsymbol{\omega}}:=D_{\boldsymbol{\lambda}}+B_{ \boldsymbol{\omega}}\), where \(D_{\boldsymbol{\lambda}}\) is a diagonal operator with weights \(\boldsymbol{\lambda}=(\lambda_{k})_{k\in\mathbb{N}}\) just as the operator \(R\) of Section 2, and where \(B_{\boldsymbol{\omega}}\) is the usual unilateral backward shift with weights \(\boldsymbol{\omega}=(\omega_{k})_{k\in\mathbb{N}}\). These operators have been considered in the recent work [16, Chapter 4], acting on the complex spaces \(c_{0}(\mathbb{N})\) and \(\ell^{p}(\mathbb{N})\), with the objective of distinguishing the notion of _ergodicity_ from that of _ergodicity in the Gaussian sense_.
Restricted to the vector subspace \(c_{00}=\mathrm{span}\{e_{k}\ ;\ k\in\mathbb{N}\}\) the operator \(T_{\boldsymbol{\lambda},\boldsymbol{\omega}}\) can be seen as the following infinite matrix
\[T_{\boldsymbol{\lambda},\boldsymbol{\omega}}=\begin{pmatrix}\lambda_{1}&\omega_{1}&0&0&\cdots\\ 0&\lambda_{2}&\omega_{2}&0&\cdots\\ 0&0&\lambda_{3}&\omega_{3}&\cdots\\ 0&0&0&\lambda_{4}&\ddots\\ \vdots&\vdots&\vdots&\vdots&\ddots\end{pmatrix}:c_{00}\longrightarrow c_{00}\,\quad\left(\langle e_{k}^{*},x\rangle\right)_{k\in\mathbb{N}}\longmapsto\left(\lambda_{k}\langle e_{k}^{*},x\rangle+\omega_{k}\langle e_{k+1}^{*},x\rangle\right)_{k\in\mathbb{N}}\,\]
and \(T_{\boldsymbol{\lambda},\boldsymbol{\omega}}\) extends to a continuous operator on \(c_{0}(\mathbb{N})\) and \(\ell^{p}(\mathbb{N})\) as soon as \(\boldsymbol{\lambda}=(\lambda_{k})_{k\in\mathbb{N}}\) and \(\boldsymbol{\omega}=(\omega_{k})_{k\in\mathbb{N}}\) are bounded sequences. Since we want to prove the result for a general Banach space we will have to require stronger conditions on the previous sequences in order to guarantee continuity. Indeed, the sequence \(\boldsymbol{\lambda}=(\lambda_{k})_{k\in\mathbb{N}}\) will have to converge very fast to \(1\) while \(\boldsymbol{\omega}=(\omega_{k})_{k\in\mathbb{N}}\) will have to converge very fast to \(0\). See Subsection 3.2 for the precise selection of these sequences.
Moreover, and since our objective is constructing a reiteratively recurrent and cyclic operator, we will need some sufficient conditions to guarantee that an operator of the form \(T_{\boldsymbol{\lambda},\boldsymbol{\omega}}\) satisfies these dynamical properties. In Lemma 3.4 below we will prove a much more general fact regarding the so-called _upper-triangular operators_. Let us start by fixing our setting:
**Definition 3.3**.: Let \(X\) be a separable infinite-dimensional Banach space, consider a biorthogonal sequence \((e_{k},e^{*}_{k})_{k\in\mathbb{N}}\subset X\times X^{*}\) with the properties stated in Subsection 1.1 and let \(T:X\longrightarrow X\) be a continuous linear operator. We say that \(T\) is an _upper-triangular operator with respect to \((e_{k},e^{*}_{k})_{k\in\mathbb{N}}\)_ if the restriction
\[T|_{c_{00}}:c_{00}\longrightarrow c_{00},\]
of the operator \(T\) to the vector subspace \(c_{00}=\mathrm{span}\{e_{k}\ ;\ k\in\mathbb{N}\}\), can be written as an upper-triangular infinite matrix, that is
\[T|_{c_{00}}=\begin{pmatrix}\lambda_{1}&&&\\ &\lambda_{2}&&(*)&\\ &&\lambda_{3}&&\\ &(0)&&\lambda_{4}&\\ &&&\ddots\end{pmatrix}.\]
The sequence \(\boldsymbol{\lambda}=(\lambda_{k})_{k\in\mathbb{N}}\in\mathbb{K}^{\mathbb{N}}\) will be called the _diagonal of \(T\)_.
**Lemma 3.4**.: _Let \(X\) be any separable infinite-dimensional Banach space and consider a biorthogonal sequence \((e_{k},e^{*}_{k})_{k\in\mathbb{N}}\subset X\times X^{*}\) with the properties stated in Subsection 1.1. Suppose that \(T:X\longrightarrow X\) is an upper-triangular operator with respect to \((e_{k},e^{*}_{k})_{k\in\mathbb{N}}\), denote by \(\boldsymbol{\lambda}=(\lambda_{k})_{k\in\mathbb{N}}\in\mathbb{K}^{\mathbb{N}}\) the diagonal of the operator \(T\) and assume that \(\lambda_{k}\neq\lambda_{l}\) for every \(k\neq l\in\mathbb{N}\). Then:_
1. _The vector subspace_ \(\mathrm{span}\left(\bigcup_{k\in\mathbb{N}}\mathrm{Ker}(T-\lambda_{k}I)\right)\) _is dense in_ \(X\)_._
2. _The set of cyclic vectors for_ \(T\) _is co-meager in_ \(X\)_._
Proof.: For each \(N\in\mathbb{N}\) set \(X_{N}:=\mathrm{span}\{e_{k}\ ;\ 1\leq k\leq N\}\), which is a \(T\)-invariant subspace isomorphic to \(\mathbb{K}^{N}\) by the upper-triangular condition, and consider the restriction \(T_{N}:=T|_{X_{N}}:X_{N}\longrightarrow X_{N}\). It is then enough to check that the following statements hold:
1. We have the equality \(X_{N}=\mathrm{span}\left(\bigcup_{1\leq k\leq N}\mathrm{Ker}(T_{N}-\lambda_{ k}I)\right)\) for all \(N\in\mathbb{N}\).
2. The operator \(T_{N}\) has a dense set of cyclic vectors in \(X_{N}\) for all \(N\in\mathbb{N}\).
In fact, if (a') holds then statement (a) follows since \(\bigcup_{N\in\mathbb{N}}X_{N}\) is dense in \(X\). Moreover, and since \(X_{N}\subset X_{N+1}\) for all \(N\in\mathbb{N}\), we know that given two non-empty open subsets \(U,V\subset X\) we can find \(N\in\mathbb{N}\) such that \(U_{N}:=U\cap X_{N}\) and \(V_{N}:=V\cap X_{N}\) are non-empty open subsets of \(X_{N}\). Hence, if statement (b') holds there exists a vector \(x\in U_{N}\subset U\) and a polynomial \(p\) such that \(p(T)x=p(T_{N})x\in V_{N}\subset V\), so that \(p(T)(U)\cap V\neq\emptyset\) and (b) follows from Remark 3.2.
Let us now check (a') and (b'): for any fixed \(N\in\mathbb{N}\) we have that
\[T_{N}=\begin{pmatrix}\lambda_{1}&&&\\ 0&\lambda_{2}&&(*)&\\ 0&0&\lambda_{3}&&\\ \vdots&\vdots&\vdots&\ddots&\\ 0&0&0&0&\lambda_{N}\end{pmatrix}\]
and since all the \(\lambda_{k}\) are assumed to be different we deduce that \(\sigma_{p}(T_{N})=\{\lambda_{k}\ ;\ 1\leq k\leq N\}\) and the matrix \(T_{N}\) is similar to the diagonal matrix \(D_{N}=\mathrm{Diag}(\lambda_{1},\lambda_{2},...,\lambda_{N})\), i.e. \(T_{N}=LD_{N}L^{-1}\) for some invertible operator \(L:\mathbb{K}^{N}\longrightarrow\mathbb{K}^{N}\). Starting with (a'), it is clear that \(e_{k}\in\mathrm{Ker}(D_{N}-\lambda_{k}I)\) for each index \(1\leq k\leq N\), so that
\[X_{N}=\mathrm{span}\left(\bigcup_{1\leq k\leq N}\mathrm{Ker}(D_{N}-\lambda_{k} I)\right).\]
It is then immediate to check that \(Le_{k}\in\operatorname{Ker}(T_{N}-\lambda_{k}I)\), and since \(L\) is invertible we deduce that the set \(\{Le_{k}\ ;\ 1\leq k\leq N\}\) is an algebraic basis of \(X_{N}\), which finally shows (a'). Regarding (b'), we claim that every vector \(x\in X_{N}\) with \(\langle e_{k}^{*},x\rangle\neq 0\) for all \(1\leq k\leq N\) is cyclic for \(D_{N}\): if such a vector \(x\) was not cyclic there would be a non-zero polynomial \(p\) of degree less or equal to \(N-1\) fulfilling that \(0=p(D_{N})x=\sum_{k=1}^{N}p(\lambda_{k})\langle e_{k}^{*},x\rangle e_{k}\), which would imply that \(p(\lambda_{k})=0\) for every \(1\leq k\leq N\), contradicting the maximum number of roots that \(p\) can have. Thus, \(D_{N}\) has a dense set of cyclic vectors in \(X_{N}\) and (b') follows since the map \(L\) has dense range and the \(L\)-image of every cyclic vector for the matrix \(D_{N}\) is cyclic for the matrix \(T_{N}\).
**Remark 3.5**.: Suppose now that \(X\) is a complex space and \(\mathbb{T}:=\{z\in\mathbb{C}\ ;\ |z|=1\}\). If \(\boldsymbol{\lambda}=(\lambda_{k})_{k\in\mathbb{N}}\) and \(\boldsymbol{\omega}=(\omega_{k})_{k\in\mathbb{N}}\) are sequences of complex numbers fulfilling that the operator \(T_{\boldsymbol{\lambda},\boldsymbol{\omega}}:c_{00}\longrightarrow c_{00}\) extends continuously to \(X\), it follows from the previous result that:
* _A sufficient condition for the operator_ \(T_{\boldsymbol{\lambda},\boldsymbol{\omega}}\) _to be reiteratively recurrent and cyclic is that the sequence of complex numbers_ \(\boldsymbol{\lambda}=(\lambda_{k})_{k\in\mathbb{N}}\) _belongs to_ \(\mathbb{T}^{\mathbb{N}}\) _and_ \(\lambda_{k}\neq\lambda_{l}\) _for all_ \(k\neq l\in\mathbb{N}\)_._
Cyclicity follows directly from statement (b) of Lemma 3.4. For reiterative recurrence recall that \(x\in X\setminus\{0\}\) is called a _unimodular eigenvector for \(T\)_ provided that \(Tx=\lambda x\) for some \(\lambda\in\mathbb{T}\), so that if
\[\mathcal{E}(T):=\{x\in X\setminus\{0\}\ ;\ Tx=\lambda x\text{ for some } \lambda\in\mathbb{T}\},\]
and if \(\boldsymbol{\lambda}=(\lambda_{k})_{k\in\mathbb{N}}\in\mathbb{T}^{\mathbb{N}}\), we then have that \(\operatorname{span}(\mathcal{E}(T))\) is dense in \(X\) by statement (a) of Lemma 3.4. Finally, it is well-known that \(\operatorname{span}(\mathcal{E}(T))\subset\operatorname{RRec}(T)\); see for instance [6] or [14]. In addition, if we choose each \(\lambda_{k}\) to be a root of unity, then \(\operatorname{span}\left(\bigcup_{k\in\mathbb{N}}\operatorname{Ker}(T- \lambda_{k}I)\right)\) is even formed by periodic vectors for \(T\); see [17, Proposition 2.33]. This will be the case in our example.
### _Choosing \(\boldsymbol{\lambda}=(\lambda_{k})_{k\in\mathbb{N}}\) and \(\boldsymbol{\omega}=(\omega_{k})_{k\in\mathbb{N}}\)_
We are finally ready to prove Theorem 3.1 and we start with the **complex** case as we did in Section 2: fix any complex separable infinite-dimensional Banach space \(X\) and let \((e_{k},e_{k}^{*})_{k\in\mathbb{N}}\subset X\times X^{*}\) be a biorthogonal sequence with the properties stated in Subsection 1.1. We are going to construct a reiteratively recurrent and cyclic operator \(T_{\boldsymbol{\lambda},\boldsymbol{\omega}}\) for which the set \(\operatorname{RRec}(T_{\boldsymbol{\lambda},\boldsymbol{\omega}})\) is meager. Recall that by the already mentioned result [6, Theorem 2.1] such an operator cannot be hypercyclic; in fact, we will obtain the non-hypercyclicity by considering some \(\omega_{k}=0\).
We start by fixing \(\boldsymbol{\omega}\): for each \(j\in\mathbb{N}\) consider the one-dimensional rank operator
\[\big{(}e_{2j-1}\otimes e_{2j}^{*}\big{)}:X\longrightarrow X\,\quad x \longmapsto\langle e_{2j}^{*},x\rangle e_{2j-1}\,\]
choose any summable sequence \(v=(v_{j})_{j\in\mathbb{N}}\in\ell^{1}(\mathbb{N})\) with \(|v_{j}|>0\) for every \(j\in\mathbb{N}\), and consider a sequence \(\boldsymbol{\omega}=(\omega_{k})_{k\in\mathbb{N}}\in\mathbb{C}^{\mathbb{N}}\) fulfilling that
\[0<|\omega_{2j-1}|\leq\frac{|v_{j}|}{\|e_{2j-1}\otimes e_{2j}^{*}\|}\qquad \text{ and }\qquad\omega_{2j}=0\]
for each \(j\geq 1\). Thus the linear map \((\sum_{j\in\mathbb{N}}\omega_{2j-1}\cdot(e_{2j-1}\otimes e_{2j}^{*})):c_{00} \longrightarrow c_{00}\), which coincides with the backward shift \(B_{\boldsymbol{\omega}}\) on \(c_{00}\), extends continuously to the whole space \(X\) as a compact operator still denoted by \(B_{\boldsymbol{\omega}}\) since
\[\|B_{\boldsymbol{\omega}}\|=\left\|\sum_{j\in\mathbb{N}}\omega_{2j-1}\cdot \big{(}e_{2j-1}\otimes e_{2j}^{*}\big{)}\right\|\leq\sum_{j\in\mathbb{N}}| \omega_{2j-1}|\cdot\|e_{2j-1}\otimes e_{2j}^{*}\big{\|}\leq\|v\|_{1}.\]
We now fix \(\mathbf{\lambda}\): by using a straightforward recursive process one can construct a strictly increasing sequence of positive integers \((m_{j})_{j\in\mathbb{N}}\in\mathbb{N}^{\mathbb{N}}\) fulfilling the following properties (one concrete choice is sketched right after the list):
1. \(m_{j}>j\) for every \(j\in\mathbb{N}\);
2. \(\lim_{j\to\infty}\frac{1}{m_{j}|\omega_{2j-1}|}=0\).
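For the sake of concreteness, one admissible recursion (our own choice; any sufficiently fast-growing sequence works) is obtained by setting \(m_{0}:=0\) and
\[m_{j}:=m_{j-1}+\max\left\{\,j+1,\;\left\lceil\frac{j}{|\omega_{2j-1}|}\right\rceil\right\}\qquad(j\geq 1),\]
which is strictly increasing, satisfies \(m_{j}>j\), and gives \(\frac{1}{m_{j}|\omega_{2j-1}|}\leq\frac{1}{j}\to 0\).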
Let now
\[\lambda_{2j-1}:=\exp\left(2\pi i\cdot\frac{1}{m_{j}^{2}}\right)\qquad\text{ and}\qquad\lambda_{2j}:=\exp\left(2\pi i\cdot\frac{2}{m_{j}^{2}}\right)\]
for each \(j\in\mathbb{N}\). Note that the diagonal linear map
\[D_{\mathbf{\lambda}}:c_{00}\longrightarrow c_{00}\,\quad x\longmapsto\sum_{k\in \mathbb{N}}\lambda_{k}\langle e_{k}^{*},x\rangle e_{k}\,\]
extends to the whole space \(X\) (still denoted by \(D_{\mathbf{\lambda}}\)) just as it happened for \(R\) in Section 2 since
\[\|D_{\mathbf{\lambda}}x\|\leq\|D_{\mathbf{\lambda}}x-x\|+\|x\|\leq\sum_{k\in\mathbb{N }}|\lambda_{k}-1|\cdot|\langle e_{k}^{*},x\rangle|+\|x\|\leq\left(K\sum_{k\in \mathbb{N}}|\lambda_{k}-1|+1\right)\|x\|\]
for every \(x\in c_{00}\), and using again that \(|\exp(i\theta)-1|\leq|\theta|\) for every \(\theta\in\mathbb{R}\) we have that
\[\sum_{k\in\mathbb{N}}|\lambda_{k}-1|\leq 6\pi\sum_{j\in\mathbb{N}}\frac{1}{m_{j} ^{2}}<\infty\]
by condition (a) on the sequence \((m_{j})_{j\in\mathbb{N}}\).
We can finally consider the operator \(T_{\mathbf{\lambda},\mathbf{\omega}}:=D_{\mathbf{\lambda}}+B_{\omega}:X\longrightarrow X\), which is continuous and also upper-triangular with respect to \((e_{k},e_{k}^{*})_{k\in\mathbb{N}}\). Since \((m_{j})_{j\in\mathbb{N}}\) is strictly increasing and \(m_{1}>1\) we clearly have that \(\lambda_{k}\neq\lambda_{l}\) for every \(k\neq l\in\mathbb{N}\) so that Lemma 3.4, and in particular Remark 3.5, ensures that the operator \(T_{\mathbf{\lambda},\mathbf{\omega}}\) is reiteratively recurrent and cyclic for these \(\mathbf{\lambda}=(\lambda_{k})_{k\in\mathbb{N}}\) and \(\mathbf{\omega}=(\omega_{k})_{k\in\mathbb{N}}\). The form of our operator is the following
\[T_{\mathbf{\lambda},\mathbf{\omega}}=\begin{pmatrix}A_{1}&&&&\\ &A_{2}&&&(0)\\ &&A_{3}&&\\ &&&\ddots&&\\ &(0)&&&A_{j}&\\ &&&&\ddots\end{pmatrix}\]
where for each \(j\geq 1\) we are denoting by \(A_{j}\) the \(2\times 2\) matrix
\[A_{j}=\begin{pmatrix}\exp\left(2\pi i\cdot\frac{1}{m_{j}^{2}}\right)&\omega_{2 j-1}\\ 0&\exp\left(2\pi i\cdot\frac{2}{m_{j}^{2}}\right)\end{pmatrix} \tag{10}\]
so that \(T_{\mathbf{\lambda},\mathbf{\omega}}\) can be expressed as a direct sum of \(2\)-dimensional cyclic operators \(T_{\mathbf{\lambda},\mathbf{\omega}}=\bigoplus_{j\geq 1}A_{j}\). Our objective is now proving that the set \(\operatorname{RRec}(T_{\mathbf{\lambda},\mathbf{\omega}})\) is meager (see Proposition 3.7 below), and we do it by using the dynamical properties of the matrices \(A_{j}\) (see Lemma 3.6 below).
Indeed, given three complex values \(\mu_{1},\mu_{2},\omega\in\mathbb{C}\) with \(\mu_{1}\neq\mu_{2}\) and \(\omega\neq 0\), then for the \(2\times 2\) matrix
\[A=\begin{pmatrix}\mu_{1}&\omega\\ 0&\mu_{2}\end{pmatrix}\]
it is easily checked, inductively, that the \(n\)-th power of \(A\) for each \(n\in\mathbb{N}\) has the form
\[A^{n}=\begin{pmatrix}\mu_{1}^{n}&\frac{\mu_{1}^{n}-\mu_{2}^{n}}{\mu_{1}-\mu_{ 2}}\cdot\omega\\ 0&\mu_{2}^{n}\end{pmatrix}.\]
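Indeed, the inductive step amounts to the one-line identity (the diagonal entries simply multiply):
\[\big(A\cdot A^{n}\big)(1,2)=\mu_{1}\cdot\frac{\mu_{1}^{n}-\mu_{2}^{n}}{\mu_{1}-\mu_{2}}\cdot\omega+\omega\cdot\mu_{2}^{n}=\frac{\mu_{1}^{n+1}-\mu_{2}^{n+1}}{\mu_{1}-\mu_{2}}\cdot\omega.\]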
This formula allows us to prove the following technical fact, which is the key to completing our objective:
**Lemma 3.6**.: _For each positive integer \(j\geq 1\) and each \(n\in\mathbb{N}\) of the form \(n=\ell\cdot m_{j}^{2}+k\) with \(\ell\geq 0\) and \(m_{j}\leq k\leq m_{j}^{2}-m_{j}\), we have that the coordinate \((1,2)\) of the matrix \(A_{j}^{n}=(A_{j})^{n}\) has modulus_
\[\left|A_{j}^{n}(1,2)\right|=\frac{|\lambda_{2j-1}^{n}-\lambda_{2j}^{n}|}{| \lambda_{2j-1}-\lambda_{2j}|}\cdot|\omega_{2j-1}|\geq\frac{2m_{j}|\omega_{2j- 1}|}{\pi}.\]
Proof.: Given \(j\geq 1\) and any \(n\in\mathbb{N}\) we have that
\[\frac{\left|A_{j}^{n}(1,2)\right|}{|\omega_{2j-1}|}=\frac{\left|\lambda_{2j-1 }^{n}-\lambda_{2j}^{n}\right|}{|\lambda_{2j-1}-\lambda_{2j}|}=\frac{\left| \exp\left(2\pi ni\cdot\frac{1}{m_{j}^{2}}\right)-1\right|}{\left|\exp\left(2 \pi i\cdot\frac{1}{m_{j}^{2}}\right)-1\right|}=\frac{\left|\sin\left(\frac{ \pi n}{m_{j}^{2}}\right)\right|}{\left|\sin\left(\frac{\pi}{m_{j}^{2}}\right) \right|}\geq\frac{m_{j}^{2}}{\pi}\cdot\left|\sin\left(\frac{\pi n}{m_{j}^{2}} \right)\right|,\]
where we have used that \(|\sin(\theta)|\leq|\theta|\) for every \(\theta\in\mathbb{R}\). If now we let \(n=\ell\cdot m_{j}^{2}+k\) with \(\ell\geq 0\) and \(m_{j}\leq k\leq m_{j}^{2}/2\), then
\[\left|\sin\left(\frac{\pi n}{m_{j}^{2}}\right)\right|=\left|\sin\left(\frac{ \pi k}{m_{j}^{2}}\right)\right|\geq\frac{2k}{m_{j}^{2}}\geq\frac{2}{m_{j}},\]
by using that \(\sin(\theta)\geq\frac{2}{\pi}\theta\) for each \(\theta\in[0,\frac{\pi}{2}]\), so that
\[\left|A_{j}^{n}(1,2)\right|\geq\frac{m_{j}^{2}}{\pi}\cdot\left|\sin\left(\frac{\pi n}{m_{j}^{2}}\right)\right|\cdot|\omega_{2j-1}|\geq\frac{2m_{j}|\omega_{2j-1}|}{\pi}, \tag{11}\]
for each \(n=\ell\cdot m_{j}^{2}+k\) with \(\ell\geq 0\) and \(m_{j}\leq k\leq m_{j}^{2}/2\). Since the sinus function is symmetric with respect to \(\frac{\pi}{2}\) we deduce that (11) also holds whenever \(m_{j}^{2}/2\leq k\leq m_{j}^{2}-m_{j}\).
**Proposition 3.7**.: _The set \(\mathrm{RRec}(T_{\boldsymbol{\lambda},\boldsymbol{\omega}})\) is meager._
Proof.: It is enough to show that there exists a co-meager set \(G\subset X\) such that \(G\cap\mathrm{RRec}(T_{\boldsymbol{\lambda},\boldsymbol{\omega}})=\emptyset\). The proof is an adaptation of the argument used in [6, Example 2.4]. Let
\[G:=\left\{x\in X\ ;\ |\langle e_{2j}^{*},x\rangle|>\tfrac{1}{m_{j}|\omega_{2j-1} |}\ \text{for infinitely many}\ j\in\mathbb{N}\right\}.\]
By condition (b) on the sequence \((m_{j})_{j\in\mathbb{N}}\) and using that \(c_{00}\) is dense in \(X\), this set \(G\) can be written as the intersection of countably many dense open subsets
\[G=\bigcap_{N\in\mathbb{N}}\bigcup_{j\geq N}\left\{x\in X\ ;\ |\langle e_{2j}^{*},x \rangle|>\tfrac{1}{m_{j}|\omega_{2j-1}|}\right\},\]
which shows that \(G\) is a dense \(G_{\delta}\)-set, and hence co-meager.
Fix a vector \(x\in G\), let \(\varepsilon=\frac{2}{3\pi}\) and consider the neighbourhood \(U=\{y\in X\;;\;\|y-x\|<\frac{\varepsilon}{K}\}\) of \(x\). Note that since \(c_{00}\) is dense in \(X\) there is some \(k_{0}\in\mathbb{N}\) such that \(|\langle e_{k}^{*},x\rangle|<\varepsilon\) for every \(k\geq k_{0}\). Thus, by (3), we have that if \(y\in U\) then \(|\langle e_{k}^{*},y\rangle|<2\varepsilon\) for every \(k\geq k_{0}\). Moreover, since \(x\in G\), there exists an infinite set \(J\subset\mathbb{N}\) such that
\[2j-1\geq k_{0}\quad\text{ and }\quad|\langle e_{2j}^{*},x\rangle|>\frac{1}{m_{j} |\omega_{2j-1}|}\quad\text{ for each }j\in J.\]
Now, for each \(j\geq 1\) and \(n\in\mathbb{N}\), the density of \(c_{00}\) in \(X\) shows that
\[\langle e_{2j-1}^{*},T_{\mathbf{\lambda},\mathbf{\omega}}^{n}x\rangle=A_{j}^{n}(1,2) \cdot\langle e_{2j}^{*},x\rangle+\lambda_{2j-1}^{n}\cdot\langle e_{2j-1}^{*}, x\rangle,\]
so that if \(j\in J\) and \(n=\ell\cdot m_{j}^{2}+k\) for some \(\ell\geq 0\) and \(m_{j}\leq k\leq m_{j}^{2}-m_{j}\) we have that
\[\left|\langle e_{2j-1}^{*},T_{\mathbf{\lambda},\mathbf{\omega}}^{n}x\rangle\right| \geq\left|A_{j}^{n}(1,2)\right|\cdot\left|\langle e_{2j}^{*},x\rangle\right|- \left|\langle e_{2j-1}^{*},x\rangle\right|>\frac{2}{\pi}-\varepsilon=2\varepsilon\]
by Lemma 3.6. Then \(T_{\mathbf{\lambda},\mathbf{\omega}}^{n}x\notin U\) and we have that \(\#(\mathcal{N}_{T_{\mathbf{\lambda},\mathbf{\omega}}}(x,U)\cap[m+1,m+m_{j}^{2}])\leq 2m _{j}\) for every integer \(m\geq 0\). We can now compute the limit
\[\overline{\mathrm{Bd}}(\mathcal{N}_{T_{\mathbf{\lambda},\mathbf{\omega}}}(x,U))=\lim_ {J\ni j\to\infty}\left(\sup_{m\geq 0}\frac{\#(\mathcal{N}_{T_{\mathbf{\lambda},\mathbf{ \omega}}}(x,U)\cap[m+1,m+m_{j}^{2}])}{m_{j}^{2}}\right)\leq\lim_{J\ni j\to \infty}\frac{2m_{j}}{m_{j}^{2}}=0,\]
so \(x\notin\mathrm{RRec}(T_{\mathbf{\lambda},\mathbf{\omega}})\). This shows that the set of reiteratively recurrent vectors for \(T_{\mathbf{\lambda},\mathbf{\omega}}\) is meager.
The **complex** version of Theorem 3.1 is now proved, but the construction can be easily adapted to the **real** case by using a conjugacy argument that we include in the following lines: if \(X\) is any real separable infinite-dimensional Banach space, and we again let \((e_{k},e_{k}^{*})_{k\in\mathbb{N}}\) be a biorthogonal sequence with the properties stated in Subsection 1.1, we can consider the linear map
\[T=\begin{pmatrix}B_{1}&&&&\\ &B_{2}&&&(0)\\ &&B_{3}&&&\\ &&&\ddots&&\\ &(0)&&&B_{j}&\\ &&&&\ddots\end{pmatrix}:c_{00}\longrightarrow c_{00}\]
where for each \(j\geq 1\) we are denoting by \(B_{j}\) the \(4\times 4\) matrix
\[B_{j}=\begin{pmatrix}\cos\left(2\pi\cdot\frac{1}{m_{j}^{2}}\right)&-\sin\left( 2\pi\cdot\frac{1}{m_{j}^{2}}\right)&\mathrm{Re}(\omega_{2j-1})&-\mathrm{Im}( \omega_{2j-1})\\ \sin\left(2\pi\cdot\frac{1}{m_{j}^{2}}\right)&\cos\left(2\pi\cdot\frac{1}{m_{j }^{2}}\right)&\mathrm{Im}(\omega_{2j-1})&\mathrm{Re}(\omega_{2j-1})\\ 0&0&\cos\left(2\pi\cdot\frac{2}{m_{j}^{2}}\right)&-\sin\left(2\pi\cdot\frac{2} {m_{j}^{2}}\right)\\ 0&0&\sin\left(2\pi\cdot\frac{2}{m_{j}^{2}}\right)&\cos\left(2\pi\cdot\frac{2} {m_{j}^{2}}\right)\end{pmatrix}. \tag{12}\]
If the sequence of non-zero complex numbers \((\omega_{2j-1})_{j\in\mathbb{N}}\) decreases fast enough, and if \((m_{j})_{j\in\mathbb{N}}\) satisfies condition (a) from Subsection 3.2, then the map \(T\) extends continuously to an operator acting on \(X\), still denoted by \(T\). We can then show that \(T\) is reiteratively recurrent, that if \(m_{1}>2\) then \(T\) is cyclic, and that if also condition (b) from Subsection 3.2 is satisfied then \(\mathrm{RRec}(T)\) is a meager set.
Indeed, if following Lemma 3.4 we set \(X_{N}:=\mathrm{span}\{e_{k}\ ;\ 1\leq k\leq N\}\) for each \(N\in\mathbb{N}\), then for every positive integer \(j\in\mathbb{N}\) we can define the homeomorphism
\[\phi_{j}:X_{4j}\longrightarrow\widetilde{X_{2j}}\,\quad(\langle e_{k}^{*},x \rangle)_{k=1}^{4j}\longmapsto\left(\langle e_{2k-1}^{*},x\rangle+i\langle e_{ 2k}^{*},x\rangle\right)_{k=1}^{2j}\,\]
where \(\widetilde{X_{2j}}:=X_{2j}+iX_{2j}\) denotes the standard _complexification_ of the real finite-dimensional Banach subspace \(X_{2j}=\mathrm{span}\{e_{k}\ ;\ 1\leq k\leq 2j\}\subset X\) (see [20]), and it is trivial to check that
\[\phi_{j}\circ T_{4j}=T_{\boldsymbol{\lambda},\boldsymbol{\omega},2j}\circ \phi_{j}\]
for every \(j\in\mathbb{N}\), where
\[T_{4j}=\begin{pmatrix}B_{1}&&&&\\ &B_{2}&&(0)\\ &&B_{3}&&\\ &(0)&&\ddots&\\ &&&B_{j}\end{pmatrix}:X_{4j}\longrightarrow X_{4j},\ T_{\boldsymbol{\lambda}, \boldsymbol{\omega},2j}=\begin{pmatrix}A_{1}&&&&\\ &A_{2}&&(0)\\ &&A_{3}&&\\ &(0)&&\ddots&\\ &&&A_{j}\end{pmatrix}:\widetilde{X_{2j}}\longrightarrow\widetilde{X_{2j}},\]
and where each \(A_{j}\) is the matrix described in (10). The previous relations and equalities show that each homeomorphism \(\phi_{j}\) is a _conjugacy of dynamical systems_ (see [17, Definition 1.5]) between the finite-dimensional systems \(T_{4j}\) and \(T_{\boldsymbol{\lambda},\boldsymbol{\omega},2j}\) for each \(j\in\mathbb{N}\). We claim that:
* **The equality \(X_{4j}=\mathrm{RRec}(T_{4j})\) holds for every \(j\in\mathbb{N}\)**: by Lemma 3.4 we know that \[\widetilde{X_{2j}}=\mathrm{RRec}(T_{\boldsymbol{\lambda},\boldsymbol{\omega},2j}),\] for every \(j\in\mathbb{N}\), and a standard conjugacy argument completes the statement (see [15, Lemma 4.14]).
* **If \(m_{1}>2\) then every \(T_{4j}\) has a dense set of cyclic vectors**: using again that \(T_{4j}\) is dynamically conjugated to \(T_{\boldsymbol{\lambda},\boldsymbol{\omega},2j}\), and hence it is also dynamically conjugated to the diagonal matrix \[D_{2j}=\mathrm{Diag}(\lambda_{1},...,\lambda_{2j}):\widetilde{X_{2j}} \longrightarrow\widetilde{X_{2j}},\] it is enough to show that \(D_{2j}\) admits a dense set of vectors that are cyclic with respect to the polynomials with real coefficients. We know that the subset of vectors for which every component is a non-zero complex value is dense in \(\widetilde{X_{2j}}\), and we can show that these vectors are cyclic with respect to the real polynomials. Indeed, suppose that given such a vector \(x\) there was a non-zero polynomial \(p\) with real coefficients and of degree less or equal to \(4j-1\) such that \(p(D_{2j})x=0\). We would then have that \(p(\lambda_{k})=0\) for every \(1\leq k\leq 2j\). However, since \(p\) has real coefficients, then also the conjugate value of every \(\lambda_{k}\) is a root of \(p\). Therefore, since all the \(\lambda_{k}\) and \(\overline{\lambda_{k}}\) are different because we assumed \(m_{1}>2\), this contradicts the maximum number of roots that \(p\) can have.
Reasoning as in Lemma 3.4 we obtain that the whole real-linear operator \(T:X\longrightarrow X\) is reiteratively recurrent and cyclic. To show that \(\mathrm{RRec}(T)\) is a meager set one can consider in \(X\) the dense \(G_{\delta}\)-set
\[G:=\left\{x\in X\ ;\ |\langle e_{4j}^{*},x\rangle|>\tfrac{1}{m_{j}|\omega_{2j- 1}|}\ \text{for infinitely many}\ j\in\mathbb{N}\right\}\]
and then check that \(G\cap\mathrm{RRec}(T)=\emptyset\) by using Lemma 3.6 as in Proposition 3.7. Note that this reasoning follows from the key fact that the matrices \(A_{j}\) as described in (10) are finite-dimensional systems dynamically conjugated to the matrices \(B_{j}\) as described in (12).
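For the sake of clarity we make the block computation behind this conjugacy explicit. Writing, as suggested by (12) and by the formula for \(\langle e_{2j-1}^{*},T_{\boldsymbol{\lambda},\boldsymbol{\omega}}^{n}x\rangle\) displayed above, the \(j\)-th complex block as the upper-triangular matrix \(A_{j}\) with diagonal entries \(e^{2\pi i/m_{j}^{2}},\ e^{2\pi i\cdot 2/m_{j}^{2}}\) and top-right entry \(\omega_{2j-1}\), the identity reduces block by block to the elementary fact that, under the identification \((x,y)\mapsto x+iy\), multiplication by a complex number \(a+ib\) acts as the real \(2\times 2\) matrix with rows \((a,-b)\) and \((b,a)\). Indeed, denoting by \(\phi\) the \(j\)-th block of \(\phi_{j}\), for \(x=(x_{1},x_{2},x_{3},x_{4})\) we get
\[\phi\big(B_{j}x\big)=\begin{pmatrix}e^{2\pi i/m_{j}^{2}}(x_{1}+ix_{2})+\omega_{2j-1}(x_{3}+ix_{4})\\ e^{2\pi i\cdot 2/m_{j}^{2}}(x_{3}+ix_{4})\end{pmatrix}=A_{j}\,\phi(x),\qquad\phi(x):=\begin{pmatrix}x_{1}+ix_{2}\\ x_{3}+ix_{4}\end{pmatrix},\]
which is exactly the claimed conjugacy on each block.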
We conclude that every (real or complex) separable infinite-dimensional Banach space supports a reiteratively recurrent and cyclic operator whose set of reiteratively recurrent vectors is meager, and we have finally completely solved Question 1.2.
## Final comments and open problems
In Section 2 we have exhibited the existence of recurrent operators whose set of recurrent vectors is not dense lineable in every separable infinite-dimensional Banach space. As we comment in Remark 2.4 the same examples are valid for the slightly stronger notion of \(\mathcal{AP}\)-recurrence. However, the general dense lineability property for other notions such as reiterative, \(\mathcal{U}\)-frequent, frequent or uniform recurrence, is still unknown. Let us introduce these properties and comment on the respective problems:
**Definition 4.1** ([6]).: Let \(T:X\longrightarrow X\) be a continuous linear operator acting on a Banach space \(X\). A vector \(x\in X\) is called
* _uniformly recurrent for_ \(T\) if for any neighbourhood \(U\) of \(x\) the return set \[\mathcal{N}_{T}(x,U)=\{n\geq 1\ ;\ T^{n}x\in U\}\] has bounded gaps, that is, there exists \(m_{U}\in\mathbb{N}\) such that \(\mathcal{N}_{T}(x,U)\cap[n,n+m_{U}]\neq\emptyset\) for all \(n\in\mathbb{N}\). The set of such vectors is denoted by \(\mathrm{URec}(T)\) and \(T\) is called a _uniformly recurrent operator_ if such a set is dense in \(X\).
* _frequently recurrent for_ \(T\) if for any neighbourhood \(U\) of \(x\) the return set \(\mathcal{N}_{T}(x,U)\), defined as before, has positive lower density, that is, \[\underline{\mathrm{dens}}(\mathcal{N}_{T}(x,U))=\liminf_{N\to\infty}\frac{ \#(\mathcal{N}_{T}(x,U)\cap[1,N])}{N}>0.\] The set of such vectors is denoted by \(\mathrm{FRec}(T)\) and \(T\) is called a _frequently recurrent operator_ if such a set is dense in \(X\).
* \(\mathcal{U}\)_-frequently recurrent for_ \(T\) if for any neighbourhood \(U\) of \(x\) the return set \(\mathcal{N}_{T}(x,U)\), defined as before, has positive upper density, that is, \[\overline{\mathrm{dens}}(\mathcal{N}_{T}(x,U))=\limsup_{N\to\infty}\frac{\#( \mathcal{N}_{T}(x,U)\cap[1,N])}{N}>0.\] The set of such vectors is denoted by \(\mathrm{UFRec}(T)\) and \(T\) is called a _\(\mathcal{U}\)-frequently recurrent operator_ if such a set is dense in \(X\).
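To illustrate how these densities can differ at the level of return sets, consider for instance the set \(A=\bigcup_{k\geq 1}\big[(2k)!,(2k+1)!\big]\cap\mathbb{N}\). A direct computation gives
\[\overline{\mathrm{dens}}(A)\geq\limsup_{k\to\infty}\frac{(2k+1)!-(2k)!}{(2k+1)!}=1\quad\text{ while }\quad\underline{\mathrm{dens}}(A)\leq\liminf_{k\to\infty}\frac{\#(A\cap[1,(2k)!])}{(2k)!}\leq\liminf_{k\to\infty}\frac{2\,(2k-1)!+1}{(2k)!}=0,\]
and \(A\) clearly fails to have bounded gaps, so a return set of this form is compatible with \(\mathcal{U}\)-frequent recurrence while witnessing neither frequent nor uniform recurrence.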
As we were mentioning before, the following are open problems:
**Problems 4.2**.: Let \(T:X\longrightarrow X\) be a continuous linear operator acting on a Banach space \(X\):
* If \(T\) is reiteratively recurrent, is \(\mathrm{RRec}(T)\) dense lineable?
* If \(T\) is \(\mathcal{U}\)-frequently recurrent, is \(\mathrm{UFRec}(T)\) dense lineable?
* If \(T\) is frequently recurrent, is \(\mathrm{FRec}(T)\) dense lineable?
* If \(T\) is uniformly recurrent, is \(\mathrm{URec}(T)\) dense lineable?
In [6] it is shown that \(\mathrm{URec}(T)\subset\mathrm{FRec}(T)\subset\mathrm{UFRec}(T)\subset \mathrm{RRec}(T)\subset\mathcal{AP}\mathrm{Rec}(T)\subset\mathrm{Rec}(T)\) for every operator \(T\), so one may wonder whether the examples exhibited in Section 2 can answer these questions in the negative. However, if \(\mathrm{RRec}(T)\) is a dense set then \(T\) is quasi-rigid and hence \(\mathrm{Rec}(T)\) is dense lineable by [15, Proposition 6.2], so that a similar construction/argument to that used here does not apply.
About the strongest recurrence notion between those introduced, namely _uniform recurrence_, we know that when the underlying space \(X\) is Hilbert then the set \(\mathrm{URec}(T)\) is dense lineable as soon as \(T\) is uniformly recurrent. This follows from the inclusion \(\mathrm{span}(\mathcal{E}(T))\subset\mathrm{URec}(T)\), which holds for every operator \(T\), together with the fact that
\[\overline{\mathrm{span}(\mathcal{E}(T))}=\overline{\mathrm{URec}(T)}\quad\text{ whenever $X$ is a Hilbert space; see [L14, Theorem 1.9].}\]
However, it is not known if the equality \(\overline{\mathrm{span}(\mathcal{E}(T))}=\overline{\mathrm{URec}(T)}\) is true for an arbitrary operator \(T\) acting on a general Banach space \(X\), so for the moment we cannot conclude whether \(\mathrm{URec}(T)\) is always dense lineable outside the Hilbertian setting.
## Funding
The first author was supported by the Spanish Ministerio de Ciencia, Innovacion y Universidades, grant FPU2019/04094; by MCIN/AEI/10.13039/501100011033, Projects PID2019-105011GB-I00 and PID2022-139449NB-I00; and by the "Fundacio Ferran Sunyer i Balaguer". The second author is a Research Associate of the Fonds de la Recherche Scientifique - FNRS.
|
2309.08640 | Higher Order Nyquist Zone Sampling with RFSoC Data Converters for
Astronomical and High Energy Physics Readout Systems | From generation to generation, the maximum RF frequency and sampling rate of
the integrated data converters in RF system-on-chip (RFSoC) family devices from
Xilinx increases significantly. With the integrated digital mixers and up and
down conversion blocks in the datapaths of the data converters, those RFSoC
devices offer the capability for implementing a full readout system of ground
and space-based telescopes and detectors across the electromagnetic spectrum
within the devices with minimum or no analog mixing circuit. In this paper, we
present the characterization results for the data converters sampling at
higher orders of Nyquist zones to extend the frequency range covered for our
targeted readout systems of microwave-frequency resonator-based cryogenic
detector and multiplexer systems and other astronomical and high-energy physics
instrumentation applications, such as, axion search and dark matter detection.
The initial evaluation of the data converters operating higher order Nyquist
zones covers two-tones and comb of tones tests to address the concerns in the
RF inter-modulation distortion, which is the key performance index for our
targeted applications. The characterization of the data converters is performed
in the bandwidth of 4-6 GHz and results meet our requirements. The settings and
operating strategies of the data converters for our targeted applications will
be summarised. | Chao Liu, Zeeshan Ahmed, Shawn W. Henderson, Ryan Herbst, Larry Ruckman | 2023-09-14T23:39:07Z | http://arxiv.org/abs/2309.08640v1 | Higher Order Nyquist Zone Sampling with RFSoC Data Converters for Astronomical and High Energy Physics Readout Systems
###### Abstract
From generation to generation, the maximum RF frequency and sampling rate of the integrated data converters in RF system-on-chip (RFSoC) family devices from Xilinx increases significantly. With the integrated digital mixers and up and down conversion blocks in the datapaths of the data converters, those RFSoC devices offer the capability for implementing a full readout system of ground and space-based telescopes and detectors across the electromagnetic spectrum within the devices with minimum or no analog mixing circuit. In this paper, we present the characterization results for the the data converters sampling at higher orders of Nyquist zones to extend the frequency range covered for our targeted readout systems of microwave-frequency resonator-based cryogenic detector and multiplexer systems and other astronomical and high-energy physics instrumentation applications, such as, axion search and dark matter detection. The initial evaluation of the data converters operating higher order Nyquist zones covers two-tones and comb of tones tests to address the concerns in the RF inter-modulation distortion, which is the key performance index for our targeted applications. The characterization of the data converters is performed in the bandwidth of 4-6 GHz and results meet our requirements. The settings and operating strategies of the data converters for our targeted applications will be summarised.
Microwave, Spectrometry, Data Converter, Sampling, Readout.
## I Introduction
The majority of past applications of RFSoC devices leverage analog RF up- and down-mixing circuits to convert the RF signal to within the first Nyquist zone of the converters. Both other research teams [1, 2] and we [3, 4] have performed comprehensive performance characterization of the integrated data converters in the first Nyquist zone, with both the Xilinx evaluation board and a custom-designed board, and have shown that the signal-to-noise ratio, dynamic range, effective number of bits and inter-modulation distortion (IMD) are within the specifications of our target applications. In this paper, we summarise the evaluation results for the data converters operating in higher order Nyquist zones. Since the IMD level is one of the most critical figures of merit for highly multiplexed readout, the paper focuses on the data converter configurations that have a significant impact on the IMD level.
The prototype of the readout system uses the Xilinx Zynq UltraScale+ RFSoC ZCU208 Evaluation Kit, which carries a Gen 3 RFSoC device with eight 14-bit ADCs sampling at up to 5 GSPS and eight 14-bit DACs sampling at up to 10 GSPS. All the tests described in this paper were performed with the ZCU208.
## II Data Converter Performance Characterization at Higher Order Nyquist Zones
The first step of the characterization is to evaluate the signal generated by the DAC. Figure 1 shows the full test circuit for the DAC performance measurement. In this case the baseband quadrature sequence is loaded into BRAM and streamed to the RF data converter (RFdc) at a data rate of 614.4 MHz. The quadrature sequence is interpolated by a factor of 10 and up-mixed at 4.25 GHz with the integrated blocks in the DAC datapath. In this case, the DAC is sampling at 6.144 GSPS and the carrier frequency is in the second Nyquist zone of the DAC. The signal generated by the DAC is filtered by a bandpass filter before being injected into the spectrum analyzer, a Keysight EXA signal analyzer N9010B with a bandwidth from 10 Hz to 26.5 GHz. The IMD performance of the digital up-conversion and of second Nyquist zone operation is investigated with a two-tone test and with the generation of a comb of tones for our applications.
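As a quick sanity check of the frequency planning described above, the Nyquist zone of a given carrier and the frequency at which it folds into the first zone can be computed directly from the sample rate. The short sketch below is illustrative only; the 6.144 GSPS DAC rate and 4.25 GHz carrier are the example inputs taken from this test.

```python
def nyquist_zone(f_carrier_hz, f_sample_hz):
    """Return the Nyquist zone (1-indexed) containing f_carrier_hz."""
    return int(f_carrier_hz // (f_sample_hz / 2.0)) + 1

def first_zone_alias(f_carrier_hz, f_sample_hz):
    """Frequency at which f_carrier_hz appears when folded into the first Nyquist zone."""
    f = f_carrier_hz % f_sample_hz          # fold by the sample rate
    return f if f <= f_sample_hz / 2.0 else f_sample_hz - f

fs = 6.144e9   # DAC sample rate used in this test
fc = 4.25e9    # carrier frequency of the up-mixed signal
print(nyquist_zone(fc, fs))       # -> 2, i.e. the second Nyquist zone
print(first_zone_alias(fc, fs))   # -> 1.894e9 Hz image in the first zone
```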
Fig. 1: DAC performance evaluation setup.
Fig. 2: FFT of two-tone sequence loaded to the DAC.
### _Two-tone Test_
The two-tone test is the most common test to evaluate the IMD performance of a radio-frequency electronic system. Figure 2 shows the FFT of the two-tone baseband sequence loaded into BRAM for testing. The first tone is at around 50 MHz and the second is 2.4 MHz higher.
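A minimal sketch of how such a two-tone baseband stimulus can be generated and inspected in software is given below. The tone frequencies (about 50 MHz and 2.4 MHz above it) and the 614.4 MHz baseband data rate follow the description above, while the sequence length and amplitudes are illustrative choices. In the measured DAC output, the third-order intermodulation products of a tone pair at \(f_{1}\) and \(f_{2}\) are expected at \(2f_{1}-f_{2}\) and \(2f_{2}-f_{1}\), i.e. 2.4 MHz below and above the pair.

```python
import numpy as np

fs_bb = 614.4e6           # baseband data rate into the DAC datapath
f1, f2 = 50.0e6, 52.4e6   # two tones, 2.4 MHz apart
n = 2 ** 16               # illustrative sequence length
t = np.arange(n) / fs_bb

# Complex (quadrature) two-tone sequence, backed off from full scale
# to leave headroom for the summed peaks.
iq = 0.4 * (np.exp(2j * np.pi * f1 * t) + np.exp(2j * np.pi * f2 * t))

# Inspect the baseband spectrum; in the analog output, IMD3 products
# would show up at 2*f1 - f2 (47.6 MHz) and 2*f2 - f1 (54.8 MHz).
spec = 20 * np.log10(np.abs(np.fft.fftshift(np.fft.fft(iq))) + 1e-12)
freqs = np.fft.fftshift(np.fft.fftfreq(n, d=1.0 / fs_bb))
print(freqs[np.argmax(spec)])   # close to f1 or f2
```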
## III DAC Performance Characterization
As the signal generated by the DAC lies in the second Nyquist zone, the DAC has been configured in RF mix mode. The RF mix mode adopts the pulse shape with the second half inverted, which maximizes the energy in the second Nyquist zone. The decoder mode of the DAC is also a critical setting for achieving a low intermodulation level. Figures 3 and 4 show the spectra measured by the signal analyzer for the DAC output in the two decoder modes, SNR optimized and high linearity. The DAC consists of unit current cells that are turned on and off to reproduce the corresponding digital code. The current cells all have systematic static DC errors, and a fixed selection of current cells can result in higher intermodulation products. The high linearity mode randomizes the selection of current cells to spread the error over the whole Nyquist band. Comparing Figures 3 and 4, the third order and higher order products are reduced by at least 7.3 dB without a significant increase in the noise floor level. The high linearity mode achieved 76 dB of clearance between the tones and the intermodulation products and should be used for IMD-sensitive applications. The signal amplitude and the DAC current level need to be set to operate 50% below the maximum rate to achieve the optimum IMD level.
### _Comb of Tones Test_
The RFSoC-based readout between 4-6 GHz needs to be divided into four 500 MHz blocks, following the previous-generation readout system. In this test, two of the blocks, centred at 4.25 GHz and 5.25 GHz, have been generated by the DAC. There are 209 tones with spacings randomized around 2.4 MHz in this test. From the spectrum captured by the signal analyzer shown in Figure 5, the two blocks are generated with a low level of intermodulation products and little leakage into other bands.
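A sketch of how such a comb with randomized tone spacing can be built in software is shown below. The 2.4 MHz nominal spacing and the tone count per block follow the description above, while the randomization range, phases and normalization are illustrative assumptions rather than the exact settings used in the test.

```python
import numpy as np

rng = np.random.default_rng(0)
fs_bb = 614.4e6                  # baseband data rate
n_tones = 209                    # tones per 500 MHz readout block
df = 2.4e6                       # nominal tone spacing
# Randomize each spacing slightly around the nominal value (illustrative +/-5%).
spacings = df * (1.0 + 0.05 * rng.uniform(-1, 1, n_tones))
freqs = np.cumsum(spacings) - 250e6   # centre the ~500 MHz comb on baseband 0 Hz

n = 2 ** 16
t = np.arange(n) / fs_bb
# Random phases keep the peak-to-average ratio manageable when tones sum coherently.
phases = rng.uniform(0, 2 * np.pi, n_tones)
iq = np.zeros(n, dtype=complex)
for f, ph in zip(freqs, phases):
    iq += np.exp(2j * np.pi * f * t + 1j * ph)
iq /= np.max(np.abs(iq))         # normalize before scaling into the DAC range
```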
## IV Conclusions
The IMD level of the integrated DAC datapath with an appropriate configuration is promising for meeting the stringent requirements of highly multiplexed readout. A similar two-tone characterization has also been performed for the integrated ADC datapath in the third Nyquist zone, and it also gives about 75 dB of clearance between the tones and the third order intermodulation product. Therefore, the integrated datapaths in the RFSoC have the potential to be used to realize a readout system between 4-6 GHz, or at even higher frequencies, without any analog RF mixing circuit.
|
2309.03715 | Probing VHE gamma-ray emission from GW events with H.E.S.S | Gravitational wave (GW) events, particularly those connected to the merger of
compact objects such as neutron stars, are believed to be the primary source of
short gamma-ray bursts. To explore the very high energy (VHE) component of the
emission from these events, the H.E.S.S. collaboration has dedicated a
substantial effort and observing time to follow up on these events. During the
second and third GW observing runs, H.E.S.S. was the first ground-based
instrument to observe the GW170817 binary neutron star merger. In addition,
H.E.S.S. followed four binary black hole mergers. The data acquired by H.E.S.S.
was used to constrain the VHE emission from these events for the first time.
H.E.S.S. also monitored the GW170817 source for approximately 50 hours and
obtained limits that constrained the magnetic field in the merger remnant to $>
24 \mu G$. As the fourth GW observing run (O4) approaches, the H.E.S.S.
collaboration has allocated significant observation time to the follow-up of GW
events. This contribution provides an overview of the science results derived
from the H.E.S.S. follow-up of GW events, a technical overview of the GW
follow-up strategies for O4, and an update on H.E.S.S. activities during O4. | Halim Ashkar, Mathieu de Bony de Lavergne, Francois Brun, Stephen Fegan, Ruslan Konno, Stefan Ohm, Heike Prokoph, Fabian Schüssler, Sylvia J Zhu | 2023-09-07T13:45:30Z | http://arxiv.org/abs/2309.03715v1 | # Probing VHE gamma-ray emission from GW events with H.E.S.S.
###### Abstract:
Gravitational wave (GW) events, particularly those connected to the merger of compact objects such as neutron stars, are believed to be the primary source of short gamma-ray bursts. To explore the very high energy (VHE) component of the emission from these events, the H.E.S.S. collaboration has dedicated a substantial effort and observing time to follow up on these events. During the second and third GW observing runs, H.E.S.S. was the first ground-based instrument to observe the GW170817 binary neutron star merger. In addition, H.E.S.S. followed four binary black hole mergers. The data acquired by H.E.S.S. was used to constrain the VHE emission from these events for the first time. H.E.S.S. also monitored the GW170817 source for approximately 50 hours and obtained limits that constrained the magnetic field in the merger remnant to \(>24\mu G\). As the fourth GW observing run (O4) approaches, the H.E.S.S. collaboration has allocated significant observation time to the follow-up of GW events. This contribution provides an overview of the science results derived from the H.E.S.S. follow-up of GW events, a technical overview of the GW follow-up strategies for O4, and an update on H.E.S.S. activities during O4.
## 1 Introduction
Gravitational wave (GW) events, such as the mergers of compact objects, are a source of interest to high-energy astronomers for probing the highest energy photons in the gamma-ray domain. Mergers involving neutron stars are known to be responsible for a significant portion of gamma-ray bursts (GRBs), notably short GRBs. Probing the very high energy (VHE) emission from these cataclysmic events with imaging atmospheric Cherenkov telescopes (IACTs) such as the High Energy Stereoscopic System (H.E.S.S.) brings information on the non-thermal emission processes creating the highest energy photons, the particle acceleration mechanisms in extreme magnetic fields, and the properties of the merger remnant.
GW event detection suffers from poor localization, as the localization regions can span tens to thousands of square degrees in the sky. Due to their low duty cycle (around 10%), IACTs struggle with the low-latency follow-up of such targets of opportunity (ToOs), as they can only observe in quasi-total darkness. However, thanks to their medium-to-large field of view (FoV), they have an advantage over small-FoV instruments, as they can probe large regions of the sky at once, and they have superior sensitivity compared to large-FoV space-based gamma-ray surveying instruments.
In this contribution, we present an overview of the H.E.S.S. GW program from its beginning before the second GW observing run O2 until today. In Sec. 2 the H.E.S.S. GW follow-up observation strategies are described. In Sec. 3 the H.E.S.S. follow-up observations of GW events during O2 and O3 and their implications are presented. Finally, Sec. 4 outlines the preparations and the H.E.S.S. activities during the fourth observing run O4.
## 2 H.E.S.S. GW follow-up strategy
To efficiently cover GW events and increase the chances of catching the source (and the VHE counterparts) as fast as possible in the large GW localization regions, the H.E.S.S. collaboration has dedicated extensive efforts to the development and optimization of GW follow-up strategies.
### Science cases
The aim of the H.E.S.S. follow-up observations of GW events is to probe the VHE emission arising from particle acceleration in GRBs produced by the merger of compact objects. Binary neutron star (BNS) mergers are prime candidates for producing such GRBs, as demonstrated in 2017 by the joint detection of GW170817 and GRB 170817A [1]. Not only BNS mergers are expected to produce GRBs, but also black hole-neutron star (BHNS) mergers [2]. Therefore, BNS mergers, BHNS mergers and MassGap events (involving an object falling in the gap between the maximum neutron star mass and the minimum black hole mass) are considered in one science case and are given the same priority. Since these cases have a high scientific yield, the follow-up criteria for this science case are loose: a follow-up observation that fulfills the H.E.S.S. observation and visibility conditions (darkness or moderate moonlight, with a maximum 60 deg zenith angle) and covers more than 10% of the localization region within 24 hours of the event is considered interesting enough for H.E.S.S. to spend observing time on.
In the case of BBH mergers, electromagnetic emission is not highly anticipated. However, a hint of a high energy transient coincident with GW150914 has previously sparked some interest in the community [3]. Moreover, some extreme scenarios of BBH mergers predict the emission
of electromagnetic waves [4, 5, 6, 7]. Since it is important to verify these hypotheses, the H.E.S.S. collaboration also considers BBH mergers and follows GW events emanating from such sources. In that case, good coverage of the localization region is required, to maximize the chances of covering the GW source. This allows us to efficiently reject the emission of VHE gamma rays and to place stringent upper limits in a non-detection case. Therefore, in addition to having good observation and visibility conditions, the coverage requirement to follow BBH mergers is above 50%.
Burst alerts are GW events from non-modelled sources such as nearby asymmetric supernovae. The sources of Burst alerts can be extremely interesting for IACTs due to their close distances. However, the false alert rate of Burst events is higher than for compact binary mergers. They fall in the domain of exploratory searches, and their priority lies between that of neutron star mergers and that of BBH mergers. The requirement for Burst follow-up with H.E.S.S. is a coverage of more than 20%.
All alerts should have a probability of Terrestrial origin lower than 50% in order to trigger a H.E.S.S. response.
### Observation strategy
To efficiently cover GW event localization regions, the H.E.S.S. collaboration developed two strategies to optimize the scheduling of GW follow-up observations [8]. The first strategy (2D strategy) only takes into consideration the probability information contained in the first layer of the probability maps provided by the LVKC. The 90% GW localization region is divided into smaller regions matching the telescope FoV, and, taking advantage of the H.E.S.S. FoV, the probability is integrated over each of them. At a given observation time fulfilling the H.E.S.S. observation conditions, the FoV covering the region with the largest integrated probability of hosting the event has the highest priority to be observed at that time. The observed region is then masked, and in the next observation window the same integrated probability computation is repeated with the remaining regions.
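A minimal sketch of this greedy 2D selection is given below, using healpy to integrate the localization probability inside a circular approximation of the H.E.S.S. FoV. The FoV radius, map resolution, number of pointings and brute-force search over candidate centres are illustrative simplifications, not the operational configuration, and visibility constraints are omitted.

```python
import numpy as np
import healpy as hp

def greedy_2d_schedule(prob_map, fov_radius_deg=1.5, n_pointings=5):
    """Greedily pick FoV centres that maximize the enclosed GW probability."""
    nside = hp.npix2nside(len(prob_map))
    prob = prob_map.copy()
    pointings = []
    for _ in range(n_pointings):
        best_pix, best_p, best_disc = None, -1.0, None
        # Test every pixel that still carries probability as a candidate FoV centre.
        for pix in np.flatnonzero(prob > 0):
            vec = hp.pix2vec(nside, pix)
            disc = hp.query_disc(nside, vec, np.radians(fov_radius_deg))
            p = prob[disc].sum()
            if p > best_p:
                best_pix, best_p, best_disc = pix, p, disc
        lon, lat = hp.pix2ang(nside, best_pix, lonlat=True)
        pointings.append((lon, lat, best_p))
        prob[best_disc] = 0.0   # mask the observed region before the next window
    return pointings
```

The 3D strategy described next differs only in the quantity being integrated inside each candidate FoV, summing the probabilities assigned to catalogued galaxies instead of the raw pixel probabilities.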
The second strategy (3D) uses in addition the distance information contained in the other 3 layers of the probability maps and the distribution of galaxies in the local Universe using the GLADE galaxy catalog [9]. Here, the galaxies inside the GW localization region, at the distances of the event are assigned a probability. As for the 2D case, several regions in the sky are tested, with the difference that this time the probability of the galaxies hosting the event is integrated inside the H.E.S.S. FoV. The galaxies in the highest probability FoV region are observed. For the next observation window, the same procedure is repeated with the remaining regions/galaxies.
The choice of a 2D or a 3D strategy depends on the GW event localization information. Galaxy catalogs are incomplete at large distances. Taking that into consideration, a threshold of 150 Mpc was applied for the choice of strategy during O3. All events with an average distance below 150 Mpc are observed with a 3D strategy. All events beyond this limit are observed with a 2D strategy. Moreover, since galaxy catalogs are also incomplete around the galactic plane, all GW events that have localization maps with the hotspot located around the galactic plane are observed with a 2D strategy.
The H.E.S.S. response and processing of GW alerts are automated. Human intervention is only needed for checks and irregular updates. Experts on call are provided with tools based on the Tilepy library [10] in case such intervention is needed. A possible case requiring human
intervention is the modification of the GW follow-up schedule taking into consideration observations occurring previous to an incoming update to avoid overlap.
All GW events fulfilling the requirements described above and observable within 24 hours are followed. If the GW event occurs outside observation hours, afterglow-mode observations are scheduled for the upcoming night. In that case, observations will be updated following incoming LVKC updates and can be human-vetted. In the case of a GW event occurring during observation hours and given its priority, a prompt-observations mode is triggered and the telescopes will slew automatically towards the highest-probability region (using a 2D or 3D strategy) visible at the time of arrival of the alert. The condition to trigger such a response is that the probability of hosting the event covered by the prompt observation be higher than 5% in this single observation. In addition, for the rest of the observations, the overall coverage requirement depends on the science case as mentioned above. EarlyWarning, Preliminary, and Initial GW alerts\({}^{2}\) are subject to this prompt-mode response. Updates are only subject to the afterglow-mode response.
Footnote 2: [https://emfollow.docs.ligo.org/userguide/content.html](https://emfollow.docs.ligo.org/userguide/content.html)
During the observation, if the real-time analysis finds a hotspot indicating a significant excess of VHE gamma-rays coming from a region that is not associated with any known source, the hotspot is observed again as it might be a GW VHE counterpart candidate.
## 3 H.E.S.S. GW follow-up during O2 and O3
The first H.E.S.S. follow-up of a GW event occurred during O2, on GW170502. During O3, H.E.S.S. observed GW200105. However, in both cases the H.E.S.S. coverage of the localization region was low: in the first case the localization region was large and it was the first H.E.S.S. trial, and in the second case the localization region shifted significantly after an event update. Therefore, the follow-up of these events is not featured here. Five more events were observed with H.E.S.S. with good coverage exceeding 50%: the BNS merger GW170817 and the BBH mergers GW170814, GW190512, GW190728, and GW200224.
### GW170817 observations with H.E.S.S.
In August 2017, GW170817 was detected emanating from a BNS merger followed by a short GRB detected 2 seconds later by Fermi-GBM and INTEGRAL. H.E.S.S. used the updated localization region, distributed a few minutes before the beginning of astronomical dark time on the H.E.S.S. site in Namibia, and derived an optimized follow-up schedule that contained 3 positions to be observed during the 1.5 hours-long visibility window. Six hours after the observations started, the optical counterpart [11] was discovered in the NGC 4993 galaxy. It turned out that the first H.E.S.S. observation on GW170817 covered NGC 4993, making the data acquired by H.E.S.S. the earliest ground-based data taken on the source. The following night, H.E.S.S. observed the source for a total of 3.2 hours and continued monitoring for several days afterward. The data analysis did not show any significant detection of VHE gamma-rays in the direction of NGC 4993 [12]. These H.E.S.S. observations permitted to place the first stringent constraint on VHE emission from BNS mergers. The merger occurred at an off-axis viewing angle. The H.E.S.S. limits, later on, helped constrain the off-axis viewing angle of the jet to a value larger than 15 deg [13].
Nine and sixteen days after the merger, the radio and X-ray synchrotron emission from the source started rising as the opening angle of the jet increased. The acceleration of particles in the merger remnants is believed to be suitable for synchrotron self-Compton (SSC) emission, where the high-energy electrons accelerated in the magnetic field upscatter the synchrotron photons created by the same electron population to VHE energies. H.E.S.S. performed a long-term follow-up observation campaign on SSS17a [15]. The observation campaign gathered 53.9 hours of data over \(\sim\)5 months. The analysis did not show any significant VHE emission. The upper limits on the SSC emission are transformed into limits on the strength of the magnetic field of the merger remnant. The synchrotron component brings information on the energy density of the electrons and the magnetic field, but cannot disentangle the two. The SSC component can break the ambiguity. The magnetic field is constrained to \(B>24\)\(\mu G\) for an off-axis relativistic jet.
Figure 1: Gamma-ray lightcurve generated by external inverse Compton radiation for emission at 100 and 250 \(\mathrm{GeV}\) considering viewing angles of 0 and 15 degrees with upper limits on the VHE spectrum derived from H.E.S.S. observation on GW170817. From [14], adapted from [13].
Figure 2: Spectral energy distribution of EM170817 for the non-relativistic (blue lines) and relativistic (red lines) scenarios. The blue and red dots correspond respectively to the X-ray and radio measurements. The green dots represent the H.E.S.S. derived upper limits. The solid and dashed lines correspond respectively to the minimum and maximum X-ray emission. From [15].
### BBH merger observations with H.E.S.S.
In addition to GW170817, H.E.S.S. also followed up on four BBH mergers: GW170814, GW190512, GW190728, and GW200224. With the 2D strategy, the H.E.S.S. coverage exceeded 50% for all these events [16]. Since no counterparts were detected for these events, the VHE analysis concentrated on searching for VHE signals in all the observed areas. No significant VHE emission was found. Upper limit maps on the VHE energy flux from these events are published. In addition, taking into account the GW event distance estimation, the VHE luminosity of these events is constrained\({}^{3}\). These upper limits are then compared to extrapolations of Fermi-LAT detected GRBs into the VHE domain, as shown in Fig. 3. These comparisons show that the H.E.S.S. observations constrain the VHE luminosity of these events well, since the limits placed coincide with the bulk of the extrapolated GRB VHE luminosities. To significantly increase the chances of detecting a VHE counterpart, the main focus should be on obtaining observations earlier after the time of the merger, something the H.E.S.S. collaboration has no control over beyond optimizing its observation strategies. However, this can be achieved with an increased detection rate and better localization in upcoming GW observing runs.
Footnote 3: [https://www.mpi-hd.mpg.de/hfm/HESS/pages/publications/auxiliary/2021_BBH_02_03/](https://www.mpi-hd.mpg.de/hfm/HESS/pages/publications/auxiliary/2021_BBH_02_03/)
## 4 H.E.S.S. GW follow-up during O4
The lessons learned in the first three GW observing runs about the importance of the VHE domain in the search for GW electromagnetic counterparts led to GW observations becoming the top priority of the H.E.S.S. collaboration. For O4, the collaboration allocated as much as 20% of its observing time to GW follow-ups. The strategies used in O2 and O3 have been adapted for O4. These adaptations mostly comprised updating the code to the new conventions adopted by the LVKC collaboration and by the different brokers distributing the GW VoEvents. The most notable change is the increase of the horizon at which a 3D strategy is used. The value was increased from 150 Mpc to 300 Mpc due to significant improvements in the GLADE catalog. To read the catalog faster, most objects far beyond this horizon are removed. Finally, to test the H.E.S.S. response to GW alerts before the start of O4, firedrills consisting of injecting fake GW alerts into the system were performed. These alerts differ from the Mock alerts sent by LVKC in that they trigger an active telescope response. The Mock alerts are used to continuously monitor the system and the processing of alerts; however, they do not trigger a telescope response as they are marked as "Test" alerts by the H.E.S.S. ToO response system [17]. The alerts used in the firedrills are based on the GW170817 event and successfully triggered the telescope afterglow and prompt-mode responses. With the firedrills, the whole chain of processing from the reception of the alert to the real-time analysis was tested and debugged. These firedrills will be complemented throughout O4 by expert-on-call training sessions.
Figure 3: Mean (orange points) and standard deviation (orange bands) of the per-pixel luminosity upper limit maps for the BBH events. They are compared to luminosity extrapolations of Fermi-LAT GRBs (grey lines) with known redshift, the luminosity from H.E.S.S.-detected VHE GRBs, and the H.E.S.S. upper limit on GW170817 (black) [12]. All five GW upper limits are calculated assuming an intrinsic \(E^{-2}\) spectrum, although the upper limit for GW170817 is calculated with a slightly different energy range. From [16].
During O4 and until July 2023, H.E.S.S. observed three GW events, presented in Figure 4, with a coverage of \(\sim\) 10%, \(\sim\) 70%, and \(\sim\) 10%, respectively. S230518h is the first one, received during the engineering run period. It is flagged as an 86% probable BHNS merger with a false alarm rate (FAR) of 1 per 98.463 years and is followed using a mix of 2D and 3D strategies. S230528ay is the second one; it is a Burst alert with a FAR of 22.193 per year and is followed using a 2D strategy.
Figure 4: H.E.S.S. observations of GW candidate events (from top to bottom) S230518h, S230528ay and S230615az. On the left, the GW localization region is displayed with the Earth in the background at the time of the first H.E.S.S. observation. On the right, the H.E.S.S. observations are indicated in a dark circle with the delay, the duration, and the zenith angle. The dots on the bottom plot represent the galaxies at the distance of the event.
S230615az is the third one and is flagged as an 85% probable BNS merger with a FAR of 4.7 per year, and it is followed using a 3D strategy. Although these events are of low significance, they help commission the H.E.S.S. GW follow-up program for O4. Notably, S230615az was received during observation hours, and H.E.S.S. successfully triggered a prompt automatic response to the GW alert. During O3, S200224 was received during the night but the prompt reaction could not be tested because the telescopes were parked due to rain. The S230615az GW event occurred on 2023-06-15 at 17:50:08. The notice was distributed at 17:50:34 and received by the H.E.S.S. ToO response system at 17:50:41. By 17:51:30 the best position that could be observed at the time was computed with a 3D strategy, distributed to the shifters and the expert on call, and forwarded to the telescopes, confirming that the computation time is less than a minute as indicated in [8]. We note that the localization region for this event is significantly large, making the processing time longer than average. The region covered by H.E.S.S. represents \(\sim\)10% of the total galaxy probability. This is because the algorithm targeted a region with a relatively high concentration of galaxies. The telescopes slewed to this position and started observations at 17:52:02, with a total delay of 114 seconds accounting for the distribution of the alert, the H.E.S.S. processing, and the slewing of the telescopes.
|
2309.14269 | Unsupervised correspondence with combined geometric learning and imaging
for radiotherapy applications | The aim of this study was to develop a model to accurately identify
corresponding points between organ segmentations of different patients for
radiotherapy applications. A model for simultaneous correspondence and
interpolation estimation in 3D shapes was trained with head and neck organ
segmentations from planning CT scans. We then extended the original model to
incorporate imaging information using two approaches: 1) extracting features
directly from image patches, and 2) including the mean square error between
patches as part of the loss function. The correspondence and interpolation
performance were evaluated using the geodesic error, chamfer distance and
conformal distortion metrics, as well as distances between anatomical
landmarks. Each of the models produced significantly better correspondences
than the baseline non-rigid registration approach. The original model performed
similarly to the model with direct inclusion of image features. The best
performing model configuration incorporated imaging information as part of the
loss function which produced more anatomically plausible correspondences. We
will use the best performing model to identify corresponding anatomical points
on organs to improve spatial normalisation, an important step in outcome
modelling, or as an initialisation for anatomically informed registrations. All
our code is publicly available at
https://github.com/rrr-uom-projects/Unsup-RT-Corr-Net | Edward G. A. Henderson, Marcel van Herk, Andrew F. Green, Eliana M. Vasquez Osorio | 2023-09-25T16:29:18Z | http://arxiv.org/abs/2309.14269v1 | Unsupervised correspondence with combined geometric learning and imaging for radiotherapy applications
###### Abstract
The aim of this study was to develop a model to accurately identify corresponding points between organ segmentations of different patients for radiotherapy applications. A model for simultaneous correspondence and interpolation estimation in 3D shapes was trained with head and neck organ segmentations from planning CT scans. We then extended the original model to incorporate imaging information using two approaches: 1) extracting features directly from image patches, and 2) including the mean square error between patches as part of the loss function. The correspondence and interpolation performance were evaluated using the geodesic error, chamfer distance and conformal distortion metrics, as well as distances between anatomical landmarks. Each of the models produced significantly better correspondences than the baseline non-rigid registration approach. The original model performed similarly to the model with direct inclusion of image features. The best performing model configuration incorporated imaging information as part of the loss function which produced more anatomically plausible correspondences. We will use the best performing model to identify corresponding anatomical points on organs to improve spatial normalisation, an important step in outcome modelling, or as an initialisation for anatomically informed registrations. All our code is publicly available at [https://github.com/rrr-uom-projects/Unsup-RT-Corr-Net](https://github.com/rrr-uom-projects/Unsup-RT-Corr-Net).
Keywords:correspondence un-supervised learning geometric learning image registration radiotherapy
## 1 Introduction
Radiotherapy is used in the treatment of \(\sim 80\%\) of Head and Neck (HN) cancer patients [19]. Treatments are planned on a patient's computed tomography (CT) scan, where the tumour and the organs-at-risk are segmented. These segmentations are also used to establish dose-effect relationships which are ultimately
used to improve radiotherapy practice. Modern techniques which allow the investigation of sub-volume dose effects rely on spatial normalisation to map the dose distributions between patients [15]. Examples of such associations in HN radiotherapy include radiation dose to the base of the brainstem and late dysphagia (problems swallowing) [20], and dose to the masseter muscle and trismus (limited jaw movement) [2]. In these examples, the authors used intensity-based non-rigid image registration (NRR) to indirectly establish the correspondence of the anatomy between different patients. Improved spatial normalisation, using point-wise correspondences rather than NRR algorithms, would reduce uncertainties in outcome modelling applications. However, manually annotating pair-wise correspondences is a complex and time consuming task, rendering its practice unfeasible.
Another promising use of correspondences in radiotherapy applications is in the initialisation of structure-guided image registration methods. Currently, spline-based registration relies on estimating correspondence based on distance criteria [21] and more advanced finite-element based models rely on a set of boundary conditions, e.g. based on structure curvature [4]. These structure-based registrations are particularly useful for cases with dramatic changes, such as registration of images before/after an intervention [22] or of images separated by a long time period (e.g. paediatric follow-up or re-irradiation settings). A model that can quickly and accurately identify corresponding points on sets of anatomical structures would be particularly effective for incorporation into other non-rigid image registration frameworks.
The aim of this study was to find a solution to automatically identify corresponding anatomical points on organs for radiotherapy applications. In this study, we took an established model for simultaneous correspondence and interpolation estimation in everyday 3D shapes, _Neuromorph_[5], and retrained it on biomedical data, specifically HN organ segmentations from planning CT scans. It has previously been shown that the performance of geometric learning models for tasks involving radiotherapy organ shapes can be dramatically improved by incorporating the associated CT scan imaging [6], an approach not attempted in previous correspondence literature [9, 13, 17]. Therefore we extended _Neuromorph_ in two ways in an attempt to optimise its performance for this application: 1) by directly complementing geometrical features with learned image features, and 2) by adding a novel imaging loss function component. The performance of our resultant correspondence models were compared to a NRR algorithm currently used for outcome modelling.
## 2 Materials and method
### Dataset
An open-access dataset of 34 head and neck CT scans with segmentations of the brainstem, spinal cord, mandible, parotid and submandibular glands was used for this study [14]. The segmentations are highly consistent and followed
international guidelines, having been produced by an expert and then audited by three observers and a specialist oncologist with at least four years of experience.
### Pre-processing
The CT scans had a \(\sim 2.5\times 1\times 1\)mm voxel spacing and were truncated at the apex of the lungs to ensure consistency in the length of the cervical section of the spinal cord. The marching cubes algorithm was used to generate 3D triangular meshes for each organ, which were then smoothed with ten iterations of Taubin smoothing. The meshes were simplified using quadric decimation to 3000 triangles for each of the organs apart from the submandibular glands which were simplified to 2000 triangles because of their smaller volume. The organ meshes were then optimised by iteratively splitting the longest and collapsing the shortest edges. The CT scans were rigidly aligned to a single reference patient using _SimpleITK_ 2.0.2. The computed transformations were applied directly to the mesh vertices to align the organ shapes thereby avoiding interpolation artefacts.
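A condensed sketch of this meshing pipeline for a single binary organ mask is given below. It assumes scikit-image's marching cubes for surface extraction and uses Open3D (the library named in Section 2.8 for mesh handling) for smoothing and decimation; the iterative edge-splitting/collapsing optimisation step and the rigid alignment are omitted.

```python
import numpy as np
import open3d as o3d
from skimage import measure

def organ_mask_to_mesh(mask, spacing=(2.5, 1.0, 1.0), n_triangles=3000):
    """Binary organ mask (z, y, x) -> smoothed, simplified triangle mesh in mm."""
    # Surface extraction; 'spacing' scales the vertices into millimetres.
    verts, faces, _, _ = measure.marching_cubes(mask.astype(np.uint8), level=0.5,
                                                spacing=spacing)
    mesh = o3d.geometry.TriangleMesh(
        o3d.utility.Vector3dVector(verts),
        o3d.utility.Vector3iVector(faces.astype(np.int32)))
    # Ten iterations of Taubin smoothing, as described above.
    mesh = mesh.filter_smooth_taubin(number_of_iterations=10)
    # Quadric decimation to the target triangle count (3000, or 2000 for the
    # smaller submandibular glands).
    mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=n_triangles)
    mesh.compute_vertex_normals()
    return mesh
```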
### Model
The source model used in this study, _Neuromorph_, was originally presented by Eisenberger et al. [5]. _Neuromorph_ is a geometric learning model which, when given two 3D triangular meshes, predicts corresponding points and a smooth interpolation between the two in a single forward pass. The model performs unsupervised learning which is crucial for our applications because of the scarcity of high quality 3D data labelled with point-to-point correspondences.
Figure 1 shows a schematic of the original model and one of our modifications to add imaging features. The _Neuromorph_ model is formed of two components: a Siamese feature extracting network and an interpolator. The feature-extracting portion consists of two networks with shared features which receive two meshes as input. The encoded shape features are matched using matrix multiplication to produce a correspondence matrix between the input meshes. The correspondence matrix is used to produce a vector which contains the offset between source vertices and their corresponding counterparts in the target mesh. This offset vector provides part of the input to the interpolator, along with the original source vertices and a time-step encoding to provide the number of intermediate steps along which to interpolate the deformation. The interpolator outputs a deformation vector for these time-steps for each vertex in the source mesh.
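A schematic of the feature-matching step is sketched below. It assumes that the correspondence matrix is obtained from the similarity of the per-vertex feature vectors via a row-wise softmax, one common way of realising the matrix-multiplication matching described above; the tensor names, shapes and normalisation are illustrative rather than taken from the original implementation.

```python
import torch

def soft_correspondence(feat_x, feat_y, verts_x, verts_y):
    """feat_*: (Vx, C)/(Vy, C) learned features, verts_*: (Vx, 3)/(Vy, 3) coordinates."""
    # Similarity of every source vertex with every target vertex.
    sim = feat_x @ feat_y.t()                 # (Vx, Vy)
    corr = torch.softmax(sim, dim=1)          # soft correspondence matrix Pi
    matched = corr @ verts_y                  # soft-assigned target position per source vertex
    offsets = matched - verts_x               # offset vector fed to the interpolator
    return corr, offsets
```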
The feature extractor and interpolator have identical graph neural network architectures, consisting of repeating residual EdgeConv layers [23]. The primary intuition behind success of the _Neuromorph_ architecture is that correspondence and interpolation are interdependent tasks that complement each other when optimised in an end-to-end fashion. _Neuromorph_ uses a three-component loss function for unsupervised learning. These are: a registration loss, to quantify the overlap of the target and source meshes; an "as-rigid-as-possible" (ARAP) loss, to penalise overly elastic deformations; and a geodesic distance preservation loss, to regularise the predicted pair-wise correspondences. For all models in this
study, the weight of the ARAP loss component was increased by a factor of ten compared to the originally proposed value to reduce the elasticity of predicted deformations. For full implementation details of the original _Neuromorph_ model, refer to [5].
### Incorporating imaging information
The original _Neuromorph_ model predicts point-to-point correspondences solely on the geometric structure of the input meshes. Since the meshes used in this study are organ shapes derived from CT scans, we have additional imaging information which was leveraged using two different approaches.
#### 2.4.1 Complementing geometrical features with image features
Figure 1: A schematic of the _Neuromorph_ architecture [5] with our first extension to leverage the CT imaging. Additional CNN blocks (red) encode a \(7\times 19\times 19\) CT sub-volume for each mesh point into an imaging feature vector to complement the geometrical features used to predict point correspondences. An example of the correspondence and interpolation predictions for a pair of parotid glands from different patients is shown with the different colours showing corresponding points.
We followed a similar approach to that of Henderson et al. to encode image patches for each point on the mesh using a 3D convolutional neural network (CNN) [6]. Figure 1 shows further details of this methodology extension. Cubic 3D image patches of side length \(\approx 19\)mm (\(7\times 19\times 19\) voxel sub-volumes) were extracted from the CT scan for each vertex on the triangular mesh of each organ. This patch size was chosen so that image information 10mm outside the organ is within view, including surrounding structures such as bones and air cavities. Figure 2 shows an example slice of a parotid gland contour and demonstrates the field-of-view which these image patches cover. The image patches were normalised from Hounsfield Units (HU) onto the range \([0,1]\) using contrast windowing with settings used to visualise soft tissue (W 350HU, L 40HU) [7]. The patches were then encoded using a custom CNN architecture into imaging feature vectors (Figure 1). These imaging feature vectors were concatenated with the feature vectors created by the geometric feature extractors of the original _Neuromorph_ model. Feature matching and correspondence prediction were then performed as before, but now utilising both geometric and imaging information. The imaging and geometric feature extractors of the extended model were optimised simultaneously during training.
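A sketch of the per-vertex patch extraction and soft-tissue windowing is given below. The patch dimensions (\(7\times 19\times 19\) voxels) and window settings (W 350 HU, L 40 HU) follow the text, while the conversion of millimetre vertex coordinates to voxel indices and the border handling are simplified, illustrative choices.

```python
import numpy as np

def extract_patches(ct, verts_mm, spacing=(2.5, 1.0, 1.0), shape=(7, 19, 19)):
    """Return a (V, 7, 19, 19) array of soft-tissue-windowed patches, one per vertex."""
    level, width = 40.0, 350.0
    lo, hi = level - width / 2.0, level + width / 2.0
    # Pad so that patches centred near the image border stay inside the array.
    half = np.array(shape) // 2
    padded = np.pad(ct.astype(np.float32), [(int(h), int(h)) for h in half], mode="edge")
    patches = np.empty((len(verts_mm), *shape), dtype=np.float32)
    for i, v in enumerate(verts_mm):
        # Assumes vertex coordinates are in (z, y, x) mm in the image frame.
        idx = np.round(np.asarray(v) / np.asarray(spacing)).astype(int) + half
        z, y, x = idx
        patch = padded[z - half[0]: z + half[0] + 1,
                       y - half[1]: y + half[1] + 1,
                       x - half[2]: x + half[2] + 1]
        # Contrast windowing onto [0, 1] (W 350 HU, L 40 HU).
        patches[i] = (np.clip(patch, lo, hi) - lo) / width
    return patches
```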
#### 2.4.2 Imaging as a component of the loss function
For the second approach, we added a new loss term to calculate the mean-squared error of \(7\times 19\times 19\) image patches for which the associated vertices are identified as corresponding. The \(l_{\mathrm{imaging}}\) loss component was calculated as
\[l_{\mathrm{imaging}}=\lambda_{\mathrm{imaging}}\times\left\|\Pi Y_{\mathrm{CT\_ patches}}-X_{\mathrm{CT\_patches}}\right\|^{2} \tag{1}\]
where \(\Pi\) is the predicted correspondence matrix, \(Y_{\mathrm{CT\_patches}}\) and \(X_{\mathrm{CT\_patches}}\) are the CT image patches of the related target and source mesh points respectively and \(\lambda_{\mathrm{imaging}}\) was set to 1000 to balance the contribution with the other components. This hyperparameter value was chosen in preliminary testing from a range spanning \(1\to 100,000\).
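In code, this additional term is a one-liner on top of the existing forward pass; a sketch is given below, assuming the patches are flattened to one row per vertex and \(\Pi\) is the soft correspondence matrix predicted by the network. Using a mean rather than the norm of Eq. 1 is an implementation choice that only rescales the weight.

```python
import torch

def imaging_loss(corr, x_patches, y_patches, weight=1000.0):
    """corr: (Vx, Vy) soft correspondence; *_patches: (Vx, P)/(Vy, P) flattened CT patches."""
    # Squared error between each source patch and the patch of its
    # soft-assigned corresponding target vertex (cf. Eq. 1).
    return weight * torch.mean((corr @ y_patches - x_patches) ** 2)
```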
By incorporating the imaging information as a loss component, the model does not require any additional input or modification from the original architecture. However, the rationale of including such an imaging loss was to encourage the model to learn more anatomically feasible correspondences at training time based on the underlying CT scan.
Figure 2: a) A cross sectional view of a parotid gland mesh showing the field-of-view of the \(7\times 19\times 19\) CT sub-volumes. Only \(\sim 10\%\) of the sub-volume patches in this cross section are shown for clarity of the visualisation. b) A visualisation of one of the 3D CT sub-volumes from the lateral aspect of the parotid.
### Comparison with non-rigid image registration
We compared the performance of the correspondence models with an established NRR algorithm which is a standard approach for aligning images and anatomical structures for radiotherapy applications [2, 11, 20]. For this comparison, the CT scans were first rigidly registered to a single reference patient, as before, then _NiftyReg_ was used to non-rigidly register each pair of patients [10]. The registration performed was a cubic B-spline using normalised mutual information loss with specific parameters: -ln 5 -lp 4 -be 0.001 -smooR 1 -smooF 1 -jl 0.0001. The computed non-rigid transformations were applied to the organ masks which were then meshed as in section 2.2. Corresponding points between the pairwise registered organs were assigned using the nearest neighbours.
### Evaluation metrics
We implemented each of the three metrics used by Eisenberger et al. [5]:
**The geodesic error:** measures the consistency of shapes for sets of corresponding points [18]. It is defined as the differences between the geodesic distances of pairs of points on the target and the predicted corresponding pairs of points on the source mesh. This metric quantifies the discrepancies in the geodesic distances, resulting from the predicted correspondences, normalised by the square root area of the mesh.
**The chamfer distance:** measures the accuracy of the predicted interpolation. It is defined as the distance between each predicted point on the source mesh to the nearest point on the target [3]. While the chamfer distance is a good measure of the overlap of the predicted shapes, a sufficiently elastic (and anatomically unrealistic) registration can achieve a near perfect (zero) chamfer distance.
**The conformal distortion:** provides insight into the realism of the deformations produced [8]. This metric quantifies the amount of distortion each triangle on the mesh experiences through interpolation. The conformal distortion is a good indicator of the anatomical feasibility of a deformation, with a higher conformal distortion metric value suggesting a more unrealistic registration.
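Of the three metrics above, the chamfer distance is the simplest to compute; a sketch for two vertex sets is given below. It is one-sided, matching the definition above (predicted source points to nearest target points); a symmetric variant averaging both directions is a common implementation choice.

```python
import torch

def chamfer_distance(pred_verts, target_verts):
    """Mean distance from each predicted source vertex to its nearest target vertex."""
    d = torch.cdist(pred_verts, target_verts)   # (Np, Nt) pairwise Euclidean distances
    return d.min(dim=1).values.mean()
```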
#### 2.7.1 Anatomical landmark error
We additionally evaluated the correspondence of organ sub-regions using anatomical landmarks identified in the original CT scans. Figure 3 shows the landmarks used in this study which were manually identified in each of the 34 CT scans by a single observer. When the identified landmark was not on the segmentation, the closest point on each mesh was found. The Euclidean distance between the landmark on the target organ and the predicted corresponding landmark point was then found, which we call the landmark error.
### Implementation details
All models are implemented using _PyTorch_ 1.13.0 and _PyG_ 2.2.0. _Open3D_ 0.13.0 and _PyVista_ 0.38.6 were used to perform mesh smoothing and visualisation.
All training was performed using a 24 GB NVidia GeForce RTX 3090 and AMD Ryzen 9 3950X 16-Core Processor. The base _Neuromorph_ model contained \(389,507\) parameters and the extended model with imaging features contained \(686,467\) parameters. Models were trained for 75 epochs with the Adam optimiser (learning rate of \(0.0001\)) and used a maximum of \(4.8\) GB GPU memory.
### Experiments
For this study we evaluated the original model, _Neuromorph_, and two proposed extensions against a NRR baseline. Each model was trained with data from all organs, but with the restriction that only pairs of the same organ were presented to the model, e.g., a pair of left parotid glands, followed by a pair of mandibles, etc. For each configuration we performed a five-fold cross-validation, dividing the data into folds to train five different model parameter sets. Trivial self-pairs were excluded when computing the evaluation metrics in section 2.6, resulting in \(7\times 24^{2}=4032\) pairs for training, \(7\times 3^{2}=63\) pairs for validation and \(7\times 7\times 6=294\) pairs for testing each parameter set. The metric results in the testing fold for all five parameter sets are reported.
A Wilcoxon signed-rank hypothesis test was used to compare the performance of each of the model configurations to the NRR baseline for the anatomical landmark error. The geodesic error and chamfer distances were also calculated for the non-rigidly registered organs, but the conformal distortion could not be computed for the baseline approach since this metric requires a vertex-wise interpolation sequence.
## 3 Results
Figure 3: CT scan slices showing the locations of the anatomical landmarks used for clinical validation.
Figures 4 and 5 show an example set of correspondence predictions for every organ between a single pair of patients. Identical colours on the organs identify corresponding points. 2D images, either axial or sagittal slices or maximum intensity projections of the CT scans are shown annotated with the organ contour to aid visualisation. The predictions shown were produced by a single model that included the imaging loss during training.
Figure 6 shows cumulative distributions of the geodesic error, chamfer distance and conformal distortion of all model configurations and organs. The original (_Neuromorph_) model and model with imaging features perform similarly across most metrics. The original performs better on the geodesic error and conformal distortion for the spinal cord. However, the imaging features model produces less distortion for the parotid and submandibular glands. The model which includes the imaging as an additional loss component performed similarly
Figure 4: Visualisation of predicted correspondences for a) the brainstem, b) the spinal cord and c) the mandible between a single pair of patients. The target patient scan and contour is presented in the first column, followed by the reference/target mesh, then the predicted correspondence on the source mesh, and finally, the scan and contour of the source patient. Sagittal slices or a maximum intensity projection (mandible) of the CT scans are shown to improve visualisation clarity.
to the original for the geodesic error, apart from the submandibular glands for which it outperforms the original. The imaging loss model has slightly poorer chamfer distance results compared to the original, but greatly improved conformal distortion results, especially for the submandibular glands.
Figure 5: Visualisation of predicted correspondences for a) the left parotid gland, b) the right parotid gland, c) the left submandibular gland and d) the right submandibular gland between a single pair of patients. The target patient scan and contour is presented in the first column, followed by the reference/target mesh, then the predicted correspondence on the source mesh, and finally, the scan and contour of the source patient. Axial slices of the CT scans are shown.
For the geodesic error, the NRR baseline performs better than the correspondence models in the spinal cord and mandible, similarly for the parotid glands and worse for the brainstem and submandibular glands. The correspondence models outperform the NRR baseline for the chamfer distance for all organs apart from the mandible.
Table 1 shows the landmark error distances for each of the model configurations and the NRR baseline. All of the models showed a significant improvement over the baseline for all anatomical landmarks. All correspondence methods perform similarly in terms of landmark distance, but subtle differences could exist that are hidden by observer variation. The median distance from the landmark to the organs is shown in the final row; this serves as an indication of the landmark variability and hence a reasonable upper bound on the correspondence accuracy identifiable with this measure.
Figure 6: Cumulative distributions of the geodesic error, chamfer distance and conformal distortion metrics for each of the model configurations and the NRR baseline. Closer to zero is better for all metrics.
Figure 7 shows an additional example of correspondences produced by the model including imaging loss. This particular case is interesting as it demonstrates how the model handles the difficult scenario of missing correspondences. One of the patients has an accessory parotid, an anterior extension of the parotid present in \(>\)30% of the population [16], and the other does not. The model was able to robustly handle this case in both directions, i.e. with either patient as the reference.
| Model configuration | Pineal gland | Spinal cord at C1 | Styloid process | Mandible lingula |
| --- | --- | --- | --- | --- |
| Baseline (NRR) | 4.6 (4.4) | 3.9 (3.6) | 8.4 (7.0) | 7.8 (10.8) |
| Neuromorph | 3.7 (2.5) \(\dagger\) | 2.3 (2.0) \(\ddagger\) | 5.6 (5.3) * | 3.1 (2.1) \(\ddagger\) |
| + Imaging features | 3.7 (2.7) \(\dagger\) | 2.2 (1.9) \(\ddagger\) | 6.2 (5.8) * | 3.0 (2.4) \(\ddagger\) |
| + Imaging loss | 3.8 (2.5) \(\ddagger\) | 2.5 (2.2) \(\ddagger\) | 6.3 (5.5) * | 3.1 (2.1) \(\ddagger\) |
| Distance from landmark to organ | 3.6 (2.4) | 2.1 (1.6) | 5.4 (5.0) | 2.6 (1.8) |

Table 1: Landmark error distances for each model configuration, reported as median landmark error (IQR) in mm. The level of significance of improvement of each model over the NRR baseline according to the Wilcoxon signed-rank test is shown as: * - p value \(<0.05\), \(\ddagger\) - p value \(<0.005\), \(\dagger\) - p value \(<0.0005\), \(\ddagger\) - p value \(<0.00005\).
Figure 7: An example of the model including imaging loss robustly handling a case with missing correspondences for a pair of parotid glands. In a) the reference parotid has an anterior extension (accessory) which is not reproduced on the predicted correspondences. In b) the lack of accessory in the reference does not impact majority of the predicted correspondences shown by the black stripes aligning between the two. Black and white has been used here to show corresponding points instead of the full colormap for clarity.
## 4 Discussion
In this study we showed that an established neural network for predicting correspondence and smooth interpolation of 3D shapes can be applied to HN organ segmentations from CT scans. We additionally evaluated two methodological extensions to leverage the CT imaging information.
The correspondence models were compared to an intensity-based NRR algorithm regularly used for radiotherapy outcome modelling. The NRR produced better correspondences for the spinal cord and mandible in terms of the geodesic error showing the effectiveness of the image registration method for the more straightforward task of aligning the skeleton and anatomy enclosed by bone. However, the original _Neuromorph_ model and extensions all produced significantly lower landmark errors for every organ than the NRR baseline as well as producing better chamfer distance results for soft tissue organs. This promising result demonstrates the potential of such correspondence methods to reduce uncertainties in radiotherapy outcome modelling. Further work is required to quantify the uncertainty reduction and its impact for this purpose.
An intensity-based non-rigid registration algorithm was used as a comparison baseline for the learning-based correspondence models. A mesh-based registration such as coherent point drift could have alternatively been applied for a more direct comparison [12]. However, intensity-based image registration algorithms are the current standard for aligning images and structures for radiotherapy applications, particularly for outcome modelling, and therefore provide a more relevant comparison for this study [2, 11, 20].
The performance of the original _Neuromorph_ model was slightly improved by incorporating imaging information, not as explicit imaging features, but rather by introducing an additional imaging term to the loss function. This configuration does not require imaging at inference time, instead the imaging is used solely when training. This configuration was shown to be particularly effective at regularising the predicted correspondences with substantially reduced conformal distortion results. This indicates that the inclusion of an imaging loss term produced more anatomically feasible and robust deformations. The improvement in conformal distortion metric, whilst hardly affecting performance in terms of the other metrics, makes this particular configuration appealing for future exploration as a starting point for an anatomically informed non-rigid registration method.
The original _Neuromorph_ model was also extended to receive imaging input directly and predict correspondences based on geometric and imaging features, but this extension did not improve performance. We believe that this is primarily due to the highly consistent data used for model training. The segmentations were as close to the consensus guidelines as possible, which is unlikely in clinical practice. This meant the contours deviated only slightly from "true" organ boundaries. We envisage that providing the imaging information as input to the model will be of greater use in scenarios where the segmentations are more variable and could be inconsistent with the underlying anatomy. This is an interesting avenue for future work.
While _Neuromorph_ is an established model for everyday 3D shapes, we believe this is the first time it has been shown to be effective in biomedical applications. Additionally, while there are other learning based correspondence methods [1, 9], this is the first to combine geometric learning and leverage imaging, providing a slight improvement on the original model in terms of anatomical feasibility.
The additional imaging loss component described in section 2.4 utilises the mean squared error for simplicity. This metric is only appropriate when quantifying the similarity of mono-modal scans which are intensity calibrated, such as CT scans used for radiotherapy planning. If the underlying imaging was a cone-beam CT or MRI, an alternative measure such as mutual information or correlation ratio could be used.
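As a rough illustration of this kind of imaging term (not the paper's exact implementation), the mean squared error between intensities sampled at corresponding points of two intensity-calibrated CT scans could be written as below; the sampled intensity tensors are assumed to come from trilinear interpolation of the volumes at the matched point locations.

```python
import torch

def mse_imaging_loss(source_intensities, target_intensities):
    """MSE between CT intensities sampled at predicted corresponding points.

    Both tensors have shape (num_points,). A plain intensity difference is
    only meaningful for mono-modal, intensity-calibrated scans (e.g. planning
    CT); for CBCT or MRI, a measure such as mutual information or the
    correlation ratio would be substituted here.
    """
    return torch.mean((source_intensities - target_intensities) ** 2)
```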
Our model was primarily developed with outcome modelling in mind, which relies on inter-patient analysis. Inter-patient correspondence is a more complex task than identifying intra-patient correspondence since there is greater variability in the anatomy. Consequently, we believe that extension to the intra-patient tasks should be straightforward.
## 5 Conclusion
We have shown that an established model, originally developed for generic 3D shapes can be adapted for applications in biomedical imaging. Specifically, this model could be used to identify corresponding points on 3D organs to improve spatial normalisation in outcome modelling applications, potentially reducing the associated uncertainties and facilitating the development of better radiotherapy treatments. Further, we envision that in the future, such a correspondence tool, which also provides a smooth interpolation, could be deployed at the heart of an effective, anatomically informed non-rigid registration method. |
2309.13506 | Evaluating the Usability of Differential Privacy Tools with Data
Practitioners | Differential privacy (DP) has become the gold standard in privacy-preserving
data analytics, but implementing it in real-world datasets and systems remains
challenging. Recently developed DP tools aim to make DP implementation easier,
but limited research has investigated these DP tools' usability. Through a
usability study with 24 US data practitioners with varying prior DP knowledge,
we evaluated the usability of four Python-based open-source DP tools:
DiffPrivLib, Tumult Analytics, PipelineDP, and OpenDP. Our results suggest that
using DP tools in this study may help DP novices better understand DP; that
Application Programming Interface (API) design and documentation are vital for
successful DP implementation; and that user satisfaction correlates with how
well participants completed study tasks with these DP tools. We provide
evidence-based recommendations to improve DP tools' usability to broaden DP
adoption. | Ivoline C. Ngong, Brad Stenger, Joseph P. Near, Yuanyuan Feng | 2023-09-24T00:10:47Z | http://arxiv.org/abs/2309.13506v3 | # Evaluating the Usability of Differential Privacy Tools with Data Practitioners
###### Abstract.
Differential privacy (DP) has become the gold standard in privacy-preserving data analytics, but implementing it in real-world datasets and systems remains challenging. Recently developed DP tools aim to ease data practitioners' burden in implementing DP solutions, but limited research has investigated these DP tools' usability. Through a usability study with 24 US data practitioners with varying prior DP knowledge, we comprehensively evaluate the usability of four Python-based open-source DP tools: DiffPrivLib, Tumult Analytics, PipelineDP, and OpenDP. Our results suggest that DP tools can help novices learn DP concepts; that Application Programming Interface (API) design and documentation are vital for learnability and error prevention; and that user satisfaction highly correlates with the effectiveness of the tool. We discuss the balance between ease of use and the learning curve needed to appropriately implement DP, and also provide recommendations to improve DP tools' usability to broaden adoption.
Keywords: Usability evaluation, differential privacy, privacy enhancing technology, developer tools
Our objective in this study is to assess the usability of publicly available DP tools through a mixed-methods usability study involving data practitioners. By evaluating four major properties of usability: learnability, efficiency, error prevention, and user satisfaction (Krishnan, 2017), we seek to investigate three key research questions (RQs):
* RQ1: How effectively can DP tools help data practitioners learn DP concepts?
* RQ2: How effectively can DP tools help data practitioners complete DP-related tasks?
* RQ3: How satisfied are data practitioners with DP tools for differentially private data analysis?
## 2. Related Work
### Differential Privacy and Implementation Challenges
Differential privacy (DP) (Krishnan, 2017; Krishnan, 2017) is a formal privacy definition designed to allow statistical analysis while protecting information about individuals. Differentially private analyses, often called _mechanisms_, typically add random noise to analysis results in order to achieve privacy. The random noise ensures that the probability of observing a particular result does not change significantly when one person's data is added or removed from the dataset being analyzed. Formally, two datasets \(D,D^{\prime}\in\mathcal{D}\) are called _neighboring datasets_ if they differ in one person's data, and a mechanism \(\mathcal{M}\) satisfies \((\epsilon,\delta)\)-DP if for all neighboring datasets \(D\) and \(D^{\prime}\), and all possible sets of outcomes \(S\):
\[\Pr[\mathcal{M}(D)\in S]\leq e^{\epsilon}\Pr[\mathcal{M}(D^{\prime})\in S]+\delta\]
For a deterministic function \(f\in\mathcal{D}\rightarrow\mathbb{R}\) of a specific dataset, \((\epsilon,0)\)-DP can be achieved by adding Laplace noise to the result with center \(0\) and scale \(\frac{\Delta}{\epsilon}\). Here, \(\Delta\) is the _sensitivity_ of \(f\)--the maximum possible change in \(f\)'s output when one person's data is added to or removed from the dataset. Gaussian noise can also be used to achieve \((\epsilon,\delta)\)-DP, with a similar dependence on sensitivity.
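For example, a differentially private count can be released with a few lines of NumPy. This generic sketch is independent of the four tools studied here and is included only to make the mechanism concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Release true_value with (epsilon, 0)-DP by adding Laplace noise
    of scale sensitivity / epsilon."""
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# A counting query changes by at most 1 when one person's data is added
# or removed, so its sensitivity is 1.
print(laplace_mechanism(1234, sensitivity=1.0, epsilon=0.5))
```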
One important property of DP is _sequential composition_: if multiple analyses are performed on the same data, their \(\epsilon s\) and \(\delta s\) add up. Formally, if \(\mathcal{M}_{1}\) satisfies \((\epsilon_{1},\delta_{1})\)-DP, and \(\mathcal{M}_{2}\) satisfies \((\epsilon_{2},\delta_{2})\)-DP, then releasing results from both mechanisms satisfies \((\epsilon_{1}+\epsilon_{2},\delta_{1}+\delta_{2})\)-DP. In the (common) setting of multiple analyses of the same data, the analyst often sets a total _privacy budget_ (e.g. that the total \(\epsilon\) for all analyses must be less than \(1\)), and allocates portions of the budget to each of the analyses they want to perform. Total \(\epsilon\) budgets are commonly in the single digits, while total \(\delta\) budgets must be much smaller--e.g. \(\frac{1}{n}\) for a dataset of size \(n\). Alternative composition theorems have also been developed that yield tighter bounds on the total privacy budget, but these are more complicated to apply.
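Sequential composition is what a privacy budget "accountant" tracks in practice. The sketch below illustrates only basic composition (not the tighter variants mentioned above) and is not the accountant of any particular tool.

```python
class SimpleBudgetAccountant:
    """Tracks cumulative (epsilon, delta) under basic sequential composition."""

    def __init__(self, total_epsilon, total_delta=0.0):
        self.total_epsilon, self.total_delta = total_epsilon, total_delta
        self.spent_epsilon, self.spent_delta = 0.0, 0.0

    def spend(self, epsilon, delta=0.0):
        """Charge one query against the budget; refuse if it would overspend."""
        if (self.spent_epsilon + epsilon > self.total_epsilon
                or self.spent_delta + delta > self.total_delta):
            raise RuntimeError("Privacy budget exceeded")
        self.spent_epsilon += epsilon
        self.spent_delta += delta

accountant = SimpleBudgetAccountant(total_epsilon=1.0)
for _ in range(4):          # four queries, each charged epsilon = 0.25
    accountant.spend(0.25)
print(accountant.spent_epsilon)  # 1.0 -- the budget is now fully consumed
```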
Implementing differential privacy mechanisms correctly can be tricky: Data practitioners must correctly bound the sensitivity of all analyses, account for the total privacy budget and the composition of all analyses performed, and ensure that the system is free of common side-channels that can reveal sensitive data (e.g. floating-point vulnerabilities (Krishnan, 2017; Krishnan, 2017), sensitivity bugs (Krishnan, 2017), and timing attacks (Krishnan, 2017)). Moreover, errors in DP implementation are almost impossible to detect. When queries produce insufficiently private responses, inexperienced users are unlikely to notice that desired levels for privacy have not been met, thus putting individuals' data at risk.
### Existing DP Tools
Numerous tools and libraries have been developed to make implementing differential privacy easier for data practitioners (Krishnan, 2017; Krishnan, 2017; Krishnan, 2017; Krishnan, 2017; Krishnan, 2017; Krishnan, 2017; Krishnan, 2017; Krishnan, 2017; Krishnan, 2017). These tools are typically designed to handle the tricky parts of DP automatically. For example, tools may calculate sensitivity automatically (Krishnan, 2017; Krishnan, 2017; Krishnan, 2017) and ensure the privacy budget is not
violated [3; 5; 6; 27; 34; 39; 48; 54; 57]. They may also provide carefully designed and vetted implementations of basic DP mechanisms like the Laplace mechanism [2; 4; 5; 48; 57].
**Tools included in this study.** We study four open-source DP tools using the Python programming language - PipelineDP, OpenDP, DiffPrivLib, and Tumult Analytics - according to the inclusion criteria in Section 3.1.
PipelineDP [6] is a Python-based open-source DP tool developed by OpenMined and Google, aimed particularly at machine learning pipelines. It is built on top of the GoogleDP [4] library and supports a small number of queries like count, sum, and average, limiting the tool's functionality. However, PipelineDP runs on different data pipeline processing frameworks, both remotely and locally, making it suited to a wide range of data environments. The PipelineDP API does not require in-depth understanding of programming, mathematics, or differential privacy by data analysts who want to add DP to a machine learning pipeline.
OpenDP [5] is an open-source, work-in-progress library of differential privacy algorithms that have been implemented in Rust, with bindings to interface with Python code. The OpenDP library is part of the larger OpenDP Project, a hub for privacy preserving software development based at Harvard University. Consistent with its academic roots, OpenDP has published the theoretical framework [26] for the DP libraries, a design that executes a wide range of differential privacy variables and operates on a wide range of underlying data sets.
DiffPrivLib[2] is a Python library created by IBM so that data scientists can experiment with differential privacy, as well as develop DP applications. DiffPrivLib is designed to work alongside commonly-used Python libraries like NumPy (for matrix operations) and Pandas (for processing tabular data).
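As a brief illustration, a DiffPrivLib query typically looks like the following. This is a minimal sketch based on the library's documented interface at the time of writing (the `tools.mean` helper and the `Laplace` mechanism with its `randomise` method); exact signatures may differ between versions, so treat it as indicative rather than definitive.

```python
import numpy as np
from diffprivlib import tools
from diffprivlib.mechanisms import Laplace

ages = np.array([23, 35, 45, 52, 61, 29, 41])

# NumPy-style helper: a DP mean with an explicit epsilon and clamping bounds.
dp_mean = tools.mean(ages, epsilon=0.5, bounds=(0, 100))

# Lower-level mechanism object: Laplace noise for a count with sensitivity 1.
mech = Laplace(epsilon=0.5, sensitivity=1)
dp_count = mech.randomise(len(ages))

print(dp_mean, dp_count)
```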
Tumult Analytics [13] is a Python interface built on top of Tumult Core, a framework for differential privacy computation developed based on the same white paper as the OpenDP library. Tumult Labs is the startup company responsible for Tumult Core and Analytics. The framework builds on Spark and is designed to scale to massive datasets. The underlying privacy accounting framework is extensible without requiring deep design changes.
**Tools not included in this study.** We excluded DP tools using programming languages other than Python, such as GoogleDP (C++) [4], Privacy on Beam (Go) [7], ZetaSQL (SQL) [8; 54], Chorus (Scala) [1; 34], and PINQ (C#) [39], to prioritize validity and comparability in the study (details in Section 3.1). DPCreator [3] and the Private data Sharing Interface (PSI) [27] provide interactive query interfaces rather than an API. These tools support users with limited programming skills, but lack flexibility and functionality compared to the Python-based open-source tools selected. We also excluded tools primarily designed for machine learning [48; 57] to ensure comparability.
### User Research around DP
Several studies have investigated people's perception and understanding of DP. Bullek et al. [16] examined the comprehension of randomized response by utilizing animated spinners with varying bias rates (40%, 60% and 80%) to guide participants in answering sensitive questions in a questionnaire. The results indicated a general preference for spinners with higher privacy levels, although some participants doubted the truthfulness of the high-privacy spinners. Cummings et al. [21] conducted a study to assess the impact of DP communication on data sharing willingness and end-user expectations of privacy. Participants were provided with one of six different DP explanations and one of two relevant scenarios (such as salary disclosure or medical records with DP), and the results showed that the explanations raised end-user expectations of privacy, but did not increase their willingness to share data. Both studies indicate that the general public with limited DP knowledge have reservations towards DP's privacy protection promises.
Further studies explored solutions to better communicate complex DP concepts to the general public to ease their confusion or distrust. Xiong et al. (Xiong et al., 2018) assessed the impact of DP communication on user comprehension and data sharing willingness by recruiting participants via Amazon MTurk and testing various creative scenarios to explain DP. To validate the results and account for cultural differences, Kuhtreiber et al.(Kuhtreiber et al., 2019) replicated the study with participants from a different cultural and demographic background. While the results indicated a need for a more effective method of communicating DP and a general lack of DP understanding among participants, the German participants were more willing to share data compared to those in the USA and India.
Overall, existing user research on differential privacy reveals that the general public with only limited DP knowledge have difficulties understanding DP and often distrust DP claims, and that individuals and organizations who are potential adopters face various technical barriers to implementing DP. In this study, we extend existing user research by examining the DP understanding of data practitioners, who often need a relatively good understanding of DP to comfortably implement it. Also, by evaluating existing DP tools with data practitioners, we further articulate the usability problems that potential adopters face in DP implementation.
### Usability of DP Tools and other Data Science Tools
A few studies have examined the usability of DP tools.
Nanayakkara et al. (Nanayakkara et al., 2018) studied the effectiveness of visualizations for supporting the selection of differential privacy parameters. They present Visualizing Privacy (ViP), an interactive interface visualizing relationships between epsilon, accuracy, and disclosure risk. By adjusting epsilon, users can see updated distributions that illustrate expected accuracy and risk trade-offs.
However, limited user research focuses on data practitioners, who are the potential adopters of DP. Garrido et al. (Garrido et al., 2018) interviewed 24 practitioners from 9 major companies to understand the challenges in deploying differential privacy. Their findings highlight cumbersome data access processes blocking analysts, the importance of SQL over machine learning, and the need to prioritize data security over individuals' data privacy. The authors make the case for implementing DP through public APIs, having concluded that DP can help shorten data access processes, enable cross-silo exploration, improve analysts' understanding of accuracy, and fill gaps in the technical development process for building their own DP tools.
Murtagh et al. (Murtagh et al., 2018) conducted a usability evaluation of the Private data Sharing Interface (PSI) tool, a Web-based differential privacy explorer geared to non-technical users. Despite succeeding at usability tasks with the tool, study participants identified areas of confusion and error. The authors suggest that future research focus on clearly communicating complex concepts of differential privacy, such as privacy loss parameters and budget allocation. Sarathy et al. (Sarathy et al., 2019) conducted an extensive usability study where they interviewed 19 non-expert participants using the DP Creator prototype (a PSI-like DP explorer) to understand perceptions, challenges, and opportunities around DP analysis. Their findings highlight several challenges, including users' limited understanding of decision implications, lack of raw data access, plus new, difficult and unfamiliar workflows. They also discuss the exciting potential of DP to expand public access to privacy-protected data sources, aiding research tasks like exploratory analysis and replication studies.
Both studies only evaluated the usability of one DP tool and recruited non-experts. Our study significantly contributes to DP tool usability research by evaluating multiple DP tools with participants of varying DP knowledge.
Govtech Singapore recently benchmarked the same four DP tools as we studied (Xiong et al., 2018). Their tests compared tools' feature sets in terms of their Analysis (query type and interactivity), Security (data visibility, floating-point vulnerability), Usability (scalability, parameters' input and feedback, pre- and post-query processing) and Differential Privacy
(mechanisms, definitions, composition). Performance comparisons among the four tools used synthetic data to gauge scale (data set spreads of 50, 250, 500), skew (data set shapes of 0, 5, 50) and size (data sizes between 1000 and 1 billion data points). The usability assessment did not extend to hands-on tests with representative users, as our study did. It complements our study and offers a valuable guide for data practitioners.
Outside of DP tools, prior research investigated the usability of data science tools among hands-on data practitioners. Akil et al. (Akil et al., 2017) present a comparison of three of the most prominent distributed data processing platforms for cloud computing, Apache Hadoop MapReduce, Apache Spark, and Apache Flink, from a usability perspective. They examine factors such as ease of use, learnability, language support, auto-configuration, and community support that make big data platforms more effective for data science users, beginning an exploration of the usability of these platforms. Another study by Mehta et al. (Mehta et al., 2018) evaluated five large-scale image analysis systems (i.e., SciDB, Myria, Spark, Dask, and TensorFlow) and found that each of them has limitations like a lack of support for user-provided Python code, no direct support for scientific image file formats, and manual tuning requirements for efficient execution. Data science tasks often have steep learning requirements and, in at least some cases, the tools are not meeting practitioners' expectations for straightforward data processing and analysis. Both studies suggest that there is room for usability improvement for data science tools across the board, not just for differential privacy.
## 3. Methods and Study Design
To evaluate the usability of open-source DP tools, we chose the usability testing methodology (Sarwaran et al., 2017; Mehta et al., 2018) to uncover challenges that data practitioners face and gain a deeper understanding of the learning curve involved - a vital consideration given the specialized expertise often needed for DP tools. Given the limited adoption of DP, real-world observations are not yet viable. Usability testing can also identify obstacles that impede data practitioners from effectively implementing DP. We also leveraged the methods of surveys, interviews, and think-aloud protocol (Krishnan et al., 2018) to collect a comprehensive set of data to answer our research questions. We chose to execute the usability test remotely rather than in-person to widen the recruitment beyond local participants, and research has shown that remote synchronous usability tests align closely in efficacy with traditional lab-based tests (Krishnan et al., 2018).
### Selection of Differential Privacy Tools
We chose four DP tools by adhering to a set of predetermined inclusion criteria. We started with nine DP tools and then chose to impose an open-source requirement for inclusion, a benefit in terms of algorithmic and methodological transparency. Comprehensive documentation is our second inclusion requirement. Tool-provided documentation allowed us to set reasonably difficult usability tasks without having to inform the user about how a tool works. We favored tools built using Python, which helped us to develop consistent requirements for usability tasks and to recruit participants from a population of data practitioners who could also code. Server demands were also a factor: tools needed to have capabilities to consistently support usability testing tasks. In the end we settled on four tools for our usability study: OpenDP, PipelineDP, DiffPrivLib, and Tumult Analytics.
### Study Procedures
In our study, we adopted a between-subjects design, assigning each participant to one of the four DP tools. Based on responses to our eligibility survey (Appendix A) we categorized participants into DP novices and DP experts with the idea of evenly distributing the expertise levels across each tool, and offering a floor for participant diversity, at least in terms of DP knowledge.
* **Screening:** We distributed an eligibility survey alongside our recruitment advertisements. This survey determined participants' eligibility, secured informed consent, and gathered basic demographic details. We also included questions to test Python and differential privacy knowledge. Correct responses to DP knowledge questions (Questions 8-11 in Appendix A) helped to distinguish experts from novices. We sent invitations to qualified survey respondents for a usability study that we then carried out on Microsoft Teams, continuing until we achieved our target sample size. Our refined dataset had 12 experts and 12 novices. Each DP tool had three individuals from each expertise level.
* **Pre-task Procedures:** Before commencing usability study tasks, participants were instructed to share their screens and introduced to the think-aloud methodology. Pre-task procedures involved reviewing a handout that covered fundamental aspects of DP, followed by a tutorial that walked a participant through task requirements by executing the code in Jupyter notebook cells. We crafted equivalent tutorials for all libraries in order to prevent bias. Participants were allowed to refer back to the handout and the tutorial as they worked on usability tasks. Participants were also given access to the tool's documentation before and during their task work. Google search was also a permitted resource, but how-to resources like StackOverflow were not permitted. The rationale is to recognize a participant's need for information that we cannot think to provide, and search is a necessary aid for finding general resource information about Python or about non-DP Python libraries. StackOverflow, alternatively, was restricted to reduce biases and reliance on pre-existing solutions. If the goal is an accurate assessment of participants' independent abilities with the tool, nearly all of the necessary information is at hand with our handout, the tutorial and the tool documentation, but search is available if an edge case materializes.
* **Usability testing tasks:** We designed three usability tasks shown in Table 1, which asked participants to perform differentially private data analysis. These tasks were based on examples from the PipelineDP documentation [6] and used synthetic data about restaurant visits across a week, where each record represented a distinct visit, tagged with a visitor ID. These tasks involve common data analysis and require essential queries supported by all four tools (count, sum, mean); a tool-agnostic sketch of such a query appears after this list. Task completion depended on providing answers to data analysis questions while adhering to differential privacy guidelines. All participant code was written in Python and executed in a Jupyter notebook over a 60-minute time period. We encouraged participants to vocalize their thought process. We recorded both their spoken insights and on-screen actions, gaining a deeper understanding of their interaction with the tools. For a fair evaluation, we ensured that each tool received consistent tasks.
* **Post-task procedures:** We asked participants to complete a post-task survey (Appendix B) and to answer questions in a post-task interview (Appendix C). The survey repeated differential privacy questions from the
| Task | Description |
| --- | --- |
| Task 1 | How crowded is the restaurant on weekdays? (total number of visits for each weekday) |
| Task 2 | Total amount of time spent by visitors on each weekday (exclude weekends). |
| Task 3 | Average amount of time spent by visitors on each weekday (exclude weekends). |

Table 1. Usability tasks
eligibility survey in order to assess learning outcomes, experiences, and confidence. It also gathered additional feedback on participants' experiences and perceptions. The post-task interview complemented the survey by providing deeper insights into participants' preferences, challenges, and suggestions for improvement.
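The tasks in Table 1 reduce to differentially private group-by aggregations. The tool-agnostic sketch below shows what a solution to Task 1 could look like in plain pandas/NumPy; the column names and the per-day contribution bound are illustrative assumptions rather than the study's exact dataset schema, and participants implemented the equivalent with their assigned DP tool.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Illustrative stand-in for the synthetic restaurant-visit data:
# one row per visit, tagged with a visitor id and a weekday.
visits = pd.DataFrame({
    "visitor_id": rng.integers(0, 300, size=2000),
    "weekday": rng.choice(["Mon", "Tue", "Wed", "Thu", "Fri"], size=2000),
})

def dp_counts_per_weekday(df, epsilon):
    """Task 1: DP count of visits per weekday via the Laplace mechanism.

    Assumes each visitor contributes at most one visit per weekday, so each
    per-day count has sensitivity 1; the studied tools bound contributions
    (and track the privacy budget) on the analyst's behalf.
    """
    true_counts = df.groupby("weekday").size()
    noise = rng.laplace(scale=1.0 / epsilon, size=len(true_counts))
    return true_counts + noise

print(dp_counts_per_weekday(visits, epsilon=0.4))
```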
### Usability Evaluation Metrics and Analysis
We designed our study to collect a comprehensive set of quantitative and qualitative data that we could then use to thoroughly assess the usability of all four DP tools to address the three research questions.
#### 3.3.1. RQ1: How effectively can DP tools help data practitioners learn DP concepts?
Learnability is a metric for determining how easily users navigate specific tools or interfaces. Experienced data scientists sometimes fail to grasp the intricacies and subtleties of differential privacy [21, 56]. The DP tools we examined are available to any data practitioner and share the goal of making differential privacy understandable. A clear conceptual understanding of DP, in general, will help a user to leverage these tools. These tools balance instructional clarity and implementation complexity so that users can then undertake meaningful, privacy-maintained data analysis. We assess learnability by asking the same multiple-choice questions to participants in their initial eligibility survey and their post-task survey. We also ask participants about key DP concepts in our post-task survey and post-task interview. Additionally, we tracked participants' completion times in order to gauge differences in the learning curves for novices and experts across the four tools.
#### 3.3.2. RQ2: How effectively can DP tools help data practitioners complete DP-related tasks?
We used three metrics -- learnability, efficiency and error prevention -- that, taken together, assess usability of the four DP tools.
**Learnability** is assessed using task success and failure rates, a standard metric [14]. For each tool and expertise level, we evaluated whether users succeeded or failed to complete tasks, as well as assessing the correctness of their completed efforts.
**Efficiency** measures the speed with which users can accomplish tasks with a specific tool or interface. For each tool and based on expertise, we recorded the time taken to complete each task.
**Error prevention** measures how well a tool prevents user errors and, in the cases of error, how well a tool facilitates error identification and recovery. Since every participant was assigned the same set of three tasks, every participant workflow had the same opportunities for user errors, though errors might be more correctly described as interruptions of progress toward task completion.
We called these workflow interruptions "stucks" and defined six different types of stuck (described in Table 2) for counting. Because our tasks are programming tasks, we only included participants with at least two years of programming experience. Once a participant worked their way out of a stuck situation, we called it an "unstuck" and counted it in
| Stuck Type | Abbreviation | Definition |
| --- | --- | --- |
| Python stuck | Python | Don't know the correct Python or Pandas function to use. |
| Tool stuck | Tool | Don't know the correct DP tool function to use, or failing to correctly interpret error codes. |
| Expected result stuck | Result | Answer from a DP tool query that is not in line with expected DP values. |
| Documentation stuck | Documentation | Struggle to interpret documentation descriptions. |
| Question stuck | Task | Misinterpretation of a Task assignment, or need to clarify a Task detail. |
| DP misunderstanding | DP | Incorrectly interpreting or applying DP. |

Table 2. Stucks definitions
the corresponding category. We compiled the frequency of "stuck" and "unstuck" events for each tool, and for the participants' level of expertise, and in the process analyzed the nature and category of errors encountered, and our participants' success rate in resolving them.
#### 3.3.3. RQ3: How satisfied are data practitioners with DP tools for differentially private data analysis?
We measure the user satisfaction of the four DP tools for their intended tasks through standardized usability measurements. Since these tools are specialized data science tools for specific purposes, we created customized versions of the System Usability Scale (SUS) (Hansen et al., 2010) and the Net Promoter Score (NPS) (Nguyen et al., 2011) specific to this study. These measurements were included in our post-task survey. Additionally, in our post-task interview, we asked open-ended questions about participants' experiences, overall satisfaction, areas of improvement as well as the specific aspects of the tool participants found particularly helpful or challenging. This helped us interpret the standardized usability measurements reported by participants.
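For reference, the standard SUS scoring that our customized questionnaire follows: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 to give a 0-100 score. A minimal sketch with placeholder responses (our customized instrument may phrase items differently, but the scoring is the same):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert responses."""
    assert len(responses) == 10
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (r - 1) if item % 2 == 1 else (5 - r)
    return total * 2.5  # 0-100 scale

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))  # e.g. 80.0
```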
#### 3.3.4. Qualitative Data Analysis
In addition to our quantitative metrics, we delved deep into the qualitative data collected from this study. This data included transcripts of audio recordings, video recordings of participants' screens, and Jupyter notebooks from all usability sessions. The first and the second authors performed the qualitative analysis. One had a strong foundation in differential privacy, while the other, though acquainted with the broader concepts of DP, possessed deeper expertise in human-computer interaction. Both authors familiarized themselves with all study materials and collected qualitative data, and then followed a hybrid thematic analysis approach combining inductive and deductive coding (Nguyen et al., 2011) to annotate observations, text excerpts, and relevant quotes with relevant codes, aiming to unearth pivotal themes.
Specifically, the two authors collaboratively prepared the initial codebooks based on their knowledge before the study. These codebooks underwent iterative refinements after the pilot sessions and the main study sessions through continuous analysis of the actual data collected. The analysis yielded both qualitative and quantitative elements, including codes that pinpointed the number of completed tasks, the duration for each task, challenges specific to certain tools, and misunderstandings related to DP concepts.
The research team employed a methodical approach to data reconciliation. For time-related data, we averaged the estimates from both authors. For count data, we took the highest of the two counts. Conflicts in theme interpretation were resolved through discussion until a consensus was reached.
Furthermore, to augment our findings from quantitative metrics, we selected representative quotes from post-task interviews. These quotes served to offer additional reasoning, shed light on the context, and validate our quantitative insights.
## 4. Recruitment and Participants
This study received approval from our university's Institutional Review Board (IRB) before we started participant recruitment. We first conducted a pilot study with four data science graduate students (one participant per tool) from our university to test and improve study instruments and logistics, including adjusting study time allocation, increasing participant compensation, and clarifying survey and interview questions. Each pilot participant was compensated with a gift card of 40 US dollars for 1-1.5 hours of study time.
For the full study, we aimed to enlist a total of 24 US participants according to best practices for usability testing with developers in the privacy and security field (Brads et al., 2017). This enabled us to allocate six individuals to each of the differential privacy tools under investigation. To recruit participants, we initially posted on Reddit message boards, but quickly
realized that targeted efforts, such as connecting through academic or industry-specific mailing lists, yielded better results. Of the 109 respondents who started our eligibility survey, 83 completed it. We disqualified potential participants with less than two years' Python experience. From the qualified group, we invited 47 to the study and 34 scheduled a session, but we only conducted 26 tests due to 7 no-shows and one mid-session quitter (no participant ID). Among the 26 completed sessions, we excluded two from data analysis (N001, E012): one due to the participant's limited Python skills, the other due to the disruption caused by an unexpected tool update that shortened task completion time. To address the underrepresentation of females in data science and computing fields, we deliberately oversampled female participants. Participants' ages spanned from 18 to over 40, but most (14) fell between 25-34 years. Our sample consisted of 54% females, 38% males, 4% nonbinary individuals, and 4% who chose not to specify their gender. We conducted all usability test sessions on Microsoft Teams, following specific guidelines to maintain consistency. After the study session, each participant was compensated with a gift card of 40 US dollars for up to 1.5 hours of study time.
## 5. Results
The study measured four key aspects of DP tool usability: learnability, efficiency, error prevention, and user satisfaction. Quantitative data consist of an initial eligibility survey, a post-task survey, and task-specific metrics like success rate, time on task, and stuck/unstuck counts. Qualitative data consist of recorded think-aloud protocols, open-ended post-task survey questions, and post-task interviews. We made direct comparisons between the tools that were tested, and between novice and expert participants. We obtained a representative snapshot of the strengths, weaknesses and performance gaps for the DP tools. We hope our insights will guide future enhancements and increased adoption of these tools. A summary of our findings, organized by research question, appears in Table 3.
### RQ1: How effectively can DP tools help data practitioners learn DP concepts?
To answer RQ1, we measured the difference in participants' understanding of differential privacy by asking them the same set of four multiple choice questions on DP concepts before and after using the DP tool they were assigned. There were additional questions that tested DP knowledge in the post-task survey and interview.
Figure 1 reports the average percentage of correct answers to these questions by participants' DP expertise and by tool. Specifically, Figure 1(a) shows the percentage of correct responses pre- and post-tasks for both experts and novices, averaged across tools. Experts already familiar with DP did not acquire much additional knowledge. However, novices new to differential privacy increased their DP knowledge test scores in the post-task survey relative to the prior eligibility survey. The average correctness score increased from 37.5% to 60%. The result indicates that DP tools do well at introducing new users to core DP concepts, but may not be effective for further educating experts. Figure 1(b) reveals noticeable differences in knowledge improvement in understanding DP concepts before and after using each tool. All of the tools except OpenDP boosted concept knowledge. DiffPrivLib saw the greatest jump.
In our post-task survey, we asked participants to select sources that helped them understand DP concepts during the study; the results are shown in Figure 2. The results by tool (Figure 2(a)) show that all participants found the handout and tutorial to be the most useful sources to support DP understanding. The results by expertise (Figure 2(b)) show that experts relied heavily on their prior DP knowledge, while novices used the handout and tutorial to understand the required DP concepts.
Our results suggest that using DP tools may aid novices in understanding DP concepts, and that different tools produce this effect to different degrees. A nearly 25% novice knowledge gain suggests the tools, documentation, and
| Research question | Usability property | Results |
| --- | --- | --- |
| RQ1: How effectively can DP tools help data practitioners learn DP concepts? | Learnability | Post-task responses improved for novices, but not for experts (Fig. 1a). DiffPrivLib produced the largest improvement, while OpenDP produced the least (Fig. 1b). All users found the handout and tutorial more useful than tool documentation (Fig. 2). Experts relied heavily on prior knowledge, while novices did not (Fig. 2). |
| RQ2: How effectively can DP tools help data practitioners complete DP-related tasks? | Learnability | DiffPrivLib had the highest rate of task completion, while OpenDP had the lowest (Fig. 3a). Tumult Analytics had the highest rate of task correctness, while DiffPrivLib and OpenDP had the lowest (Fig. 3b). |
| | Efficiency | Tumult Analytics had the best average time to complete the first task, while OpenDP had the worst (Fig. 4a). Task completion times were similar between novices and experts (Fig. 4b). |
| | Error prevention | All users found the tutorial useful for task completion, and were hindered by lack of DP knowledge and documentation (Fig. 5, 6). Novices were most hindered by lack of DP knowledge, while experts were most hindered by documentation (Fig. 6b). |
| RQ3: How satisfied are data practitioners with DP tools for differentially private data analysis? | Satisfaction | DiffPrivLib had the highest NPS and SUS scores, while OpenDP had the lowest (Fig. 8). Satisfaction scores align well with success rate (Fig. 3a, 8). Users were most satisfied with tools that had high success rates. User satisfaction may be an effective proxy for tool effectiveness in DP analytics tools. |

Table 3. Summary of findings organized by research question.
examples used are beneficial for introducing fundamentals. However, there is still room for improvement. Novices correctly responded to only 60% of the post-task DP knowledge questions.
The specifics of the DP tool also seem to be important for learnability. We saw a greater than 20% difference in relative scores between DiffPrivLib and OpenDP for DP knowledge questions. The evidence suggests that differences in tools' design and documentation play a role in how well the tool helps users to learn about DP.
Since the most helpful sources reported by participants in the post-task survey were the handout and the tutorial, and these materials were created by the study team, we cannot absolutely discern the extent to which DP tools contributed to novices' increased DP understanding.
Figure 1. Average proportion of correct answers to DP knowledge questions before and after using the assigned tool. Blue bars represent data from the eligibility survey and orange bars represent data from the post-task survey.
Figure 2. Sources that support DP understanding, by tool and by expertise level.
However, data from the post-task interviews suggest that concrete examples (like the ones in our tutorials) and short explainers (like the handout) can help novices understand important DP concepts. One participant emphasized this sentiment, stating: _"It also helped to have the tutorial... [it] was very clear and the description of each cell made it clear to me what was going on... if you had only given me the documentation... it would have taken me much longer to put it together (E001)."_
Post-task interview data also suggest that participants could use more support in understanding DP concepts. In the case of \(\epsilon\)-values and privacy budgets, we asked participants for a real-world opinion on how strong the privacy protection was for their just-completed task. Responses lacked consistency and confidence. _"I think that's the hard question to answer"_, one DP expert participant (E006) told us. _"The total privacy budget for all of the tasks was 1.2, a value that is in-line with recommended guidelines. [\(\epsilon\)] is around 1.0. So, maybe that's somewhat strong,"_ she concluded.
Other answers echoed this perspective: _"I think [\(\epsilon\)] should be much lower... probably around 0.5 or probably even lower to be honest... with 1.2 I wasn't seeing much variance in the results"_ from participant E003, and _"Pretty strong... very strong, actually"_ from N007.
### RQ2: How effectively can DP tools help data practitioners complete DP-related tasks?
We evaluated learnability, efficiency, and error prevention properties of DP tools to address RQ2. The results appear below, organized by usability property.
#### 5.2.1. Learnability
We analyzed the completion status and correctness of each task to assess how easy it is for participants to learn to use DP tools. Figure 3(a) shows the overall completion rates for the three usability testing tasks across the four tools. A score of 100% means all participants assigned to the tool completed the task. There are significant differences between tools: all DiffPrivLib participants completed all three tasks, while none of the OpenDP participants completed tasks #2 or #3. Tumult Analytics and PipelineDP results fall between these two extremes, with all users of both of these tools completing at least task #1.
We found that the varying completion rates likely derive from tools' different API designs. DiffPrivLib provides a minimal API, and encourages use of the library in combination with existing well-known data analytics libraries like
Figure 3. Learnability Metrics: (a) task completion rate and (b) task correctness rate.
Pandas. Similarly, Tumult Analytics is designed to mimic an existing data analytics API (Spark). OpenDP, in contrast, does not leverage better known Python libraries for a learning scaffold. The OpenDP API requires users to understand technical details of DP like composition of transformations in order to perform analytics tasks.
Participant comments on API design from post-task interviews lend support to this finding. Participants liked the similarity of the Tumult Analytics to Spark. _"I think the fact that it was very similar to Spark was really helpful,"_ one expert participant (E006) told us. _"I have a decent amount of experience with Spark and Pandas, so that was very intuitive to just be able to kind of use the existing functions."_
Figure 3(b) shows the correctness rate. A score of 100% means all participants assigned to the tool produced correct solutions for the task. Some solutions were complete but incorrect, so each correctness score is no larger than the corresponding completion score. Combined, the completeness and correctness results show that:
* Complete Tumult Analytics and OpenDP solutions were all correct.
* Complete PipelineDP solutions were mostly--but not all--correct.
* Complete DiffPrivLib solutions were all _incorrect_ for tasks #2 and #3.
Notably, all six DiffPrivLib participants used incorrect sensitivity settings or failed to enforce their sensitivity settings (e.g. by clipping) in tasks #2 and #3. Since task #1 involved a counting query, the obvious choice of sensitivity (1) happened to be correct. DiffPrivLib does not signal an error in this situation, and this mistake does violate DP. Some expert participants were uneasy about their approach for setting sensitivity, but even these participants were not able to produce correct solutions.
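To make this failure mode concrete, the minimal sketch below (plain NumPy, deliberately not using any of the studied tools' APIs) shows why a sensitivity of 1 is correct for a counting query but silently under-protects a sum unless records are clipped to an enforced bound; the data and parameter values are illustrative assumptions.

```python
# Minimal sketch of why the "obvious" sensitivity of 1 is correct for a count
# but not for a sum; plain NumPy, not any of the studied tools' APIs.
import numpy as np

rng = np.random.default_rng(0)

def laplace_release(true_value, sensitivity, epsilon):
    """Release a value via the Laplace mechanism: noise scale = sensitivity / epsilon."""
    return true_value + rng.laplace(scale=sensitivity / epsilon)

ages = np.array([23, 35, 41, 29, 62, 55, 19, 33])

# Counting query: adding or removing one person changes the count by at most 1,
# so sensitivity = 1 is correct without any extra work.
dp_count = laplace_release(len(ages), sensitivity=1, epsilon=0.5)

# Sum query: without an enforced bound, one person can change the sum by an
# arbitrary amount, so sensitivity = 1 silently under-protects the data.
bad_dp_sum = laplace_release(ages.sum(), sensitivity=1, epsilon=0.5)  # violates DP

# Correct approach: clip each record to a chosen bound, then use that bound
# as the sensitivity of the clipped sum.
upper = 100
clipped = np.clip(ages, 0, upper)
dp_sum = laplace_release(clipped.sum(), sensitivity=upper, epsilon=0.5)
```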
One PipelineDP participant (expert E004) used strings (rather than integers) as grouping keys, resulting in histograms containing only 0s, and the participant did not notice the mistake. _"It's the right number of attributes. And it's the right metric, I think,"_ the participant said after completing the tasks but getting incorrect answers. _"The result is very noisy,"_ he added, noting that he could not see a way to scale the noise within appropriate bounds, then saying, _"I don't know if there's a way to check the final [privacy] budget."_ This situation suggests that PipelineDP may not give users useful,
Figure 4. Average task completion time: (a) by tool (b) by expertise.
informative feedback about the correctness of query results. While this mistake affected the correctness of the results, it did not violate DP.
Confusion about whether (and how) the tool handles the privacy budget was characteristic of some PipelineDP and DiffPrivLib users. About PipelineDP:
_I would expect maybe that [a] budget accountant object could tell me my budget so far. [I'm] looking for a way to figure out how much I spent so far._ (E009)
And about DiffPrivLib:
_[I'm] confused about how the privacy budget would be handled at the object level. When creating the mechanism objects, should I use the same object for every analysis...and the \(\epsilon\) will add up to the right number...can you compose all of those together? That wasn't totally clear to me when completing the task._ (N011)
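The kind of bookkeeping these participants were looking for can be illustrated with a short, hypothetical budget tracker based on basic sequential composition; this is plain Python and does not reflect the accountant API of any of the studied tools.

```python
# Hypothetical privacy-budget tracker using basic sequential composition;
# not the accountant API of DiffPrivLib, PipelineDP, Tumult Analytics, or OpenDP.
class BudgetTracker:
    def __init__(self, total_epsilon):
        self.total = total_epsilon
        self.spent = 0.0

    def charge(self, epsilon):
        """Record an epsilon expenditure, refusing to exceed the total budget."""
        if self.spent + epsilon > self.total:
            raise RuntimeError(
                f"budget exceeded: spent {self.spent}, requested {epsilon}, total {self.total}"
            )
        self.spent += epsilon

    def remaining(self):
        return self.total - self.spent

tracker = BudgetTracker(total_epsilon=1.2)
tracker.charge(0.4)          # task 1
tracker.charge(0.4)          # task 2
print(tracker.remaining())   # 0.4 left for task 3
```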
#### 5.2.2. Efficiency
To measure efficiency, we calculated the time taken to complete each task using the screen recordings obtained through Microsoft Teams. Figure 4(a) shows the time spent by participants on each task, for each of the four tools. The results show that OpenDP participants spent the most time on task #1 (nearly 30 minutes on average), while Tumult Analytics participants spent the least (fewer than 15 minutes on average), with DiffPrivLib (about 17 minutes) and PipelineDP (about 20 minutes) falling in between.
The results follow a similar trend for task #2, with all participants taking less time for task #2 than task #1. However, the results look very different for task #3. OpenDP participants spent almost no time on task #3, while participants using the other three tools spent similar amounts of time on task #3. The significant difference in the results for task #3 is due to the time limit imposed by the study design. OpenDP participants spent nearly all of the allotted time on tasks #1 and #2, and had very little time left to complete task #3. Similarly, participants using the other tools either finished task #3 quickly or ran out of time, resulting in similar times for the other three tools.
Figure 4(b) shows the time spent on each task, by participants' expertise level. For tasks #1 and #3, novices and experts took roughly the same amount of time; for task #2, however, experts took _longer_ than novices, primarily because they spent additional time considering the impact of parameter settings and the correctness of their approach. Novice users, on the other hand, typically accepted the tool's default settings without question, and did not spend time considering these issues.
#### 5.2.3. Error prevention
To measure error prevention, we analyzed participants' post-task survey responses on what factors helped or hindered them in task completion, as well as counting each time a participant got stuck (a "stuck") when completing a task and whether they were able to overcome it (an "unstuck"), through reviews of Jupyter notebooks and screen recordings.
Figures 5 and 6 show participant responses to post-task survey questions asking what factors helped or hindered in completion of the tasks. The first compares tools, while the second compares levels of expertise. Figure 5(a) shows that the tutorial was generally the most helpful resource we provided, with tool documentation in second place. Participants reported that data science skills were most helpful for DiffPrivLib and Tumult Analytics, somewhat helpful for PipelineDP, and not helpful for OpenDP. Figure 5(b) shows that lack of DP knowledge was generally the largest hindrance to completion--except for OpenDP, where documentation was the largest hindrance. Figure 6(a) shows that novices and experts both found the tutorial most helpful; Figure 6(b) shows that novices were primarily hindered by lack of DP knowledge, while experts were primarily hindered by documentation.
These results highlight the importance of documentation. All participants used documentation extensively, and experts reported this factor as the largest hindrance in completing the tasks (Figure 6(b)). They also highlight differences in APIs: participants found their data science skills very useful in the case of Tumult Analytics and DiffPrivLib, but not useful for OpenDP (Figure 5(a)), suggesting that mimicking existing APIs can help users apply their existing skills.
Figure 7 shows the different kinds of problems that caused participants to get stuck during completion of the tasks, and how (and whether) they managed to get un-stuck. We recorded a "stuck" event each time a participant encountered an error message or an unexpected result that required the participant to fix a problem in their code. (The list of "stuck" event definitions is in Table 2.) Participants assigned to all tools got stuck at some point, but we observe differences in how often participants managed to get unstuck across tools. Particularly, users of DiffPrivLib and Tumult Analytics nearly always managed to get un-stuck, while users of PipelineDP and OpenDP became "terminally" stuck and ran out of time to complete the task in many cases.
In general, novices and experts became stuck--and got un-stuck--at similar rates. However, novices using OpenDP became terminally stuck in nearly 75% of cases, and expert users of OpenDP became terminally stuck in more than 50% of cases. Terminally stuck participants were unable to get unstuck within the timeframe of the usability session.
In post-task interviews, participants cited challenges with error messages associated with OpenDP's Rust-based API:
_The error messages will typically be a stack trace from Rust, and I don't really know any Rust. So coming from a Python experience, [it] might be better to have error messages in Python that indicate the error in the line of Python._ (E002)
The most common reasons for getting stuck were associated with the tool itself (marked "Tool" in Figure 7) and with its documentation (marked "Documentation" in Figure 7). The "Tool" cases included situations where the participant did not understand how to use the tool's API or got an error message that they did not immediately understand. The "Documentation" cases included situations where the participant was not immediately able to find desired information in the tool's documentation. Participants struggled to find relevant documentation in many cases, and were particularly frustrated by a lack of a search function and few examples in the documentation. A participant's experience with OpenDP encapsulated this sentiment, saying, _"The frustration was that there were no examples...maybe OpenDP is not popular today,... but even online I couldn't get examples of people running into the same problem." (N012)_
In addition, participants found the format of the documentation challenging in some cases. For example, OpenDP's documentation includes many functions on a single page, and lacks a search function--issues highlighted by participants in the post-task interview. A participant (E013) recounted their experience, noting, _"In the documentation, I...just got lost in it a bit...I think I wasted a lot of time trying to find...answers."_
Participants also weren't sure where they might find additional resources if they were actual users of the tool. None of the tools included forums or chat rooms that could provide additional support. Illustrating the search for external support, one participant shared, _"Probably...I would go to GitHub [and] open an issue...probably...OpenDP has a discussion forum there." (E007)_
In the quest for deeper understanding, some participants expressed a desire for more educational materials, such as textbooks or academic papers, to enhance the information provided in tutorials and documentation. As one participant suggested, _"Maybe adding specific examples or more articulated definitions there, or links to definitions in other papers would have helped."(E001)_
Participants--including experts--were often not sure which mechanism would be the best choice, and wished that the documentation made recommendations. For example, multiple participants assigned to DiffPrivLib raised this issue in the post-task interview:
* _"Maybe...guidance around which mechanisms to use. (E011)"_
* _"I do think that sometimes when you present people with a suite of 16 options, it's important to detail what the differences are and when one option might be more effective than another."_ (E005)
Participants also struggled to find how to set parameters for some mechanisms, especially for PipelineDP, which includes several parameters not shared by other tools. One participant commented on the documentation about the upper bound for data values in PipelineDP; _"I'm not super super sure about this maximum value because I'm not sure if I interpret it correctly [in] the documentation."_ (E001)
### RQ3: How satisfied are data practitioners with DP tools in differentially private data analysis?
We report both quantitative and qualitative results to articulate participants' user satisfaction towards the DP tools.
Figure 5. Factors helping and hindering task completion, by tool.
Figure 6. Factors helping and hindering task completion, by expertise.
#### 5.3.1. Quantitative Results
To measure user satisfaction, we used the Net Promoter Score (NPS) and System Usability Scale (SUS) metrics. The results appear in Figure 8. DiffPrivLib had the highest satisfaction scores, while OpenDP had the lowest scores.
Figure 8. User satisfaction scores: (a) Net Promoter Score (NPS), and (b) System Usability Score (SUS).
Figure 7. Issues that caused participants to become stuck, and how they got unstuck.
Figure 8(a) shows the Net Promoter Score (NPS) results for the four tools. DiffPrivLib had the highest NPS (33.33), followed by Tumult Analytics (-16.67), PipelineDP (-33.33), and OpenDP (-66.67). Figure 8(b) shows the System Usability Scale (SUS) results, which are consistent with the NPS results. DiffPrivLib had the highest SUS score (63.89), followed by Tumult Analytics (57.64), PipelineDP (54.51), and OpenDP (38.19).
These results align with the success rates associated with each tool (Figure 3(a))--DiffPrivLib had the highest satisfaction scores, and also had the highest completion rate; Tumult Analytics was second in both categories; PipelineDP was third, and OpenDP was fourth. The alignment of success rate with tool satisfaction suggests that participants were most satisfied with tools that made it easiest for them to complete the tasks.
This alignment also suggests that user satisfaction may be a good proxy for understanding the effectiveness of DP tools. As a result, measuring user satisfaction (e.g. by surveys of existing users of the tool) may be helpful in understanding the tool's effectiveness, and user feedback that improves satisfaction may also be useful in improving effectiveness.
#### 5.3.2. Qualitative Results
The qualitative feedback from post-task interview provides a fuller picture of participants' satisfaction with the DP tools. The participant responses not only validate the quantitative findings but also shed light on the nuances of each tool's user experience. Participants voiced clear opinions on the usability and features of the DP tools evaluated in this study.
DiffPrivLib received a predominantly positive response from participants. Its intuitive API and documentation resonated well with users, suggesting that simplicity is a pivotal element for user satisfaction. A participant reinforced this view by noting: "_I liked the API of the tool. I thought the documentation was pretty clear... I like the API and I like the documentation." (E005)._ Additionally, there was a noteworthy appreciation for DiffPrivLib's seamless compatibility with familiar libraries, such as Pandas. This integration was highlighted by another participant who mentioned, "_I really liked that it integrated nicely into a library that I already have worked with, Pandas... acting as a layer on top of what I would already do."(E011)._ Furthermore, there was a visible progression in users' ease with DiffPrivLib over time, as the same participant further stated, "_Now...I'm on task three, I feel like I have a hang of the pattern...this isn't adding that much more time to my typical process."(E011)_
Tumult Analytics, on the other hand, garnered praise for its resemblance to well-known libraries like Pandas and Spark. Echoing this sentiment, a participant mentioned, "_Similarity with Pandas was definitely A+. That's probably the best thing they've done there. Just very easy to understand."(E010)._ However, the tool's documentation format wasn't without its critics. One participant candidly expressed frustration with Tumult's documentation, noting, "_I would say going through the tumult analytics documentation was kinda frustrating and, it was just a single-page documentation and I had to like scroll all the way down to find the exact syntax."(E003)._ Such feedback emphasizes the necessity for comprehensive and user-friendly documentation formats.
Feedback for PipelineDP also underscored a clear necessity for improved and comprehensive documentation. The lack of detail was a common grievance, as observed by participant (E004), who lamented, "_The documentation was quite incomplete...sometimes it just had one sentence about terms like Max contribution or Max value, and it wasn't really clear to me what that meant."(E004)._ Other participants emphasized the lack of practical usability features, such as search functionality, with one expressing, "_What [does] the documentation say about the budget? I don't have a way to search this page."(E001)._ Yet another participant pinpointed the need for more intuitive error messages, stating, "_I think the error message wasn't super clear and it would be tough to debug."(E004)_, emphasizing the need for clarity in both the documentation and the messages from the API itself. A recurrent suggestion from participants was the inclusion of practical
examples to facilitate understanding, with one participant suggesting, _'Functions should contain some examples...[like] what each parameter is... For somebody who is completely new...it is...difficult to understand.'_ (N009)
OpenDP was not without its challenges, with participants often highlighting issues with its error messages and the density of its documentation. Highlighting this issue, one participant remarked, _"The error messages I'm getting here come from Rust and I don't know what it means."_ (E007). Further, the dense nature of OpenDP's documentation was brought to the fore by another participant who pointed out, _"The documentation wasn't useful...[I] felt like it was a little confusing...like a little cluttered...there's a lot of information."_ (N008). Once more, the critical importance of examples was emphasized by participant (N012), who commented, _"Definitions of functions in the OpenDP documentation were helpful, but it would have been a lot more helpful if there were examples."_
These participants' insights are evidence that while API design is paramount, documentation quality cannot be overlooked.
## 6. Discussion and Recommendations
We outline the study limitations, discuss usability insights accounting for the differences among these tools, and provide actionable recommendations to improve DP tools' usability.
### Limitations
We acknowledge several limitations of this study. First, we only evaluated four DP tools because we prioritized the comparability of usability tasks across tools. Our results cannot represent all available DP tools, but our recommendations for usability improvement still benefit other non-Python, non-open-source DP tools. Similarly, our findings may not generalize to all data practitioners due to the small US sample size, but our sample is similar to other usability studies evaluating developer tools (Sutton et al., 2016; Wang et al., 2017) and should generate valid insights.
Second, the study design introduced confounding factors to RQ1 and RQ2 results because our DP handout and tutorials (see Section 3.2) provide participants additional help to complete the tasks. However, this was a study design decision to ensure eligible participants with different DP expertise can complete some of the tasks within allocated study time. To mitigate this, we tailored the handout and tutorials to not give direct answers to participants.
Moreover, we only evaluated the usability of three first-step problems applying DP in data analysis. The results may not reflect the overall usability for the full capability of these DP tools. However, usability issues encountered in first-step problems often prevent users from adopting the tools, so the usability recommendations derived from the findings are still valuable to increase DP tools' adoption.
### Improve API Accessibility
**Leverage users' familiarity with mainstream APIs.** The integration of DiffPrivLib with ubiquitous libraries, notably Pandas, garnered commendation. This cohesive integration provided a scaffold for new learning and obviated the need for relearning. Users could seamlessly transpose their extant knowledge to the DP context, augmenting overall satisfaction. Tumult Analytics was also appreciated for the way its API mimicked that of Spark. In contrast, PipelineDP provides an API centered on performing multiple aggregations at once, and OpenDP provides an API that focuses on transformations and composition. Neither one is substantially similar to existing data science tools. Participants assigned to PipelineDP commented that the API seemed inflexible and not well-suited to more advanced tasks.
Our results in Sections 5.2.1 and 5.2.2 suggest that leveraging users' familiarity with mainstream APIs improves DP tools' effectiveness.
**Provide clear APIs for setting DP-related metadata.** The tools we studied each have a different way of obtaining DP-related metadata from the user (e.g. total privacy budget, \(\epsilon\) per query, upper bound on data values, etc). Since these metadata elements are not typically present in existing data science tools, it is especially important to design and document the relevant APIs carefully.
DiffPrivLib addresses this challenge by including default values for many metadata elements. Most participants used these defaults without changing them--and in many cases, without understanding they were being used. The choice to use default values simplifies the API, but may result in users accidentally accepting inappropriate default values. DiffPrivLib often issues warnings when default values could result in privacy failures. This helped participants to complete the tasks correctly, and suggests that default values can be effective if appropriately selected and implemented.
Tumult Analytics generally requires users to specify DP-related metadata, but participants found the tool's API for this to be relatively easy to use and well-documented. Participants especially appreciated that Tumult Analytics provides clear opportunities to set total and per-query privacy budgets.
PipelineDP requires users to set DP-related metadata, but participants found its API for doing so to be confusing. Participants struggled with options like max_value, partition_extractor, and privacy_id_extractor, and they often did not find the documentation helpful in understanding the meaning of these options. OpenDP also requires users to set DP-related metadata, but participants in our study found other parts of the API more challenging than the metadata portion.
Our results (Section 5.2.3) suggest that DP tools should make decisions about DP-related metadata (including the privacy budget) clear to the user, provide useful default values when possible, design the API to expose these settings in terms that the user will understand, and provide clear documentation about the meaning of each setting. Our results suggest that DiffPrivLib and Tumult Analytics have accomplished these goals in different ways.
**Ensure clarity in privacy budgeting & budget tracking.** Both novices and experts in our study were particularly concerned with setting and tracking the privacy budget (Section 5.3.2). Tumult Analytics made this process easy and clear, by asking users to set the total and per-query budget with required API calls. For the other libraries, this process was not as clear; some participants assigned to PipelineDP and DiffPrivLib were not sure, for example, whether the library keeps track of the privacy budget at all. This confusion did not necessarily result in failure to complete the tasks correctly, but it would represent a serious concern for real-world use of the tools.
Our results suggest that DP tools should be very clear about how to set the privacy budget and how (or whether) the tool accounts for the total budget. Among the tools we studied, Tumult Analytics provides the best example of clarity about the privacy budget.
### Improve Error Prevention & Provide Effective Error Messages
**Raise errors when DP might be violated.** PipelineDP, Tumult Analytics, and OpenDP were designed specifically to prevent violation of DP--they require users to wrap sensitive data using special objects, and then throw an error if the user attempts to perform actions that would violate DP. DiffPrivLib, on the other hand, relies on the user to avoid violating DP; for example, DiffPrivLib relies on the user to set the sensitivity for every mechanism, and does not check that the specified sensitivity has been correctly enforced for the input data. As shown in Section 5.2.1, all of the participants assigned to DiffPrivLib completed all three tasks, but _every single participant_ violated DP in their
solutions for tasks #2 and #3. This strongly suggests that DP tools should focus on error prevention, and should ensure that potential violations of DP result in clear error messages.
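As an illustration of this recommendation, the hypothetical wrapper below (plain NumPy, not any studied tool's API) refuses to release a sum unless the caller supplies bounds that the mechanism itself enforces by clipping; the function name and parameters are ours, chosen only for this sketch.

```python
# Hypothetical error-preventing release function: a DP violation is turned
# into an explicit error instead of a silently incorrect release.
import numpy as np

rng = np.random.default_rng(0)

def dp_sum(data, epsilon, bounds=None):
    if bounds is None:
        raise ValueError(
            "bounds must be given so that sensitivity can be enforced; "
            "releasing an unclipped sum would violate DP"
        )
    lo, hi = bounds
    clipped = np.clip(np.asarray(data, dtype=float), lo, hi)
    sensitivity = max(abs(lo), abs(hi))
    return clipped.sum() + rng.laplace(scale=sensitivity / epsilon)

dp_sum([23, 35, 41], epsilon=0.5, bounds=(0, 100))  # ok
# dp_sum([23, 35, 41], epsilon=0.5)                 # raises ValueError
```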
Our results also suggest a tension between preventing DP violations and maintaining usability. OpenDP's strict API was effective at preventing DP violations, but OpenDP had lower completion rates and satisfaction scores (Figure 3a, Figure 8). DiffPrivLib's flexible API resulted in many DP violations, but DiffPrivLib had high completion rates and satisfaction scores. Tumult Analytics seems to strike the best balance: its API was effective at preventing DP violations, and its users had high completion rates and satisfaction scores. This success is likely due to careful design of the API and its error messages. The user-facing portion of the Tumult Analytics query-building API is simple (unlike OpenDP), and it automatically handles aspects like datatype compatibility and adjusting the scale of noise to the specified privacy budget.
**Provide clear error messages with connections to documentation.** When errors occurred during tasks, many participants had difficulty diagnosing and recovering using the information provided. In particular, participants assigned to PipelineDP and OpenDP described confusion over the meaning of error messages, and trouble finding documentation to understand and fix the problem (see Section 5.2.3). DiffPrivLib and Tumult Analytics generally provided understandable and useful error messages. We recommend DP tools to provide informative error messages about how to fix the error, including pointers to documentation about each type of error--especially when the error is specifically DP-related.
**Avoid error messages that reference implementation details.** OpenDP highlighted a significant challenge when error messages generated in Rust are presented to users who primarily have a Python background. These messages can be particularly perplexing for users who are not familiar with Rust. This indicates the importance of ensuring that tools give feedback in the language most familiar to their intended audience. Tumult Analytics, built on Spark, also mixes languages. However, Tumult Analytics exposes a Python API that seems to hide this mixing. Participants generally had less trouble with its API error messages than they did with OpenDP.
### Provide Clear, Searchable Documentation with Examples
**Include examples in all parts of the documentation.** Many participants requested more sample use cases and code within the documentation and tutorials. Participants were sometimes able to find the documentation for the function they wanted to use, but had trouble understanding the descriptions of each parameter and were not able to find related examples making use of the documented function (see Section 5.3.2). Documentation for Python libraries like NumPy and Pandas commonly provide short examples for each documented function in the API. None of the tools we studied provide similar examples. Adding them would improve the documentation significantly.
**Help users find relevant tutorials.** All of the tools we studied do provide tutorials and code examples as part of their documentation, typically indexed by use case (e.g., "how to perform counting queries with the Laplace mechanism"). Participants often struggled to find the right tutorial to help them, because it was difficult to match the precise task they wanted to accomplish with the scenario in the tutorial. The ability to search tutorials for the API features they use, and additional links from documented functions in the API to tutorials that use them would both help users to locate helpful tutorial code.
**Provide advice on _what_ to do, not just how to do it.** In many cases, participants had trouble deciding what API function to use--for example, given a choice between the Laplace, Gaussian, or Geometric mechanisms, which one is best? (see Section 5.2.3 and Figure 7) Tool documentation typically did not address these questions, since the documentation
focused on how to use the mechanism, rather than on which mechanism to choose. Participants commented that they would appreciate more advice in the tool's documentation on how to select the right mechanism or function to use.
**Avoid long, single-page documentation.** Participants struggled with the single-page formatting used for documentation by all of the tools we studied (see Section 5.3.2). This formatting style includes documentation for every API function within a module or class in a single web page. The page is long and difficult to navigate when the module is large. NumPy and Pandas, by contrast, use one page per documented function. We recommend that DP tools adopt this approach.
**Make everything searchable.** Participants found the ability to search within documentation to be very helpful, and struggled with the lack of a search function in PipelineDP's documentation. Participants assigned to PipelineDP tried using Google to search the documentation, but were often not successful. We strongly recommend that documentation--including tutorials and other examples--be searchable.
**Provide additional resources.** Participants commented that resources they commonly use to solve data science tasks--like Stack Overflow--were not applicable to the DP tools we studied (see Section 5.3.2). This is a natural consequence of tools' novelty, and one that is likely to improve naturally over time. However, tool designers should be careful to provide additional resources that can provide an alternative, such as chat rooms or forums.
### Help Users Understand DP Concepts & Parameters
Participant comments and responses in the post-task interview revealed a need for additional resources to help users understand DP concepts. In many cases, participants were not confident about the parameter settings they used or were not sure how robust the resulting privacy protections would be. These observations reinforce previous work demonstrating that DP concepts are complex and difficult to communicate (Kolmogorov, 2002; Kolmogorov, 2003; Kolmogorov, 2004). We describe some specific challenges faced by the participants of our study below. Addressing these challenges remains an open question.
**Help users understand how to set privacy parameters.** Many participants had difficulty comprehending why certain parameters needed to be set in the DP tools, even after reading provided explanations (Section 5.3.2). Participants wondered why some parameters were necessary, and were not sure where to find advice for setting them. Tool documentation should provide clear descriptions of the concepts behind each parameter and links to additional resources that explain the implications of the parameter and give advice for how to set it.
**Help users understand the strength of the privacy guarantee.** Both experts and novices had trouble understanding and describing the strength of the privacy guarantee in our study. In some cases, expert participants gave opposite answers to the same question. In addition, several participants were unsure how the DP outputs could be shared or published (e.g., whether it would be appropriate to include them in academic papers).
As shown in our results in Section 5.2.3, some participants requested additional educational materials like textbooks or research papers to supplement the tutorials, especially those with less technical backgrounds. Providing links to external beginner-friendly DP learning resources could support users new to core concepts, and could help users to understand the strength of the privacy guarantee.
## 7. Conclusion
We presented the first comprehensive usability study that evaluates four Python-based DP tools with data practitioners. Our results include various measures of the tools' learnability, efficiency, error prevention, and user satisfaction; we found significant differences between the tools in all four aspects. Participants were highly satisfied with DiffPrivLib's
simple, flexible API, and completed the tasks quickly, but made mistakes that violated DP. On the other hand, participants were less satisfied with complex, novel APIs like OpenDP's, and struggled to complete the tasks, but the tool prevented DP violations. Tumult Analytics balanced error prevention, efficiency, and user satisfaction well. We recommend that tools provide APIs that mirror existing data science tools where possible, make privacy budget choices explicit, and raise errors when DP might be violated. We also recommend that tools include clear documentation with extensive examples, and provide resources for users to learn more about DP. We aim for our findings and recommendations to facilitate the broader adoption of DP.
## Acknowledgments
This work was supported in part by an Amazon Research Award.
|
2309.05364 | Electronic heat tunneling between two metals beyond the WKB
approximation | Two metals at different temperatures separated by large gaps exchange heat
under the form of electromagnetic radiation. When the separation distance is
reduced and they approach contact (nanometer and sub-nanometer gaps), electrons
and phonons can tunnel between the bodies, competing and eventually going
beyond the flux mediated by thermal photons. In this transition regime the
accurate modeling of electronic current and heat flux is of major importance.
Here we show that, in order to quantitatively model this transfer, a careful
description of the tunneling barrier between two metals is needed and going
beyond the traditional WKB approximation is also essential. We employ
analytical and numerical approaches to model the electronic potential between
two semi-infinite jellium planar substrates separated by a vacuum gap in order
to calculate the electronic heat flow and compare it with its radiative
counterpart described by near-field radiative heat transfer. We demonstrate
that the results for heat flux and electronic current density are extremely
sensitive to both the shape and height of the barrier, as well as the
calculation scheme for the tunneling probability, with variations up to several
orders of magnitude. Using the proximity force approximation, we also provide
estimates for tip-plane geometries. The present work provides realistic models
to describe the electronic heat flux, in the scanning-thermal-microscopy
experiments. | Mauricio Gómez Viloria, Philippe Ben-Abdallah, Riccardo Messina | 2023-09-11T10:23:37Z | http://arxiv.org/abs/2309.05364v2 | # Electronic heat tunneling between two metals beyond the WKB approximation
###### Abstract
Two metals at different temperatures separated by large gaps exchange heat under the form of electromagnetic radiation. When the separation distance is reduced and they approach contact (nanometer and sub-nanometer gaps), electrons and phonons can tunnel between the bodies, competing and eventually going beyond the flux mediated by thermal photons. In this transition regime the accurate modeling of electronic current and heat flux is of major importance. Here we show that, in order to quantitatively model this transfer, a careful description of the tunneling barrier between two metals is needed and going beyond the traditional WKB approximation is also essential. We employ analytical and numerical approaches to model the electronic potential between two semi-infinite jellium planar substrates separated by a vacuum gap in order to calculate the electronic heat flow and compare it with its radiative counterpart described by near-field radiative heat transfer. We demonstrate that the results for heat flux and electronic current density are extremely sensitive to both the shape and height of the barrier, as well as the calculation scheme for the tunneling probability, with variations up to several orders of magnitude. Using the proximity force approximation, we also provide estimates for tip-plane geometries. The present work provides realistic models to describe the electronic heat flux, in the scanning-thermal-microscopy experiments.
## I Introduction
Two bodies at different temperatures separated by a vacuum gap can exchange heat through a variety of channels. At large separation distances this energy exchange is purely radiative and governed by the Stefan-Boltzmann law, setting an upper limit for this energy flux, reached only in the theoretical scenario of two blackbodies. When the separation distance becomes smaller than the thermal wavelength (of the order of \(10\,\mu\)m at ambient temperature) we move into the regime of near-field radiative heat transfer (NFRHT) theory. In this domain, it is known that the radiative flux can exceed the Stefan-Boltzmann limit thanks to the contribution of evanescent (i.e. non-propagative) photons [1]. This strong flux amplification can reach several orders of magnitude for materials supporting resonant surface modes of the electromagnetic field in the infrared, such as phonon-polaritons for polar materials [2; 3; 4] or a continuum of hyperbolic modes [5].
The physics at play becomes even richer when going to smaller distances, in the so-called extreme-near-field regime, at separation distances in the nanometer range and below. This distance regime has been recently probed by two experiments [6; 7] reaching diverging conclusions, the former confirming theoretical predictions, the latter observing a strong flux amplification, to date unexplained. In the extreme-near-field regime, it has been shown that radiation can be influenced by nonlocal effects [8; 9; 10], which could lead to new interesting phenomena, such as the existence of a radiative contribution stemming from non-optical modes between polar materials [11; 12]. It has also been argued that at sub-nanometer scales two new heat carriers contribute to energy transfer [13; 14; 15; 16]. On the one hand, acoustic vibrations from a surface can have an influence on another surface due to molecular and electrostatic forces, leading to phonon tunneling [15; 16; 17; 18; 19; 20; 21]. On the other hand, when dealing with metals, electron tunneling is expected to significantly contribute and predicted to dominate close to contact.
Besides the development of experimental setups probing heat flux in the extreme near field (for which the agreement with theory is often qualitative due to vibration, deformation and contamination [22]), the study of energy exchange at such short distance scales is also of remarkable importance due to recent and ongoing developments in nanofabrication and miniaturization. As a matter of fact, nanodevices need efficient thermal management techniques in order to be reliable, since slight temperature differences can drive significant uncontrolled amounts of heat. Motivated by these challenges, the study of the electronic contribution to energy exchange is of major importance. Moreover, the study of energy and heat flux by thermal electrons in the tunneling regime is of interest for the development of thermal transistors and thermal amplifiers [23; 24]. "Thermal" here refers to electrons described by local-equilibrium Fermi-Dirac statistics with energies below the work function. Electrons in the tail of the distributions are exchanged by tunneling if the barrier is thin, carrying both charge and heat. Under the influence of an electric potential bias, this can lead to the Nottingham effect, where the electronic heat flux is large and non-reciprocal [16; 25], leading to the mutual heating of both electrodes. So far, most studies comparing this flux with its radiative counterpart have modeled the barrier with a single model, such as single-step potentials [13] or classical image forces [14; 15], which need to be regularized.
The study of the barrier height is of special importance for the study of surfaces in field emission and scanning tunneling microscopy (STM). For the latter, phenomenological or semiclassical formulas are derived to deduce the barrier height from the measured current [26; 27]. However these calculations need additional corrections depending on the electrodes,
as deformations of the tip and the surface can lead to apparent barrier heights and apparent gap distances, and thus apparent surfaces that differ from the expected results [28; 29; 30]. It also does not help that the current often varies exponentially with respect to various parameters, which limits the sensitivity to small feature changes [22]. Slight differences in chemical composition can lead to asymmetrical barriers which shift the conductance minima [31]. Attractive forces can also appear near the surface, producing a vibrating motion of the tip which in turn influences the measured barrier heights [28]. The sensitivity to the tip motion and deformability has led to the development of atomic force microscopy [32]. Another problem is contamination: even for clean surfaces and ultra-high vacuums, the work functions measured using these techniques can be lower than the expected value by some eV [33]. All of these issues imply that near-field scanning thermal microscopy (STnM) [9], which adapts the equipment of STM and AFM to measure heat currents, suffers from the same problems in the presence of electronic heat transfer. Nevertheless, probing these heat exchanges could provide a secondary test for the barriers and interactions at extreme and near fields.
In this work we focus on providing a numerical bound to the electronic tunneling heat current by analyzing the effects of the modeling of the barrier in various extreme cases. The tunneling probability of electrons is calculated from a rigorous calculation based on the transfer-matrix method applied within a density functional approach to ideal jellium bodies as well as an analytic nonlocal Poisson equation under the Thomas-Fermi approximation [34; 35]. We also analyze the case of a parametrized phenomenological barrier given by a generalized Gaussian function. These approaches allow us to explore the influence of the height but also the shape of the barrier between two metallic electrodes.
This paper is organized as follows: The definitions of current density and heat flux are discussed in Section II for the case of thermal electron tunneling and NFRHT. In Section III, we discuss the results of the classical potential under semi-classical approximations and illustrate the limitations of such an approach. Section IV is devoted to more realistic models for the electronic barrier potential between two metals and the calculation of the transmission probability. In Sec. V we discuss the electronic heat flux in two different configurations, namely two metallic half spaces (plane-plane configuration) and a tip-plane configuration using the proximity force approximation (PFA) as shown in Fig. 1. We finally conclude in Sec. VI.
## II Electronic current density and extreme-near-field heat flux
Let us consider the system depicted in Fig. 1(a), consisting of two metallic parallel planar substrates, separated by a vacuum gap of thickness \(d\) along the \(z\) direction. They are assumed to be large enough along the \(x\) and \(y\) directions so that they can be considered infinitely extended. The two substrates are kept at two different temperatures \(T_{1}\) and \(T_{2}\) by two external thermostats and they are characterized by the same Fermi level \(E_{\mathrm{F}}\). As explained above, even when separated by a vacuum gap, these bodies can exchange energy through the tunneling of different carriers. Here we are going to focus on electron tunneling, and take photon tunneling, i.e. radiative heat transfer, as a quantitative reference for comparison.
Even in the case of large work functions, electrons may escape the surface of a metal as a result of a temperature difference (thermionic emission) or in the presence of an externally applied electric field (field emission). The latter scenario is possible due to quantum tunneling, allowing for a non-zero transmission probability for electrons with classically insufficient energy to overcome the potential barrier in the region between the two substrates. More specifically, in the so-called extreme-near-field regime, i.e. when the barrier width is in the nanometer range and below, it is possible to induce a significant electron tunneling already for a small temperature difference and in the absence of a bias field. The tunneling current density in this configuration can be expressed as [27]
\[J=-\frac{em}{2\pi^{2}\hbar^{3}}\int_{0}^{\infty}\mathrm{d}E_{z} \int_{0}^{\infty}\mathrm{d}E_{\perp} \tag{1}\] \[\times\Delta n_{\mathrm{FD}}(E,T_{1},T_{2},E_{\mathrm{F}}) \mathcal{T}^{(\mathrm{el})}(E_{z}),\]
where \(-e\) is the electron electric charge, \(m\) its mass, \(E=E_{\perp}+E_{z}\) its total kinetic energy decomposed in contributions stemming from velocities perpendicular and parallel to the surface, and
\[\Delta n_{\mathrm{FD}}(E,T_{1},T_{2},E_{\mathrm{F}})=n_{\mathrm{FD}}(E,T_{1}, E_{\mathrm{F}})-n_{\mathrm{FD}}(E,T_{2},E_{\mathrm{F}}), \tag{2}\]
Figure 1: Extreme near field heat fluxes between (a) two semi-infinite slabs of temperatures \(T_{1}\) and \(T_{2}\) made of the same metal with Fermi energy \(E_{\mathrm{F}}\) separated by a vacuum gap of length \(d\) and (b) between tip and sample with the same parameters. Near-field electromagnetic radiation (rad) and thermal electrons (el) can channel heat from body 1 to body 2. In (b), the tip is considered spherical with radius \(R\), divided into infinitesimal disks for proximity force calculations.
\(n_{\rm FD}(E,T_{i},E_{\rm F})=1/[\exp([E-E_{\rm F}]/k_{\rm B}T_{i})+1]\) being the Fermi-Dirac distribution that depends on both temperature \(T_{i}\) and Fermi energy \(E_{\rm F}\) associated with each medium. The key physical quantity appearing in Eq. (1) is the electronic transmission probability \(\mathcal{T}^{\rm(el)}(E_{z})\) for the electron to cross the gap, which due to the symmetry of the problem depends only on its kinetic energy \(E_{z}\) perpendicular to the surface. The transmission probability has to be calculated by determining the transmission amplitude of a given electron crossing the gap in the presence of an electronic barrier \(U(z)\) produced by image forces. The methods to calculate \(\mathcal{T}^{\rm(el)}(E_{z})\) are described in Sec. IV.
The net transfer of electrons between the two substrates is also at the origin of an energy flux (heat flow) \(\Phi^{\rm(el)}\) which, as discussed in detail in [16], can in some configurations compete with and go beyond the photonic (radiative) heat flux \(\Phi^{\rm(rad)}\). The total heat flux between the substrates is thus given by
\[\Phi=\Phi^{\rm(el)}+\Phi^{\rm(rad)}. \tag{3}\]
We remark that in the extreme near-field regime, namely for gaps smaller than \(1\,\mathrm{nm}\), one can also consider the possibility of phonon tunneling due to Van der Waals and electrostatic forces [17; 18; 19; 20], but this mechanism turns out to be a smaller contribution than the electronic one in the absence of bias [16] and will be neglected here. For the electronic contribution, the heat flux takes the form [16; 25]
\[\Phi^{\rm(el)}(T_{1},T_{2},d)=\frac{m}{2\pi^{2}\hbar^{3}}\int_{0} ^{\infty}{\rm d}E_{z}\int_{0}^{\infty}{\rm d}E_{\perp} \tag{4}\] \[\times(E-E_{\rm F})\Delta n_{\rm FD}(E,T_{1},T_{2},E_{\rm F}) \mathcal{T}^{\rm(el)}(E_{z}),\]
where \((E-E_{\rm F})\) represents the energy contribution associated with each electron. Note that in the absence of bias voltage we make no distinction between the heat flow in the two directions (to and from cold and hot bodies), as in this case heat flows reciprocally in the usual thermodynamic way. This reciprocity does not always hold due to the Nottingham effect [16; 25], which can lead for example to heating of both bodies in the presence of an applied bias voltage.
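As a rough illustration, Eqs. (1) and (4) can be evaluated numerically once a transmission probability \(\mathcal{T}^{\rm(el)}(E_{z})\) is available. The sketch below uses trapezoidal quadrature; the Fermi energy, temperatures and the crude square-barrier transmission used at the end are assumed, gold-like illustrative values, not quantities taken from this work.

```python
# Numerical sketch of Eqs. (1) and (4): tunneling current density J and
# electronic heat flux Phi_el for a given transmission probability T_el(E_z).
import numpy as np
from scipy import constants as sc

kB, hbar, m_e, e = sc.k, sc.hbar, sc.m_e, sc.e

def fermi_dirac(E, T, EF):
    return 1.0 / (np.exp((E - EF) / (kB * T)) + 1.0)

def electron_fluxes(T_el, T1, T2, EF, n=1500):
    """Trapezoidal evaluation of Eqs. (1) and (4); energies in joules."""
    Emax = EF + 30.0 * kB * max(T1, T2)   # Fermi-Dirac difference negligible beyond this
    Ez = np.linspace(0.0, Emax, n)
    Ep = np.linspace(0.0, Emax, n)
    EZ, EP = np.meshgrid(Ez, Ep, indexing="ij")
    E = EZ + EP
    dn = fermi_dirac(E, T1, EF) - fermi_dirac(E, T2, EF)
    Tz = T_el(Ez)[:, None]                # transmission depends on E_z only
    pref = m_e / (2.0 * np.pi**2 * hbar**3)
    J = -e * pref * np.trapz(np.trapz(dn * Tz, Ep, axis=1), Ez)
    Phi = pref * np.trapz(np.trapz((E - EF) * dn * Tz, Ep, axis=1), Ez)
    return J, Phi                         # A/m^2, W/m^2

# Crude check with an assumed Fermi energy, a square barrier of height 1.5 E_F
# and a 1 nm gap, for which the WKB transmission of Eq. (12) is analytic.
EF, d = 5.5 * sc.eV, 1e-9
U0 = 1.5 * EF
def T_square(Ez):
    kappa = np.sqrt(2.0 * m_e * np.clip(U0 - Ez, 0.0, None)) / hbar
    return np.exp(-2.0 * kappa * d)

J, Phi_el = electron_fluxes(T_square, T1=400.0, T2=300.0, EF=EF)
```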
As stated above, radiative heat flux will be taken as a reference for comparison to electronic flux, since in the distance range considered here we can expect to find the transition separation distance below which electronic heat flux overcomes electromagnetic radiation [16]. The near-field radiative heat flux between the two bodies can be expressed as [1; 36; 37]
\[\Phi^{\rm(rad)}(T_{1},T_{2},d)= \tag{5}\] \[\int_{0}^{\infty}\!\frac{{\rm d}\omega}{2\pi}\hbar\omega\Delta n _{\rm BE}(\omega,T_{1},T_{2})\int_{0}^{\infty}\!\frac{{\rm d}k}{2\pi}\,k\!\sum_ {\alpha={\rm s},{\rm p}}\mathcal{T}_{\alpha}^{\rm(rad)}(k,\omega,d),\]
where
\[\Delta n_{\rm BE}(\omega,T_{1},T_{2})=n_{\rm BE}(\omega,T_{1})-n_{\rm BE}( \omega,T_{2}), \tag{6}\]
\(n_{\rm BE}(\omega,T_{i})=1/[\exp(\hbar\omega/k_{\rm B}T_{i})-1]\) being the Bose-Einstein distribution, \(k\) the parallel component of the wavevector, \(\omega\) the angular frequency of each mode, and
\[\mathcal{T}_{\alpha}^{\rm(rad)}(k,\omega,d) \tag{7}\] \[=\begin{cases}\frac{(1-|r_{\alpha}|^{2})^{2}}{|1-r_{\alpha}^{2} \exp(2{\rm i}k_{z}d)|^{2}},&k<\omega/c,\\ \frac{4\,({\rm Im}\,r_{\alpha})^{2}\exp(-2\,{\rm Im}\,k_{z}d)}{|1-r_{\alpha}^{ 2}\exp(-2\,{\rm Im}\,k_{z}d)|^{2}},&k\geq\omega/c,\end{cases}\]
the radiative transmission probability. The transmission probability is separated in terms of the two polarizations, given by the transverse electric (\(\alpha={\rm s}\)) and transverse magnetic (\(\alpha={\rm p}\)) contributions, where \(k_{z}=\sqrt{(\omega/c)^{2}-k^{2}}\). The integral in Eq. (5) is carried out over all values of \(k\), including the contribution of propagative (\(k<\omega/c\)) and evanescent (\(k>\omega/c\)) waves. The latter dominate for distances below the thermal wavelength, in the micrometer range at ambient temperature. The reflection coefficients in (5) are given by Fresnel's formulas,
\[r_{\rm s}(k,\omega)=\frac{k_{z}-k_{\rm m,z}}{k_{z}+k_{\rm m,z}},\quad r_{\rm p }(k,\omega)=\frac{\epsilon(\omega)k_{z}-k_{\rm m,z}}{\epsilon(\omega)k_{z}+k_ {\rm m,z}}, \tag{8}\]
where \(k_{\rm m,z}=\sqrt{(\omega/c)^{2}\epsilon(\omega)-k^{2}}\) is the \(z\) component of the wavevector inside the media. In this paper, we employ a local description of the dielectric susceptibility given by Drude's model, as
\[\epsilon(\omega)=\epsilon_{\infty}-\frac{\omega_{\rm pl}^{2}}{\omega(\omega+{ \rm i}\Gamma)}, \tag{9}\]
where \(\omega_{\rm pl}\) is the plasma frequency of the metal, \(\Gamma\) is the damping coefficient and \(\epsilon_{\infty}\) is the high frequency value [38]. Here we neglect the nonlocal radiative effects that appear in the extreme near-field regime [8; 9; 10], as this modification is negligible when compared to electronic tunneling [16].
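For reference, the radiative flux of Eqs. (5)-(9) can also be estimated by direct numerical integration. In the sketch below, the Drude parameters are assumed, gold-like values chosen only for illustration, and the quadrature grids are deliberately coarse.

```python
# Sketch of the near-field radiative flux of Eq. (5) for two identical
# Drude half-spaces; Drude parameters below are assumed, gold-like values.
import numpy as np
from scipy import constants as sc

hbar, kB, c0 = sc.hbar, sc.k, sc.c
eps_inf, w_pl, gamma = 1.0, 1.37e16, 5.3e13   # assumed Drude parameters (rad/s)

def drude(w):
    return eps_inf - w_pl**2 / (w * (w + 1j * gamma))

def n_BE(w, T):
    return 1.0 / np.expm1(hbar * w / (kB * T))

def transmission(w, k, d):
    """Sum over s and p polarizations of the transmission of Eq. (7); k may be an array."""
    eps = drude(w)
    kz = np.sqrt((w / c0)**2 - k**2 + 0j)
    kzm = np.sqrt(eps * (w / c0)**2 - k**2 + 0j)
    rs = (kz - kzm) / (kz + kzm)
    rp = (eps * kz - kzm) / (eps * kz + kzm)
    prop = k < w / c0
    total = np.zeros_like(k, dtype=float)
    for r in (rs, rp):
        t_prop = (1 - np.abs(r)**2)**2 / np.abs(1 - r**2 * np.exp(2j * kz * d))**2
        t_evan = 4 * r.imag**2 * np.exp(-2 * kz.imag * d) \
            / np.abs(1 - r**2 * np.exp(-2 * kz.imag * d))**2
        total += np.where(prop, t_prop, t_evan)
    return total

def phi_rad(T1, T2, d, nw=300, nk=400):
    """Radiative heat flux of Eq. (5) in W/m^2 (trapezoidal quadrature)."""
    ws = np.linspace(1e13, 1.5e15, nw)
    out = np.empty(nw)
    for i, w in enumerate(ws):
        k = np.logspace(np.log10(1e-2 * w / c0), np.log10(300.0 / d), nk)
        out[i] = hbar * w * (n_BE(w, T1) - n_BE(w, T2)) \
            * np.trapz(k * transmission(w, k, d), k) / (2 * np.pi)
    return np.trapz(out, ws) / (2 * np.pi)

# phi_rad(400.0, 300.0, d=1e-9)
```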
## III The standard modeling of electron tunneling
The evaluation of the electronic transmission probability in Eq. (1) and Eq. (4) depends on the calculation of the electronic barrier potential. For a charge between two metallic plates, this potential is classically calculated using the image method [27], given by the classical image potential,
\[U_{\rm cl}(z)=W_{0}+E_{\rm F}+\frac{e^{2}}{16\pi\epsilon_{0}d}\left[\Psi(z/d)+ \Psi(1-z/d)+2\gamma\right], \tag{10}\]
only defined between \(z=0\) and \(z=d\), where \(W_{0}\) is a vertical shift, \(\epsilon_{0}\) is the vacuum permittivity, \(\Psi(z)\) is the digamma function and \(\gamma\) is the Euler-Mascheroni constant. The last term to the right of Eq. (10) is known as the image potential or image force and has the effect of rounding the edges of a square barrier of height \(E_{\rm F}+W_{0}\). The height of the barrier is reduced by the image potential, so \(W_{0}\) is not the true work function. The image potential also reduces with the gap \(d\). However, this expression (10) can be unphysical. Due to its divergences at
the boundaries, the potential should be impenetrable [39] and semiclassical calculations of the transmission, which require smooth potentials, may not be valid. The presence of electronic interactions leads to a barrier that is actually smooth and penetrates into the metal. To avoid this issue, it is often suggested to redefine the image planes and translate the potential inside the metal by a few angstroms.
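As a quick numerical check, Eq. (10) can be evaluated directly with the digamma function; in the sketch below the Fermi energy and the shift \(W_{0}\) are assumed values used only for illustration.

```python
# Evaluation of the classical image barrier of Eq. (10);
# E_F and W_0 below are assumed, illustrative values.
import numpy as np
from scipy.special import digamma
from scipy.constants import e, epsilon_0, eV

d = 1e-9                       # gap width (m)
EF, W0 = 5.5 * eV, 4.5 * eV    # assumed Fermi energy and vertical shift

def U_cl(z):
    """Classical image barrier of Eq. (10), defined for 0 < z < d (joules)."""
    s = z / d
    image = e**2 / (16 * np.pi * epsilon_0 * d) \
        * (digamma(s) + digamma(1 - s) + 2 * np.euler_gamma)
    return W0 + EF + image

z = np.linspace(0.05 * d, 0.95 * d, 200)
barrier_eV = U_cl(z) / eV      # note the divergences as z approaches 0 or d
```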
In order to compare with different barriers, we are interested in studying carefully the influence of height and shape of the barrier in a more general scenario. Thereby, we introduce a parametrized barrier described by a symmetric generalized Gaussian (GG) distribution [40], defined as
\[U^{\rm(GG)}(\alpha,\beta,u_{0};z)=u_{0}E_{\rm F}\exp\left(-\left|\frac{\frac{z }{d}-\frac{1}{2}}{\alpha}\right|^{\beta}\right), \tag{11}\]
where the barrier is centered at \(d/2\) and has three main parameters: the normalized scale parameter \(\alpha\) quantifying the penetration of the potential inside the metal, the shape parameter \(\beta\) which controls its peakedness and the barrier height \(u_{0}\) (in units of the Fermi energy \(E_{\rm F}\)). The latter is connected to the work function \(W\) by the simple relation \(W=E_{\rm F}(u_{0}-1)\). This function also allows for long tail distributions, but in order to ensure that \(U^{\rm(GG)}(\alpha,\beta,u_{0};z)\to 0\) as \(z\to\pm\infty\), we add the restriction \(\beta\geq 1\). Equation (11) describes a standard Gaussian distribution for \(\beta=2\) and approaches the square barrier as \(\beta\to\infty\). The GG barrier allows to obtain results for general barrier shapes and heights. The generalized Gaussian potential has the advantage that it is also defined inside the metal.
In Fig. 2(a), we illustrate the image potential (black solid line) and two GG parametrizations corresponding to two different barriers with the same height and different shape that penetrate inside the metal (\(z/d<0\) and \(z/d>1\)).
For a given parametrization of \(U^{\rm(GG)}(\alpha,\beta,u_{0};z)\), we can now solve for the transmission of an electron with energy \(E_{z}\) inside the metal. However the transmission probability has only a few analytical solutions tied to specific electronic barrier shapes. In practical applications, the barrier height is often estimated qualitatively by using semiclassical approximations like that of the one-dimensional Wentzel-Kramers-Brillouin (WKB) method [41], where the transmission is given by
\[\mathcal{T}^{\rm(el)}_{\rm WKB}(E_{z})= \tag{12}\] \[\exp\left(-\frac{2\sqrt{2m}}{\hbar}\int_{z_{1}}^{z_{2}}{\rm d}z \,\sqrt{U(z,V_{\rm b})-E_{z}}\right),\]
where the integration is usually carried out between the zeros of the integrand, \(z_{1}\) and \(z_{2}\), in the region where the electronic barrier height \(U(z)\) is larger than the energy \(E_{z}\). However the WKB approximation is drastic and should be avoided for extremely small gaps. Even if it is often the preferred technique for the calculation of the transmission of one-dimensional barriers, this approximation is not valid in the presence of abrupt potentials and in principle should be avoided when using the classical image potential (10), which can be proved to be impenetrable [39].
In Fig. 2(b) we show the current density (1) for a gap of \(d=1\) nm calculated using the WKB approximation, for the potentials defined in Fig. 2(a), where the integration in Eq. (12) is carried out between \(z_{1}=0\) and \(z_{2}=d\). As expected the current density can decrease by various orders of magnitude as a function of the height of the barrier. However it is also interesting to observe the sensitivity to the shape of the potential. For the broadest GG barrier (\(\alpha=0.7\), red dashed line) the difference with the current density of the image potential can reach more than an order of magnitude depending on the height. For the thinner GG barrier (\(\alpha=0.45\), blue dash-dotted line), the barrier is more similar to the image potential, but its relative difference with current density of the image potential can vary non-monotonically. For a given GG barrier, increasing the shape factor \(\beta\) of the barrier from a peaked distribution to a square potential can induce a reduction of one order of magnitude or more of the current density even if the barrier height is kept at the same value [see inset of Fig. 2(b)].
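A computation of this kind can be sketched by combining the generalized Gaussian barrier of Eq. (11) with the WKB transmission of Eq. (12) integrated over \([0,d]\); the numerical values below are assumptions, and the resulting transmission can be vectorized and passed to the flux sketch given after Eq. (4).

```python
# Sketch of the GG barrier of Eq. (11) and its WKB transmission, Eq. (12);
# the Fermi energy, gap and barrier parameters are assumed, illustrative values.
import numpy as np
from scipy.constants import hbar, m_e, eV

EF, d = 5.5 * eV, 1e-9         # assumed Fermi energy and gap width

def U_gg(z, alpha, beta, u0):
    """Generalized Gaussian barrier of Eq. (11), centered at d/2 (joules)."""
    return u0 * EF * np.exp(-np.abs((z / d - 0.5) / alpha)**beta)

def T_wkb(Ez, U, n=2000):
    """WKB transmission of Eq. (12), integrated over [0, d] as for Fig. 2(b)."""
    z = np.linspace(0.0, d, n)
    integrand = np.sqrt(np.clip(U(z) - Ez, 0.0, None))
    return np.exp(-2.0 * np.sqrt(2.0 * m_e) / hbar * np.trapz(integrand, z))

barrier = lambda z: U_gg(z, alpha=0.45, beta=10, u0=1.5)
T_el = np.vectorize(lambda E: T_wkb(E, barrier))   # usable in the electron_fluxes sketch above
T_el(EF)                                           # transmission at the Fermi energy
```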
It is clear that just estimating the height of the barrier does not suffice to provide a quantitative calculation that would
Figure 2: (a) Shape of three electronic potential barriers of height of 1.5 \(E_{\rm F}\): image potential (black solid line), GG barrier with \((\alpha,\beta)=(0.7,3)\) (red dashed line), GG barrier with \((\alpha,\beta)=(0.45,10)\) (blue dash-dotted line). (b) Current density calculated within WKB approximation for the same potential barriers as (a) as a function of the barrier height for a gap of \(d=1\) nm. Inset: current density as a function of the shape factor \(\beta\) for a GG barrier of \(\alpha=0.45\) and height \(u_{0}=1.5\).
match experimental results. A given experimental data point of the tunneling current can be reproduced by any series of slightly similar potentials. It is possible to fit the data by adjusting the height \(W\) and image planes of the classical image potential (10) or by choosing a smooth potential by changing the height, shape and penetration depth. Also, by postulating variable parameters, one is able to fit any dependence of the current density with distance. This arbitrary choice makes it hard to understand what would be the next correction to the standard theory of electronic tunneling, as any divergence from experimental values can be identified as an unusual work function if one does not account for the changes in barrier shape or the lack of sensitivity due to the use of the WKB approximation. These problems motivate the inquiry to understand how much discrepancy there can be between less arbitrary theoretical models. When driven by a bias voltage, the classical image potential along with the WKB approximation can be enough to broadly estimate the current density for gaps of a few nm [27], but it would remain a very rough approximation for smaller gaps where the shape of the barrier changes and WKB is not sensitive at all to the penetration of the potential inside the barrier. The fact that in some experiments the work function of metals is mysteriously low [6, 22] might depend on these approximations.
## IV Beyond the standard approach
In this section, we describe two different models that go beyond the classical image potential (10), one based on a many-body calculation using density functional theory and another based on a nonlocal electrostatic solution of the Poisson equation. With the goal of obtaining quantitative results for the heat flux from these models, we will also drop the WKB approximation altogether and replace it with a more precise numerical calculation of the transmission coefficient based on \(S\)-matrices.
### Local density approximation for jellium
Due to the unphysical nature of the classical image potential (10), a realistic calculation of the electronic barrier for a metal requires a many-body treatment of electronic interactions. In this approach, we model the electron gas inside the metallic bodies as a jellium (interacting electron cloud over a positive ionic background) and the effective potential that the electronic cloud exerts on a single probe electron is recovered. The jellium model has the advantage that it only depends on a single parameter, the Wigner-Seitz radius, which makes it very practical for the study of metals. It also allows us to clearly keep defined edges of the metal gap using a sharp ionic background. According to the Hohenberg-Kohn theorem [42], the total many-body energy can be written uniquely in terms of the electronic density \(n\) as
\[\mathcal{E}[n]=K[n]+\int\mathrm{d}^{3}\mathbf{r}\;U_{\mathrm{ext}}(\mathbf{r})n(\mathbf{r})+\frac{e^{2}}{8\pi\epsilon_{0}}\int\mathrm{d}^{3}\mathbf{r}\int\mathrm{d}^{3}\mathbf{r}^{\prime}\;\frac{n(\mathbf{r})n(\mathbf{r}^{\prime})}{|\mathbf{r}-\mathbf{r}^{\prime}|}+\mathcal{E}_{\mathrm{xc}}[n], \tag{13}\]
where \(K[n]\) is the kinetic energy of a non-interacting electron gas, the second term represents the interaction with an external potential \(U_{\mathrm{ext}}(\mathbf{r})\), the third term is the electron-electron interaction and \(\mathcal{E}_{\mathrm{xc}}\) represents the exchange-correlation contribution [43]. For the problem at hand, we are interested in the effective potential
\[U(\mathbf{r})=U_{\mathrm{ext}}(\mathbf{r})+\frac{e^{2}}{4\pi\epsilon_{0}}\int \mathrm{d}^{3}\mathbf{r}^{\prime}\;\frac{n(\mathbf{r}^{\prime})}{|\mathbf{r}- \mathbf{r}^{\prime}|}+\frac{\partial\mathcal{E}_{\mathrm{xc}}[n]}{\partial n( \mathbf{r})}, \tag{15}\]
acting on a single electron and due to the surrounding electrons.
By choosing a form of \(\mathcal{E}_{\mathrm{xc}}[n]\), we can then solve the Kohn-Sham equations,
\[\left[-\frac{\hbar^{2}}{2m}\nabla^{2}+U(\mathbf{r})\right]\psi_{j}(\mathbf{r} )=E_{j}\psi_{j}(\mathbf{r}) \tag{16}\]
for \(j=1,2,\cdots,N\), for a system of \(N\) electrons in a given volume, to obtain back the electronic density \(n(\mathbf{r})=\sum_{j=1}^{N}|\psi_{j}(\mathbf{r})|^{2}\). By iterating over this self-consistent system of equations, Eqs. (13) and (16), one can obtain a realistic approximation of the electronic density and the effective potential in the gap and inside the metal.
The exchange-correlation term is not known exactly and must be treated under certain approximations. For this manuscript, we will restrict our calculations to the local-density approximation (LDA) [44], which assumes an exchange-correlation functional of the form \(\mathcal{E}_{\mathrm{xc}}^{(\mathrm{LDA})}=\int\mathrm{d}^{3}\mathbf{r}\;n( \mathbf{r})\varepsilon_{\mathrm{xc}}[n(\mathbf{r})]\) where \(\varepsilon_{\mathrm{xc}}=\varepsilon_{\mathrm{x}}+\varepsilon_{\mathrm{c}}\) is the exchange-correlation energy per electron for jellium. This term can be divided into two parts: a Fock exchange term \(\varepsilon_{\mathrm{x}}\) that can be written analytically for a homogeneous electron gas, and a correlation term \(\varepsilon_{\mathrm{c}}\) that is often obtained by quantum Monte-Carlo methods. In the present work, we implement the exchange-correlation potential using the Perdew and Yang approach [45]. For two semi-infinite jellium slabs with perfectly flat surfaces separated by a vacuum gap, this construction leads to an effective LDA barrier \(U^{(\mathrm{LDA})}(\mathbf{r})=U^{(\mathrm{LDA})}(z)\) that we calculate numerically using the GPAW toolkit [46, 47] (see Appendix A for technical details).
### Thomas-Fermi approximation for the nonlocal Poisson equation
The LDA approximation for jellium neglects the crystal structure of the material making it inadequate for the description of surface effects in metals. Moreover, it can provide low values for the work function of metals [33]. For that reason, we propose another model that would be closer to the semi-classical calculation, but where the work function is not an input of the model like in the classical image potential (10).
Inspired by previous results [16, 38], we reintroduce here an additional barrier based on the analytical solution to the nonlocal Poisson equation [16, 34, 35], given by
\[\left(\frac{\partial^{2}}{\partial z^{2}}-k^{2}\right)G(k;z,z^{\prime})-\int\mathrm{d}z^{\prime\prime}\,\Pi(k;z,z^{\prime\prime})G(k;z^{\prime\prime},z^{\prime})=\delta(z-z^{\prime}), \tag{17}\]
where \(\delta(z)\) is the Dirac delta distribution, \(G(k;z,z^{\prime})\) is the Green function and \(\Pi(k;z,z^{\prime})\) is the polarization operator. Using the specular reflection approximation, we can write the polarization operator as
\[\Pi(k;z,z^{\prime})=\begin{cases}\Pi_{1}(k;z-z^{\prime})+\Pi_{1}(k;z+z^{\prime}),&z,z^{\prime}\leq 0,\\ \Pi_{2}(k;z-z^{\prime})+\Pi_{2}(k;z+z^{\prime}),&z,z^{\prime}\geq d,\\ \Pi_{\mathrm{gap}}(k;z-z^{\prime})+\Pi_{\mathrm{gap}}(k;z+z^{\prime}),&0<z,z^{\prime}<d,\end{cases} \tag{18}\]
where
\[\Pi_{b}(k;z\mp z^{\prime})=\int_{-\infty}^{\infty}\frac{\mathrm{d}q_{z}}{2\pi} K^{2}[\epsilon_{b}(K)-1]\exp(\mathrm{i}q_{z}[z\mp z^{\prime}]), \tag{19}\]
\(K^{2}=k^{2}+q_{z}^{2}\), and \(\epsilon_{b}(K)\) is the dielectric function of each region \(b=1,2,\mathrm{gap}\). For simplicity, we use the long-wavelength Thomas-Fermi approximation (TFA) for the dielectric function inside the metal [48], given by
\[\epsilon_{\mathrm{TF}}(K)=\epsilon_{1,2}(K)=1+\frac{k_{\mathrm{TF}}^{2}}{K^{2 }}, \tag{20}\]
where \(k_{\mathrm{TF}}=\sqrt{e^{2}m_{\mathrm{e}}k_{\mathrm{F}}/\pi^{2}\hbar^{2} \epsilon_{0}}\) is the inverse of the Thomas-Fermi screening length [48, 38] and it is the only input parameter for the calculation. By solving for \(G(k;z,z^{\prime})\) in Eq. (17) [16, 35] we can recover the electronic potential for a single electron by calculating
\[U^{\mathrm{(TFA)}}(z)=\frac{e^{2}}{4\pi\epsilon_{0}}\left\{\frac{k_{\mathrm{TF }}}{2}-\int_{0}^{\infty}\mathrm{d}k\,k\left[G(k;z)+\frac{1}{2k}\right]\right\}. \tag{21}\]
where the constant \(e^{2}k_{\mathrm{TF}}/8\pi\epsilon_{0}\) is introduced to set the bottom of the band equal to 0. The Thomas-Fermi approximation is considered the first valid approximation beyond the classical image potential for reproducing screening effects, but it does not reproduce other quantum phenomena like the Friedel oscillations of the electronic density. The TFA barrier reproduces the classical potential (10) for ideal metals [\(\epsilon_{1,2}(K)\rightarrow\infty\)].
### Comparison
To go beyond the classical image potential (10), we have introduced two models that relax the classical assumptions: the TFA potential of Sec. IV.2, which introduces screening effects, and the LDA approach of Sec. IV.1, which treats the quantum many-body problem and adds the effects of exchange and correlation potentials. On the one hand, the effective TFA barrier \(U^{\mathrm{(TFA)}}(z)\) is simpler to implement, but is known to overestimate the size of the barrier. On the other hand, \(U^{\mathrm{(LDA)}}(z)\) for jellium is a more complete treatment, but it requires numerical effort and is well known to underestimate the work function of metals [33]. For these reasons, the LDA and the nonlocal TFA barriers serve to set two limits for the height and shape of the effective barrier.
Additionally, the parametrized GG barrier from Eq. (11) is simple enough to allow us to fit both TFA and LDA barriers, and to compare the heat flux and current related to these models. The LDA and TFA effective electronic barriers are shown in Fig. 3 for three distances \(d=\)0.3, 0.6 and 2 nm. We remark that, as anticipated, the LDA curve (in red) is always smaller than the TFA curve (in black) coming from nonlocal Poisson equation. Contrary to the predictions of the classical image potential, the LDA and TFA barriers are not divergent and penetrate into the metal. In order to fit the LDA curves using the GG function from Eq. (11), we can either fit with respect to the three parameters \(\alpha,\beta\) and \(u_{0}\), or fix the value of \(u_{0}\) as equal to the barrier maximum divided by \(E_{\mathrm{F}}\) and then fit with respect to \(\alpha\) and \(\beta\). As shown in Fig. 3, the former (latter) choice results in an underestimated (overestimated) function. This procedure allows us to define an average value for \(\alpha,\beta\) and \(u_{0}\) (reported in Fig. 3 for each distance) along with an error bar for the three parameters.
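The fitting step itself is a standard nonlinear least-squares problem. A minimal sketch with `scipy.optimize.curve_fit` is shown below; the GG form used is the same illustrative parametrization assumed earlier in this document (not necessarily the exact normalization of Eq. (11)), and the "LDA data" is a noisy placeholder array rather than the numerically computed barrier.

```python
import numpy as np
from scipy.optimize import curve_fit

def gg(z, alpha, beta, u0, d=1e-9):
    """Assumed generalized-Gaussian barrier (in units of E_F), centered at d/2."""
    return u0 * np.exp(-np.abs((z - d / 2) / (alpha * d)) ** beta)

d = 1e-9
z = np.linspace(-0.2 * d, 1.2 * d, 400)
# Placeholder standing in for the numerically computed LDA barrier (illustration only)
lda_barrier = gg(z, 0.45, 4.0, 1.2) + 0.02 * np.random.default_rng(0).normal(size=z.size)

# Three-parameter fit (alpha, beta, u0); fixing u0 would correspond to the second strategy in the text
popt, pcov = curve_fit(gg, z, lda_barrier, p0=[0.5, 2.0, 1.0])
perr = np.sqrt(np.diag(pcov))  # one-sigma fit uncertainties, analogous to the quoted error bars
print(popt, perr)
```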
For large \(d\), both the LDA and TFA barriers tend to a square step potential (corresponding to a large \(\beta\)). For \(d<1\) nm, the barrier maxima of LDA and TFA potentials decrease with distance. The LDA barrier can take some negative values, but this effect is not as pronounced as in the Friedel oscillations of the electronic density due to the exchange-correlation contribution [43].
The dependence of the GG fitting parameters with respect to the distance is shown in Fig. 4. Close to contact, the LDA barrier gets significantly more Gaussian (\(\beta\lesssim 3\)) or even close to a Laplace distribution (\(1<\beta<2\)). The LDA value for the barrier height \(u_{0}\) (in relative units with respect to \(E_{\mathrm{F}}\)) is below the Fermi level for gap distances smaller than about 3 Å, which could be interpreted as contact since most electrons are no longer tunneling and are actually delocalized between the two bodies, meaning that in this case the work function is not properly defined. We have verified that the value of \(u_{0}\) for the LDA barrier for gaps larger than 1 nm already coincides with the expected theoretical value for the jellium work function for gold [33] for a single surface, which is lower than the experimental value by 3 to 4 eV. Conversely, the asymptotic value of \(u_{0}\) for the TFA barrier overestimates the barrier height by the same amount and shows a much slower convergence (see the inset of Fig. 4). In this scenario an estimate for the work function can be obtained by employing again the relation discussed above \(W=E_{\mathrm{F}}(u_{0}-1)\). The sharp decrease in the height of the TFA barrier close to contact is comparable to the apparent barrier height that is found in experiments, where the height remains constant when reducing the distance up to a couple of Å [33]. The scale factor \(\alpha\) does not vary
much but close to contact diverges indicating delocalization and larger penetration of the effective potential into the metal.
### Transmission probability using the \(S\)-matrix method
As we want to quantitatively account for the barrier shape, we work with a quantum mechanical description of the barrier alongside a more accurate method based on the scattering \(S\)-matrix algorithm from multilayered optics [49] to calculate the electronic transmission probability. This method accounts for oscillations of the transmission at large energies and the shape of the barrier inside the metal. The \(S\)-matrix algorithm provides the same results as the transfer-matrix method [50], which consists of dividing the barrier into differential slices and multiplying the transfer matrices of each slab.
Instead of using transfer matrices we calculate the scattering matrix of the \(i\)-th slice and multiply them together in sequence using the Redheffer star product [51]. In the end, we recover the total \(S\)-matrix of the barrier for a given electron energy from which the transmission probability can be extracted. The \(S\)-matrices are preferred here over the transfer matrix method for their numerical stability for large gaps [49].
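As an illustration of this procedure, the sketch below slices an arbitrary one-dimensional barrier into piecewise-constant segments, builds the elementary interface and propagation scattering matrices, and combines them with the Redheffer star product. It is a minimal single-channel version under the simplifying assumption of zero potential in both electrodes, not the production code used for the figures.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # kg
EV = 1.602176634e-19     # J

def star(SA, SB):
    """Redheffer star product of two single-channel S-matrices (S11, S12, S21, S22)."""
    a11, a12, a21, a22 = SA
    b11, b12, b21, b22 = SB
    d = 1.0 - a22 * b11
    return (a11 + a12 * b11 * a21 / d,
            a12 * b12 / d,
            b21 * a21 / d,
            b22 + b21 * a22 * b12 / d)

def transmission(E, U, dz):
    """Transmission probability at energy E through slices of potential U (J) and width dz (m)."""
    U_all = np.concatenate(([0.0], U, [0.0]))          # zero potential assumed in both electrodes
    k = np.sqrt(2.0 * M_E * (E - U_all + 0j)) / HBAR   # complex wavevector in each region
    S = (0.0, 1.0, 1.0, 0.0)                           # identity S-matrix
    for j in range(len(U_all) - 1):
        ka, kb = k[j], k[j + 1]
        r = (ka - kb) / (ka + kb)
        S = star(S, (r, 2 * kb / (ka + kb), 2 * ka / (ka + kb), -r))  # interface j -> j+1
        if j + 1 < len(U_all) - 1:                     # propagation inside slice j+1
            p = np.exp(1j * kb * dz)                   # decaying for evanescent (tunneling) slices
            S = star(S, (0.0, p, p, 0.0))
    return abs(S[2]) ** 2                              # |S21|^2; equal wavevectors in both electrodes

# Example: 1 nm square barrier of height 1.5 eV probed at 1.0 eV
U = np.full(200, 1.5 * EV)
print(transmission(1.0 * EV, U, 1e-9 / len(U)))
```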
The electronic transmission probability, used in the equations of the current density (1) and of the heat flux (4), is plotted in Fig. 5 for an intermediate gap distance \(d\) of 5 Å. It can be seen that the two methods barely agree qualitatively, as the WKB method of Eq. (12) is well known to neglect the oscillation of the transmission for electrons with energy higher than the barrier height, as seen in the case of the TFA potential, whereas for the LDA barrier the oscillations of the transmission are less drastic due to the smoothness of the potential (small \(\beta\)). The WKB transmission rises much more rapidly than in the \(S\)-matrix calculation, which will lead to an overestimate of both current and flux. This rapid increase is less drastic for the TFA case, which is expected as the WKB approximation will approach the one calculated with the \(S\)-matrix algorithm for larger distances and barrier heights. We clarify that aside from the figures where it is labeled as such, we do not employ the WKB method in the reported calculations anywhere else in this manuscript.
Figure 4: (a) Scale parameter \(\alpha\), (b) shape parameter \(\beta\) and (c) relative barrier height \(u_{0}\) of GG potential as a function of the gap size \(d\), when fitted to the TFA (black) and LDA (red) effective potentials. Error bars indicate fit errors. Inset: TFA \(u_{0}\) as a function of distance (nm) for larger distances.
Figure 3: Effective electronic barrier potential between two semi-infinite slabs of the same metal as a function of the horizontal coordinate \(z\), for three different distances \(d\), calculated using LDA and TFA methods (described in the text), along with the fits of the LDA barrier using a parametrized generalized Gaussian (GG) function with parameters \(\alpha,\beta,u_{0}\) (see text for details).
## V Results
### Plane-plane configuration
In this section, we discuss electronic tunneling in the case of the plane-plane configuration as illustrated in Fig. 1(a). For all figures, we consider temperatures \(T_{1}=400\) K and \(T_{2}=300\) K. The current density and electronic heat flux (color axis) as a function of the shape factor \(\beta\) and relative barrier height \(u_{0}\) are shown in Fig. 6, calculated using Eqs. (1) and (4) for a GG barrier for two different distances. Both current and heat flux are presented in logarithmic scale in order to show the strong discrepancies that can be obtained by slight changes in the height but also in the shape \(\beta\) of the barrier. The specific cases of LDA and TFA are marked by points in Fig. 6, with associated error bars. Not only do the positions of these points show the extreme sensitivity of both current and heat flux to the choice of the barrier shape, but in the configuration of Fig. 6(d) (\(d=1\) nm) LDA and TFA even lead to opposite conclusions on the comparison between electronic and photonic flux. Note that our calculations do not include the surface roughness, which is considered to reduce the height of the barrier [52]. These results highlight the importance of understanding the realistic shape of the barrier, as it can lead to order-of-magnitude differences in the current and electronic heat flux. We also confirm that the electronic heat flux can overcome the radiative heat flux at least for distances smaller than 1 nm, independently of the model.
After discussing the impact of the barrier shape, we focus on the method employed to calculate the tunneling probability. To this aim we compare in Fig. 7 the electronic flux calculated with the \(S\)-matrix algorithm and with the WKB method. The figure clearly shows that the difference in the calculation method can lead to disagreements of several orders of magnitude depending on both the barrier height \(u_{0}\) and the shape \(\beta\). In experiments, one could correct the WKB estimation of the barrier by comparing it with more precise transmission calculations. However, if the shape of the barrier is not properly taken into account, one still risks underestimating the barrier height. The precise numerical and WKB methods seem to agree only in extreme cases where the barrier is shallow or very peaked (low \(\beta\)).
The current-density dependence with respect to the distance is shown in Fig. 8(a). The current for both TFA and LDA shows an exponential behavior with respect to \(d\). Nevertheless, the difference between the two results is about 3 orders of magnitude. Two different fits for the LDA barrier are shown (red dashed lines), depending on whether the three parameters \(u_{0}\), \(\beta\) and \(\alpha\) are fitted with a GG function, or just two (fixing \(u_{0}\) to the height of the LDA barrier).
Similarly, in Fig. 8(b) we represent the electronic heat flux for the TFA (black) and LDA barrier (in red, with two possible GG fits). The electronic heat fluxes (dashed lines) are compared with the total contribution (solid lines), which includes the radiative heat transfer. Interestingly, these curves lead to rather different conclusions in terms of the distance below which the electronic heat flux exceeds the radiative one (more than 1 nm for LDA, around 7 Å for TFA).
### Tip-plane configuration
As explained above, while the experimental challenges associated with parallelism make the plane-plane scenario rather complicated to implement, the tip-plane configuration is much more convenient and widely used. In order to estimate the impact of barrier height and shape in this geometry, we exploit the Derjaguin or proximity force approximation (PFA) [53], typically employed in different contexts (including but not limited to near-field radiative heat transfer) to deal with complex geometries by exploiting the results from the plane-plane configuration. Other geometry-dependent methods exist, but they are challenging to implement for small gap sizes due to slow numerical convergence [54]. For a spherically-shaped tip of radius \(R\), the net power exchanged between the tip and the sample, in the absence of applied bias, can be written as
\[P=P^{\rm(rad)}+P^{\rm(el)}, \tag{22a}\] in agreement with the net flux defined in Eq. (3), where each term is defined as \[P^{\rm(Q)}=2\pi\int_{0}^{R}{\rm d}s\;s\;\Phi^{\rm(Q)}\Big{(}d+R-\sqrt{R^{2}-s^{2}}\Big{)}, \tag{22b}\]
where \(Q\in\{\rm rad,el\}\). The PFA calculation uses the results from the plane-plane configuration and considers the tip as a collection of rings at different distances from the plane, as illustrated in Fig. 1(b). For electrons, almost all the tunneling heat comes from the tip apex, as quantitatively shown in App. B. As in Eq. (3), we also neglect the phonon tunneling contribution in these equations and consider rigid electrodes. The phononic contribution has been shown to be up to 10% of the electronic flux for angstrom gaps when using a fluctuational approach and in the absence of bias [16] and at contact
Figure 5: Electronic transmission probability as a function of kinetic energy \(E_{Z}\) (in units of \(E_{\rm F}\)) for a gap of \(d=5\) Å. Two barriers are presented TFA and LDA, under two different calculation methods: \(S\)-matrix algorithm and WKB method.
electrons are the main carriers of the thermal conductivity of metals [48]. However, the suitability of the PFA and the influence of geometry for phonons remain unexplored.
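The PFA integral of Eq. (22b) reduces the tip-plane problem to a weighted sum of plane-plane results. A minimal numerical sketch is given below; the plane-plane flux \(\Phi(d)\) is passed in as a user-supplied function, and the exponential used here is only a placeholder for illustration, not the actual TFA or LDA fluxes of Fig. 8.

```python
import numpy as np

def pfa_power(flux_pp, d, R, n=20001):
    """Eq. (22b): integrate a plane-plane flux over the rings of a spherical tip of radius R."""
    s = np.linspace(0.0, R, n)                                # ring radius on the tip
    gaps = d + R - np.sqrt(np.clip(R**2 - s**2, 0.0, None))   # local tip-sample distance
    return 2.0 * np.pi * np.trapz(s * flux_pp(gaps), s)

# Placeholder plane-plane flux in W/m^2, decaying on a 1 Å scale (illustration only)
flux_demo = lambda gap: 1.0e9 * np.exp(-gap / 1.0e-10)

print(pfa_power(flux_demo, d=5e-10, R=50e-9))                 # 50 nm tip at 5 Å
```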
In Fig. 9, we compare the different contributions to the heat emitted by a tip of radius \(R=50\) nm. Due to the numerical integration, the distance where the electronic contribution (black and red dashed) dominates over the radiative contribution (blue dash-dotted) is slightly shorter than in the plane-plane configuration (cf. Fig. 8). As the electronic flux is almost exponential, most of the contribution is emitted from a small percentage of the tip apex, as expected. However, to account for the radiative contribution to the emitted power, one must consider a circular region with a radius larger than half of the tip radius. We obtain that the difference between the TFA barrier (black dashed) and the LDA barrier (red dashed) can make this distance vary by up to half a nanometer, for the radius considered here.
As for current experiments, this increase in the heat flux in the extreme near-field regime due to the electronic contribution is either not detectable, as in Ref. [6], or larger contributions appear at larger distances, as in Ref. [7]. For the latter case, the presence of contamination has sometimes been suggested [15] and the presence of a bias voltage could have an additional influence [16].
The magnitude of the exchanged power \(P\) obtained with the LDA approach at 4 Å of separation distance (see Fig. 9), corresponding to the lattice constant of gold, is comparable with the one measured experimentally in Kittel's scanning thermal
Figure 6: Density plot of the current density \(J\) (color axis, upper panels) and electronic heat flux (color axis, lower panels) between two semi-infinite metals with temperatures \(T_{1}=400\) K and \(T_{2}=300\) K, described by GG barrier and for gap distances of 0.6 (left panels) and 1 nm (right panels), as a function of shape factor \(\beta\) and \(u_{0}\) (for \(\alpha=0.48\)). Dashed line indicates the value of the near field radiative heat flux for the given distance. The corresponding mean \(\beta\) and \(u_{0}\) to fit the TFA and LDA barriers are indicated by points in the figure, along with associated error bars.
Figure 7: Density plot of the ratio between the electronic heat flux \(\Phi^{(\mathrm{el})}\) calculated using the \(S\)-matrix algorithm and \(\Phi^{(\mathrm{el})}_{\mathrm{WKB}}\) from the WKB method (color scale) as a function of the relative height \(u_{0}\) and the shape \(\beta\) for a 1 nm GG barrier. The corresponding mean \(\beta\) and \(u_{0}\) to fit the TFA and LDA barriers are indicated by points with associated error bars.
microscopy experiment [7] (although a problem related to the definition of physical contact still remains in this experiment where an unexpected increase of the heat flux is observed at nanometric separation distances). This power corresponds to an effective conductivity \(\kappa_{\rm eff}=P/(T_{2}-T_{1})d\simeq 10\) W/m/K, which remains smaller than the thermal conductivity of metals (\(\kappa_{\rm Au}=318\) W/m/K at \(T=300\) K), a value which can be considered as an unsurpassable limit. On the contrary, using the TFA approach, the thermal conductivity is two orders of magnitude smaller. This tends to demonstrate that the use of the LDA leads to a transfer which is overestimated near the contact, since the presence of an external bias voltage will further increase the transfer.
## VI Conclusions
In this work, we have investigated the electronic current and associated heat flux between two parallel metallic slabs separated by a vacuum gap in the nanometer and sub-nanometer range of distances (extreme near field). We have first shown that both quantities strongly depend on the description of the electronic barrier in the gap. More specifically, we have compared an approach based on the solution to the nonlocal Poisson equation in the Thomas-Fermi approximation to the numerical solution of the Kohn-Sham equations in the local-density approximation for jellium. We have shown that these approaches lead to quite different effective electronic barrier potentials (both in shape and height), and that the resulting electronic current and heat flux may differ by several orders of magnitude. Besides, by employing a generalized Gaussian shape for the barrier, we have confirmed the extreme sensitivity of both quantities with respect to the distribution parameters, describing barrier shape, height and degree of penetration inside the metallic slabs. Also, our results based on an accurate \(S\)-matrix-scheme have confirmed the limits of the widely-employed semiclassical WKB approach to deal with the transmission probability of electrons through an arbitrary potential barrier.
Moreover we have seen that while the LDA leads to an overestimated transfer near the contact, the TFA seems to be a more realistic approach since it allows reproducing the magnitude of heat flux measured in the recent experiments. However the presence of an external bias voltage has not been considered in the present study. It will require specific attention in a future work.
Our results show how quantitatively relevant is the choice of both the electronic barrier shape and the transmission-probability calculation scheme to obtain a reliable value of both current and heat flux. Apart from its fundamental interest, we have shown that in the context of extreme-near-field heat transfer this discrepancy can have an impact on the threshold distance at which electronic flux competes and goes beyond the radiative one. More generally, our results could be relevant for a more realistic modeling of experimental setups involving scanning thermal microscopy.
###### Acknowledgements.
This research was supported by the French Agence Nationale de la Recherche (ANR), under grant ANR-20-CE05
Figure 8: Current density as a function of the gap distance \(d\). The current density for the TFA barrier (black dashed line) is compared to the LDA calculations (red \(\times\)) and two possible fits using GG barrier (red dashed lines).
Figure 9: Heat power emitted by a tip of radius \(R=50\) nm, as a function of the distance \(d\) for temperatures \(T_{1}=400\) K and \(T_{2}=300\) K. Two electronic contributions are shown based on the model of the TFA barrier (black dashed) and LDA one fitted with a GG distribution (red dashed). Solid lines include the radiative contribution (blue dash-dotted).
0021-01 (NearHeat).
## Appendix A Computational details of LDA effective potential calculation
For all LDA barrier calculations we use the grid-based projector augmented wave (GPAW) open-source toolkit [46, 47]. Each cell is composed of two jellium slabs of thickness equal to 4 times the lattice constant, separated by a vacuum gap as in Fig. 1. For all the calculations we consider a \(4\times 4\times 4\) supercell with periodic boundary conditions, with a plane-wave cutoff energy of 400 eV. The grid spacing starts at 0.2 Å and is reduced until a converged barrier shape is found. The number of electronic bands in the calculation is set equal to the number of electrons in each cell, proportional to the volume of metal on each side.
For gold we use a lattice constant of 4.078 Å and a Wigner-Seitz radius of 3.02 Bohr radii.
## Appendix B Tip depth contributing to the emitted PFA power
It is often considered that in STM experiments only the last atom at the tip apex is responsible for the tunneling [33]. We can confirm that this behavior is also reproduced under the PFA. Due to the different power laws of the heat flux as a function of distance for the different carriers, their behavior under the PFA is different. We define the partial power as
\[P(r)=P^{\rm(rad)}(r)+P^{\rm(el)}(r), \tag{B1a}\] in agreement with the net flux of Eq. (3), where each term is defined as \[P^{(Q)}(r)=2\pi\int_{0}^{r}\mathrm{d}s\;s\;\Phi^{(Q)}\Big{(}d+R-\sqrt{R^{2}-s^{2}}\Big{)}, \tag{B1b}\] where the equations are analogous to Eq. (22) but the integration goes from the tip apex up to a distance \(r<R\). In Fig. 10, we show the electronic and radiative contributions to \(P(r)\) divided by the total power \(P\), as a function of the tip depth \(r\).
|
2309.03701 | User's Reaction Patterns in Online Social Network Communities | Several one-fits-all intervention policies were introduced by the Online
Social Networks (OSNs) platforms to mitigate potential harms. Nevertheless,
some studies showed the limited effectiveness of these approaches. An
alternative to this would be a user-centered design of intervention policies.
In this context, we study the susceptibility of users to undesired behavior in
communities on OSNs. In particular, we explore their reaction to specific
events. Our study shows that communities develop different undesired behavior
patterns in reaction to specific events. These events can significantly alter
the behavior of the community and invert the dynamics of behavior within the
whole network. Our findings stress out the importance of understanding the
reasons behind the changes in users' reactions and highlights the need of
fine-tuning the research to the individual's level. It paves the way towards
building better OSNs' intervention strategies centered on the user. | Azza Bouleimen, Nicolò Pagan, Stefano Cresci, Aleksandra Urman, Gianluca Nogara, Silvia Giordano | 2023-09-07T13:23:16Z | http://arxiv.org/abs/2309.03701v1 | # User's Reaction Patterns in Online Social Network Communities
###### Abstract
### Introduction
Misinformation, hate speech, toxicity, trolling, and malicious bots are examples of undesired behavior on Online Social Networks (OSNs) with the potential for serious implications in real life for individuals and societies. These harmful impacts range from escalating physical violence in protests [1], threatening democratic elections' integrity [2], to leading to genocides [3]. To address these harms, OSN platforms introduced some moderation strategies that aim at mitigating these problems. Most of these interventions follow a one-size-fits-all approach, as policies are equally applied to all users [4]. However, these interventions sometimes exacerbate the phenomena instead of limiting them [5, 6]. A possible explanation is that users present diversified reactions to moderation policies [7], in particular because users are typically grouped in communities within OSNs. In this context, some studies highlight the need for personalized moderation interventions [4]. However, to do so, we need to better understand users' susceptibility, defined as the factors that drive them toward particular reactions. In other words, we aim to gain insights into what makes users more or less likely to engage in undesired behavior. In this work, we aim at studying the reaction of users in network communities as a first step toward understanding users' susceptibilities at the individual level. In turn, this represents a preliminary step towards designing personalized moderation strategies.
### Results
#### Dataset
We base our study on the VaccinItaly dataset [8]. It is a collection of tweets related to the Covid-19 discussion in Italy ranging from Dec. 20th, 2020 to Oct. 22nd, 2021. The topic has been very controversial all around the world. Consequently, this dataset is suitable for studying the susceptibilities of users in contexts that prompt adversarial reactions. The dataset consists of \(\sim\)12 million tweets in Italian, half of which are retweets. It involves 551,816 unique users, of whom 86% have fewer than 20 tweets in the dataset. We selected a subset of users that are involved enough to reflect the core discussion on the Covid-19 vaccines in Italy. To do so, we adapted and applied the definition of a _core user_ from [6] to our dataset. This reduces the number of users to 9,278 (1.7% of users) who are responsible for nearly half of the tweets.
Since the purpose of the study is to observe the behavior of users within the network structure they are in, we built the retweet network of users. It is a directed weighted
network where nodes represent the users and edges represent retweets. The weights of the edges represent the number of retweets from one user to another.
**Community detection.** We applied the Louvain community detection algorithm [9] on the network with a resolution parameter of 0.7. We obtained two main communities that gather 87% of the nodes in the network. We qualitatively analyzed the tweets of the nodes with the highest authority scores in these two communities [10]. In one community, the nodes with high authority scores tweet content in favor of the vaccines while, in the other community, the nodes with high authority scores are against the adoption of vaccines and the government's measures to contain the spread of the virus. The same observations are found when analyzing the most retweeted tweets or the tweets of the most central users in these two communities. Hence, we assume that one community is dominated by a **Pro vaccination** discourse (Provax community: 3,980 nodes) and the other is dominated by an **Anti vaccination** discourse (Novax community: 3,831 nodes).
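A minimal sketch of this step, assuming the retweet edges are available as (retweeter, retweeted user, count) tuples, is given below. It uses NetworkX's Louvain implementation with the resolution parameter of 0.7 mentioned above and the HITS algorithm for authority scores; the original analysis may rely on a different Louvain implementation and on a different handling of edge direction.

```python
import networkx as nx

# Hypothetical edge list: (retweeting user, retweeted user, number of retweets)
edges = [("u1", "u2", 3), ("u3", "u2", 5), ("u4", "u5", 2), ("u5", "u4", 1)]

G = nx.DiGraph()
G.add_weighted_edges_from(edges)

# Authority scores (HITS) on the directed retweet graph
hubs, authorities = nx.hits(G)

# Undirected copy with summed weights for Louvain community detection
H = nx.Graph()
for u, v, w in edges:
    if H.has_edge(u, v):
        H[u][v]["weight"] += w
    else:
        H.add_edge(u, v, weight=w)

communities = nx.community.louvain_communities(H, weight="weight", resolution=0.7, seed=0)
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
```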
Since our work aims at measuring the differences in the reactions of users belonging to different communities, we first need to understand whether the communities were stable over time, or whether they evolved. In the latter case, differences between the two communities could be simply due to the flow of users between them. To do so, we ran different instances of the community detection algorithm on different sub-periods of time and evaluated the user's flows across communities. Our analysis showed that the composition of the two communities remains overall stable over time, hence we can use the community partitioning based on the whole dataset.
**Toxicity in communities.** In this abstract, due to space restrictions, we limit the analysis to measuring the user toxicity. Nevertheless, an analysis of negativity was also done and similar observations were obtained. To measure the toxicity of the text of the tweet, we used the Detoxify library [11]. It is a state-of-the-art method for computing toxicity [12, 13]. Detoxify has a multilingual model for non-English texts. For Italian it reaches an AUC of 89.18% [11]. The model returns a score ranging from 0 (low toxicity) to 1 (high toxicity).
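A sketch of the scoring step, assuming the standard Detoxify Python package and an illustrative dataframe layout (column names and example rows are hypothetical), is shown below.

```python
import pandas as pd
from detoxify import Detoxify

# Hypothetical dataframe with one row per tweet: posting date, community label, text
tweets = pd.DataFrame({
    "date": ["2021-01-05", "2021-01-05"],
    "community": ["Provax", "Novax"],
    "text": ["Esempio di tweet...", "Un altro esempio di tweet..."],
})

# Multilingual Detoxify model; scores lie in [0, 1] for each category
model = Detoxify("multilingual")
tweets["toxicity"] = model.predict(tweets["text"].tolist())["toxicity"]

# Daily average toxicity per community (as plotted in Fig. 1)
daily = tweets.groupby(["community", "date"])["toxicity"].mean().reset_index()
print(daily)
```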
We present in Fig. 1 the daily average toxicity of the text written by the users belonging to the Provax and Novax communities. Fig. 1 shows that, from the beginning of the data collection until mid-June, the toxicity level of the Provax community is lower than that of the Novax community. Interestingly, this trend is inverted starting from mid-June, when we notice that Provax users become on average more toxic than Novax users. Overall, the toxicity of both communities increases over time, as shown by the Mann-Kendall test for trends. However, the Provax toxicity rate increases more than twice as fast as the Novax one.
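The smoothing and the trend test mentioned above can be reproduced along the following lines, assuming the `pymannkendall` package; the daily toxicity values below are a toy series used purely for illustration.

```python
import pandas as pd
import pymannkendall as mk

# Hypothetical daily average toxicity series for one community
daily_toxicity = pd.Series(
    [0.08, 0.09, 0.07, 0.10, 0.12, 0.11, 0.13, 0.15, 0.14, 0.16],
    index=pd.date_range("2021-01-01", periods=10, freq="D"),
)

# 7-day moving average used for plotting (as in Fig. 1)
smoothed = daily_toxicity.rolling(window=7, min_periods=1).mean()

# Mann-Kendall test for a monotonic trend in the daily toxicity
result = mk.original_test(daily_toxicity.values)
print(result.trend, result.p, result.slope)
```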
**Toxicity around specific events.** To deepen the analysis, we look into the reaction of the Provax and Novax communities around specific events related to the Covid-19 pandemic in Italy. The selected events are presented in Tab. 1. In Fig. 2, we present box plots of the toxicity of the tweets posted by the users of the two communities for the three days following a specific event. Both Provax and Novax have a similar reaction, in terms of toxicity, to the start of the vaccination campaign on Dec 27\({}^{\text{th}}\). In fact, there is no significant difference between the toxicity of the two communities. For Mar. 15\({}^{\text{th}}\) and Apr. 22\({}^{\text{nd}}\), the toxicity of the Novax community is significantly higher. This relates to the trends of toxicity observed in Fig. 1. For
Jun. 11th, the difference between the two communities is not significant anymore. This happens at the time when, in Fig. 1, we recognize an increase in the toxicity level of the Provax community reaching the level of the Novax one. For Jul. 22nd and Aug. 6th, it is now the Provax community that is significantly more toxic than the Novax one. Fig. 2 illustrates that users belonging to different communities develop different reaction patterns depending on which events they are confronted with. It also supports the inversion in the temporal trend of the toxicity within both communities observed in Fig. 1. This result shows that some events might trigger, among some groups of users, a reaction that can shape the behavior of a whole community, while the impact can be non-existent for other groups of users. This supports the need for intervention strategies targeted at the group and at the individual level.
## Conclusion
Through the case study of the VaccinItaly dataset, we studied the reaction of users to specific events on OSNs considering their position in the network. We found that these events impact OSN users differently. They can significantly alter the behavior of a community as a whole and invert the dynamics of behavior within the whole network. Our work highlights the presence of an understudied phenomenon, namely users' susceptibility to undesired behavior. It stresses the importance of understanding the reasons behind the changes in users' reactions and of fine-tuning the research to the individual level. Possible paths forward include investigating social contagion effects, the interplay between the reactions in the two communities, and the existence of a relation between the structure of the network,
\begin{table}
\begin{tabular}{l l} Date & Event \\ \hline
**Dec. 27th, 2020** & Start of the vaccination campaign in Italy. \\
**Mar. 15th, 2021** & The Italian government announces a lockdown during the Easter vacation. \\ Apr. 22nd, 2021 & Introducing the use of the Green Pass. \\
**Jun. 11th, 2021** & Death of a young woman after receiving an AstraZeneca shot. \\
**Jul. 23rd, 2021** & Announcing the mandatory Green Pass starting from the 6th of August 2021. \\ Aug. 6th, 2021 & Mandatory Green Pass required to access several public spaces. \\ \end{tabular}
\end{table}
Table 1: List of events related to the Covid-19 pandemic in Italy. Events highlighted in bold correspond to peaks in the number of tweets posted by the users on that day.
Figure 1: Daily toxicity average in written text for Provax and Novax communities. A moving average of a 7-day window was applied to the plot.
the position of a user within the graph, and their reaction to a particular event. We hope, through our contribution, to pave the way towards building better OSNs' intervention strategies centered on the user.
|
2309.13512 | Object Classification Model Using Ensemble Learning with Gray-Level
Co-Occurrence Matrix and Histogram Extraction | In the field of object classification, identification based on object
variations is a challenge in itself. Variations include shape, size, color, and
texture, these can cause problems in recognizing and distinguishing objects
accurately. The purpose of this research is to develop a classification method
so that objects can be accurately identified. The proposed classification model
uses Voting and Combined Classifier, with Random Forest, K-NN, Decision Tree,
SVM, and Naive Bayes classification methods. The test results show that the
voting method and Combined Classifier obtain quite good results with each of
them, ensemble voting with an accuracy value of 92.4%, 78.6% precision, 95.2%
recall, and 86.1% F1-score. While the combined classifier with an accuracy
value of 99.3%, a precision of 97.6%, a recall of 100%, and a 98.8% F1-score.
Based on the test results, it can be concluded that the use of the Combined
Classifier and voting methods is proven to increase the accuracy value. The
contribution of this research increases the effectiveness of the Ensemble
Learning method, especially the voting ensemble method and the Combined
Classifier in increasing the accuracy of object classification in image
processing. | Florentina Tatrin Kurniati, Daniel HF Manongga, Eko Sediyono, Sri Yulianto Joko Prasetyo, Roy Rudolf Huizen | 2023-09-24T00:20:16Z | http://arxiv.org/abs/2309.13512v1 | Jurnal Ilmah Teknik Elektro Komputer dan Informatika (JITEKI)
###### Abstract
In the field of object classification, identification based on object variations is a challenge in itself. Variations include shape, size, color, and texture, these can cause problems in recognizing and distinguishing objects accurately. The purpose of this research is to develop a classification method so that objects can be accurately identified. The proposed classification model uses Voting and Combined Classifier, with Random Forest, K-NN, Decision Tree, SVM, and Naive Bayes classification methods. The test results show that the voting method and Combined Classifier obtain quite good results with each of them, ensemble voting with an accuracy value of 92.4%, 78.6% precision, 95.2% recall, and 86.1% F1-score. While the combined classifier with an accuracy value of 99.3%, a precision of 97.6%, a recall of 100%, and a 98.8% F1-score. Based on the test results, it can be concluded that the use of the Combined Classifier and voting methods is proven to increase the accuracy value. The contribution of this research increases the effectiveness of the Ensemble Learning method, especially the voting ensemble method and the Combined Classifier in increasing the accuracy of object classification in image processing.
**Corresponding Author**:
Florentina Tatrin Kurniati, Faculty of Information Technology, Universitas Kristen Satya Wacana
Jl. Diponegor No.52-60, Salatiga, Jawa Tengah 50711
Email: [email protected]
## 1 Introduction
In the field of classification, grouping based on the nature and characteristics of objects is essential [1], [2]. Object classification is used to differentiate objects in images based on relevant attributes [3], [4]. These problems are found in various fields where identification systems rely on object characteristics. For example, in research involving variations in poses, expressions, and lighting, these variations become one of the main challenges for identification. Likewise, in the medical field, object identification is a challenge in itself, for instance in the detection of diseases, which involves the identification of pathology in medical images. Identification requires feature extraction to determine the characteristics of the object, which can be done with GLCM. In previous research, the system performance showed an accuracy of 0.984, a sensitivity of 0.992, a specificity of 0.968, and a precision of 0.967 with Magnetic Resonance Imaging samples, while classification models using SVM and k-NN obtained results of 94.6% and 91% [5]-[8]. In addition, in the industrial field, object classification is used to group objects based on their characteristics, for example for product identification, sorting, and recognition of objects [9]-[11]. Object identification is therefore a critical study because
it requires accurate and effective results, and it is therefore very important to overcome these problems so that the results become reliable [12, 13, 14]. In object classification, there are several problems that need to be addressed, including variations in the complexity of objects in the dataset, such as variations in shape, size, color, texture, and object context. These variations can cause difficulties in recognizing and distinguishing objects accurately [15, 16]. In addition, every single classification model may have specific weaknesses or tend to provide unstable predictions in some situations [17, 18, 19].
Based on this, the purpose of this research is to improve the accuracy of object identification. The improvement uses several classification methods, such as K-Nearest Neighbors (KNN), Random Forest (RF), Support Vector Machine (SVM), Decision Tree, and Naive Bayes (NB). Ensemble Learning methods, including Voting and Combined Classifiers, are used to improve reliability [19, 20, 21, 22]. The Voting and Combined Classifier methods build on the K-Nearest Neighbors (K-NN), Random Forest (RF), Support Vector Machine (SVM), Decision Tree, and Naive Bayes (NB) classifiers [23, 24, 25, 26, 27]. Combining the predictions of these methods improves the prediction results. This is based on the fact that each model may emphasize specific characteristics or features that are more relevant in object classification [28, 29, 30, 31]. By combining the classification predictions from several models, classification accuracy can be improved and the prediction errors that occur when using a single model can be reduced. Ensemble Learning methods such as Voting and Combined Classifier can overcome the problem of complexity and variation in object classification. Combining the prediction results from several classification models will improve the accuracy of object classification and provide more reliable results in various image processing and pattern recognition applications [32].
In addition to the Ensemble Learning method, this study also uses GLCM (Gray-Level Co-occurrence Matrix) feature extraction and histograms to obtain relevant characteristics or features from the image. GLCM is an effective method for describing the spatial relationship between pixel intensities in an image. GLCM features, namely contrast, correlation, energy, homogeneity, and entropy, will be extracted from object images for use in the classification process [33, 34, 35, 36]. Histogram feature extraction captures the frequency of occurrence of pixel intensity levels. The histogram is represented as a numeric vector that describes the image. Histogram features can be used in classification algorithms to predict unknown image classes. By combining Ensemble Learning with GLCM and histogram feature extraction, a more accurate object classification model is produced. The contribution of this research is to increase the effectiveness of the Ensemble Learning method by using the Voting Ensemble and Combined Classifier to increase reliability.
## 2 Methods
The object detection method begins with pre-processing, as shown in Fig. 1. The object datasets with different sizes are resized to ensure uniform size in order to simplify the extraction process. The extraction process uses the Gray-Level Co-occurrence Matrix (GLCM) and histogram methods [34, 37].
Figure 1: Flowchart Classification Model using Ensemble Learning
Feature extraction uses the GLCM and histogram methods. Features are obtained by analyzing pixel pairs (GLCM) and the distribution of pixels in the sample image (histogram), as shown in Table 1 and Table 2. Features are classified individually using: (a) Random Forest (RF), which in principle uses several tens to hundreds of decision trees for classification; (b) the Support Vector Machine (SVM) method, which uses a hyperplane that maximizes the distance between object classes; (c) the k-Nearest Neighbors (k-NN) method, which assigns object classes based on the majority of the nearest neighbor classes; (d) the Naive Bayes method, which determines class labels based on features assumed to be independent of each other; and (e) the Decision Tree method, which uses a tree structure to describe rules and predict classes [38]-[42]. (f) The ensemble voting method combines the prediction results from various individual classification algorithms, and (g) the combined classifier improves accuracy based on a priority order.
\[Energy=\sum_{i}\sum_{j}P(i,j)^{2} \tag{1}\]
\[Contrast=\sum_{i}\sum_{j}(i-j)^{2}*P(i,j) \tag{2}\]
\[Homogeneity=\sum_{i}\sum_{j}\frac{P(i,j)}{1+(i-j)^{2}} \tag{3}\]
\[Entropy=-\sum_{i}\sum_{j}P(i,j)*log\big{(}P(i,j)\big{)} \tag{4}\]
\[Correlation=\frac{\sum_{i}\sum_{j}[ij*P(i,j)]-\mu_{x}*\mu_{y}}{\sigma_{x}*\sigma_{y}} \tag{5}\]
Equation (1) Energy measures texture uniformity, calculated by adding the squares of the co-occurrence probabilities of each pair of pixels in the matrix. Equation (2) Contrast measures the variation between neighboring pixels, calculated by taking the difference between the row and column indices, squaring it, and then multiplying it by the probability of co-occurrence. Equation (3) Homogeneity measures the proximity of elements in the co-occurrence matrix. Equation (4) Entropy measures the complexity of the information in an image. Equation (5) Correlation measures the linear dependence between the pixel intensities at positions (i) and (j) relative to the average pixel intensity of each coordinate. Here, i and j are co-occurrence matrix indices, P(i,j) is the element of the co-occurrence matrix at position (i,j), \(\mu_{x}\) and \(\mu_{y}\) are the average row and column weights of the co-occurrence matrix, and \(\sigma_{x}\) and \(\sigma_{y}\) are the standard deviations of the row and column weights of the co-occurrence matrix.
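A hedged sketch of this feature-extraction step, using scikit-image's GLCM utilities, is given below. Entropy is computed directly from the normalized co-occurrence matrix since it is not a built-in property, the "ASM" property is used because it matches the energy definition of Eq. (1), and the pixel-pair distance, angle, and histogram bin count are illustrative choices rather than the exact settings of this study.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_histogram_features(gray, levels=256, distance=1, angle=0.0, bins=32):
    """Extract the five GLCM features of Eqs. (1)-(5) plus a normalized intensity histogram."""
    glcm = graycomatrix(gray, distances=[distance], angles=[angle],
                        levels=levels, symmetric=True, normed=True)
    p = glcm[:, :, 0, 0]
    entropy = -np.sum(p[p > 0] * np.log(p[p > 0]))            # Eq. (4)
    features = {
        "energy": graycoprops(glcm, "ASM")[0, 0],             # sum of P(i,j)^2, Eq. (1)
        "contrast": graycoprops(glcm, "contrast")[0, 0],      # Eq. (2)
        "homogeneity": graycoprops(glcm, "homogeneity")[0, 0],# Eq. (3)
        "correlation": graycoprops(glcm, "correlation")[0, 0],# Eq. (5)
        "entropy": entropy,
    }
    hist, _ = np.histogram(gray, bins=bins, range=(0, levels), density=True)
    return features, hist

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in for a resized object image
feats, hist = glcm_histogram_features(image)
print(feats)
```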
After the dataset pre-processing is complete, the next step is to perform the extraction using the GLCM (Grey Level Co-occurrence Matrix) and Histogram methods. GLCM is a two-dimensional matrix that describes the relationship between pixel intensities in an image. Important information from the matrix is described by calculating the energy, which describes how the pixel intensities are spread in the image matrix. For
\begin{table}
\begin{tabular}{c c c} \hline
**No** & **Step Pseudo Code** & **Pseudo Code** \\ \hline
1 & Create an array called histogram with size N, initialized with zeros, where N & function \\ & is the number of possible intensity levels in the image & calculateHistogram(image): \\
2 & Create an array called histogram with size N, initialized with zeros, where N & histogram = array of size N, \\ & is the number of possible intensity levels in the image. & initialized with zeros \\
3 & Get the width and height of the image & width = width of image \\
4 & Iterate over each pixel in the image using two nested loops, one for the y- & height = height of image \\
4 & coordinate (rows) and one for the x-coordinate (columns). & for y from 0 to height-1: \\
5 & Retrieve the intensity of the pixel at coordinate (x, y) in the image. & for x from 0 to width-1: \\
6 & Increment the corresponding element in the histogram array by 1 for the & intensity = intensity of pixel at \\ & found intensity level. & (x, y) in image \\
7 & Repeat steps 4 and 5 for each pixel in the image. & histogram[intensity] = \\
8 & Return the calculated histogram & histogram[intensity] + 1 \\ \hline \end{tabular}
\end{table}
Table 1: Histogram Pseudo Code
contrast measures the significant differences in pixel intensity in the matrix. Meanwhile, homogeneity is calculated to measure the extent to which the pixel intensities are similar in the matrix, and entropy is used to describe the level of disorder or complexity in the image matrix. The next feature is correlation, which measures the linear relationship between the pixel intensities in the matrix. The next feature extraction is the histogram, which is the analysis of the pixel intensity distribution in an image. The histogram is used to determine the distribution of pixel intensity across a range of possible values. The histogram extraction process consists of converting the image to grayscale and calculating the histogram of the pixel gray levels. The range of minimum and maximum intensity values in the image is determined, and this range is divided into several intervals adjusted to the needs of the analysis. Each pixel in the image is placed into the appropriate bin based on its intensity. This is done by comparing the pixel intensity values with predetermined bin interval limits. The number of pixels in each bin is counted and represents the frequency distribution of the pixel intensities in each value interval. The histogram procedure is shown in Table 1. Predictions are improved using the Voting Ensemble and Combined Classifier methods. The Voting Ensemble method collects the predictions from each model and votes based on the majority. The Combined Classifier is more dynamic: when a model produces an unknown label prediction, the predictions from the other models replace it, as shown in Table 2.
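A compact Python sketch of the two ensemble schemes described above, built on scikit-learn classifiers, is given below. The synthetic feature matrix and the "unknown" criterion (here a simple confidence threshold) are illustrative assumptions rather than the exact implementation of this study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the GLCM + histogram feature vectors
X, y = make_classification(n_samples=600, n_features=6, n_informative=4,
                           n_classes=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestClassifier(random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "kNN": KNeighborsClassifier(),
    "NB": GaussianNB(),
    "DT": DecisionTreeClassifier(random_state=0),
}
for m in models.values():
    m.fit(X_tr, y_tr)

preds = {name: m.predict(X_te) for name, m in models.items()}
probs = {name: m.predict_proba(X_te).max(axis=1) for name, m in models.items()}

# Voting Ensemble: majority label over the five individual predictions
stacked = np.stack(list(preds.values()))
voting = np.array([np.bincount(col).argmax() for col in stacked.T])

# Combined Classifier: walk the priority list RF -> SVM -> kNN -> NB -> DT and keep the
# first prediction that is not "unknown" (assumed here to mean confidence >= threshold)
threshold = 0.5
priority = ["RF", "SVM", "kNN", "NB", "DT"]
combined = preds["DT"].copy()
for name in reversed(priority[:-1]):        # higher-priority models overwrite last
    known = probs[name] >= threshold
    combined[known] = preds[name][known]

print("voting accuracy:", (voting == y_te).mean())
print("combined accuracy:", (combined == y_te).mean())
```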
## 3 Results and Discussion
The confusion matrix is used to test and determine the accuracy of the evaluated classification models. For the Random Forest (RF) model in Fig. 2(a), the prediction results are accurate in all classes.
In Fig. 2(b), for the K-Nearest Neighbors (KNN) model, several samples are misclassified in a number of classes. In Fig. 2(c), the Decision Tree model also shows a similar pattern, although with a slightly lower error rate than KNN. Meanwhile, the Support Vector Machine (SVM) model in Fig. 2(d) also faces challenges in classifying the different classes, especially in predicting the first and second classes, with a significant number of errors. The Naive Bayes (NB) model in Fig. 2(e) shows performance similar to SVM, with a significant error rate in the first and second-class predictions. The test
\begin{table}
\begin{tabular}{c c c} \hline
**No** & **Pseudo Code** & **Algorithm** \\ \hline \multirow{7}{*}{1} & FUNCTION VotingEnsemble(RF\_predict, SVM\_predict, \\ & kNN\_predict, NB\_predict, DT\_predict) & \\ & FOR i = 1 TO length(RF\_predict) & \\ & Create a list: predict\_list = [RF\_predict[i], SVM\_predict[i], kNN\_predict[i], \\ & NB\_predict[i], DT\_predict[i]] & \\ & vote = Most frequent\_label(predict\_list) & Voting Ensemble \\ & ensemble\_predict[i] = vote & \\ & END FOR & \\ & RETURN ensemble predict & \\ & END FUNCTION & \\ & FUNCTION CombinedEnsemble(RF\_predict, SVM\_predict, \\ & kNN\_predict, NB\_predict, DT\_predict) & \\ & FOR i = 1 TO length(RF\_predict) & \\ & IF RF\_predict[i] = “unknown” & \\ & ensemble\_predict[i] = RF\_predict[i] & \\ & ELSE IF SVM predict[i] = “unknown” & \\ & ensemble\_predict[i] = SVM\_predict[i] & \\ & ELSE IF kNN\_predict[i] = “unknown” & \\ & ensemble\_predict[i] = kNN\_predictions[i] & Combined \\ & ELSE IF NB\_predict[i] = “unknown” & Classifier \\ & ensemble\_predict[i] = NB\_predictions[i] & \\ & ELSE & \\ & ensemble\_predict[i] = DT\_predictions[i] & \\ & END IF & \\ & END FOR & \\ & RETURN ensemble predict & \\ & END FUNCTION & \\ & \(\cdot\) Main execution & \\ \hline \multirow{7}{*}{TRAIN each model (RF, SVM, k-NN, NB, Decision Tree) using training data} & \\ & PREDICT with each model using test data & \\ \cline{1-1} & voting\_results = VotingEnsemble(RF\_predict, SVM\_predict, kNN\_predict, NB\_predict, DT\_predict) \\ \cline{1-1} & combined\ results = CombinedEnsemble(RF\_predict, SVM\_predict, kNN\_predict, NB\_predict, DT\_predict) \\ \hline \end{tabular}
\end{table}
Table 2: Voting Ensemble and Combined Classifier
results show that the Random Forest classification model has an accuracy of 99.09%, this method also shows superiority in precision and recall with 99.28% and 98.96% respectively. This indicates that the Random Forest not only classifies most of the samples correctly but also exhibits a good balance in minimizing errors. While SVM showed the lowest performance with 43.47% accuracy, with precision reaching 43.52%, and 41.48% recall, indicating that SVM is often mistaken in identification. The k-NN and Tree methods show average performance with an accuracy of 76.13% and 79.73%, respectively. Both have a balance between precision and recall, indicating that they have a relatively balanced error rate for positive and negative classifications. Meanwhile, Naive Bayes has an accuracy of 50.90%, with a precision of 56.55% but a lower recall of 46.09%. The test results show that Random Forest shows the best and most consistent performance in all evaluation metrics.
Based on the independent classification, RF, k-NN, SVM, Tree, and NB are used in the Voting Ensemble and Combined Classifier models (Fig. 3(a) and Fig. 3(b)). The result is that the Combined Classifier method has an accuracy of 98.88%, a precision of 99.01%, a recall of 98.72%, and an F1-score of 98.86%. Meanwhile, the Voting Ensemble accuracy was 87.39%, the average precision was 88.42%, the average recall was 86.24%, and the F1 score was 86.96%. These results show that the Combined Classifier is able to classify better than the Voting Ensemble model.
The confusion matrix for the Voting Ensemble shows a high level of accuracy in predicting Class 0, Class 2, and Class 3, with 99 correct predictions for Class 0, 135 correct predictions for Class 1, and 174 correct predictions for Class 2. However, this model has some difficulty in predicting Class 2, with 16 prediction errors in Class 1 and 11 prediction errors in Class 2. Overall, the Voting Ensemble shows good performance with a high degree of accuracy. Meanwhile, the confusion matrix of the Combined Classifier is almost identical to that of the Voting Ensemble, with a high degree of accuracy in all classes. This model predicts Class 1 and Class 3 perfectly, with 123 correct predictions for Class 1 and 182 correct predictions for Class 3. Similar to the Voting
Figure 3: Confusion Matrix (a) Voting Ensemble (b) Combined Classifier
Figure 2: Confusion Matrix (a) Random Forest (b) k-NN (c) Decision Tree
Ensemble, this model has little difficulty predicting Class 2, with only 1 prediction error to Class 1 and 3 prediction errors to Class 3. Overall, the Combined Classifier also shows very good performance with a high degree of accuracy.
The test results are shown as bar charts in Fig. 4, with prediction results for Class-0 in 4(a), Class-1 in 4(b), and Class-2 in 4(c). The SVM (Support Vector Machine) and NB (Naive Bayes) models have a lower prediction success rate for all classes compared to the other models, whereas RF (Random Forest), VE (Voting Ensemble), and CC (Combined Classifier) predict all classes very well, with bar heights reaching almost 100%. The inaccurate models are SVM and NB for class 1, class 2, and class 3 alike; in general, SVM and NB appear to be the least accurate across all classes.
The highest accuracy, shown in Table 3, is achieved by the Random Forest (RF) and Combined Classifier models, at 0.993, while the model with the lowest accuracy is the Support Vector Machine (SVM) at 0.599. Precision is the ratio of True Positives to the total number of True Positives and False Positives; it shows how often the model is right when it predicts the positive class. RF and the Combined Classifier also have the highest precision, with a value of 0.976, while the model with the lowest precision is Naive Bayes (NB), at 0.198. Recall (sensitivity) is the ratio of True Positives to the total number of True Positives and False Negatives, indicating how often the model finds the positive class when the sample is actually positive. RF and the Combined Classifier have the highest recall, with a value of 1.000, and the model with the lowest recall is SVM, at 0.481. The F1 score is the harmonic mean of precision and recall and strikes a balance between the two; RF and the Combined Classifier have the highest F1 score, with a value of 0.988. These results indicate that the proposed models are capable of handling problems related to object variations [5, 6, 7, 8, 43].
## 4 Conclusion
The classification models evaluated were Random Forest (RF), K-Nearest Neighbors (KNN), Decision Tree, Support Vector Machine (SVM), and Naive Bayes (NB); among them, Random Forest achieves high accuracy. In contrast, the SVM and NB models have difficulty in classification, with SVM recording the lowest accuracy of 43.47%. The k-NN and Decision Tree models show moderate performance, with balanced precision and recall. When the base classifiers are combined, the Voting Ensemble reaches an accuracy of 87.39%, while the Combined Classifier is superior with an accuracy of 98.88%, precision of 99.01%, recall of 98.72%, and F1 score of 98.86%. The Voting Ensemble and Combined Classifier approaches open opportunities for further improvement of models with low individual accuracy.
|
2309.17012 | Benchmarking Cognitive Biases in Large Language Models as Evaluators | Large Language Models are cognitively biased judges. Large Language Models
(LLMs) have recently been shown to be effective as automatic evaluators with
simple prompting and in-context learning. In this work, we assemble 15 LLMs of
four different size ranges and evaluate their output responses by preference
ranking from the other LLMs as evaluators, such as System Star is better than
System Square. We then evaluate the quality of ranking outputs introducing the
Cognitive Bias Benchmark for LLMs as Evaluators (CoBBLEr), a benchmark to
measure six different cognitive biases in LLM evaluation outputs, such as the
Egocentric bias where a model prefers to rank its own outputs highly in
evaluation. We find that LLMs are biased text quality evaluators, exhibiting
strong indications on our bias benchmark (average of 40% of comparisons across
all models) within each of their evaluations that question their robustness as
evaluators. Furthermore, we examine the correlation between human and machine
preferences and calculate the average Rank-Biased Overlap (RBO) score to be
49.6%, indicating that machine preferences are misaligned with humans.
According to our findings, LLMs may still be unable to be utilized for
automatic annotation aligned with human preferences. Our project page is at:
https://minnesotanlp.github.io/cobbler. | Ryan Koo, Minhwa Lee, Vipul Raheja, Jong Inn Park, Zae Myung Kim, Dongyeop Kang | 2023-09-29T06:53:10Z | http://arxiv.org/abs/2309.17012v3 | # Benchmarking Cognitive Biases in Large Language Models as Evaluators
###### Abstract
Large Language Models (LLMs) have recently been shown to be effective as automatic evaluators with simple prompting and in-context learning. In this work, we assemble 15 LLMs of four different size ranges and evaluate their output responses by preference ranking from the other LLMs as evaluators, such as _System Star is better than System Square_. We then evaluate the quality of ranking outputs introducing the Cognitive Bias Benchmark for LLMs as Evaluators (CoBBLer)1, a benchmark to measure six different cognitive biases in LLM evaluation outputs, such as the Egocentric bias where a model prefers to rank its own outputs highly in evaluation. We find that LLMs are biased text quality evaluators, exhibiting strong indications on our bias benchmark (average of **40%** of comparisons across all models) within each of their evaluations that question their robustness as evaluators. Furthermore, we examine the correlation between human and machine preferences and calculate the average Rank-Biased Overlap (RBO) score to be **49.6%**, indicating that machine preferences are misaligned with humans. According to our findings, LLMs may still be unable to be utilized for automatic annotation aligned with human preferences.
Footnote 1: Our project page: [https://minnesotanlp.github.io/cobbler](https://minnesotanlp.github.io/cobbler)
## 1 Introduction
Large language models (LLMs) (Brown et al., 2020; Ouyang et al., 2022) adapted to follow various kinds of instructions have been popularly utilized for several natural language tasks. The general standard for testing a model's capabilities is benchmarking its performance on static evaluation suites such as Fan et al. (2019) and Wang et al. (2020). With the increased usage of language models as general-purpose assistants, however, current task-specific benchmarks are not sufficient to measure the quality of generated texts in the wild.
Figure 1: Our CoBBLer pipeline to evaluate the 15 popular LLMs that are instruction-tuned and trained with human feedback for their capabilities as unbiased automatic evaluators.
Recent studies have shown that LLMs can serve as evaluators themselves: Wu & Aji (2023) utilize LLMs as self-evaluators to automatically judge the quality of open-ended generations and compare them with human judgments via an Elo-score calculation. Other works, such as AlpacaEval (Li et al., 2023b), also utilize LLMs, such as GPT-4 (OpenAI, 2023), as automatic evaluators to reduce the time and cost overhead of human annotations. As noted by these works, such automatic evaluation leaderboards have a number of limitations, including a preference for long outputs or outputs that are more similar to the evaluators' generation qualities.
In this work, we propose CoBBLEr, the Cognitive Bias Benchmark for evaluating the quality and reliability of LLMs as Evaluators, as depicted in Figure 1. We collect a set of 50 question-answering examples from two well-established benchmarking datasets: BigBench (Srivastava et al., 2023) and ELI5 (Fan et al., 2019). We then generate responses from each LLM in consideration (totaling 50 responses) and prompt the models to evaluate their own and other models' responses. We assemble 15 of the best-performing models based on the HuggingFace OpenLLM leaderboard (Beeching et al., 2023) as well as API-based models and evaluate how these models reflect cognitive biases as evaluators. We test six different biases to benchmark their evaluation quality and categorize the model biases into two groups: (1) **Implicit Biases** to determine the inherent biases that can be implicitly extracted from each model's evaluation from a uniform prompt, and (2) **Induced Biases**, which add modifications to the original prompts akin to adversarial attacks, to induce negative behaviors. We conduct a round-robin evaluation of all possible \(\binom{15}{2}\) pairs of responses for all 50 QA examples, comparing every possible unique pair among the 15 model responses. For every pairwise comparison, we then prompt the models to select the response that is more coherent and better aligned with a reference response. As shown in Figures 2(a) and 2(b), we find that the majority of the models strongly exhibit several of the different biases, which may compromise the credibility of their role as evaluators.2 Furthermore, we conduct experiments for human preferences by crowdsourcing six human annotators and collecting each of their rankings for a total of 300 annotations. From our findings, we observe a low correlation between human and machine judgments via Rank-Biased Overlap (RBO), indicating that machine and human preferences are generally in low agreement.
Footnote 2: In total, **42K** samples are analyzed across six biases benchmarking each model for a complete 630K samples.
Our core contributions are as follows:
* A new benchmark (CoBBLEr) for evaluating the ability of LLMs to perform unbiased evaluations within the QA setting.
Figure 2: Major findings of this work: the intensity of model biases as well as the alignment between machine and human preferences. For (a), we show the proportion of biased responses for each evaluator relative to the random threshold. We scale each of the axes to the score of the most biased model. In (b), we draw a heatmap of the scores relative to randomly choosing model outputs. The scores were normalized via \(z\)-normalization using the mean and std of Random as a reference point. A darker red indicates a stronger intensity of bias, while a darker blue shade indicates more unbiased evaluations. In (c), we show the average Rank-Biased Overlap (RBO) scores between aggregated human preferences and each of the 15 LLMs. Higher RBO means higher similarity.
* An examination of an exhaustive list of evaluation biases that have not been covered by previous studies. We find that most LLMs cannot perform as unbiased evaluators, testing on 6 different cognitive biases.
* A comprehensive lineup of models (sizing from \(3B\) to \(>\)\(175B\) parameters) as evaluators, encompassing the current state-of-the-art language models covering over **630k** comparisons.
Based on our benchmark, we find that most models that are instruction-tuned or trained on human feedback exhibit various cognitive biases when used as automatic evaluators, and that may negatively impact the quality of evaluations. Thus, we propose the use of our benchmark (CoBBLer) for measuring the capabilities of future language models as evaluators to enable unbiased and reliable evaluations that are well-aligned with human preferences.
## 2 Related Work
LLMs as Evaluators. Owing to the effectiveness of LLMs, many recent research works have investigated their utility in various downstream tasks, such as machine translation (Kocmi and Federmann, 2023), summarization (Shen et al., 2023; Gao et al., 2023), code generation (Zhuo, 2023), writing assistance (Schick et al., 2023; Raheja et al., 2023), factual consistency (Cohen et al., 2023; Gekhman et al., 2023; Luo et al., 2023), and more. Additionally, many studies have investigated leveraging LLMs for general-purpose NLG evaluation. For instance, Liu et al. (2023); Chen et al. (2023); Wang et al. (2023) have investigated the effectiveness of GPT-4 and ChatGPT, respectively, against existing reference-free evaluation methods for NLG, whereas Fu et al. (2023) proposed an evaluation framework, GPTScore, which utilizes the emergent abilities of LLMs to score generated texts. Recently, Li et al. (2023) and Zheng et al. (2023) conducted similar experiments by employing LLMs as evaluators in a pairwise setting to judge between two generations. Although these works present promising results for LLMs as automatic evaluators, our work dives deeper into the limitations of machine evaluations. We take a closer look at machine artifacts that could be detrimental to data quality by benchmarking an exhaustive list of biases impacting LLMs-as-evaluators.
LLM Evaluation Benchmarks.It is becoming increasingly challenging to evaluate open-source LLMs as they become more powerful and performant. As a result, there has been an increasing need to develop better evaluation benchmarks for measuring the performance of LLMs. However, most of these benchmarks, such as LM-Eval-Harness (Gao et al., 2021), MMLU (Hendrycks et al., 2021), HELM (Liang et al., 2022) and BIG-Bench (Srivastava et al., 2023), have focused only on measuring the LLMs' core capability on a varied, but confined set of tasks, and not their capabilities as evaluators. Numerous works have also attempted to develop benchmarks based on the capability of LLMs to conduct NLG evaluations by evaluating candidate texts in isolation or comparing two or more candidate texts in accordance with specific evaluation aspects. However, they are restricted because they still require token-level comparisons between the input text and reference (Zhang et al., 2020; Tang et al., 2023). Our work aims to resolve these issues by creating a comprehensive, publicly available benchmark (CoBBLer) across a wide variety of LLMs to measure their abilities as evaluators of generated text while deeply investigating the numerous implicit and induced cognitive biases in LLM evaluators. Our work in this direction overlaps with Bai et al. (2023), who propose a Language-Model-as-an-Examiner benchmark to evaluate four foundational models on open-domain question-answering tasks. However, our works differ in examining the impact of cognitive biases in LLM evaluations while (Bai et al., 2023) test the ability of LLMs to formulate and grade their own questions. Our work also directly overlaps with Zheng et al. (2023), who propose LLM-as-a-judge to study the capability of LLMs to emulate human preferences. Our experimental setups are similar, but we highlight key differences. We cover a wider demographic of current popular language models and an overall different focus on QA as opposed to other domains such as math and reason. Furthermore, our benchmark emphasizes a wider range of biases (implicit/induced) to better describe machine artifacts when used as automatic evaluators. Specifically, CoBBLer measures the extent to which each LM-as-evaluator is impacted in each decision by certain artifacts within prompts (i.e., prompting format, prompt information) over a comprehensive list of cognitive biases.
Cognitive Biases in LLMs.While biases have been well-known to exist in LLMs (Wang et al., 2023; Talboy and Fuller, 2023; Wu and Aji, 2023), many recent works investigating the behaviors of
LLMs have also uncovered similarities with cognitive biases. Some recent works (Zhao et al., 2021; Liu et al., 2022; Lu et al., 2022) have shown that the order of training examples in GPT-3 could lead to differences in accuracy between near chance and near state-of-the-art. Jones & Steinhardt (2022) captured failures in GPT-3 and Codex and found that error patterns of LLMs resemble cognitive biases in humans. Our work overlaps with these in some of the biases we cover, but we present a much more holistic and comprehensive evaluation of LLMs, both when they are used as predictors, as well as evaluators of text quality. Along this aspect, while our work is close to Wu & Aji (2023), who investigate biases related to fabricated factual and grammatical errors while using GPT-4 as an evaluator, our work is much more comprehensive in terms of the number of LLMs analyzed, the types of biases analyzed, and creation of an open benchmark.
## 3 CoBBLEr: Cognitive Bias Benchmark for LLMs as Evaluators
The following criteria are used to select each type of evaluation bias:
* **General Applicability.** Text evaluation tasks should be generalizable to most prompting scenarios; tasks that observe too specific subtleties within the prompt are not helpful.
* **Impartiality.** The prompt should not involve any leading statements to extract some desired quality of the evaluations.
* **Memorylessness.** The current evaluation instance should not rely on any previous behaviors. Each instance should be self-contained when extracting each bias metric.
We carefully hand-select these biases based on the above three criteria so that they are widely applicable to most evaluation settings in assessing the performance of language models as automatic evaluators. Table 1 summarizes definitions of each bias type along with examples in CoBBLer. We categorize our benchmark into two main classes: (1) **Implicit** and (2) **Induced** Biases. For Implicit biases, we feed a general prompt that shows system outputs in a pairwise manner to extract any biased behaviors that implicitly exist within the model's evaluations without giving the model any additional information other than the instructions and the system outputs. For Induced biases, we feed prompts geared towards each different bias, which we describe as "induced behaviors" that achieve a similar goal to adversarial attacks, such as presenting false information, to measure the impact on each of the models' evaluation quality. We note that due to the nature of induced biases, criterion 2 is not entirely fulfilled, but we specifically choose ones that are still generally observable in an evaluation setting.
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Bias** & **Bias Behavior** & **Example** \\ \hline Order Bias & The tendency to give preference to an option & **System Star:**\(x\) & **System Square:**\(y\) \\ & based on their order (e.g. first, second, or last) & **System Square:**\(y\) & **System Star:**\(x\) \\ \hline Compassion & The tendency to observe different behaviors & **Model** Alpaca:**\(x\) & **Model Vicuna:**\(y\) \\ Fade & when given recognizable names as opposed to & **Model Vicuna:**\(y\) & **Model Alpaca:**\(x\) \\ & anonymized aliases. & **Model Star (You):**\(x\) \\ \hline Egocentric & The inclination to prioritize one’s own responses & **Model Square:**\(y\) \\ Bias & regardless of response quality. & **Model Square:**\(y\) \\ \hline Salience & The tendency to prefer responses based on the & **System Star:** The quick brown fox jumps \\ Bias & length of the response (more often preferring & over the lazy dog. \\ & shorter responses or longer responses). & **System Square:** The fox jumped. \\ \hline Bandwagon & The tendency to give stronger preference to majority belief without critical evaluation. & **85\%** believe that System Star is better. \\ \hline Attentional & The inclination to give more attention to irrelevant or unimportant details. & **System Square** likes to eat oranges and apples \\ \hline \hline \end{tabular}
\end{table}
Table 1: Definition and examples of each bias type in CoBBLer. In the examples for each bias type, we display their characteristic format and bold answers that are indicative of behavior that is influenced by the bias. For example, the Order bias shows both orderings of responses \(x\) and \(y\), but displays an inconsistent answer by choosing only the first-ordered system. Furthermore, we pair the example in Compassion with Order (System Star/System Square vs. Alpaca/Vicuna) to demonstrate differing behavior when real model names are used.
### Implicit Biases
We categorize biases as "implicit" if the biases inherent to the model-as-an-evaluator can be witnessed without including any additional information other than instructing the model to judge the quality of two given generated texts and declaring one the winner.
**Order Bias** is an evaluation bias we observe when a model tends to favor the model based on the order of the responses rather than their content quality. Order bias has been extensively studied (Jung et al., 2019; Wang et al., 2023; Zheng et al., 2023), and it is well-known that state-of-the-art models are still often influenced by the ordering of the responses in their evaluations. To verify the existence of order bias, we prompt both orderings of each pair and count the evaluation as a "first order" or "last order" bias if the evaluator chooses the first ordered (or last ordered) output in both arrangements respectively.
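As an illustration of how such a count can be computed (a sketch under assumed data structures, not the paper's released code), each pair contributes two eval-gens — one per ordering — and is flagged as first- or last-order biased only when both picks land on the same position:

```python
def order_bias_flags(pick_ab, pick_ba, sys_a="System Star", sys_b="System Square"):
    """pick_ab: winner when shown in order (A, B); pick_ba: winner for (B, A).
    Returns (first_order_biased, last_order_biased) for this pair."""
    first_biased = (pick_ab == sys_a) and (pick_ba == sys_b)  # always picks the first shown
    last_biased = (pick_ab == sys_b) and (pick_ba == sys_a)   # always picks the last shown
    return first_biased, last_biased

def order_bias_rate(evals):
    """evals: iterable of (pick_ab, pick_ba) tuples over all pairwise instances."""
    flags = [order_bias_flags(a, b) for a, b in evals]
    n = len(flags)
    first_rate = sum(f for f, _ in flags) / n
    last_rate = sum(l for _, l in flags) / n
    return first_rate, last_rate
```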
**Compassion Fade (Naming).**(Butts et al., 2019; Vastfjall et al., 2014) is a cognitive bias that denotes a decrease in empathy as the number of identifiable individuals increases. To test this phenomenon, we present real/identifiable names associated with each response to each evaluator instead of anonymous aliases (e.g. System A). To analyze this bias, we determine the evaluator to be affected if it exhibits different behaviors as a result of using recognizable names than when anonymous aliases were used. Thus, an unbiased evaluator would make evaluations similar to those made when anonymized aliases were used.
**Egocentric Bias (Self-Preference).**(Ross and Sicoly, 1979) is a cognitive bias that refers to the tendency to have a higher opinion of oneself or to more easily accept ideas if they match one's own. We utilize this definition to measure how often evaluators prefer their own responses in both the anonymized and named cases. We define an evaluator to be egocentrically biased if, for each instance, the evaluator prefers its own response over others. We note that an unbiased evaluator would choose between themselves and other comparand models equally in proportion. However, we highlight that some models would naturally generate higher quality responses (e.g., GPT4 vs. RedPajama), resulting in a stronger inclination for such evaluators to choose their own responses.
**Salience Bias (Length).**(Schenk, 2010; Zheng et al., 2023) The evaluator tends to favor responses that are either shorter or longer in length. An unbiased evaluator would be split evenly between responses that are shorter or longer in length. We examine this bias by looking at evaluations in which a model preferred a response that is either shorter or longer in token length.
### Induced Biases
We categorize a bias as "induced" when it requires modifications to the primary prompt or the inclusion of additional information with the original instructions. We specifically look to test the robustness of each of the models as evaluators by introducing false or off-topic information and examining the impact that these setups have on the quality of their role as evaluators. For both of the induced biases below, an unbiased evaluator would be expected to pick the response highlighted by the Bandwagon or Distraction statement only around 25% of the time, a threshold (Random) calculated empirically by randomly choosing model responses.
**Bandwagon Effect**.(Schmitt-Beck, 2015) The evaluator's preferences are influenced by the collective preference rather than being based on their own independent judgments. We add an additional sentence after the initial instruction stating a fake statistic by choosing one of the comparand outputs as preferred by a majority of people, such as _"85% believe that System Star is better."_. To validate that a model was affected by this bias, we prompt each pair twice, stating each model output in the statistic to examine whether the evaluator conforms to the majority.
**Attentional Bias (Distraction).** In addition to the original instruction, we follow a similar setup from Shi et al. (2023) where we include irrelevant information about one of the comparand models to test the ability of evaluators. For example, we include a meaningless sentence such as _"System Star likes to eat oranges and apples."_ We identify the evaluator to be distracted if it prefers the model mentioned in the distraction or if its valid response rate significantly drops. We repeat this setup twice for each pair (once for each model in the pair) to ensure a fair evaluation.
## 4 Experiment Setup
In this section, we discuss our evaluation framework for benchmarking each of the different biases in LLMs as evaluators for text quality comparison. Figure 1 describes the pipeline for our experiments. We first generate responses from various LLMs considered for this study (Section 4.1) and present them in a pairwise fashion for quality evaluation by each evaluator model (Section 4.2). In Section 4.3, we describe our setup for the human preference study.
### Datasets and Models
We choose two widely used datasets employed to train and benchmark instruction-tuned models.
**Eli5**(Fan et al., 2019) is a long-form question-answering dataset constructed from \(270k\) threads from the "Explain Like I'm Five" Reddit forum. The online forum consists of a community for individuals to ask various questions, and answers are provided in a format that is comprehensible to five-year-olds, along with assigned scores based on community votes. For our purposes, we only utilize the questions and their highest-rated answers to generate responses and benchmark automatic evaluators for text-generation quality.
**BigBench**(Srivastava et al., 2023) is a collection of benchmarks that look to probe the abilities of language models over a diverse range of tasks. We specifically utilize the _strategyQA_(Geva et al., 2021) dataset, which was constructed by crowdsourcing questions from writers as well as their responses with short justifications. We choose the _strategyQA_ dataset to generate responses that require multi-step reasoning to effectively benchmark the ability of models to comprehend and compare the quality between two different explanations.
We specifically only choose corpora in the Question-Answering (Q/A) domain for ease of use in generating responses. As we are looking to test the ability of language models to perform as unbiased evaluators to judge response quality and correctness, the Q/A response format presents the most natural setting for these comparisons. Specifically, we choose a set of 50 question-answering examples from the Eli5 and BigBench (taking 25 random examples from each), generate responses from each language model for each question (a total of 50 responses per model), and prompt them to evaluate the generated responses.
**Models** We study the behaviors of 15 different models organized into 4 groups, covering a wide range of model sizes from \(3B\) parameters to over \(175B\). In Table 2 from top to bottom, we evaluate GPT-4, ChatGPT, and InstructGPT (OpenAI, 2023), which comprise the largest group of models with size \(>\)\(100B\) parameters. In the second group, containing models of size \(>\)\(40B\) parameters, we examine LLaMAv2 (Touvron et al., 2023), LLaMA (Touvron et al., 2023), Coherence, and Falcon (Almazrouei et al., 2023). The third group is comprised of models \(>\)\(10B\) parameters, which includes Alpaca (Taori et al., 2023), Vicuna (Chiang et al., 2023), OpenAssistant (Kopf et al., 2023), and DollyV2 (Conover et al., 2023). Finally, the last group consists of models with size \(<\)\(10B\) parameters, comprised of Baize (Xu et al., 2023b), Koala (Geng et al., 2023), WizardLM (Xu et al., 2023a), MPT (Team, 2023), and RedPajama (Computer, 2023).
### Text Evaluation Setting
**Response Generation**. Figure 1 demonstrates our generation and evaluation3 pipeline for CoBBLEr. We first generate the responses from each of the models by prompting 50 instructions from the combined dataset, which we post-process to extract only the response from all models, to have a total of 750 generations. We note that for chat models, we slightly alter the instruction prompt but keep the general instruction template the same for uniformity.
Footnote 3: We define “models” and “evaluators” interchangeably
**Pairwise Evaluation**. After we collect all the model responses, we then prompt each evaluator to compare the generations in a pairwise manner. We generate all unique pairs amongst all models for each of the 50 instances, creating a total of 5250 instances for each evaluator to rank. To evaluate whether an evaluator is biased or not, we run each pairwise instance twice by switching their order for the second evaluation call, generating a total sample size of 10500 for each bias type (totaling 42K samples evaluated for each evaluator as Salience and Egocentric can be directly extracted
from Order). We prompt each evaluator by giving two responses that are randomly shuffled and anonymize the model associated with the response with an alias such as "System Star" or "System Square." To reduce ambiguity, we ask the evaluators to choose only one of the two models to have a clear winner and, furthermore, include a specific response format. We specify guidelines4 within the evaluation prompts by asking the evaluator to compare generations based on the _coherence_ of each of the responses in terms of correctness of content and alignment to the instruction/reference provided. Within the evaluation prompt, we include the original generation instruction and reference as context but keep a more open-ended setting for LLMs to judge responses they prefer more naturally. We then post-process each "eval-gen" via pattern matching and label each output as a valid or invalid response, such that if a response is valid, we give one point to the preferred system. We additionally conduct a list-wise ranking with \(N=4\) models. However, we find that most LLMs in the \(<\)\(40B\) range have trouble generating valid rankings (Appendix C). This may be due to the increased complexity of the task (Dziri et al., 2023) where the ranking of \(N\) generations may become much more difficult as \(N\) gets larger (since the task complexity increases).
Footnote 4: The exact evaluation prompt formats for each bias benchmark are viewable in Appendix B
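The scale of this round-robin follows directly from the combinatorics; a toy sketch with placeholder model names (not the actual pipeline) reproduces the counts quoted above:

```python
from itertools import combinations

n_models, n_instances = 15, 50
models = [f"model_{i}" for i in range(n_models)]   # placeholder names

pairs = list(combinations(models, 2))              # 15 choose 2 = 105 unique pairs
instances = len(pairs) * n_instances               # 105 * 50 = 5250 pairwise instances
both_orderings = instances * 2                     # 10500 evaluation calls per bias benchmark

print(len(pairs), instances, both_orderings)       # 105 5250 10500
```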
**Benchmarking.** The benchmark is carried out in a leaderboard style in which the evaluator with the lowest proportion of biased responses across all the different benchmarks is identified as the "least biased". As the comparisons are limited to a pair-wise fashion, and in order to isolate potential biases from other bias benchmarks, we identify an evaluator as biased by empirically calculating a threshold via random selection as well as evaluating each pairwise instance _twice_ for validation. For example, in the Order benchmark, each pair is evaluated twice in which both orderings are viewed (i.e. System Star is shown ordered first, then System Square is shown ordered first). We then randomly select a model in each response pair and measure the percentage of first-order biases, where the first-ordered model is chosen in each pair, and similarly measure last-order bias. We calculate the random selection threshold for each bias benchmark, and models above these thresholds are identified to exhibit the said bias.
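For instance, the Random threshold for first-order bias can be reproduced with a short simulation (an illustrative sketch, not the authors' code): two independent random picks per pair land on the first-shown response in both orderings about a quarter of the time, matching the Random row in Table 2.

```python
import random

def random_first_order_threshold(n_pairs=5250, seed=0):
    """Monte Carlo estimate of how often a purely random evaluator looks
    'first-order biased' (picks the first-shown response in both orderings)."""
    rng = random.Random(seed)
    biased = 0
    for _ in range(n_pairs):
        picked_first_ab = rng.random() < 0.5   # ordering (A, B)
        picked_first_ba = rng.random() < 0.5   # ordering (B, A)
        if picked_first_ab and picked_first_ba:
            biased += 1
    return biased / n_pairs

print(random_first_order_threshold())   # ~0.25
```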
We carry out this experiment for each bias that requires modifications5 to the original prompt and process their evaluation outputs (we refer to these as "_eval-gens_") to examine their preference behaviors. The prompts for each of the different bias benchmarks are described in Appendix B, along with the parameters utilized for response generation.
Footnote 5: Some biases such as Salience do not require any modifications to the original prompt as they can be directly extracted in tandem with Order.
### Human Preference Study
Collecting Human Preferences in the \(\mathbf{N}=\mathbf{15}\)-rankwise Setting. We investigated potential relationships between human evaluations and those conducted by models on generated texts. For this, we gathered human preferences using crowdsourcing. We engaged six Amazon Mechanical Turk (AMT) workers, all of whom possessed at least a US high school diploma and achieved Human Intelligence Task (HIT) approval rates greater than 99%. Each worker evaluated 50 instances, with each instance comprising 15 responses generated by 15 distinct models. The workers then ranked these texts based on their preference, from top (highest quality) to bottom (lowest quality), while considering two main criteria: (1) _fluency_ concerning the given instruction sentence in an instance and (2) _alignment_ and _coherency_ with both instruction and reference sentences in the instance. Further details on worker recruitment, payment, and the interface design for these experiments are described in Appendix D.1 and D.3.
Measuring Human-LLM Evaluation Similarity We calculate the Rank-Biased Overlap (RBO) 6 score (Webber et al., 2010) to measure the _agreement_ between two ranked outputs by humans and models. RBO, which can vary from 0 (non-conjoint) to 1 (identical), assigns more weight to the top \(k\) items in the lists being compared. Given our assumption that workers will likely place the best-quality generated texts at the top of their ranked lists, RBO is a fitting metric for our experimental setup. We adopted a parameter of \(p=0.8\), which concentrates 86% of all weights on the top 5 list positions, following Webber et al. (2010). Further mathematical details of RBO can be found in Appendix D.2.
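For reference, a simplified truncated form of RBO for two rankings over the same set of items can be written as below; the full measure in Webber et al. (2010) additionally includes an extrapolation term for the unseen tail, which this sketch omits.

```python
def rbo(ranking1, ranking2, p=0.8):
    """Truncated Rank-Biased Overlap: geometrically weighted average of the
    overlap between the top-d prefixes of the two rankings."""
    k = min(len(ranking1), len(ranking2))
    seen1, seen2 = set(), set()
    score = 0.0
    for d in range(1, k + 1):
        seen1.add(ranking1[d - 1])
        seen2.add(ranking2[d - 1])
        agreement = len(seen1 & seen2) / d     # fraction of the top-d items shared
        score += (p ** (d - 1)) * agreement
    return (1 - p) * score
```

With \(p=0.8\), the geometric weights \(p^{d-1}\) place most of the mass on the first five ranks, which is why the top of each list dominates the score.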
We analyzed the extent of similarity between human preferences and model evaluations in ranking generated texts across 15 different LLMs. To achieve this, we proceeded with specific steps of standardizing rankings across different annotators, enabling us to quantify the overall similarity between human and LLM evaluations. First, we counted instances where one model received a higher ranking than another model in pairwise comparisons, resulting in \(\binom{15}{2}\) such comparisons for each instance. We then aggregated these counts across six annotators, which allowed us to identify a normalized ranking of 15 LLM-generated texts for each instance. Applying analogous procedures to the rankings evaluated by the 15 models themselves, we established a corresponding set of 15 models for each instance. Finally, we computed the average top-5 weighted RBO between the aggregated ranking of human preference and that of LLM evaluations across all 50 instances, as a final RBO between human preference and model evaluation.
Identifying Biases in Pairwise Human Preference. We also investigated the impact of specific biases 7 (Order Bias, Salience Bias, Bandwagon Effect, and Attentional Bias) within human preferences in pairwise settings, by sourcing AMT workers. More details on AMT recruitment, study designs, and payments for the pairwise setup are described in Appendix D.1 and D.3.
Footnote 7: Other types of biases such as Compassion Fade and Egocentric Bias cannot be applied to human cases. We tested the effect of Salience Bias from the Order Bias experiment setup.
The evaluation procedure for these human biases mirrored that described in Section 4.2. However, due to the vastness of the pairwise model comparison settings, we first randomly selected 25 of the 50 total instances. Then for each instance, we randomly paired 15 models and created another 15 pairs by reversing their order, resulting in 30 pairs in total. This finally totals 750 pairs for all 25 instances.
After collecting all annotations for each bias, we calculated the average IAA using the RBO for each bias. Each instance consisted of uniquely (but randomly) sampled model pairs, with some models appearing multiple times. Hence, we determined the rank of each model in the sampled pairs by calculating the ratio of the model's "win" to its total appearances. With this data, we re-ranked each model in the sampled pairs per instance. Afterward, we computed the mean RBO among the ranked model lists from each group of three AMT workers per instance. We then averaged these RBO values over all 25 instances. Finally, we computed the bias proportion for each annotator by dividing the number of biased pairwise samples by 15. Following these steps, we aggregated the bias proportions across all annotators, highlighting the overall influence of bias on human preference in pairwise selections.
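A sketch of the win-ratio re-ranking step described above (illustrative only; the alphabetical tie-break is an assumption, as the text does not specify one):

```python
from collections import defaultdict

def rank_by_win_ratio(pairwise_results):
    """pairwise_results: list of (model_a, model_b, winner) tuples for one instance.
    Returns model names sorted by wins / appearances, best first."""
    wins, appearances = defaultdict(int), defaultdict(int)
    for a, b, winner in pairwise_results:
        appearances[a] += 1
        appearances[b] += 1
        wins[winner] += 1
    ratio = {m: wins[m] / appearances[m] for m in appearances}
    return sorted(ratio, key=lambda m: (-ratio[m], m))   # tie-break alphabetically
```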
## 5 Results and Discussion
For each bias, we analyze the performance of each of the 15 models as evaluators of generated responses in the QA setting. We provide details of LMs-as-evaluators relative to the Random baseline on each of our bias benchmarks in Table 2, and Fig. 2b and 4, showing the intensity of each bias as well as the distribution of the biased responses. We provide a visual breakdown of the proportional impact of the average performance of each model as unbiased evaluators in Fig. 3. On average, we see that models within the \(10B\) size range are most affected by each of the bias benchmarks in Fig. 3a. This can be attributed to the Bandwagon Effect and Attentional Bias benchmarks, in which the induced biases contribute to almost half of their average bias score as seen in Fig. 3b. Furthermore, we see that, on average, the implicit biases contribute relatively similarly to each model's overall bias scores, indicating that scaling up the model size does not reduce implicit biases in evaluators.
### Bias Analysis
Implicit Biases. We first examine the performance of each evaluator on the implicit bias benchmarks for Order Bias, Compassion Fade, Salience Bias and Egocentric Bias. For the Order Bias benchmark in Table 2, we observe that most models (11/15) tend to be drawn towards either the first- or last-ordered model in each of the pairwise comparisons. Amongst the models influenced, we further observe that the majority of models tend to prefer the first-ordered response, especially within the second size group (\(>\)\(40B\)), in which the first-ordered system was strongly favored in over
50% of comparisons. In contrast, we see that the fourth size group (\(<\)\(10B\)), consisting of the smallest models, tends to prefer the last-ordered response.
For Compassion Fade, we show the real model names to the evaluators instead of the anonymized aliases. However, this bias is difficult to gauge solely based on the random threshold. Therefore, we further compare the impact of model names vs. anonymized aliases by jointly comparing the results from Compassion Fade with the ones from Order Bias. In essence, for a model to not be influenced in its decisions by real model names, we expect the results for Compassion Fade to be relatively similar to the Order Bias benchmark. However, we see in Table 2 that all models are dramatically influenced by real model names, where, surprisingly, the bias is largely increased for the largest group of models (\(>\)\(100B\)), and in contrast, the smallest models (\(<\)\(10B\)) where it is mostly reduced. This difference in the degree of impact that Order Bias and Compassion Fade have on each model can be viewed in Fig. 4 and Table 2.
Next, we examine the tendency for models to choose their own responses, or Egocentric Bias, in both the aliased and named cases. For the anonymized aliases, the largest models, with the exception of InstructGPT, generally tend to prefer their own responses (\(>50\%\)). Additionally, although Koala has a low valid response rate, it also tends to prefer its own responses in cases where it produces a valid preference. When including real model names, we see a large drop in self-preference for models in the largest size group (\(>\)\(100B\)), which instead see a large increase in bias for each position. However, on average, we see an increase in self-preference with real model names amongst the smaller models in the size ranges \(>\)\(10B\) and \(<\)\(10B\), suggesting that real model names draw smaller models to prefer their own responses, regardless of text quality, more often compared to anonymized aliases.
For Salience Bias, we observe that the larger models in the first and second size groups are drawn more strongly to longer responses, which align with findings from other works (Wu and Aji, 2023; Zheng et al., 2023). However, smaller models (excluding MPT) tend to be less influenced by the length of the responses they are comparing, suggesting that smaller models in the third and fourth size groups are less susceptible to the text's lengths.
Lastly, we address specific models such as LLAMAv2, LLaMA, DollyV2, and Koala that show abnormal results on most of the benchmarks. This can be attributed to their low valid response rates, which are displayed in Table 7 that list the average percentages in which models return a valid choice
Figure 3: Overview of performance across all of the bias benchmarks categorized into 4 size groups from results in Table 2. The red-dotted line denotes the average threshold taken from the calculated Random in which the average scores were taken by summing their scores from Table 2 and then taking their average.
between "System Star" or "System Square". This may be explained by our prompting format or the capabilities of the model themselves, in which models with a particularly low valid response rate may have difficulty understanding the instructions provided. 8
Footnote 8: We provide example evaluation generations for each model with our code [https://github.com/minnesotalamlp/cobbler](https://github.com/minnesotalamlp/cobbler)
Induced Biases. Next, we evaluate the performance of each evaluator on the induced bias benchmarks: Bandwagon Effect and Attentional Bias. For Bandwagon Effect, we observe that almost all models (11/15) are heavily influenced by irrelevant statistics regarding majority preference, in which \(>70\%\) of evaluations on average followed the bandwagon preference regardless of text quality. Although we only included a simple fake statistic (e.g. _85% of people preferred "System Star"_), we see that evaluators can be heavily influenced by this external information, which heavily impairs their ability to make fair comparisons. Notably, we see that models with a particularly low valid response rate (e.g. LLaMA, Koala) were not so impacted by the bandwagon effect, although this can be attributed to the limited evaluations that were properly generated.
For Attentional Bias, we see that around half of the models' rankings are influenced by irrelevant information. Specifically, we see that models in the third size group (\(>\)\(10B\)) were the most strongly impacted by the distracting information, with \(>80\%\) of evaluations being counted as distracted. On the other hand, API-based models such as ChatGPT and Coherence remained robust against these distractions in their rankings. We include the list of distractions we use in Appendix B.
### Model Size
We conduct a supplementary experiment analyzing the impact of each bias for different models scaled by size in Table 3. We present results from a range of model sizes with LLaMAv2 and Vicuna. Interestingly, we see that the valid response rate within LLaMAv2 goes down as the model size is
\begin{table}
\begin{tabular}{l c c c c c c c c c c} \hline \hline Model & Size & \multicolumn{3}{c}{Order} & \multicolumn{3}{c}{Comp.} & \multicolumn{3}{c}{Ego.} & Sal. & Band. & Attn. \\ & & First & Last & First & Last & Order & Comp. & & & \\ \hline Random & - & 0.24 & 0.25 & 0.24 & 0.25 & 0.24 & 0.24 & 0.5 & 0.25 & 0.25 \\ \hline GPT4 & - & 0.17 & 0.06 & 0.46 & 0.33 & 0.78 & 0.06 & 0.56 & 0.0 & 0.0 \\ ChatGPT & 175B & 0.38 & 0.03 & 0.41 & 0.25 & 0.58 & 0.17 & 0.63 & 0.86 & 0.06 \\ InstructGPT & 175B & 0.14 & 0.24 & 0.29 & 0.19 & 0.28 & 0.27 & 0.66 & 0.85 & 0.54 \\ \hline LLAMAv2 & 70B & 0.47 & 0.08 & 0.09 & 0.17 & 0.06 & 0.0 & 0.62 & 0.04 & 0.03 \\ LLaMA & 65B & 0.61 & 0.0 & 0.0 & 0.0 & 0.0 & 0.02 & 0.42 & 0.0 & 0.01 \\ Coherence & 54B & 0.33 & 0.17 & 0.38 & 0.27 & 0.27 & 0.15 & 0.60 & 0.82 & 0.14 \\ Falcon & 40B & 0.74 & 0.03 & 0.09 & 0.18 & 0.05 & 0.11 & 0.59 & 0.28 & 0.40 \\ \hline Alpaca & 13B & 0.0 & 0.81 & 0.23 & 0.29 & 0.18 & 0.39 & 0.47 & 0.75 & 0.81 \\ Vicuna & 13B & 0.32 & 0.17 & 0.17 & 0.15 & 0.27 & 0.45 & 0.53 & 0.81 & 0.78 \\ OpenAssist & 12B & 0.56 & 0.11 & 0.03 & 0.22 & 0.15 & 0.06 & 0.49 & 0.72 & 0.82 \\ DollyV2 & 12B & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 \\ \hline Baze & 7B & 0.0 & 0.95 & 0.21 & 0.32 & 0.02 & 0.36 & 0.49 & 0.82 & 0.24 \\ Koala & 7B & 0.24 & 0.01 & 0.0 & 0.11 & 0.48 & 0.86 & 0.55 & 0.13 & 0.1 \\ WizardLM & 7B & 0.08 & 0.64 & 0.22 & 0.34 & 0.14 & 0.29 & 0.53 & 0.76 & 0.27 \\ MPT & 7B & 0.49 & 0.1 & 0.11 & 0.27 & 0.21 & 0.25 & 0.63 & 0.95 & 0.52 \\ RedPajama & 3B & 0.08 & 0.38 & 0.16 & 0.33 & 0.04 & 0.06 & 0.52 & 0.18 & 0.17 \\ \hline \hline \end{tabular}
\end{table}
Table 2: A comparison of 15 models with different ranges of model sizes across six different bias benchmarks. In terms of making unbiased comparisons, a higher score indicates poorer performance, while a lower score indicates stronger performance. Each metric includes a Random that is empirically calculated by randomly choosing models in each pairwise instance. For Order Bias and Comparison Fade, _First_ indicates the proportion of responses preferring the first ordered response and _Last_ for the last ordered response. For Salience Bias, models with scores less than 0.5 prefer responses with fewer tokens, and scores above 0.5 prefer responses with more tokens. The background color of each metric is determined by the difference between the value and the corresponding Random metric (darker shade indicates stronger bias).
scaled up, but the impact of each bias greatly increases as the model size is scaled down (with the exception of Salience Bias). On the implicit bias benchmarks, LLAMAv2 exhibits more robust performance at larger scales, with the proportion of responses affected by each bias decreasing, except for Salience Bias, for which longer responses are much more strongly preferred. For the induced bias benchmarks, a similar trend is observed in which the effect of each bias on the model as an evaluator is dampened in correlation with the model scale. On the contrary, Vicuna exhibits a stronger valid response rate as the model size is scaled up; however, certain implicit biases are much more amplified, such as Order Bias and Salience Bias. For implicit biases, Vicuna tends to prefer itself when actual model names are used as size is scaled smaller, while tending to prefer much more verbose responses as model size is scaled higher. Across the induced biases, Vicuna performs more resiliently in proportion to scale, remaining strongly influenced by the Bandwagon Effect but much less affected by Attentional Bias. We include another visualization correlating the overall performance on each of the bias benchmarks with model size for the main results in Figure 3a.
### Agreement with Human Preferences
N-rankwise Preference. We report an average inter-annotator agreement (IAA) score, determined using the average RBO among the six AMT workers, of 0.478. This signifies a modest but reasonable consensus among workers in ranking the LLM outputs, given the challenges of ranking all 15 LLM-generated outputs.
The average RBO score between human and model preferences is 0.496. This indicates that, on average, 49.6% of the ranks of 15 LLM-generated texts are overlapped between human preferences
Figure 4: Proportion of responses that were labeled bias for each bias benchmark. We visualize the distribution of the 15 models tested that varies by the y-axis. The red dashed line indicates the Random threshold for each bias benchmark that serves as a litmus between biased and unbiased LMs-as-evaluators. The spread on the x-axis is randomly distributed for visual clarity.
\begin{table}
\begin{tabular}{l c c c c c c c c c c c} \hline \hline Model & Size & Order & \multicolumn{2}{c}{Compression} & \multicolumn{2}{c}{Eoocent.} & \multicolumn{2}{c}{Salience} & Bandwag. & Attent. & Avg. Valid \\ & & First & Last & First & Last & Order & Comp. & & & & Responses \\ \hline LLAMAv2 & 70B & 0.47 & 0.08 & 0.09 & 0.17 & 0.06 & 0.0 & 0.62 & 0.04 & 0.03 & 0.54 \\ & 13B & 0.82 & 0.04 & 0.09 & 0.19 & 0.07 & 0.0 & 0.79 & 0.28 & 0.28 & 0.86 \\ & 7B & 0.98 & 0.0 & 0.25 & 0.33 & 0.01 & 0.02 & 0.49 & 0.42 & 0.02 & 0.98 \\ \hline Vicuna & 33B & 0.95 & 0.0 & 0.20 & 0.38 & 0.03 & 0.25 & 0.84 & 0.69 & 0.26 & 0.99 \\ & 13B & 0.32 & 0.17 & 0.17 & 0.15 & 0.27 & 0.45 & 0.53 & 0.81 & 0.78 & 0.87 \\ & 7B & 0.58 & 0.04 & 0.14 & 0.0 & 0.20 & 0.64 & 0.58 & 0.50 & 0.61 & 0.86 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Performance comparison in proportion to their model scale. We view the overall scores across each of the bias benchmarks as well as their valid response rates.
and model evaluations. This finding points out that model evaluations do not closely align with human preferences regarding their reasoning behind ranking the quality of LLM-generated texts.
Figure 2c presents the average RBO scores for each of the 15 models compared against the aggregated human preferences collected across 50 instructions. ChatGPT achieved the highest average RBO score of 0.619 when compared to human evaluations. All other models also demonstrated lower alignment with human preferences, with Baize standing out with the lowest RBO score of 0.11 on average in relation to human preferences among all 15 models. Models with decreased size also tend to misalign with an overall human preference, as we observe that the average RBO of models of size \(>10B\) and \(<10B\) are 0.44 and 0.32, respectively, which are lower than models of \(>40B\) (0.52) and \(>100B\) (0.49).
We also provide examples of the ranking preferences from each of the evaluators compared to human preferences in Table 4. We see that although within the top 5 rankings for these examples, models such as GPT4 and Vicuna share some similarities in their preferences, most models have little overlap with human preferences.
Biases in Pairwise Human Preference. The resulting average IAAs were 0.33 (Order Bias), 0.44 (Bandwagon Effect), and 0.38 (Attentional Bias), signifying a modest degree of agreement among human annotators.
The proportions of biased responses across all human annotators for Order Bias, Salience Bias, Bandwagon Effect, and Attentional Bias are presented in the table below, as a comparison with Vicuna. As in Table 2, a higher score indicates more biased performance. We observe that human annotators still exhibited biases when expressing their preferences on LLM evaluations, but less so than LLM evaluators on average.
The average proportion of Order Biases across all annotators are 0.20 (first) and 0.18 (last). Compared to Table 2, the annotators showed lower first-order bias than the average of proportions shown by models of size \(>100B\) (0.23), \(>40B\) (0.54), and \(>10B\) (0.29), respectively. These scores indicate that the annotators, on average, exhibited less first-order biases than models of greater size. However, the annotators showed higher last-order bias than the average of proportions shown by models of size \(>100B\) (0.11) and \(>40B\) (0.07), respectively, but lower than models of \(>10B\) (0.37) and \(<10B\) (0.42).
For the Bandwagon effect, humans were less affected (0.47) than models of size \(>100B\) (0.57), \(>10B\) (0.76), and \(<10B\) (0.57). For Salience Bias, humans showed a greater preference for longer responses (0.52) on average, less than the proportions of models of size \(>100B\) (0.62), \(>40B\) (0.56), and \(<10B\) (0.54). However, we observed that humans showed a greater Attentional Bias in text quality evaluation (0.35), which exceeds models of size \(>100B\) (0.20), \(>40B\) (0.14), and \(<10B\) (0.26).
\begin{table}
\begin{tabular}{l|c c c c c c c c} \hline \hline \multicolumn{2}{c}{Instruction: Did people in Korea under} & \multicolumn{4}{c}{Instruction: Why classical music still sounds} \\ \multicolumn{2}{c}{Japanese Rule watch a lot of Iron Chef?} & \multicolumn{4}{c}{good today (after four hundred years), but lots} \\ \multicolumn{2}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} \\ \cline{5-10} GPT4 & Compare & Vicuna & **MPT** & Human & GPT4 & Otherwise & Victoria & **MPT** & Human \\ \hline GPT4 & Baize & GPT4 & Baize & GPT4 & GPT4 & Instricted & Victoria & Baize & Baize \\ WizardM & GPT4 & Compare & WizardLM & Vicuna & ChairGPT & Vecuna & WizardLM & Vicuna \\ ChaigGPT & WizardLM & Baize & Alipaca & Coffee & Falcon & WizardLM & Baize & ChatGPT & Koala \\ Baize & Coffee & Instricted &MPT & ChatGPT & Baize & Alipaca & Koala & Vecuna & Instricted \\ Content & Alipaca & Dolly & GPT4 & WizardLM & OptAssist & Koala & Cientific & Instricted & Cientific \\ \hline \hline \end{tabular}
\end{table}
Table 4: Two examples of the (top-5) rankings for each LM-as-evaluator of the four model sizes compared to the average human rankings. We calculate the rank-wise scoring for each LM-as-evaluator by aggregating the number of “wins” for each model from the pairwise comparisons and construct a list-based ranking. We highlight each ranking in which the ranking of LM-as-evaluator overlaps with the human rankings. Full ranking data can be viewed on our project page.
## 6 Conclusions
In this paper, we provide an extensive analysis of 15 recently developed LLMs and their capabilities as automatic annotators for text quality comparison in the Q/A setting. We propose a new benchmark CoBBLER that evaluates the performance of these models for (1) **Implicit** and (2) **Induced** biases. Our findings reveal that the majority of these LLMs-as-evaluators exhibited several cognitive biases, while humans relatively less so. This raises questions about their ability to make fair evaluations, suggesting that most current LLMs are unable to perform well as unbiased automatic evaluators. Furthermore, we also gather human preferences and find that most models are not well-aligned with human judgments, reaching only 49% agreement on average. With evaluation capabilities that include various cognitive biases as well as a low percentage of agreement with human preference, our findings suggest that LLMs are still not suitable as fair and reliable automatic evaluators.
## Limitations and Future Work
Even over several extensive experiments to benchmark the evaluation capabilities and the influence of cognitive bias in LLMs, we note various limitations in the current study. Some models, as viewed in Table 7, reached a very low valid response rate, notably LLAMA and DollyV2. This may be a result of the prompting format input, in which these tested models have difficulty understanding the instructed task in which proper evaluations could not be extracted. Our results are also only limited to the Q/A domain, in which certain biases may be more amplified or have less impact in different evaluation tasks, such as mathematical reasoning or text generation, that may vary in the performance of LLMs-as-evaluators. Furthermore, we acknowledge a few limitations within our human judgment study, notably the human annotations themselves, which reach subpar Inter-Annotator Agreement. This may be due to the difficulty of the task, asking MTurk annotators to rank 15 models to limit the number of comparisons required in a pairwise format, but also increases the complexity of the task itself, which may have caused lower quality in the annotations. Additionally, the computational cost and necessary resources to replicate the results in CoBBLER may limit the ability of the general audience to replicate the benchmark. Models ranging from 3B to 170B+ parameters in size require extensive resources and several GPUs that become very costly to run thousands of inference calls.
For future areas of exploration, potential de-biasing methods provide another area of interest in ameliorating each bias. For example, techniques such as chain-of-thought reasoning can be employed in order to reduce the effect of each benchmarked bias for current models. Other extensions may involve utilizing the CoBBLER data to further fine-tune a language model that reduces these cognitive biases that may provide higher-quality evaluations.
|
2307.16878 | Contrastive Learning for API Aspect Analysis | We present a novel approach - CLAA - for API aspect detection in API reviews
that utilizes transformer models trained with a supervised contrastive loss
objective function. We evaluate CLAA using performance and impact analysis. For
performance analysis, we utilized a benchmark dataset on developer discussions
collected from Stack Overflow and compare the results to those obtained using
state-of-the-art transformer models. Our experiments show that contrastive
learning can significantly improve the performance of transformer models in
detecting aspects such as Performance, Security, Usability, and Documentation.
For impact analysis, we performed empirical and developer study. On a randomly
selected and manually labeled 200 online reviews, CLAA achieved 92% accuracy
while the SOTA baseline achieved 81.5%. According to our developer study
involving 10 participants, the use of 'Stack Overflow + CLAA' resulted in
increased accuracy and confidence during API selection. Replication package:
https://github.com/disa-lab/Contrastive-Learning-API-Aspect-ASE2023 | G. M. Shahariar, Tahmid Hasan, Anindya Iqbal, Gias Uddin | 2023-07-31T17:41:10Z | http://arxiv.org/abs/2307.16878v2 | # Contrastive Learning for API Aspect Analysis
###### Abstract
We present a novel approach - CLAA - for API aspect detection in API reviews that utilizes transformer models trained with a supervised contrastive loss objective function. We evaluate CLAA using performance and impact analysis. For performance analysis, we utilized a benchmark dataset on developer discussions collected from Stack Overflow and compared the results to those obtained using state-of-the-art transformer models. Our experiments show that contrastive learning can significantly improve the performance of transformer models in detecting aspects such as Performance, Security, Usability, and Documentation. For impact analysis, we performed an empirical study and a developer study. On 200 randomly selected and manually labeled online reviews, CLAA achieved 92% accuracy while the SOTA baseline achieved 81.5%. According to our developer study involving 10 participants, the use of _Stack Overflow + CLAA_ resulted in increased accuracy and confidence during API selection. Replication package: [https://github.com/disa-lab/Contrastive-Learning-API-Aspect-ASE2023](https://github.com/disa-lab/Contrastive-Learning-API-Aspect-ASE2023).
API aspects, Contrastive learning, Transformers, API review, Aspect detection, LIME
## I Introduction
API (Application Programming Interface) review is a crucial process in software engineering that builds insight into an API's functionality, performance, and overall quality. Feedback from API users is essential in this process as it helps to identify areas for improvement and leads to a better software development process [1]. By asking questions or sharing experiences and opinions about a particular API in online forums like Stack Overflow (SO), developers create a discussion thread that serves as a review [2]. Studies indicate that while multiple reviews are available for a given API, developers tend to place higher importance on reviews that address specific aspects of an API [3]. For instance, developers may be particularly interested in learning about an API's security features or its ease of use. These findings have led researchers to develop automated methods for accurately identifying the different aspects covered in API reviews [2, 3, 4, 5, 6].
Classifying API reviews based on predefined aspects is a challenging task. API reviews often use technical terms, domain-specific language, and jargon that are specific to programming languages, frameworks, and the API functionality. These terms may not be commonly used in other types of text and may not be present in a pre-trained language model's vocabulary, which makes it difficult for such models to comprehend the meaning of the text. For example, the review _"But this is also for transforming into well XMLs."_ is related to the "usability" aspect, while the review _"I'm searching the java library for parsing XML, I googled a bit but couldn't found other than dom4j"_ is related to the "community" aspect.
In this paper, we propose a technique CLAA (Contrastive Learning for API Aspects). CLAA uses the principles of Contrastive Learning (CL) [7] and learns representations of API reviews that are specific to certain aspects, enabling the model to better distinguish between them. If a pre-trained language model is fine-tuned using CL, the model better understands the technical vocabulary and domain-specific language used in API reviews. Figure 1 demonstrates an example of how CLAA can improve the API aspect detection task. We present a clearer visualization in Figure 3, where we show the advantages of CLAA by highlighting the performance improvement over the baselines. This approach utilizes a supervised contrastive training objective to fine-tune each pre-trained transformer model (BERT [8], RoBERTa [9], XLNet [10], ALBERT [11], BERTOverflow [12], ELECTRA [13], and T5 [14]) to learn the aspect-wise semantic representations of API
Fig. 1: An example showing how CLAA is superior to non-CL methods in our experiments. CL minimized the distance between an instance (anchor) and positive examples (instances from a certain aspect category) and maximized the distance between the anchor and negative examples (instances do not belong to that aspect). The output explanations are shown for both the baseline and CLAA with highlighted features. Orange color signifies features crucial for considering a text related to an aspect, while blue color represents the opposite. The lighter the color, the less importance the feature has.
review instances. These learned representations are then used to fine-tune the pre-trained transformers to act as sequence classifiers for API aspect detection task. To explain why the classifier makes a certain prediction, CLAA uses a perturbation based text explainer framework LIME (Local Interpretable Model-Agnostic Explanations) [15]. Unlike that of Yang et al. [3], our approach does not rely on only fine-tuning pre-trained transformers to act as a classifier. Rather CLAA uses two stage training: first the pre-trained transformers are fine-tuned to learn the representations of the reviews of different aspect categories through contrastive training, and then the transformers with knowledge of the aspect-wise learned representations are fine-tuned using CrossEntropy training.
CLAA holds the potential to offer a more insightful and contextually relevant API selection process, which can enhance decision making when choosing specific APIs for specific projects. For example, consider a discussion regarding the _usability_ aspect of an API such as "The new update introduced a more efficient API for data processing". CLAA can recognize that this type of text stands apart from non-aspect text like "The team went out for lunch today". By embedding these texts in a shared feature space, CLAA gains the ability to cluster aspect-based API-related discussions in proximity while distancing them from conversations unrelated to a certain aspect. This ability to distinguish relevant from non-relevant texts contributes to the identification of crucial factors for API selection. It aligns recommendations with the project's prerequisites and customizes options based on user preferences, all of which can substantially enhance the API selection process for users.
We evaluate our proposed approach in two ways: (a) performance analysis and (b) impact analysis. For performance analysis, we compare CLAA with the state-of-the-art baselines through extensive experimentation. Experimental results show that it outperforms the SOTA results significantly. For impact analysis of CLAA, we conducted two experiments: (a) an empirical study and (b) a developer study. First, we collect all "json" and "java" related posts and comments from Stack Overflow. Then we apply CLAA and the best-performing baseline model (RoBERTa). CLAA achieved 92% accuracy by correctly predicting 184 sentences, while the baseline achieved 81.5% accuracy by correctly predicting 163 sentences on 200 randomly selected reviews. Second, to demonstrate the usefulness of CLAA, we conduct a developer study in two phases. In the first phase, we ask 10 participants (five professional developers and five graduate-level students) to perform two API selection tasks using three different settings (_Stack Overflow_ only, _Stack Overflow + baseline_, _Stack Overflow + CLAA_). In the second phase, we collect their comparative feedback on the correctness, confidence, and usefulness of CLAA. The findings of the study show that developers are more accurate in API selection when they are using CLAA. In summary, we made the following contributions in this paper:
1. **CLAA**: We present CLAA, a novel approach for aspect analysis in API reviews. In CLAA, the aspect detection component leverages contrastive learning, which yields state-of-the-art results and the classifier output explanation component offers explanations to aid in aspect detection.
2. **Evaluation**: We conducted a thorough assessment of CLAA with both performance and impact analysis. We compared it with a closely related state-of-the-art approach [3] and also carried out a developer study involving ten participants.
## II CLAA: CL for API Aspects
The CLAA tool comprises two main components, i.e., the API aspect detection component and the classifier output explanation component. The first component is responsible for classifying the API reviews into different aspect categories. The second component provides an interpretation of the predictions made by the first component.
### _API Aspect Detection Component_
We use the API aspect detection component to categorize API reviews into predefined aspects. The input to the component is an API-related review, and the output is an aspect category. This component is composed of three parts: (1) contrastive training, (2) cross-entropy training, and (3) hyper-parameter tuning.
#### II-A1 Contrastive training
Supervised contrastive learning is a technique widely used in Natural Language Processing (NLP) that requires training a model to differentiate between similar but distinct examples in order to improve its generalization ability [16]. The basic idea behind the method is to train a model to identify the similarities and differences between two examples, with one example being positive and the other negative. Figure 2 depicts a brief overview of the contrastive training procedure. At first, we separate the training data into two classes: instances belonging to a specific aspect are designated as the positive class, while the rest are considered negative. We split the training data into several batches of size \(32\). For aspect categories with fewer than \(100\) samples during training, to guarantee that each batch includes at least one positive class sample, the number of instances in these categories is doubled by duplicating each sentence. This data augmentation technique is considered minimal, as it is performed in a supervised setting and follows the unsupervised data augmentation method described in [16]. The next step involves defining the model to be trained on the data. We utilize transformer models, as outlined in section III-A1, as the encoder. The encoder learns the representations of the sentences in a batch. To train the encoder, we use NT-XENT (Normalized Temperature-Scaled Cross Entropy Loss) as the objective function, which measures the similarity between the positive and negative examples in a pair and optimizes the model so that the similarity score between positive examples is maximized while the similarity between negative examples is minimized. The temperature scaling helps to balance the trade-off between the two objectives and adjusts the sharpness of the optimization process. We use the supervised contrastive
loss mentioned in Khosla et al. [7], which can handle multiple positive and negative instances in a single batch. The goal of this loss function is to increase the similarity between positive instances and decrease the similarity between negative instances within a batch. We utilize the learned representations of the encoder to fine-tune the transformers for classifying API reviews. This process is repeated for each aspect and each transformer.
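To make the objective concrete, the following is a minimal PyTorch sketch of a supervised NT-XENT-style contrastive loss in the spirit of Khosla et al. [7]; the function and variable names are ours for illustration, the temperature follows the value used in our setup, and this is not the exact training code of CLAA.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.063):
    """NT-XENT-style supervised contrastive loss over one batch.

    embeddings: (batch, dim) sentence representations from the encoder
    labels:     (batch,) 1 for the positive (aspect) class, 0 otherwise
    """
    z = F.normalize(embeddings, dim=1)               # work in cosine-similarity space
    sim = torch.matmul(z, z.T) / temperature         # pairwise similarity matrix
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # ignore self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    # Positives: samples with the same label as the anchor, excluding the anchor itself.
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    pos_count = pos_mask.sum(dim=1).clamp(min=1)
    # Pull positives together; all other pairs act as negatives via the denominator.
    loss = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_count
    return loss.mean()

# Example: representations of one batch of 32 reviews with 0/1 aspect labels.
emb = torch.randn(32, 768)
lab = torch.randint(0, 2, (32,))
print(supervised_contrastive_loss(emb, lab).item())
```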
#### II-A2 Cross-entropy training
We fine-tune the transformer encoder as a sequence classifier to categorize API reviews. We employ _binary cross-entropy loss_, a commonly used loss function in binary classification during fine-tuning. Given a predicted probability distribution \(y_{\text{pred}}\) and the true label \(y_{\text{true}}\), the binary cross-entropy loss is calculated as follows:
\[\text{BCEloss}=-(y_{\text{true}}*\log(y_{\text{pred}})+(1-y_{\text{true}})* \log(1-y_{\text{pred}})) \tag{1}\]
The objective is to minimize this loss, i.e. to make the predicted probability \(y_{\text{pred}}\) as close as possible to the true label \(y_{\text{true}}\).
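As a minimal sketch of this second stage (not the exact CLAA implementation), the contrastively fine-tuned encoder can be wrapped with a dropout and linear head and trained under the binary cross-entropy objective of Eq. (1); the class and variable names below are ours, and a BERT-style encoder is assumed.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class AspectClassifier(nn.Module):
    """Encoder (already fine-tuned contrastively) + dropout + linear head."""
    def __init__(self, encoder_name="bert-base-uncased", dropout=0.1):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        self.dropout = nn.Dropout(dropout)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]                  # [CLS]-token representation
        return self.head(self.dropout(cls)).squeeze(-1)    # raw logit per review

model = AspectClassifier()
criterion = nn.BCEWithLogitsLoss()                         # numerically stable form of Eq. (1)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def training_step(batch):
    """One optimization step on a tokenized batch with 0/1 aspect labels."""
    logits = model(batch["input_ids"], batch["attention_mask"])
    loss = criterion(logits, batch["labels"].float())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```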
#### II-A3 Hyper-parameter settings
For contrastive training, we used \(0.063\) and \(0.1\) as the temperature values in the NT-XENT loss function. We fine-tuned transformers for \(5\) epochs with a batch size of \(32\), _AdamW_ as the optimizer, and a learning rate of \(5e-05\). To have at least one positive sample per batch, we doubled the sample size for aspects with fewer than \(100\) samples. For cross-entropy training, we fine-tuned transformers to act as classifiers by adding a dropout and a linear layer, using learning rates of \(1e-05\), \(2e-05\), and \(5e-05\). We used \(10\)-fold cross-validation, with a maximum sequence length of \(160\) and _binary cross-entropy_ loss. We ran the experiments on Google Colab [17], and the pre-trained models were implemented using the Hugging Face Transformers library.
### _Classifier Output Explanation Component_
We utilize LIME (Local Interpretable Model-Agnostic Explanations) [15], a widely used text explanation framework that assists in comprehending how the classifier in the API aspect detection component of CLAA makes predictions. For instance, the sentence "_CBC encryption in itself is not thread safe_" is a review regarding the security aspect, where the words "_encryption_" and "_thread_" are crucial to identifying it as such. LIME can determine the words or phrases in a review that significantly influenced the decision of the classifier in CLAA. LIME selects a specific piece of text that the machine learning model has made a prediction on. It then creates a number of perturbed versions of the text by making small, random changes to the original text. These perturbed versions are used to approximate the local behavior of the machine learning model around the original text. LIME uses a simpler, interpretable model (such as a logistic regression model) to make predictions on the perturbed versions of the text. The goal is to create a model that can accurately predict how the original machine learning model would behave around the original text, but in a way that is easier to understand. LIME then examines the simpler model to determine which words or phrases were most important in predicting the classifier's output for the original text. This can help explain why the classifier made the prediction.
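A minimal sketch of how such an explanation can be generated with LIME's text explainer is shown below. The stand-in classifier here is a generic Hugging Face sequence-classification model; in CLAA it would be the contrastively fine-tuned classifier. The number of perturbed samples and the random seed match the settings reported later in the threats-to-validity discussion.

```python
import torch
from lime.lime_text import LimeTextExplainer
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Stand-in binary classifier (in CLAA: the contrastively fine-tuned model).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                           num_labels=2)
model.eval()

def predict_proba(texts):
    """Return an (n, 2) array of [P(not aspect), P(aspect)] for LIME."""
    enc = tokenizer(list(texts), padding=True, truncation=True,
                    max_length=160, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=1).numpy()

explainer = LimeTextExplainer(class_names=["not security", "security"],
                              random_state=42)
exp = explainer.explain_instance(
    "CBC encryption in itself is not thread safe",
    predict_proba,
    num_features=6,     # top influential words
    num_samples=100)    # number of perturbed samples, as in our setup
print(exp.as_list())    # [(word, weight), ...]
```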
## III Evaluation
We answer two research questions:
1. **Performance Analysis.** How accurate is CLAA to detect API aspects?
2. **Impact Analysis.** How effective is CLAA to support users during their analysis of API reviews?
### _Study Setup for Performance Analysis_
We used the dataset from Uddin et al. [6] for performance comparison. The dataset consists of \(4,522\) sentences extracted from \(1,338\) Stack Overflow (SO) posts and was manually labeled. Out of the \(4,522\) sentences, \(4,307\) belong to a single API aspect, \(209\) belong to two aspects, and the remaining \(6\) belong to more than two aspects. The data distribution of the benchmark dataset is presented in Table I.
\begin{table}
\begin{tabular}{|c|l|c|} \hline
**Aspects** & **Definition** & **No. of Samples** \\ \hline
Performance & Facilitates the comparison between two or more APIs in terms of performance, resource consumption. & 348 (7.7\%) \\ \hline
Usability & Discussion regarding the API usage, applications and integration challenges. & 1437 (31.8\%) \\ \hline
Security & Discussion related to the level of data security provided by the API. & 163 (3.6\%) \\ \hline
Community & Discussion related to the activity level of the community of API practitioners. & 93 (2.1\%) \\ \hline
Compatibility & Discussion related to the API's compatibility with the specified framework environments. & 93 (2.1\%) \\ \hline
Portability & Discussion related to the adaptability of the API in circumstances such as numerous operating system environments. & 70 (1.5\%) \\ \hline
Documentation & Discussion related to the clarity and completeness of the API's official documentation. & 256 (5.6\%) \\ \hline
Bug & Discussion related to the overall existence or absence of bugs and faults in the API. & 189 (4.2\%) \\ \hline
Legal & Discussion related to the level of legal authorization and access provided for API usage. & 50 (1.1\%) \\ \hline
OnlySentiment & Discussion that expresses simply opinion regarding an API, with no technical details. & 348 (7.7\%) \\ \hline
Others & Discussion related to the APIs that do not fall under the previously described aspects. & 1699 (37.6\%) \\ \hline
\end{tabular}
\end{table} TABLE I: The distribution of the API review aspects along with definition in Opiner [6] dataset.
Fig. 2: Supervised contrastive training procedure.
#### III-A1 Models used to detect aspects in CLAA
We utilized seven pre-trained transformer models in the API aspect detection component of CLAA. Firstly, we employed BERT [8], which was trained using 'Masked Language Modeling' and 'Next Sentence Prediction' objectives. Another model we used was RoBERTa [9], a modified version of BERT that incorporated a larger training dataset and different training strategies. Additionally, we employed ALBERT [11], which maintained BERT's performance while reducing its parameters. For domain-specific tasks, we utilized BERTOverflow [12], trained with a large dataset from Stack Overflow. We also incorporated XLNet [10], a generalized auto-regressive model that considers inter-dependency between tokens during permutation language modeling. Another model we used was ELECTRA [13], which focused on identifying replaced tokens in the input sequence. Lastly, we included T5 [14], trained with a 'Masked Language Modeling' goal, but with a different approach to handling consecutive tokens. Table II describes the architecture details.
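For reference, the checkpoints listed in Table II can be loaded through the Hugging Face Transformers library roughly as follows; this is an illustrative sketch of our setup rather than the exact experiment code.

```python
from transformers import AutoTokenizer, AutoModel

# Checkpoint names as listed in Table II.
CHECKPOINTS = {
    "BERT": "bert-base-uncased",
    "RoBERTa": "roberta-base",
    "BERTOverflow": "jeniya/BERTOverflow",
    "ALBERT": "albert-base-v2",
    "XLNet": "xlnet-base-cased",
    "ELECTRA": "google/electra-base-discriminator",
    "T5": "t5-small",
}

def load_encoder(name):
    """Load a pre-trained encoder and its tokenizer by architecture name."""
    ckpt = CHECKPOINTS[name]
    return AutoTokenizer.from_pretrained(ckpt), AutoModel.from_pretrained(ckpt)

tokenizer, encoder = load_encoder("RoBERTa")
```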
#### III-A2 Evaluation metrics
We evaluated the performance of each classifier inside CLAA using five metrics: _weighted precision (P), weighted recall (R), weighted f1 score (F1), matthews correlation coefficient (MCC),_ and _weighted area under ROC curve (ROC AUC)_. We considered F1 score as the primary evaluation metric. To validate the results of the performance comparison, we conducted a _paired bootstrap re-sampling test_[18]. This test allows us to assess the significance of the differences between the classifiers in binary classification by repeatedly re-sampling the original data and comparing the accuracy and F1-score of the classifiers.
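The sketch below shows how these metrics and the paired bootstrap re-sampling test can be computed with scikit-learn and NumPy; the number of bootstrap samples and the helper names are illustrative choices on our part.

```python
import numpy as np
from sklearn.metrics import (f1_score, precision_score, recall_score,
                             matthews_corrcoef, roc_auc_score)

def evaluate(y_true, y_pred, y_score):
    """Weighted P/R/F1, MCC, and AUC for one aspect-wise binary classifier."""
    return {
        "P":   precision_score(y_true, y_pred, average="weighted"),
        "R":   recall_score(y_true, y_pred, average="weighted"),
        "F1":  f1_score(y_true, y_pred, average="weighted"),
        "MCC": matthews_corrcoef(y_true, y_pred),
        "AUC": roc_auc_score(y_true, y_score),
    }

def paired_bootstrap(y_true, pred_a, pred_b, n_boot=1000, seed=42):
    """Fraction of bootstrap re-samples on which system A beats system B in
    weighted F1 (accuracy can be compared in exactly the same way)."""
    rng = np.random.default_rng(seed)
    n, wins = len(y_true), 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # resample with replacement
        if (f1_score(y_true[idx], pred_a[idx], average="weighted") >
                f1_score(y_true[idx], pred_b[idx], average="weighted")):
            wins += 1
    return wins / n_boot                          # close to 1.0 => A significantly better
```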
### _Study Setup for Impact Analysis_
We conducted two experiments:
#### III-B1 Empirical
We utilized the March 2023 Stack Overflow data dump, which was the latest available at the time. We collected posts and comments using two tags: "json" and "java". Our experimental dataset comprised all the posts and comments labeled with at least one of these tags, amounting to a total of \(8,722\) posts and \(12,483\) comments. Since our focus was on textual analysis, we used the _BeautifulSoup_ library to extract only relevant information such as titles and URLs. Additionally, we employed the NLTK sentence tokenizer to extract all sentences from the posts and comments, which resulted in approximately \(86,853\) sentences. This dataset was used to evaluate the performance of both the best-performing baseline model and CLAA. Each tool generated a list of sentences labeled as an aspect. We then randomly selected a subset of sentences from the dataset and manually compared the accuracy of each model on those sentences.
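A minimal sketch of this extraction step, assuming the relevant post and comment bodies have already been pulled from the data dump as HTML strings, is shown below; the function and variable names are ours.

```python
from bs4 import BeautifulSoup
import nltk

nltk.download("punkt", quiet=True)   # sentence-tokenizer models

def html_to_sentences(html_bodies):
    """Strip HTML from SO posts/comments and split the text into sentences."""
    sentences = []
    for body in html_bodies:
        text = BeautifulSoup(body, "html.parser").get_text(separator=" ")
        sentences.extend(nltk.sent_tokenize(text))
    return sentences

# e.g. bodies of posts tagged "json" or "java" from the March 2023 dump
sample_posts = ["<p>GSON handles nulls better than org.json in my experience.</p>"]
print(html_to_sentences(sample_posts))
```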
#### III-B2 User Study
To analyze the effectiveness of CLAA in assisting development tasks, we conducted a user study by following the work of Uddin et al. [19]. Ten developers took part in the study, and each of them completed two tasks involving the selection of an API from a pool of two options. Once the tasks were completed, the developers were invited to provide feedback on their experience of using CLAA through an online data collection tool.
**A. Tasks.** We created two tasks for the study following that of Uddin et al. [19]. Each task used a different set of APIs. Both tasks involved choosing an API from a pool of two options. The first set of options included "GSON" and "org.json", while the second set included "jackson" and "json-lib".
**(T1)** The participants were tasked to choose between two APIs, GSON and org.json, based on two criteria: a) usability, and b) licensing usage. The correct answer was GSON.
**(T2)** The study required participants to choose between two APIs, Jackson and json-lib, based on two criteria: a) performance, and b) pre-installed base in leading frameworks. The correct answer was Jackson. For each task, we asked each developer to complete it in three different settings:
(1) **SO only**: Using Stack Overflow as the only resource.
(2) **SO + baseline**: Using both Stack Overflow and the best performing baseline (RoBERTa).
(3) **SO + CLAA**: Using both Stack Overflow and CLAA. For each task and setting, each developer was required to provide the following answers: (1) **Selection**: The API they chose.
(2) **Confidence**: Their level of confidence while making the selection, measured on a five-point scale ranging from fully confident (value 5) to fully unsure (value 1). (3) **Reasoning**: The reason or reasons for their selection, expressed in one or more sentences. Once the developer finished the assigned tasks, we requested their feedback regarding their experience with CLAA. Specifically, we inquired about the extent to which they would utilize CLAA for future development tasks and also invited them to suggest improvements that could be made to CLAA. The developers were encouraged to provide detailed responses to these questions.
**B. Participants.** We recruited a total of ten developers for our study, with experience levels ranging from 1 to 12 years. Of the participants, five were graduate students and the remaining five were professional developers. Three of the professional developers were affiliated with a software company, while the remaining two were recruited through Freelancer.com. To ensure that each participant fully understood the study and its requirements, we contacted them directly via email, chat, and Skype. Each developer was granted access to our data collection tool. To facilitate the study, we developed inference scripts for both CLAA and the baseline with the highest performance. These scripts are designed to take API reviews
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Architecture** & **Used Model** & **L** & **H** & **P** \\ \hline BERT & bert-base-uncased & 12 & 12 & 110M \\ \hline RoBERTa & roberta-base & 12 & 12 & 123M \\ \hline BERTOverflow & jeniya/BERTOverflow & 12 & 12 & 149M \\ \hline ALBERT & albert-base-v2 & 12 & 12 & 11M \\ \hline XLNet & xlnet-base-cased & 12 & 12 & 110M \\ \hline ELECTRA & google/electra-base-discriminator & 12 & 12 & 110M \\ \hline T5 & t5-small & 6 & 8 & 60M \\ \hline \end{tabular}
\end{table} TABLE II: Architecture details of pre-trained transformer models (L = Layers, H = Heads, P = Parameters).
as input and generate an aspect as output. We made these available online, allowing each study participant to utilize them for each task.
## IV Performance Analysis of CLAA (RQ1)
We investigate the following sub-RQs:
1. Does CLAA offer improvement over the baselines?
2. What are the misclassification categories observed in the predictions of both the baseline and CLAA?
### _RQ1.1 Does CLAA offer improvement over the baselines?_
We answer the following sub-RQs to answer this research question:
1. Does contrastive learning improve the performance of pre-trained models compared to the state-of-the-art performance of transformer based models?
2. What is the impact of employing different contrastive learning objectives on the performance of CLAA?
_1) RQ1.1a Does contrastive learning improve the performance of pre-trained models compared to the state-of-the-art performance of transformer based models?_
_Approach._ To perform aspect-wise binary classification on API reviews, we divide the dataset into two categories: reviews that are related to a specific aspect, and reviews that are not. We fine-tune the pre-trained transformers using only cross-entropy training to establish a baseline performance. After that, we apply CLAA to the same dataset to determine if it can enhance the performance.
_Results._ To evaluate CLAA, we have used seven pre-trained transformer models. The average performance of these models on different aspects, including performance improvement, is presented in Table III. The fine-tuning with contrastive learning in CLAA led to a significant improvement to the average performance of the transformer models, except for BERTOverflow, XLNet, and T5. Out of all the transformer models used in CLAA, RoBERTa demonstrates the highest performance in terms of average F1, MCC, and AUC scores, despite having a similar average F1 score to BERT and ALBERT. However, fine-tuning BERTOverflow, XLNet, and T5 with contrastive learning does not have a significant impact on their performance. Furthermore, XLNet without contrastive learning exhibits a better average MCC score. Due to space limitation, we report the detailed performance comparison on each of the 11 aspects in the Github repository. According to the results, BERT demonstrated impressive performance for 9 of the 11 aspects (excluding Community and Compatibility) with high MCC, AUC, and F1 scores. On the other hand, RoBERTa displayed poor performance in the Compatibility aspect with MCC and AUC scores of 0 and 0.5, respectively, but performed better in other aspects, exhibiting MCC scores higher than 47% which indicates a very strong positive correlation and AUC scores higher than 89% which mark it as a decent classifier. Additionally, in the Community aspect, RoBERTa with contrastive learning achieved the best performance, with an MCC score of 87.87% and an AUC score of 91.86%. BERTOverflow had poor performance on six aspects, including Security, Community, Compatibility, Portability, Bug, and Legal, as evidenced by a MCC score of 0 and an AUC score of 0.5, indicating no correlation and random guessing. Overall, BERTOverflow performed poorly and had little predictive ability on most aspects. Although XLNet and T5 showed good evaluation scores in API aspect detection, the impact of contrastive training on their performance was minimal. Even though their average F1 score was improved, their MCC and AUC scores were unsatisfactory, indicating that they were not effective in correctly identifying the API aspects. ALBERT demonstrated the most significant average F1 score improvement among all models. However, it had poor performance in the Compatibility aspect with low MCC and AUC scores. Nonetheless, for all other aspects, ALBERT achieved an MCC score of over 73% and an AUC score of over 83%. Similarly, ELECTRA also showed significant improvement in performance, but both models performed poorly in the Legal aspect, with an MCC score of 0 and an AUC score of 0.5, indicating random guessing. To clarify the improvement brought about by CLAA over the baseline models, we utilized t-Distributed Stochastic Neighbor Embedding (t-SNE) [20] to create visual representations of the sentence embeddings produced by the pre-trained transformer models. In Figure 3, we present a visualization of sentence embeddings generated by the ALBERT transformer model in both the CLAA and baseline method. We specifically chose the ALBERT model since it demonstrated the greatest improvement in terms of F1-score within the CLAA framework. In the upper images (3a to 3d), we can observe the embeddings of two classes (aspect and non-aspect) generated by the baseline ALBERT model. The orange and blue dots represent embedding vectors in a two-dimensional space, where orange indicates aspect reviews and blue indicates non-aspect reviews. 
The visualization shows
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \cline{2-11} \multicolumn{1}{c|}{} & \multicolumn{4}{c|}{**CLAA**} & \multicolumn{4}{c|}{**Baseline**} & \multicolumn{4}{c|}{**Improvement by CLAA (\%)**} \\ \hline
**Model** & **F1** & **STDV** & **MCC** & **AUC** & **F1** & **STDV** & **MCC** & **AUC** & **F1** & **MCC** & **AUC** \\ \hline BERT & **0.9745** & 0.010 & 0.7684 & 0.8815 & 0.9422 & 0.008 & 0.6068 & 0.7850 & 3.2316 & 16.1630 & 9.6450 \\ \hline RoBERTa & **0.9776** & 0.012 & 0.7620 & 0.8831 & 0.9429 & 0.007 & 0.5846 & 0.7748 & 3.4700 & 17.7400 & 10.8300 \\ \hline Bert Overflow & **0.9107** & 0.011 & 0.1415 & 0.5650 & 0.9076 & 0.011 & 0.1337 & 0.5587 & 0.3080 & 0.7770 & 0.6300 \\ \hline XLNET & **0.9448** & 0.012 & 0.5194 & 0.7488 & 0.9380 & 0.011 & 0.5197 & 0.7451 & 0.6820 & -0.0300 & 0.3700 \\ \hline ALBERT & **0.9725** & 0.009 & 0.6757 & 0.8259 & 0.9306 & 0.012 & 0.4720 & 0.7142 & 4.1910 & 20.3760 & 11.1740 \\ \hline ELECTRA & **0.9677** & 0.007 & 0.5368 & 0.7726 & 0.9350 & 0.018 & 0.4920 & 0.7361 & 3.2650 & 4.4770 & 3.6500 \\ \hline T5 & **0.9055** & 0.009 & 0.2187 & 0.5707 & 0.9033 & 0.009 & 0.1886 & 0.5594 & 0.2200 & 3.0100 & 1.1250 \\ \hline \end{tabular}
\end{table} TABLE III: Performance comparison between CLAA and the baselines (fine-tuned transformers without contrastive learning). Here, all the results are averaged across all the aspects. STDV is the standard deviation over F1-score.
that there is a significant overlap between the embeddings of the two classes, and the decision boundary between them is not clear. This implies that it is challenging to separate the two classes accurately using the baseline ALBERT model. The lower images (3e to 3h) in Figure 3 illustrate the embeddings of the two classes produced by CLAA. The visualization reveals dense clusters of embedding vectors that are clearly separated by a relatively distinct margin between the two clusters, indicating why CLAA has achieved significant performance improvement.
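The projections in Figure 3 can be reproduced along the lines of the following sketch, where `embeddings` is the matrix of sentence representations produced by the encoder and `labels` marks aspect versus non-aspect reviews; the plotting details are our own illustrative choices.

```python
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

def plot_embeddings(embeddings, labels, title):
    """Project sentence embeddings to 2-D with t-SNE and color by class."""
    points = TSNE(n_components=2, random_state=42,
                  init="pca", perplexity=30).fit_transform(embeddings)
    for cls, color, name in [(1, "tab:orange", "aspect"),
                             (0, "tab:blue", "non-aspect")]:
        mask = labels == cls
        plt.scatter(points[mask, 0], points[mask, 1], s=5, c=color, label=name)
    plt.title(title)
    plt.legend()
    plt.show()

# plot_embeddings(baseline_embeddings, y, "ALBERT baseline")
# plot_embeddings(claa_embeddings, y, "ALBERT + contrastive fine-tuning (CLAA)")
```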
_2) RQ1.1b What is the impact of employing different contrastive learning objectives on the performance of CLAA?_
_Approach._ We utilized two distinct supervised contrastive learning objectives, namely _TripletMarginLoss_[21] and _ContrastiveLoss_[22], to fine-tune BERT and RoBERTa in CLAA. The supervised version of _TripletMarginLoss_ and _ContrastiveLoss_ take into account labeled data for calculating loss, maximizing similarity between positive samples (same class) and minimizing similarity between negative samples (different classes). BERT fine-tuned with the _TripletMarginLoss_ training objective, displayed a significant improvement of approximately 3-5% in average F1-Score across all aspects, compared to all the baseline models. This inspires us to explore whether this performance enhancement remains consistent with different training objectives.
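As an illustration of how the three objectives can be swapped in, the sketch below uses an off-the-shelf metric-learning library (pytorch-metric-learning, which we assume here purely for demonstration; the margins shown are the library defaults rather than tuned values). Each loss consumes the batch embeddings together with their aspect labels.

```python
import torch
from pytorch_metric_learning import losses

objectives = {
    "NT-XENT": losses.NTXentLoss(temperature=0.1),
    "TripletMargin": losses.TripletMarginLoss(margin=0.05),
    "Contrastive": losses.ContrastiveLoss(pos_margin=0.0, neg_margin=1.0),
}

embeddings = torch.randn(32, 768)      # encoder outputs for one batch of reviews
labels = torch.randint(0, 2, (32,))    # 1 = aspect, 0 = non-aspect

for name, loss_fn in objectives.items():
    print(name, loss_fn(embeddings, labels).item())
```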
_Results._ We compare BERT and RoBERTa's performance on all aspects using NT-XENT loss, TripletMarginLoss and ContrastiveLoss. BERT fine-tuned with TripletMarginLoss outperforms NT-XENT loss and ContrastiveLoss. For example, in the "Usability" aspect, BERT fine-tuned with NT-XENT loss achieved an F1-score of 0.86, while BERT fine-tuned with TripletMarginLoss achieved 0.98, and ContrastiveLoss achieved 0.91. In contrast, for the "Others" aspect, BERT fine-tuned with ContrastiveLoss has a poor F1-score of 0.91, while BERT fine-tuned with NT-XENT and TripletMarginLoss achieved around 0.98. Conversely, for RoBERTa, the results are the opposite. RoBERTa fine-tuned with TripletMarginLoss performs poorly with an F1-score of only 0.71, while RoBERTa fine-tuned with NT-XENT loss performs best with an F1-score of 0.94. RoBERTa fine-tuned with ContrastiveLoss achieved an F1-score of 0.85. Additionally, we observe a decrease in performance for RoBERTa fine-tuned with TripletMarginLoss and ContrastiveLoss for the Bug and Others aspects. Table IV demonstrates that BERT with TripletMarginLoss outperforms NT-XENT loss in terms of average F1, MCC, and AUC scores by a margin of 1.4%, 11%, and 5%, respectively. BERT, fine-tuned with TripletMarginLoss, exhibits the best performance among all transformer models for API aspect detection.
### _RQ1.2 What are the misclassification categories observed in the predictions of both the baseline and CLAA?_
We answer the research question by considering the following two sub-RQs:
1. How frequently can one tool correct the misclassification of another tool?
2. What are the reasons behind the misclassification?
_1) RQ1.2a How frequently can one tool correct the misclassification of another tool?_
_Approach._ We identify all of the instances that were misclassified by both the baseline model and CLAA on an aspect-by-aspect basis. In both the models, we use ALBERT as the pre-trained model as it achieved the highest average F1 improvement. Then, we determine how often one tool can fix the misclassification of the other tool. Specifically, if one tool misclassifies an instance, the other can potentially correct it by predicting the correct API aspect. Our study shows that the baseline and CLAA can work together to complement each other's strengths. Additionally, we report the frequency with which both models fail to correctly predict each aspect.
_Results._ Table V shows the percentage of textual units that are misclassified by each tool and can be potentially corrected
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{**Aspect**} & **Tool** & **\# of misclassified** & \multicolumn{2}{c}{**Tools could correct**} \\ & **wrong** & **intances** & \multicolumn{2}{c}{**wrong aspect**} \\ \cline{3-5} & **Baseline** & 339 & & 89\% \\ \multirow{2}{*}{Performance} & **CLAA** & 73 & 48\% & \\ & **Both** & 38 & & \\ \hline \multirow{2}{*}{Usability} & **Baseline** & 999 & & 94\% \\ & **CLAA** & 307 & 81\% & \\ & **Both** & 59 & & \\ \hline \multirow{2}{*}{Security} & **Baseline** & 113 & & 81\% \\ & **CLAA** & 95 & 77\% & \\ & **Both** & 22 & & \\ \hline \multirow{2}{*}{Community} & **Baseline** & 136 & & 65\% \\ & **CLAA** & 50 & 4\% & \\ & **Both** & 48 & & \\ \hline \multirow{2}{*}{Compatibility} & **Baseline** & 140 & & 34\% \\ & **CLAA** & 135 & & \\ & **Both** & 93 & & \\ \hline \multirow{2}{*}{Portability} & **Baseline** & 77 & & 83\% \\ & **CLAA** & 14 & 7\% & \\ & **Both** & 13 & & \\ \hline \multirow{2}{*}{Documentation} & **Baseline** & 235 & & 80\% \\ & **CLAA** & 81 & 42\% & \\ & **Both** & 47 & & \\ \hline \multirow{2}{*}{Bug} & **Baseline** & 163 & & 97\% \\ & **CLAA** & 81 & 94\% & \\ & **Both** & 5 & & \\ \hline \multirow{2}{*}{Legal} & **Baseline** & 63 & & 21\% \\ & **CLAA** & 77 & 35\% & \\ & **Both** & 50 & & \\ \hline \multirow{2}{*}{Only Sentiment} & **Baseline** & 235 & & 65\% \\ & **CLAA** & 118 & 31\% & \\ & **Both** & 82 & & \\ \hline \multirow{2}{*}{Others} & **Baseline** & 954 & & 84\% \\ & **CLAA** & 339 & & 55\% \\ \cline{1-1} & **Both** & 153 & & \\ \hline \hline \end{tabular}
\end{table} TABLE V: Statistics on how the misclassification of a tool can be corrected by another tool.
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{**BERT**} & \multicolumn{3}{c}{**RoBERTa**} \\ \hline
**Objective** & **Avg** & **Avg** & **Avg** & **Avg** & **Avg** & **Avg** \\
**Function** & **F1** & **MCC** & **AUC** & **F1** & **MCC** & **AUC** \\ \hline NT-XENT loss & 0.974 & 0.768 & 0.881 & **0.978** & **0.762** & **0.883** \\ TripletMarginLoss & **0.988** & **0.880** & **0.932** & 0.950 & 0.683 & 0.847 \\ ContrastiveLoss & 0.972 & 0.813 & 0.900 & 0.944 & 0.377 & 0.687 \\ \hline \hline \end{tabular}
\end{table} TABLE IV: Performance comparison of BERT and RoBERTa in CLAA with three different contrastive learning objectives.
by another tool. The first column of the table represents the API aspects that each tool is designed to detect. The third column displays the number of API reviews that are misclassified by a particular tool mentioned in the second column. In the fourth and fifth columns, we have reported how often the other tools can correct the misclassifications made by the tool mentioned in the second column. Our analysis in Table V reveals that CLAA performs better than the baseline model in correcting misclassifications for most API aspects, such as _Bug, Usability, Performance, Portability_, and _Community_. For instance, CLAA can correct a high percentage (97%, 94%, and 89%) of misclassifications made by the baseline model for _Bug, Usability_, and _Performance_ aspects, respectively. On the other hand, the baseline model can only correct a small percentage (7% and 4%) of misclassifications made by CLAA. However, the baseline model performs better than CLAA in correcting misclassifications for _Legal_ aspect, where it can correct 35% of the misclassified instances compared to only 21% corrected by CLAA. It is worth noting that both the baseline and CLAA struggle to detect certain aspects, such as _Community, Compatibility, Legal_, and _Others_, where they fail to classify a considerable number of API review instances i.e. 48, 93, 50, and 153, respectively. Nevertheless, our study shows that CLAA has a lower number of misclassified instances in general, indicating its effectiveness in the API aspect analysis task.
_2) RQ1.2b What are the reasons behind the misclassification?_
_Approach_. To analyze the reasons behind misclassifications, we randomly selected 20 misclassified reviews from each of the eleven aspects, resulting in a total of 220 misclassified instances. We identified five error categories to analyze the reasons behind misclassifications, and we manually labeled the misclassified instances into one of these categories. The categories are: (a) general error, (b) politeness, (c) inability to deal with context information, (d) lack of domain-specific knowledge and (e) unknown tokens.
_Results_. Figure 4 summarizes the error categories and their distributions in our analysis. We discuss the categories below. **General Error**. Tools made errors while processing textual contents; which led to misclassifications. These errors include failure to properly process URLs, inability to determine the presence of negation, and failure to process linguistic cues
Fig. 4: Error categories identified from the prediction of CLAA and the baseline.
Fig. 3: Visualization of the embeddings produced by ALBERT pre-trained model on four API aspects: _usability, compatibility, bug, others_. Sub-figures _(a to d)_ presents the embeddings produced as a baseline. Sub-figures _(e to h)_ represents the embeddings produced by CLAA which exhibit a clear decision boundary for binary API aspect detection task.
or typos. For instance, the baseline model misclassified the review "_Not saying it isn't an issue_" because it failed to identify negation, while CLAA misclassified the review "_I like how this works without adding more libraries to your project_" for the same reason. We also found that both models tend to misclassify reviews containing URLs, such as "_However, I also have extensive experience in working with SWT and the URL_[http://wiki.eclipse.org/index.php/JFace](http://wiki.eclipse.org/index.php/JFace) [JFace- Ui-toolkit], which is built on top of it_".
**Politeness.** Tools made mistakes and categorized reviews into the wrong aspects due to the presence of polite language. For example, the baseline model misclassified reviews like "_Thanks for the comments_" and "_@Barak Schiller Thanks for posting link to XStream?_" because it focused on the word "Thanks" and ignored the reason for commenting and posting on a community. Similarly, CLAA misclassified the review "_thanks :) jaas book is really good_" because it focused on polite markers like "thanks" and a smiley face emoticon.
**Inability to deal with context**. Tools failed to grasp the context of the text, which was essential in determining the API aspect. For instance, the baseline model incorrectly classified the sentence "_Java Cryptography Extensions (The Practical Guide Series) by Jason Weiss?_" as it only mentions a book related to Java cryptography extensions, without providing any specific information about an API aspect. However, CLAA may have relied on keywords like "Java" and "cryptography" to predict the correct aspect (_Security_) without considering the broader context. Both the baseline and CLAA misclassified the review "_Why take the whole kitchen sink when all you need is the tap?_" as they failed to understand its context. Although the sentence uses a plumbing analogy, it does not provide any information about a specific aspect of software development or an API.
**Lack of domain knowledge**. Sometimes, tools misclassified text due to their limited understanding of the specific terminologies, jargons, and documentations related to a particular domain. For instance, in the sentence "_Hibernate uses ANTLR for sql and hql parsing_", the CLAA model misclassified it as it lacks the knowledge of the importance of compatibility between different software components. Similarly, in the sentence "_CBC encryption in itself is not thread safe_", the baseline model failed to recognize it as a security aspect related to cryptography and encryption due to the lack of domain-specific knowledge. However, CLAA model might have relied on the word "encryption" to make the correct prediction. In another example, "_Look and Feel: AWT components more closely reflect the look and feel of the OS they run on_" is a sentence that talks about portability aspect. The baseline model misclassified this sentence due to its limited understanding of the challenges of software portability and the importance of designing software that can run on different platforms.
**Unknown tokens**. The tools struggled with understanding special characters like '<>', '@', and underscores in tags, class or method names. These characters were often identified as unknown tokens (<unk>). For instance, when the sentence "_If CSS is in a <style> tag, it will be interpreted as text_" was presented to the models, both the baseline and CLAA misclassified it because they failed to recognize that '<style>' is a tag used in HTML to style documents. Transformer models depend on word patterns and frequency to grasp context, and in this case, the models might not have had enough information to differentiate between '<style>' as a tag and plain text. Another example is "_JAXB Bindings File Sets @XmliElement type to String instead of XMLGregorianCalendar_" which was misclassified by both the models because they did not understand that '@XmliElement' is a method name. In another case, the models might have misclassified the sentence "_Note that the example above uses a simplified way to issue calls via the_ClientResource_ class_" because they did not recognize that '_ClientResource_' is the name of a class due to the underscores in its name.
## V Impact Analysis of CLAA (RQ2)
We answer the following sub-RQs to assess the impact:
1. Does CLAA offer improvement over the baselines in terms of generalization performance (performance over unseen data)?
2. How useful is CLAA to support users during API selection?
### _RQ2.1 Does CLAA offer improvement over the baselines in terms of generalization performance?_
_Approach._ We collected all "json" and "java" related posts from Stack Overflow and applied both CLAA and the best performing baseline to them. For this comparison, we used the RoBERTa transformer model in both the CLAA and baseline approach. Each model generated a list of sentences labeled as an aspect. We then selected a random subset of the collected sentences and manually compared the accuracy of each model on these sentences.
_Results._ Table VI displays the distribution of sentences among the eleven aspects labeled by both CLAA and the baseline model. The row labeled "None" indicates the number of sentences that neither model could categorize. It is evident from the table that CLAA outperformed the baseline model in most aspects, except for Bug. Specifically, in the Bug aspect, CLAA labeled 4,492 sentences, while the baseline model labeled 5,765 sentences. In some aspects such as Performance, Community, Portability, and Legal, the difference in the number of labeled sentences was relatively minor. It is also worth noting that CLAA categorized more sentences into multiple aspects, while the baseline model was more conservative in its labeling. Specifically, the baseline model labeled 9,196 sentences into two aspects and 180 sentences in three aspects, while CLAA labeled 10363 sentences into two aspects, 214 sentences into three aspects, and only 3 sentences into four aspects. The table also shows that the baseline model left around 3030 sentences unlabeled, while CLAA managed to label all but 32 sentences. To further investigate, we randomly selected 200 sentences, manually labeled them (the first author performed the initial annotations, which were subsequently reviewed by an external annotator with expertise in the SE domain) and calculated the accuracy
of both models. CLAA achieved 92% accuracy by correctly predicting 184 sentences, while the baseline achieved 81.5% accuracy by correctly predicting 163 sentences. These findings provide evidence that CLAA is a more accurate and reliable model than the baseline. Despite the overall high accuracy of both models, they were unable to categorize any sentences in the compatibility aspect, indicating a limitation in their ability to categorize certain types of reviews. This could be due to the models being trained on a limited number of API reviews from Stack Overflow, which may not be representative of reviews from other sources or unseen APIs.
### _RQ2.2 How useful is CLAA to support users during API selection?_
_Approach._ We evaluated the usefulness of CLAA in API selection through a user study. The objective of the study was to assess if CLAA helps the users to be more accurate and confident in their API selection. After collecting the responses of the participants, we analyzed them along two dimensions: correctness and confidence. Correctness refers to the precision of the participant's selection in both settings, while confidence refers to the level of confidence they had in making their selection. Furthermore, we computed a conversion rate for each participant, which represents the ratio of participants who made an incorrect selection while using _Stack Overflow_ but made a correct selection when using _Stack Overflow + baseline_ and _Stack Overflow + CLAA_ separately.
_Results._ All ten developers successfully accomplished their assigned tasks. In Table VII, we have included the impact of CLAA on task completion. In the case of Task 1, only 40% of the developers who solely used _Stack Overflow_ were able to choose the correct API, while 80% were successful when they used _Stack Overflow + baseline_. However, when they switched to using _Stack Overflow + CLAA_, all of them were able to select the right API, resulting in a 100% conversion rate. For Task 2, 60% of the developers using _Stack Overflow_ alone were able to pick the right API, while 90% were able to do so when they used _Stack Overflow + baseline_. Nonetheless, all of them were able to choose the correct API when they used _Stack Overflow + CLAA_, which also resulted in a 100% conversion rate. Additionally, the developers reported feeling more confident when utilizing CLAA with Stack Overflow to make their API selections. In the case of Task 1, their confidence level increased from 3.5 to 4.7, which is considered almost fully confident. Similarly, for Task 2, their confidence level increased from 4.2 to 4.8, indicating a high level of confidence. The follow-up survey conducted with the developers indicated that they found CLAA to be the most useful tool for API selection. For instance, P3 (i.e. participant number 3) commented that: _"I really appreciate how CLAA helps me make a decision on which API to choose based on different aspects. It's like having a personal AI assistant for API selection!"_ P7 found the visualization feature to be particularly useful in aiding decision-making, stating that _"I makes the decision-making process much easier and quicker."_ The participants also found the usage of CLAA to be an easier and faster method for API selection. According to P10, "_CLAA has the potential to save a lot of time and effort for developers. Instead of spending hours reading through reviews and trying to make a decision, they could use CLAA to quickly narrow down their options."_ Nonetheless, participants offered suggestions to enhance the usefulness of CLAA. For example, P1 stated that "_I would love to see CLAA expand to cover more aspects beyond the current ones. It would make it an even more comprehensive tool for API selection._" P8 proposed the addition of a comparison feature, which would allow developers to compare two APIs side by side based on a given aspect.
## VI Discussions
### _Feature Importance_
The features emphasized by LIME when used with CLAA offers a clear understanding of the critical aspects within a review. Table VIII presents six instances for both the baseline and CLAA, side by side, highlighting the critical features for predicting the model's outcome in a text using LIME (blue color signifies features crucial for not considering a text related to an aspect, while orange color representing the opposite). We used the ALBERT transformer model for both baseline and CLAA. Upon analyzing the predictions and the highlighted features by LIME for both models, it appears that CLAA outperformed the baseline model by accurately categorizing the aspects. The features identified by LIME using CLAA provide a precise understanding of the essential aspects of the text, whereas the features highlighted by the baseline model were not as relevant to the core aspects of
\begin{table}
\begin{tabular}{|c|c|c|} \hline
**Aspects** & **CLAA** & **Baseline** \\ \hline Performance & 1522 & 1492 \\ \hline Usability & 25234 & 22645 \\ \hline Security & 1407 & 1067 \\ \hline Community & 546 & 410 \\ \hline Compatibility & 0 & 0 \\ \hline Portability & 336 & 274 \\ \hline Documentation & 3515 & 3088 \\ \hline Bug & 4492 & 5765 \\ \hline Legal & 204 & 177 \\ \hline OnlySentiment & 2255 & 1807 \\ \hline Others & 47310 & 47098 \\ \hline None & 32 & 3030 \\ \hline \end{tabular}
\end{table} TABLE VI: Comparison of aspect distribution after applying the pre-trained RoBERTa model in both CLAA and baseline on the empirical experimental dataset consisting of 86,853 sentences. The _None_ row indicates the number of sentences that neither model could categorize.
\begin{table}
\begin{tabular}{c c c c c} \hline
**T** & **Tool** & **Correctness** & **Confidence** & **Conversion** \\ \hline \multirow{3}{*}{1} & SO & 40\% & 3.5 & \multirow{3}{*}{66.67\%} \\ & SO+Baseline & 80\% & 4.4 & & \\ & SO+CLAA & 100\% & 4.7 & 100\% \\ \hline \multirow{3}{*}{2} & SO & 60\% & 4.2 & \multirow{3}{*}{75\%} \\ & SO+Baseline & 90\% & 4.5 & & \\ \cline{1-1} & SO+CLAA & 100\% & 4.8 & & 100\% \\ \hline \end{tabular}
\end{table} TABLE VII: Progression of learning from Stack Overflow to CLAA.
the text. Due to space constraints, we limit our analysis of this claim to only the first and last examples listed in Table VIII. For the first example, the baseline model misjudged the review as not related to usability. The features highlighted by LIME using the baseline model for usability aspect included "taught", "me", "exactly", "to", "chain", "cannot" and "null" which are all related to the concept of "chaining". However, the model seems to have missed the crucial aspect of the discussion which is about when to chain getters. In contrast, CLAA made the correct prediction. The features highlighted by LIME using CLAA in case of usability included "accepted", "an", "answer", "taught", "me", "exactly", "long", "you" and "like" which are all related to the discussion of when to chain getters. Thus, CLAA has accurately captured the key aspects of the text. In the second example, the baseline model correctly predicted that the text was not related to usability, while CLAA made an incorrect prediction by identifying the text as related to usability. The features highlighted by LIME using the baseline model for considering the text as not related to usability included "I", "have", "anything", "possible", "install" and "installing". These words suggested that the text was discussing installation, which is not directly related to usability. Meanwhile, the features identified by LIME using CLAA for considering the text as not related to usability included "anything", "beginners", "install", "wanted" and "people". These words also provide no clear indication of the idea related to usability. However, CLAA seems to have captured the idea that the user "wanted something as simple as possible" which might have been the reason behind the incorrect prediction. In case of third example, the original aspect was security, and although the baseline model incorrectly predicted the text as not related to security aspect, CLAA made the correct prediction. The features highlighted by LIME for not considering the text related to security aspect using both models were "it", "avenues" and "got" which do not directly relate to the concept of security. However, CLAA identified "or" as a feature for not considering as a security aspect, which could imply that it recognized the importance of considering other possibilities or alternatives. In contrast, the features identified using the baseline model for security aspect were "you", "have", "then", "that", "right", "unexpected" and "pray" which are less relevant to the concept of security to make the correct prediction. CLAA identified "exploits", "unexpected", "denial-of-service", "to" and "significant" as features that are more directly related to the aspect of security. Similarly, in the fourth example, the baseline model seems to focus more on technical terms such as "commons configuration", "XML" and "hibernation" which could lead to a classification of the text as not sentiment-related. On the other hand, CLAA includes more sentiment-related features such as "liked" and "taken look" that indicate a positive or negative sentiment towards the topic. In this case, the correct classification would be OnlySentiment aspect, which is correctly predicted by CLAA. When comparing the two models in the fifth example, it appears that they have differing views on how to classify the text. The original aspect was related to bug, and while the baseline model inaccurately predicted that the text was not related to bug, CLAA correctly classified it as bug. 
Using the baseline model, LIME highlighted "the", "and", "html4", "css" and "is" as features that do not directly relate to the concept of bugs, whereas CLAA did not identify any such features. As for the LIME features related to the bug aspect, the baseline model focused on "handles", "parser", "think", "better" and "erroneous", which suggest a focus on specific language and phrasing in the text. In contrast, CLAA's features for the bug aspect included "bug-free" and "erroneous", indicating a greater emphasis on specific requirements and features mentioned in the text. Although both the baseline model and CLAA made an incorrect prediction for the sixth example, the features extracted by LIME using both models suggest that they have recognized some significant words and expressions related to the compatibility aspect. The baseline model identified the words "together", "well", "within", "tend" and "components" as important for the compatibility aspect, while CLAA identified "UI", "AWT", "mixing", "within" and "components". It is possible that the models were confused by the negation ("avoid mixing") in the sentence, which could have led them to assign a lower probability to the sentence being related to compatibility. Additionally, the sentence itself is relatively short and does not provide much contextual information, which could make it difficult for the models to classify it accurately.

TABLE VIII: Example review sentences with their aspect, the prediction outcome of the baseline (B) and CLAA (C), and the LIME-highlighted features discussed above.
### _Threats to Validity_
**Internal validity threats** relate to the limitations in study design and implementation that may impact experimental results. To reduce these threats, we reused an existing replication package for aspect-based API review classification provided by [3] and performed the necessary modifications to implement CLAA. We used the contrastive loss functions provided in [23]. Potential threats introduced by the usage of LIME include sensitivity to the perturbation strategy, inadequate sample size for generating explanations, noise and randomness issues, and biases inherited from the original model. To mitigate these threats, for each explanation we used a sample size of 100 (perturbed samples), and to reduce noise and randomness we used a seed value of 42. To validate the explanations, quantitative and qualitative analysis is needed, which we leave open for future work.
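For illustration, a minimal sketch of how such explanations can be generated with the `lime` package is shown below; the classifier stub, aspect labels, and example sentence are placeholders rather than the exact code used in our experiments.

```python
import numpy as np
from lime.lime_text import LimeTextExplainer

# Stand-in for the fine-tuned transformer classifier used in CLAA: it must map a
# list of texts to an (n, n_classes) array of class probabilities.  A trivial
# keyword heuristic is used here purely so the sketch runs end to end.
def predict_proba(texts):
    scores = np.array([0.9 if "taught" in t else 0.2 for t in texts])
    return np.column_stack([1.0 - scores, scores])

explainer = LimeTextExplainer(
    class_names=["not_usability", "usability"],  # illustrative aspect labels
    random_state=42,                             # fixed seed to reduce randomness
)

explanation = explainer.explain_instance(
    "Accepted an answer that taught me exactly when to chain getters.",
    predict_proba,
    num_features=8,     # top word-level features to report
    num_samples=100,    # number of perturbed samples, as in our setup
)
print(explanation.as_list())  # [(word, weight), ...] sorted by importance
```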
**External validity threats** refer to the generalizability of the results. For API aspect detection using CLAA, the results may apply only to the specific dataset used and the domain of Stack Overflow, and may not be applicable to other datasets or domains.
**Construct validity threats** refer to concerns regarding the relationship between theory and observations. Such threats could arise due to errors in measurement. We used the same evaluation metrics, including precision, recall, F1 score, MCC, and AUC, that were used in [3]. To gauge the performance of the participants, we determined the percentage of correct answers, which may be influenced by external factors such as tiredness and time management.
## VII Related Work
Related work can broadly be divided into two categories: studies that aim to understand the nature and types of software and API aspects, and tools developed to detect and analyze these aspects.
**Studies.** Barua et al. [24] analyzed Stack Overflow and found that discussions were mainly about programming languages, tools, and frameworks, and their popularity correlated with the corresponding technologies. Uddin et al. [1] investigated how and why developers seek and analyze API-related opinions on Stack Overflow. Uddin and Khomh [2] proposed a technique for automatically mining opinions expressed about APIs in Stack Overflow; Lin et al. [5] proposed a pattern-based mining technique to extract opinions from Q&A websites, finding that developers frequently express opinions about API usability and functionality. Uddin and Khomh proposed techniques to summarize API reviews [19] and mine API aspects from reviews [4], finding that developers often express opinions about aspects like ease of use, documentation, and performance. Zhang and Hou [25] proposed a technique to extract problematic API features from forum discussions and found that developers often find issues related to documentation, compatibility, and complexity of APIs. Ahasanuzzaman et al. [26] aimed at detecting posts on Stack Overflow that are related to API issues. In their subsequent study, Ahasanuzzaman et al. proposed a supervised learning method named CAPS [27], which employed five distinct dimensions and a conditional random field (CRF) technique.
**Tools.** Several recent studies have utilized natural language processing to enhance API documentation quality and classify API reviews based on their aspects. Opiner [6] extracts and summarizes relevant opinions about APIs from user comments on various platforms, while Treude and Robillard [28] automatically detect API-related sentences from Stack Overflow discussions to augment API documentation. Uddin and Khomh employ ML classifiers to identify API aspects [6], and Yang et al. [3], Nibir et al. [29] used pre-trained transformer models for aspect-based API review classification. In contrast, CLAA uses contrastive learning prior to fine-tuning transformer models as classifiers. In the software engineering field, contrastive learning is gaining popularity, and it has been utilized in recent works such as SynCoBERT [30] for multi-modal code representation, ContraCode [31], Heloc [32] for code representation learning, CODE-MVP [33] for code representation from multiple views, Clear [34] for API recommendation, varclr [35] for variable semantic representation, contrastive learning for code clone detection [36], bug priority inference [37], multi-modal code review [38], code retrieval and summarization [39].
## VIII Conclusion
In this paper, we propose CLAA, a tool that helps with API selection and aspect-wise aggregation of online reviews. It uses two-stage training: it first uses a supervised contrastive training objective to fine-tune seven pre-trained transformer models to learn aspect-wise semantic representations of API review instances, which then act as sequence classifiers for API aspect detection. CLAA uses LIME to explain why the classifier makes certain predictions. Experimental results show that CLAA significantly outperforms the state-of-the-art baselines in terms of F1 score, MCC score, and AUC score. RoBERTa performed the best in CLAA among the seven transformer models. However, the results also showed that models with larger architectures may not always perform well in categorizing API reviews. There is still room for further research in applying contrastive learning to other pre-trained language models, and the dataset used in the study is imbalanced, which could affect the results. CLAA was also found to be more accurate than the baseline model in both the online and developer studies. |
2306.17697 | Analysis of Oversampling in Uplink Massive MIMO-OFDM with Low-Resolution
ADCs | Low-resolution analog-to-digital converters (ADCs) have emerged as an
efficient solution for massive multiple-input multiple-output (MIMO) systems to
reap high data rates with reasonable power consumption and hardware complexity.
In this paper, we analyze the performance of oversampling in uplink massive
MIMO orthogonal frequency-division multiplexing (MIMO-OFDM) systems with
low-resolution ADCs. Considering both the temporal and spatial correlation of
the quantization distortion, we derive an approximate closed-form expression of
an achievable sum rate, which reveals how the oversampling ratio (OSR), the ADC
resolution, and the signal-to-noise ratio (SNR) jointly affect the system
performance. In particular, we demonstrate that oversampling can effectively
improve the sum rate by mitigating the impact of the quantization distortion,
especially at high SNR and with very low ADC resolution. Furthermore, we show
that the considered low-resolution massive MIMO-OFDM system can achieve the
same performance as the unquantized one when both the SNR and the OSR are
sufficiently high. Numerical simulations confirm our analysis. | Mengyuan Ma, Nhan Thanh Nguyen, Italo Atzeni, Markku Juntti | 2023-06-30T14:21:11Z | http://arxiv.org/abs/2306.17697v1 | # Analysis of Oversampling in Uplink Massive MIMO-OFDM with Low-Resolution ADCs
###### Abstract
Low-resolution analog-to-digital converters (ADCs) have emerged as an efficient solution for massive multiple-input multiple-output (MIMO) systems to reap high data rates with reasonable power consumption and hardware complexity. In this paper, we analyze the performance of oversampling in uplink massive MIMO orthogonal frequency-division multiplexing (MIMO-OFDM) systems with low-resolution ADCs. Considering both the temporal and spatial correlation of the quantization distortion, we derive an approximate closed-form expression of an achievable sum rate, which reveals how the oversampling ratio (OSR), the ADC resolution, and the signal-to-noise ratio (SNR) jointly affect the system performance. In particular, we demonstrate that oversampling can effectively improve the sum rate by mitigating the impact of the quantization distortion, especially at high SNR and with very low ADC resolution. Furthermore, we show that the considered low-resolution massive MIMO-OFDM system can achieve the same performance as the unquantized one when both the SNR and the OSR are sufficiently high. Numerical simulations confirm our analysis.
Massive MIMO-OFDM, energy efficiency, low-resolution ADCs, oversampling.
## I Introduction
Massive multiple-input multiple-output (MIMO) is a crucial physical-layer technology for current and future wireless systems [1], which provides high spectral efficiency thanks to the large number of antennas at the base station (BS) [2]. However, when massive MIMO is adopted at millimeter wave and (sub-)THz frequencies [3, 4], its energy efficiency can be severely burdened by the high power consumption of each radio-frequency (RF) chain. In this respect, analog-to-digital converters (ADCs) are the most power-hungry RF components, as their power consumption increases exponentially with the number of resolution bits [5]. For instance, high-speed ADCs (e.g., operating at 1 Gsample/s) with high resolution (e.g., 8-12 bits) can consume several Watts [6]. Therefore, adopting low-resolution ADCs at the BS has been regarded as an effective approach to reducing power consumption without excessively compromising the performance [7, 8].
Despite the reduced power consumption, low-resolution ADCs introduce a non-linear quantization distortion to the signal, which cannot be eliminated by increasing the transmit power. While adding more antennas at the BS can compensate for the performance loss due to the quantization distortion [9, 10, 11], it also raises the overall power consumption and hardware complexity. On the other hand, temporal oversampling can improve the sum rate in quantized massive MIMO systems without increasing the number of antennas and RF chains [12]. Furthermore, oversampling enables higher-order modulation over a 1-bit quantized single-antenna additive white Gaussian noise (AWGN) channel [13]. In addition, it was shown in [14] that the sum rate grows roughly logarithmically with the oversampling ratio (OSR). Most of the aforementioned studies consider narrowband or single-carrier systems and are not readily applicable to wideband multi-carrier scenarios in general. This is because the correlation of time-domain symbols due to the low-resolution ADCs makes the frequency-domain signal model for multi-carrier systems more involved [15]. Massive MIMO orthogonal frequency-division multiplexing (MIMO-OFDM) systems with low-resolution ADCs and oversampling were studied by Ucuncu _et al._ in [16] under adjacent channel interference (ACI). Specifically, this work analyzed the performance with zero-forcing (ZF) combining and showed that oversampling can improve the signal-to-interference-plus-noise-and-distortion ratio (SINDR) and suppress the ACI in both 1-bit and multi-bit quantized systems.
Inspired by [16], we perform a deeper analysis of how the OSR, the ADC resolution, and the SNR collectively affect the performance of uplink massive MIMO-OFDM systems with low-resolution ADCs and oversampling, which was not reported in [16]. We first present the frequency-domain signal model for an uplink MIMO-OFDM system, which accounts for the impact of low-resolution ADCs on the received time-domain symbols. We then derive an approximate closed-form expression of an achievable sum rate based on the Bussgang decomposition, which considers the temporal and spatial correlation of the quantization distortion. Our analysis reveals that oversampling can significantly improve the sum rate by mitigating the quantization distortion, especially at high SNR and with very low ADC resolution (down to 1-bit). We further demonstrate that the considered low-resolution massive MIMO-OFDM system can achieve the same performance as its unquantized counterpart when both the SNR and the OSR are sufficiently high. Numerical simulations validate our analysis and highlight the trade-off between the OSR and the ADC resolution in terms of energy efficiency and hardware complexity.
## II System Model
We consider an uplink massive MIMO system where a BS equipped with \(M\) antennas receives signals from \(U\) single-antenna user equipments (UEs). OFDM is adopted over a wideband channel to deal with the frequency selectivity.
Specifically, let \(\Delta f=\frac{1}{T_{\rm u}}\) be the subcarrier spacing, where the OFDM symbol duration \(T_{\rm u}\) is assumed to be fixed. Let \(f_{k}=f_{\rm c}+\left(k+1-\frac{N_{\rm c}+1}{2}\right)\Delta f\), \(k=0,\ldots,N_{\rm c}-1\) denote the \(k\)-th subcarrier frequency, where \(f_{\rm c}\) is the center carrier frequency. Among the total \(N_{\rm c}\) subcarriers, \(K\) subcarriers are employed for signal transmission, while other \(N_{\rm c}-K\) subcarriers are employed for oversampling [15]. Let \(\tilde{s}_{u}[k]\) be the transmit symbol of the \(u\)-th UE at subcarrier \(k\) with \(\mathbb{E}\left[|\tilde{s}_{u}[k]|^{2}\right]=1,\ k=0,\ldots,K-1\). Note that \(\tilde{s}_{u}[k]=0\) for \(k=K,\ldots,N_{\rm c}-1\) when \(N_{\rm c}>K\). Because the sampling frequency is \(f_{\rm s}=N_{\rm c}\Delta f\) while the transmission bandwidth of signals is \(B_{\rm w}=K\Delta f\), the OSR is defined as \(\beta=\frac{N_{\rm c}}{K}\). Hence, \(\beta=1\) and \(\beta>1\) indicate the Nyquist sampling and the oversampling scheme, respectively. The time-domain symbol is obtained by \(N_{\rm c}\)-points inverse discrete Fourier transform (IDFT), which can be expressed as
\[s_{u}[n]=\frac{\sqrt{p}}{\sqrt{N_{\rm c}}}\sum_{k=0}^{N_{\rm c}-1}\tilde{s}_{u }[k]e^{j\frac{2\pi nk}{N_{\rm c}}},\quad n=0,\ldots,N_{\rm c}-1, \tag{1}\]
where \(n\) represents the index of the time-domain symbol, and \(p\) denotes the average transmit power. At the receiver, the time-domain signals are first downconverted to the baseband and transformed back to the frequency domain by \(N_{\rm c}\)-points discrete Fourier transform (DFT).
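For illustration, the following minimal sketch (with an illustrative QPSK mapping, since the analysis only assumes unit-energy symbols) shows how the unused subcarriers realize the oversampling in this transmit/receive chain.

```python
import numpy as np

K, beta = 128, 4                 # active subcarriers and oversampling ratio
Nc = beta * K                    # total subcarriers, so fs = Nc * delta_f = beta * Bw
p = 1.0                          # average transmit power

rng = np.random.default_rng(0)
# Unit-energy QPSK symbols on the K active subcarriers (illustrative constellation).
s_freq = np.zeros(Nc, dtype=complex)
s_freq[:K] = (rng.choice([-1, 1], K) + 1j * rng.choice([-1, 1], K)) / np.sqrt(2)

# Time-domain symbols, cf. (1): s[n] = sqrt(p)/sqrt(Nc) * sum_k s~[k] exp(j 2 pi n k / Nc).
s_time = np.sqrt(p) * np.sqrt(Nc) * np.fft.ifft(s_freq)   # numpy ifft includes a 1/Nc factor

# Receiver side: Nc-point DFT recovers the frequency-domain symbols (noiseless, flat channel).
s_back = np.fft.fft(s_time) / (np.sqrt(p) * np.sqrt(Nc))
print(np.allclose(s_back[:K], s_freq[:K]))                 # True
print(np.allclose(s_back[K:], 0.0))                        # True: oversampling subcarriers stay empty
```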
Let \(\mathbf{s}[n]=\left[s_{1}[n],\ldots,s_{U}[n]\right]^{T}\) and \(\tilde{\mathbf{s}}[k]=\left[\tilde{s}_{1}[k],\ldots,\tilde{s}_{U}[k]\right]^{T}\), where \(\tilde{s}_{u}[k]\) is the frequency-domain signal transmitted by the \(u\)-th UE, and \(s_{u}[n]\) is given in (1). The discrete-time received signal at time sample \(n\) at the BS is given by
\[\mathbf{r}[n]=\sum_{d=0}^{D-1}\mathbf{H}[d]\mathbf{s}[n-d]+\mathbf{w}[n], \tag{2}\]
where \(D=\beta D_{0}\) with \(D_{0}\) being the maximum number of delay taps under Nyquist sampling, and \(\mathbf{H}[d]=\left[\mathbf{h}_{1}[d],\cdots,\mathbf{h}_{U}[d]\right]\in \mathbb{C}^{M\times U}\) denotes the channel matrix at the \(d\)-th time delay with \(\mathbf{h}_{u}[d]\) representing the channel between the \(u\)-th UE and the BS. Here, \(\mathbf{w}[n]\) represents the AWGN vector and \(\mathbf{w}[n]\sim\mathcal{CN}(\mathbf{0},\sigma_{\mathbf{n}}^{2}\mathbf{I})\), where \(\sigma_{\mathbf{n}}^{2}\) denotes the AWGN power. By taking the DFT of both sides of (2), the frequency-domain received signal is expressed as
\[\tilde{\mathbf{r}}[k]=\sqrt{p}\tilde{\mathbf{H}}[k]\tilde{\mathbf{s}}[k]+ \tilde{\mathbf{w}}[k],\quad k=0,\ldots,N_{\rm c}-1, \tag{3}\]
where \(\tilde{\mathbf{r}}[k]=\frac{1}{\sqrt{N_{\rm c}}}\sum_{n=0}^{N_{\rm c}-1}\mathbf{r}[n]e^{-j\frac{2\pi nk}{N_{\rm c}}}\), \(\tilde{\mathbf{w}}[k]=\frac{1}{\sqrt{N_{\rm c}}}\sum_{n=0}^{N_{\rm c}-1}\mathbf{w}[n]e^{-j\frac{2\pi nk}{N_{\rm c}}}\), and \(\tilde{\mathbf{H}}[k]=\left[\tilde{\mathbf{h}}_{1}[k],\ldots,\tilde{\mathbf{h}}_{U}[k]\right]\) with \(\tilde{\mathbf{h}}_{u}[k]=\sum_{d=0}^{D-1}\mathbf{h}_{u}[d]e^{-j\frac{2\pi dk}{N_{\rm c}}}\). Note that \(\tilde{\mathbf{r}}[k]=\tilde{\mathbf{w}}[k]\) for \(k=K,\ldots,N_{\rm c}-1\) as \(\tilde{\mathbf{s}}[k]=\mathbf{0}\) in these cases.
## III Signal Model with Quantization
While the UEs employ high-resolution DACs, the BS uses an identical pair of low-resolution ADCs in each RF chain for the in-phase and quadrature-phase signals. Focusing on the performance impact of the ADCs, we assume in our analysis that all RF circuits other than the ADCs (e.g., local oscillators, mixers, and power amplifiers) are ideal. We further assume that the sampling rate \(f_{\rm s}\) of the ADCs at the BS is the same as that of the DACs at the UEs, and that the system is perfectly synchronized. Finally, we assume that the spectrum of the ADC output is contained within \(\big{[}-\frac{f_{\rm s}}{2},\frac{f_{\rm s}}{2}\big{]}\), i.e., without out-of-band emissions [15].
### _Quantization Modeling_
We begin by defining the codebook of a scalar quantizer of \(b\) bits as \(\mathcal{C}=\{c_{0},\ldots,c_{N_{\rm q}-1}\}\), where \(N_{\rm q}=2^{b}\) is the number of output levels of the quantizer. The set of quantization thresholds is \(\mathcal{T}=\{t_{0},\ldots,t_{N_{\rm q}}\}\), where \(t_{0}=-\infty\) and \(t_{N_{\rm q}}=\infty\) allow inputs with arbitrary power. For signals with standard Gaussian distribution, the Lloyd-Max algorithm can find the optimal \(\mathcal{C}\) and \(\mathcal{T}\) that achieve the minimum mean square error (MSE) between the input and output of the quantizer. Note that the Lloyd-Max quantizer is generally non-uniform, and the optimal \(\mathcal{C}\) and \(\mathcal{T}\) for \(1\)-\(5\) bits are given in [17, Table I]. Let \(Q(\cdot)\) denote the quantization function. For a complex signal \(x=\Re\{x\}+j\Im\{x\}\), we have \(Q(x)=Q(\Re\{x\})+jQ(\Im\{x\})\) with \(Q(\Re\{x\})=c_{I(\Re\{x\})}\), where \(I(\Re\{x\})=i\in\{0,\ldots,N_{\rm q}-1\}\) for \(\Re\{x\}\in[t_{i},t_{i+1})\); \(Q(\Im\{x\})\) is obtained in a similar way. When the input signal of the quantizer is a vector, \(Q(\cdot)\) is applied elementwise.
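As an illustration, the following sketch applies a \(b\)-bit quantizer elementwise to a complex signal; the 1-bit codebook and threshold shown are the standard Lloyd-Max values for a unit-variance real Gaussian input, while for \(b>1\) the entries of [17, Table I] would be substituted.

```python
import numpy as np

# 1-bit Lloyd-Max codebook/thresholds for a unit-variance real Gaussian input.
# For b > 1 bits, replace these with the corresponding entries of [17, Table I].
codebook = np.array([-np.sqrt(2 / np.pi), np.sqrt(2 / np.pi)])   # c_0, ..., c_{Nq-1}
thresholds = np.array([0.0])                                      # finite thresholds t_1, ..., t_{Nq-1}

def quantize(x):
    """Apply Q(.) elementwise to the real and imaginary parts of x."""
    re_idx = np.digitize(x.real, thresholds)   # index i such that t_i <= Re{x} < t_{i+1}
    im_idx = np.digitize(x.imag, thresholds)
    return codebook[re_idx] + 1j * codebook[im_idx]

rng = np.random.default_rng(0)
r = rng.standard_normal(8) + 1j * rng.standard_normal(8)   # Re and Im parts each ~ N(0,1)
print(quantize(r))
```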
The Bussgang decomposition allows a non-linear input-output relation of a Gaussian signal to be modeled as a linear transformation [18]. To model the quantization of the received signal in (2) by the low-resolution ADCs, we first rewrite (2) as
\[\bar{\mathbf{r}}=\bar{\mathbf{H}}\bar{\mathbf{s}}+\bar{\mathbf{w}}, \tag{4}\]
where \(\bar{\mathbf{r}}=\big{[}\mathbf{r}[N_{\rm c}-1]^{T},\ldots,\mathbf{r}[0]^{T}\big{]}^{T}\), \(\bar{\mathbf{s}}=\big{[}\mathbf{s}[N_{\rm c}-1]^{T},\ldots,\mathbf{s}[0]^{T}\big{]}^{T}\), and \(\bar{\mathbf{w}}=\big{[}\mathbf{w}[N_{\rm c}-1]^{T},\ldots,\mathbf{w}[0]^{T}\big{]}^{T}\). Furthermore, \(\bar{\mathbf{H}}\in\mathbb{C}^{MN_{\rm c}\times UN_{\rm c}}\) is a block circulant matrix [16]. With the Bussgang decomposition, \(\bar{\mathbf{z}}=Q(\bar{\mathbf{r}})\) can be expressed as
\[\bar{\mathbf{z}}=\bar{\mathbf{B}}\bar{\mathbf{r}}+\bar{\boldsymbol{\eta}}, \tag{5}\]
where \(\bar{\mathbf{z}}=\big{[}\mathbf{z}[N_{\rm c}-1]^{T},\ldots,\mathbf{z}[0]^{T}\big{]}^{T}\), and where \(\bar{\boldsymbol{\eta}}=\big{[}\boldsymbol{\eta}[N_{\rm c}-1]^{T},\ldots,\boldsymbol{\eta}[0]^{T}\big{]}^{T}\) denotes the non-Gaussian distortion vector that is uncorrelated with \(\bar{\mathbf{r}}\). Here, \(\bar{\mathbf{B}}\) represents the Bussgang gain matrix. When ADCs with the same resolution (\(b\) bits) are used at all the RF chains, \(\bar{\mathbf{B}}\) reduces to a scalar \(\alpha=1-\gamma\), where \(\gamma\) denotes the inverse signal-to-quantization-distortion ratio (SQR). Note that, for a given resolution, \(\gamma\) is a constant, which has been tabulated in [17]. Therefore, (5) is equivalent to
\[\mathbf{z}[n]=\alpha\mathbf{r}[n]+\boldsymbol{\eta}[n],\quad n=0,\ldots,N_{\rm c }-1. \tag{6}\]
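The scalar Bussgang gain in (6) can be checked numerically: for a 1-bit quantizer and a Gaussian input, the least-squares estimate of \(\alpha\) should come out close to \(1-\gamma=2/\pi\approx 0.64\). The short sketch below is only a sanity check of this relation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
r = rng.standard_normal(n) + 1j * rng.standard_normal(n)           # Re/Im ~ N(0,1)
z = np.sqrt(2 / np.pi) * (np.sign(r.real) + 1j * np.sign(r.imag))  # 1-bit Lloyd-Max quantizer

# Least-squares (Bussgang) gain and the residual distortion of (6).
alpha_hat = np.real(np.vdot(r, z)) / np.real(np.vdot(r, r))
eta = z - alpha_hat * r
print(alpha_hat, 2 / np.pi)          # both ~0.6366, i.e. alpha = 1 - gamma
print(abs(np.vdot(r, eta)) / n)      # ~0: the distortion is uncorrelated with r
```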
To facilitate the performance evaluation in the frequency domain, the analysis continues by taking the DFT of both sides of (6), yielding
\[\tilde{\mathbf{z}}[k] =\alpha\tilde{\mathbf{r}}[k]+\tilde{\boldsymbol{\eta}}[k]\] \[=\alpha\sqrt{p}\tilde{\mathbf{H}}[k]\tilde{\mathbf{s}}[k]+\mathbf{e}[k],\quad k=0,\ldots,N_{\rm c}-1, \tag{7}\]
where \(\tilde{\mathbf{z}}[k]=\frac{1}{\sqrt{N_{\rm c}}}\sum_{n=0}^{N_{\rm c}-1}\mathbf{z}[n]e^{-j\frac{2\pi nk}{N_{\rm c}}}\), \(\tilde{\boldsymbol{\eta}}[k]=\frac{1}{\sqrt{N_{\rm c}}}\sum_{n=0}^{N_{\rm c}-1}\boldsymbol{\eta}[n]e^{-j\frac{2\pi nk}{N_{\rm c}}}\), and \(\mathbf{e}[k]=\alpha\tilde{\mathbf{w}}[k]+\tilde{\boldsymbol{\eta}}[k]\) collects the AWGN and the quantization distortion.

### _Achievable Sum Rate_

Let \(\mathbf{G}[k]\in\mathbb{C}^{M\times U}\) denote the linear combining matrix applied at the BS at subcarrier \(k\). Then the post-processing signal vector at subcarrier \(k\) is given by
\[\hat{\mathbf{x}}[k]=\mathbf{G}[k]^{H}\tilde{\mathbf{z}}[k]=\sqrt{p}\alpha\mathbf{G }[k]^{H}\tilde{\mathbf{H}}[k]\tilde{\mathbf{s}}[k]+\mathbf{G}[k]^{H}\mathbf{e}[k]. \tag{8}\]
The \(u\)-th element of \(\hat{\mathbf{x}}[k]\) can be expressed as
\[\hat{x}_{u}[k] =\underbrace{\sqrt{p}\alpha\mathbf{g}_{u}[k]^{H}\mathbf{h}_{u}[k]s_{u}[k]}_{\text{desired signal}}\] \[\quad+\underbrace{\sqrt{p}\alpha\sum\limits_{j\neq u}^{U}\mathbf{g}_{u}[k]^{H}\mathbf{h}_{j}[k]s_{j}[k]}_{\text{interference}}+\underbrace{\mathbf{g}_{u}[k]^{H}\mathbf{e}[k]}_{\text{AWGN and quantization distortion}}, \tag{9}\]
and the resulting SINDR is
\[\zeta_{u}[k]=\frac{p\alpha^{2}|\mathbf{g}_{u}[k]^{H}\mathbf{h}_{u}[k]|^{2}}{p \alpha^{2}\sum\limits_{j\neq u}^{U}|\mathbf{g}_{u}[k]^{H}\mathbf{h}_{j}[k]|^{2 }+\mathbf{g}_{u}[k]^{H}\mathbf{C}_{\mathbf{e}_{k}}\mathbf{g}_{u}[k]}, \tag{10}\]
where \(\mathbf{C}_{\mathbf{e}_{k}}=\mathbb{E}\left[\mathbf{e}[k]\mathbf{e}[k]^{H} \right]=\mathbf{C}_{\tilde{\mathbf{\eta}}_{k}}+\alpha^{2}\sigma_{n}^{2}\mathbf{I}\) and \(\mathbf{C}_{\tilde{\mathbf{\eta}}_{k}}=\mathbb{E}\left[\tilde{\mathbf{\eta}}[k]\tilde {\mathbf{\eta}}[k]^{H}\right]\). Treating the interference-plus-noise-and-distortion term as a Gaussian random variable with the same variance, we obtain an achievable sum rate as [9]
\[R=\sum\limits_{k=1}^{K}\sum\limits_{u=1}^{U}\Delta f\log_{2}\left(1+\zeta_{u} [k]\right). \tag{11}\]
It is observed that \(\mathbf{C}_{\tilde{\mathbf{\eta}}_{k}}\) is required to compute the sum rate in (11). However, obtaining \(\mathbf{C}_{\tilde{\mathbf{\eta}}_{k}}\) may be challenging due to the quantization distortion. Alternatively, we derive an approximate closed-form expression in the following proposition.
**Proposition 1**: \(\mathbf{C}_{\tilde{\mathbf{\eta}}_{k}}\) _can be approximated as_
\[\mathbf{C}_{\tilde{\mathbf{\eta}}_{k}}\approx\gamma(1-\gamma)\bigg{(}\frac{p}{N_ {\text{c}}}\sum\limits_{k=0}^{K-1}\operatorname{diag}\left(\tilde{\mathbf{H}} [k]\tilde{\mathbf{H}}[k]^{H}\right)+\sigma_{n}^{2}\mathbf{I}\bigg{)}, \tag{12}\]
_where we recall that \(\gamma\) represents the inverse SQR given in [17], and \(p\) denotes the average transmit power. The approximation in (12) becomes more accurate at low SNR or at high SNR with a low OSR and a high ADC resolution._
Proposition 1 can be obtained through the DFT of the time-domain correlation matrices \(\mathbf{C}_{\mathbf{r}}[\iota]=\mathbb{E}\left[\mathbf{r}[n]\mathbf{r}[n-\iota]^{H}\right]\) and \(\mathbf{C}_{\boldsymbol{\eta}}[\iota]=\mathbb{E}\left[\boldsymbol{\eta}[n]\boldsymbol{\eta}[n-\iota]^{H}\right]\) as well as the approximation \(\mathbf{C}_{\boldsymbol{\eta}}[0]\approx\gamma(1-\gamma)\operatorname{diag}(\mathbf{C}_{\mathbf{r}}[0])\) derived in [19]. The detailed proof is omitted due to limited space. Note that \(\mathbf{C}_{\boldsymbol{\eta}}[\iota]\) includes the temporal and spatial correlations of the quantization distortion.
Using the result in (12), we can approximate \(\mathbf{C}_{\mathbf{e}_{k}}\) as
\[\mathbf{C}_{\mathbf{e}_{k}}\approx(1-\gamma)\left(\frac{\gamma p}{N_{\text{c} }}\sum\limits_{k=0}^{K-1}\operatorname{diag}\left(\tilde{\mathbf{H}}[k] \tilde{\mathbf{H}}[k]^{H}\right)+\sigma_{n}^{2}\mathbf{I}\right). \tag{13}\]
From (13), the SINDR can be rewritten as
\[\zeta_{u}[k] \approx\frac{|\mathbf{g}_{u}[k]^{H}\mathbf{h}_{u}[k]|^{2}}{\sum \limits_{j\neq u}^{U}|\mathbf{g}_{u}[k]^{H}\mathbf{h}_{j}[k]|^{2}+\mathbf{g}_ {u}[k]^{H}\mathbf{C}_{\mathbf{e}}\mathbf{g}_{u}[k]}, \tag{14}\]
where
\[\mathbf{C}_{\mathbf{e}}=\frac{\gamma}{(1-\gamma)\beta}\mathbf{H}_{\text{e}}+ \frac{1}{\rho(1-\gamma)}\mathbf{I} \tag{15}\]
with \(\mathbf{H}_{\text{e}}=\frac{1}{K}\sum\nolimits_{k=0}^{K-1}\operatorname{diag} \left(\tilde{\mathbf{H}}[k]\tilde{\mathbf{H}}[k]^{H}\right)\) and \(\rho=\frac{p}{\sigma_{n}^{2}}\). Note that \(\rho\) denotes the SNR. We observe that the sum rate in (11) based on (14) is jointly affected by three factors, i.e., the OSR, the ADC resolution, and the SNR. We note some important observations in the following:
1. With high ADC resolution, we have \(\gamma\to 0\), which yields \[\zeta_{u}[k]\rightarrow\frac{|\mathbf{g}_{u}[k]^{H}\mathbf{h}_{u}[k]|^{2}}{ \sum\limits_{j\neq u}^{U}|\mathbf{g}_{u}[k]^{H}\mathbf{h}_{j}[k]|^{2}+\frac{1 }{\rho}\|\mathbf{g}_{u}[k]\|^{2}}.\] (16) Based on (16), we can readily obtain the sum rate corresponding to the unquantized system.
2. It can be observed that increasing the OSR helps to mitigate the quantization distortion, which results in a higher sum rate. In particular, when the OSR increases without bound, i.e., \(\beta\rightarrow\infty\), \(\zeta_{u}[k]\) is limited by the SNR, which is \[\zeta_{u}[k]\rightarrow\frac{|\mathbf{g}_{u}[k]^{H}\mathbf{h}_{u}[k]|^{2}}{ \sum\limits_{j\neq u}^{U}|\mathbf{g}_{u}[k]^{H}\mathbf{h}_{j}[k]|^{2}+\frac{1 }{\rho(1-\gamma)}\|\mathbf{g}_{u}[k]\|^{2}}.\] (17) This implies that, as the sum rate approaches the upper bound constrained by the SNR, the advantages gained from increasing the OSR become less significant.
3. In addition, at high SNR, oversampling can effectively improve the sum rate, especially with very low ADC resolution. In particular, when the SNR approaches infinity, i.e., \(\rho\rightarrow\infty\), the second term of (15) approaches zero and \(\mathbf{C}_{\mathbf{e}}\rightarrow\frac{\gamma}{(1-\gamma)\beta}\mathbf{H}_{ \text{e}}\). Hence, we obtain \[\zeta_{u}[k]\rightarrow\frac{|\mathbf{g}_{u}[k]^{H}\mathbf{h}_{u}[k]|^{2}}{ \sum\limits_{j\neq u}^{U}|\mathbf{g}_{u}[k]^{H}\mathbf{h}_{j}[k]|^{2}+\frac{ \gamma}{\beta(1-\gamma)}\mathbf{g}_{u}[k]^{H}\mathbf{H}_{\text{e}}\mathbf{g}_{u}[k]},\] (18) which is limited by the quantization distortion and can be enhanced by increasing the OSR. Moreover, it is seen that a lower ADC resolution yields a larger \(\frac{\gamma}{\beta(1-\gamma)}\), resulting in more significant performance enhancement due to oversampling. This is because \(\gamma\) is inversely proportional to the resolution and \(\frac{\gamma}{1-\gamma}\) monotonically increases with \(\gamma\). On the other hand, at low SNR, the benefits of increasing the OSR can be marginal because the second term of (15), i.e., the AWGN, could outweigh the quantization distortion.
4. When \(\rho\rightarrow\infty\) and \(\beta\rightarrow\infty\), we have \[\zeta_{u}[k]\rightarrow\frac{|\mathbf{g}_{u}[k]^{H}\mathbf{h}_{u}[k]|^{2}}{ \sum\limits_{j\neq u}^{U}|\mathbf{g}_{u}[k]^{H}\mathbf{h}_{j}[k]|^{2}},\] (19) which yields an upper bound of (16) when \(\rho\rightarrow\infty\). This implies that, with sufficiently large SNR and OSR, a 1-bit quantized system can perform similarly to its unquantized counterpart.
We summarize the above discussions in the following remark:
**Remark 1**: _Oversampling can effectively improve the sum rate of low-resolution systems, especially at high SNR. In general, oversampling performs better at higher SNR and with lower ADC resolution. In particular, when both the SNR and the OSR are sufficiently large, the performance of the quantized system approaches that of the unquantized one._
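A small sketch of how the approximate sum rate in (11) can be evaluated from (14)-(15) is given below; the Rayleigh channel realization, the MRC combiner (the one later used in Section IV), and all parameter values are placeholders rather than the exact simulation code.

```python
import numpy as np

def approx_sum_rate(H, gamma, beta, rho, delta_f):
    """Approximate sum rate (11) using the SINDR (14)-(15).

    H: (K, M, U) frequency-domain channel H~[k]; gamma: inverse SQR of the ADCs;
    beta: OSR; rho: SNR; delta_f: subcarrier spacing.
    """
    K, M, U = H.shape
    # H_e = (1/K) sum_k diag(H~[k] H~[k]^H) and C_e, cf. (15).
    H_e = np.mean([np.diag(np.diag(Hk @ Hk.conj().T)) for Hk in H], axis=0)
    C_e = gamma / ((1 - gamma) * beta) * H_e + 1 / (rho * (1 - gamma)) * np.eye(M)

    rate = 0.0
    for Hk in H:
        G = Hk @ np.linalg.inv(np.diag(np.diag(Hk.conj().T @ Hk)))  # MRC combiner
        for u in range(U):
            g, h = G[:, u], Hk[:, u]
            sig = abs(g.conj() @ h) ** 2
            intf = sum(abs(g.conj() @ Hk[:, j]) ** 2 for j in range(U) if j != u)
            dist = np.real(g.conj() @ C_e @ g)
            rate += delta_f * np.log2(1 + sig / (intf + dist))
    return rate

# Illustrative i.i.d. Rayleigh channel (placeholder, not the multipath model (20)).
rng = np.random.default_rng(0)
K, M, U = 16, 64, 4
H = (rng.standard_normal((K, M, U)) + 1j * rng.standard_normal((K, M, U))) / np.sqrt(2)
print(approx_sum_rate(H, gamma=1 - 2 / np.pi, beta=4, rho=10 ** (20 / 10), delta_f=10e6) / 1e9, "Gbit/s")
```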
## IV Simulation Results
We consider the maximum ratio combining (MRC) to evaluate the achievable sum rate, which is \(\mathbf{G}[k]=\mathbf{H}[k]\mathrm{diag}\left(\mathbf{H}[k]^{H}\mathbf{H}[k] \right)^{-1}\). The delay-\(d\) channel between the \(u\)-th UE and the BS is modeled as [20]
\[\mathbf{h}_{u}[d]=\sqrt{\frac{M}{L}}\sum_{\ell=1}^{L}\beta_{u,\ell}p(dT_{\mathrm{ s}}-\tau_{u,\ell})\mathbf{a}(\theta_{u,\ell}), \tag{20}\]
where \(\beta_{u,\ell}\), \(\tau_{u,\ell}\), and \(\theta_{u,\ell}\) denote the \(\ell\)-th path gain, path delay and angle-of-arrival, respectively. Here, \(p(t)\) represents the pulse-shaping function following the same parameters as in [20]. In the simulations, we assume \(\beta_{u,\ell}\sim\mathcal{CN}(0,1)\), \(\tau_{u,\ell}\sim\mathcal{U}\big{[}0,\frac{D_{0}}{B_{w}}\big{]}\) with \(D_{0}=\frac{K}{4}\) as in [21], and \(\theta_{u,\ell}\sim\mathcal{U}[0,2\pi]\). Here, \(\mathcal{U}[a,b]\) represents the uniform distribution in the interval \([a,b]\). The array steering vector is expressed as \(\mathbf{a}(\theta)=\frac{1}{\sqrt{M}}\left[1,e^{-j\pi\sin(\theta)},\dots,e^{-j (M-1)\pi\sin(\theta)}\right]\). Furthermore, we set \(f_{\mathrm{c}}=140\ \mathrm{GHz}\), \(\Delta f=10\ \mathrm{MHz}\), \(K=128\), and \(L=3\) due to the channel sparsity in the (sub-)THz band. The AWGN power is \(\sigma_{n}^{2}=N_{0}\Delta f\) with \(N_{0}\) being the AWGN power density. The following results are obtained by averaging over \(10^{3}\) independent channel realizations.
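For reference, a sketch of how one delay-tap channel realization of the form (20) can be drawn is given below; an ideal sinc pulse is assumed in place of the pulse-shaping function of [20], whose exact parameters are not reproduced here.

```python
import numpy as np

def draw_channel(M=64, L=3, beta=4, K=128, delta_f=10e6, seed=0):
    """One UE's delay-tap channel h_u[d], cf. (20), with an assumed sinc pulse shape."""
    rng = np.random.default_rng(seed)
    Bw = K * delta_f                      # signal bandwidth
    fs = beta * K * delta_f               # sampling rate
    Ts, D0 = 1 / fs, K // 4               # sampling period, max delay taps at Nyquist rate
    D = beta * D0
    gains = (rng.standard_normal(L) + 1j * rng.standard_normal(L)) / np.sqrt(2)  # beta_{u,l} ~ CN(0,1)
    delays = rng.uniform(0, D0 / Bw, L)                                          # tau_{u,l} ~ U[0, D0/Bw]
    aoas = rng.uniform(0, 2 * np.pi, L)                                          # theta_{u,l} ~ U[0, 2pi]

    d = np.arange(D)
    h = np.zeros((D, M), dtype=complex)
    for gain, tau, theta in zip(gains, delays, aoas):
        a = np.exp(-1j * np.pi * np.arange(M) * np.sin(theta)) / np.sqrt(M)      # ULA steering vector a(theta)
        p = np.sinc((d * Ts - tau) / Ts)                                         # assumed pulse p(t) = sinc(t/Ts)
        h += np.sqrt(M / L) * gain * np.outer(p, a)
    return h

print(draw_channel().shape)   # (D, M): the M-antenna taps h_u[0], ..., h_u[D-1]
```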
Fig. 1 shows the achievable sum rate versus the SNR with 1-bit, 2-bit, and 3-bit ADCs. Specifically, we consider: (a) the sum rate obtained with \(\rho\rightarrow\infty\) and \(\beta\rightarrow\infty\) in (19) ("Total Upper Bound"); (b) the sum rate of the unquantized system in (16) ("Unquantized System"); (c) the approximate sum rate in (14) ("Analytical Approx"); and (d) the sum rate in (18) obtained with \(\rho\rightarrow\infty\) ("SNR Infinity Bound"). We make the following observations from these figures. First, increasing the ADC resolution leads to significant performance improvement, and employing 3-bit ADCs allows to approach the sum rate of the unquantized system. This agrees with the findings in [11]. Second, increasing the OSR substantially enhances the sum rate, especially at high SNR and with 1-bit ADCs, e.g., \(27\%\) and \(6\%\) performance improvement at \(\rho=20\ \mathrm{dB}\) for the 1-bit and 3-bit quantized system, respectively. However, the performance improvement is marginal at low SNR cases due to the large AWGN. Third, it can be observed from Figs. 1(a) and 1(b) that the 1-bit quantized system with \(\beta=4\) can achieve a comparable sum rate to the configuration with 2-bit ADCs. However, since the typical power consumption of ADCs can be modeled as \(\kappa f_{\mathrm{s}}2^{b}\) with \(\kappa\) being the constant associated with the ADC quality [5], there is a trade-off between the OSR and the ADC resolution in terms of energy efficiency and hardware complexity.
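To make this trade-off concrete, note that under the ADC power model \(\kappa f_{\rm s}2^{b}\) with \(f_{\rm s}=\beta K\Delta f\), the three configurations compared in Fig. 2 below draw the same ADC power,
\[\kappa\,(4K\Delta f)\,2^{1}=\kappa\,(2K\Delta f)\,2^{2}=\kappa\,(K\Delta f)\,2^{3}=8\kappa K\Delta f,\]
whereas the 1-bit system oversampled by a factor of \(4\) consumes twice the ADC power of the 2-bit system sampled at the Nyquist rate (\(8\kappa K\Delta f\) versus \(4\kappa K\Delta f\)).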
Fig. 2 plots the achievable sum rate versus the SNR with different OSRs and ADC resolutions, where the power consumptions of ADCs are equated for three configurations based on \(\kappa f_{\mathrm{s}}2^{b}\), namely: (i) \(b=1\) and \(\beta=4\); (ii) \(b=2\) and \(\beta=2\); and (iii) \(b=3\) and \(\beta=1\). The results reveal that increasing the resolution of ADCs is more effective than increasing the OSR. Specifically, the configuration with \(b=3\) and \(\beta=1\) achieves the highest performance. Moreover, it is observed that the system that employs 1-bit ADCs oversampled by a factor
of \(4\) can attain comparable performance to the 2-bit system without oversampling. However, this comes at the cost of double the energy expenditure. This observation is consistent with the results reported in [16] regarding the bit error rate. Nonetheless, we remark that 1-bit quantized systems have very low hardware complexity (e.g., the automatic gain control used in multi-bit quantized systems is no longer needed), which is not taken into account in our numerical results.

Fig. 1: Achievable sum rate versus SNR with \(M=64\), \(U=4\), and \(K=128\).

Fig. 2: Achievable sum rate versus SNR with \(M=64\), \(U=4\), and \(K=128\).

Fig. 3: Achievable sum rate versus OSR with 1-bit ADCs, \(M=64\), \(U=4\), and \(K=128\).
Fig. 3 depicts the achievable sum rate versus the OSR with 1-bit ADCs. The sum rate obtained with (17) at \(\beta\rightarrow\infty\) is referred to as "OSR Infinity Bound". It is seen that oversampling can substantially improve the sum rate at medium-to-high SNR, while it only yields minor benefits for low SNR scenarios. Furthermore, increasing the SNR and the OSR can progressively bridge the performance gap between the 1-bit quantized and the unquantized systems. This is due to the system performance being jointly corrupted by the AWGN and the quantization distortion. While improving the SNR can overcome the AWGN, the resulting reduced randomness makes the quantization distortion more significant. Therefore, oversampling, which mitigates the quantization distortion, can further enhance the system performance. As such, oversampling is more effective at high SNR, as observed in Fig. 3. Ideally, when \(\beta\rightarrow\infty\), the quantization distortion can be entirely suppressed, and the performance is upper-bounded by the SNR. However, it is observed that, for \(\beta\geq 20\), increasing the OSR yields only minor gains. Therefore, determining a reasonable OSR is crucial to achieve a suitable trade-off between sum rate and energy efficiency, considering that the power consumption of the ADCs increases linearly with the sampling frequency.
## V Conclusion
We analyzed the impact of oversampling on the achievable sum rate in an uplink massive MIMO-OFDM system with low-resolution ADCs. Both the analytical and numerical results demonstrated that oversampling can significantly improve the sum rate of quantized systems by mitigating the quantization distortion. In particular, oversampling gives higher gains at higher SNR and with lower ADC resolution (especially down to 1-bit). Furthermore, we showed that the system with low-resolution ADCs can approach the performance of the unquantized system when both the SNR and the OSR are sufficiently high. Moreover, the results indicate the necessity to strike a balance between the OSR and the ADC resolution in terms of energy efficiency and hardware complexity. We note that, although these results are obtained assuming single-antenna UEs, they can be similarly derived for the scenario with multi-antenna UEs. Furthermore, we observed similar results using ZF combining, with the difference being that higher sum rate improvements are achieved due to oversampling compared to employing MRC. Future research may investigate the trade-off between the OSR and the ADC resolution to maximize the energy efficiency.
## Acknowledgements
This work was supported by the Academy of Finland (332362 EERA, 336449 Profi6, 346208 6G Flagship, 348396 HIGH-6G, and 357504 EETCAMD), Infotech Oulu, and the European Commission (101095759 Hexa-X-II).
|
2309.08132 | Warped product pointwise bi-slant submanifolds of locally product
Riemannian manifolds | In this paper we introduce the concept of pointwise bi-slant submanifolds of
locally product Riemannian manifolds and studied warped product pointwise
bi-slant submanifolds of locally product Riemannian manifolds. We obtain some
characterization results for warped products pointwise bi-slant submanifolds.
Also, we provide some non-trivial examples of such warped product submanifolds. | Prince Majeed, Mehraj Ahmad Lone | 2023-09-15T04:02:08Z | http://arxiv.org/abs/2309.08132v1 | [
###### Abstract
In this paper we introduce the concept of pointwise bi-slant submanifolds of locally product Riemannian manifolds and studied warped product pointwise bi-slant submanifolds of locally product Riemannian manifolds. We obtain some characterization results for warped products pointwise bi-slant submanifolds. Also, we provide some non-trivial examples of such warped product submanifolds.
Locally product Riemannian manifold, Pointwise bi-slant submanifolds, Warped products.
Mathematics Subject Classification (2010): Primary 53C15, 53C25, 53C40, 53C42.
## 1 Introduction
In [7], Chen introduced the notion of slant submanifolds, which includes totally real as well as holomorphic submanifolds. Numerous geometry groups continue to study and conduct research on this class of submanifolds. Recently, the related literature on slant submanifolds has been compiled in the form of two books by Chen, Shahid and Solamy (see [16, 17]). Since the introduction of slant submanifolds, many generalizations and extensions have been introduced, such as semi-slant, pointwise slant, hemi-slant, and pointwise hemi-slant submanifolds, among others. The related literature on these kinds of generalizations can be found in [12, 18, 22, 24, 25]. A more general class of submanifolds, the bi-slant submanifolds, was introduced by Cabrerizo and Cariazo [6]. This class of submanifolds acts as a natural generalization of CR, semi-slant, slant and hemi-slant submanifolds [22, 24, 26]. In connection with this general notion of submanifolds, some recent studies can be found in [21]. Further, the extended notion of pointwise bi-slant submanifolds of Kaehler manifolds can be found in [15].
Bishop and O'Neill introduced the concept of warped product manifolds in the 1960s. These manifolds find applications in both physics and mathematics. Since then, the study of warped product submanifolds has been investigated by many geometers (see [3, 10, 11, 13]). In particular, Chen started looking at these warped products as submanifolds of different kinds of manifolds (see [8, 9]). In this connection, in the Kaehlerian setting, besides studying CR-products, he proved the non-existence of warped products of the form \(N^{\perp}\times_{f}N^{T}\), where \(N^{\perp}\) is a totally real and \(N^{T}\) a holomorphic submanifold [34]. Over the past two decades, this has been an active area of research among many geometry groups. For the overall development of the subject we refer the reader to Chen's book on it [14].
Turning to the slant setting, Sahin [30] proved the non-existence of semi-slant warped products in any Kaehler manifold. Then, in [32], he extended the study to pointwise semi-slant warped products of Kaehler manifolds. Sahin [33] and Atcken [1] investigated warped product semi-slant submanifolds of locally product Riemannian manifolds. They proved that there is no warped product semi-slant submanifold of the form \(M_{T}\times_{f}M_{\theta}\) of a locally product Riemannian manifold \(\bar{M}\) such that \(M_{T}\) and \(M_{\theta}\) are invariant and proper slant submanifolds of \(\bar{M}\), respectively. Moreover, they provided non-trivial examples and proved a characterization theorem for warped product semi-slant submanifolds of the form \(M_{\theta}\times_{f}M_{T}\).
The main motivation for our paper is a recent study of Uddin, Alghamdi and Solamy [35], in which they studied the geometry of warped product pointwise semi-slant submanifolds of locally product Riemannian manifolds. In this paper, we generalize this notion to bi-slant warped products of locally product Riemannian manifolds. We prove several results on pointwise bi-slant submanifolds of locally product Riemannian manifolds and, in addition, prove some characterization results for such submanifolds. Later, we also provide some non-trivial examples of such submanifolds.
## 2 Preliminaries
Let \(\bar{M}\) be an \(m\)-dimensional differentiable manifold with a tensor field \(F\) of type (1,1) such that \(F^{2}=I\) and \(F\neq\pm I\). Then we say that \(\bar{M}\) is an almost product manifold with almost product structure \(F\). If an almost product manifold \(\bar{M}\) has a Riemannian metric \(g\) such that
\[g(FX,FY)=g(X,Y), \tag{2.1}\]
for any \(X,Y\in\Gamma(T\bar{M})\), then \(\bar{M}\) is called an almost product Riemannian manifold [36], where \(\Gamma(T\bar{M})\) denotes the set of all vector fields of \(\bar{M}\). Let \(\bar{\nabla}\) denote the Levi-Civita connection on \(\bar{M}\) with respect to the Riemannian metric \(g\). If \((\bar{\nabla}_{X}F)Y=0\) for any vector fields \(X,Y\in\Gamma(T\bar{M})\), then \(\bar{M}\) is called a locally product Riemannian manifold [23].
Let \(M\) be a Riemannian manifold isometrically immersed in \(\bar{M}\), and denote by \(g\) the Riemannian metric induced on \(M\). Let \(\Gamma(TM)\) denote the Lie algebra of vector fields on \(M\) and \(\Gamma(T^{\perp}M)\) the set of all vector fields normal to \(M\). If \(\nabla\) denotes the induced Levi-Civita connection on \(M\), the Gauss and Weingarten formulas are respectively given by
\[\bar{\nabla}_{X}Y=\nabla_{X}Y+\sigma(X,Y), \tag{2.2}\]
and
\[\bar{\nabla}_{X}N=-A_{N}X+\nabla_{X}^{\perp}N, \tag{2.3}\]
for any \(X,Y\in\Gamma(TM)\) and \(N\in\Gamma(T^{\perp}M)\), where \(\nabla^{\perp}\) is the normal connection on \(T^{\perp}M\) and \(A\) the shape operator. The shape operator and the second fundamental form of \(M\) are related by
\[g(A_{N}X,Y)=g(\sigma(X,Y),N), \tag{2.4}\]
for any \(X,Y\in\Gamma(TM)\) and \(N\in\Gamma(T^{\perp}M)\), and \(g\) denotes the induced metric on \(M\) as well as the metric on \(\bar{M}\).
For a tangent vector field \(X\) and a normal vector field \(N\) of \(M\), we can write
\[FX=TX+\omega X,\ \ \ \ FN=BN+CN, \tag{2.5}\]
where \(TX\) and \(\omega X\) (respectively, \(BN\) and \(CN\)) are the tangential and normal components of \(FX\) (respectively, of \(FN\)).
Moreover, from (2.1) and (2.5), we have
\[g(TX,Y)=g(X,TY), \tag{2.6}\]
for any \(X,Y\in\Gamma(TM)\).
We can now specify the following classes of submanifolds of locally product Riemannian manifolds:
(1) A submanifold \(M\) of a locally product Riemannian manifold \(\bar{M}\) is said to be slant (see [2, 7, 28]), if for each non-zero vector \(X\) tangent to \(M\), the angle \(\theta(X)\) between \(FX\) and \(T_{p}M\) is a constant, i.e., it does not depend on the choice of \(p\in M\) and \(X\in T_{p}M\).
(2) A submanifold \(M\) of a locally product Riemannian manifold \(\bar{M}\) is called semi-invariant submanifold (see [5, 27]) of \(\bar{M}\) if there exists a differentiable distribution \(D:p\to D_{p}\subset T_{p}M\) such that \(D\) is invariant with respect to \(F\) and the complementary distribution \(D^{\perp}\) is anti-invariant with respect to \(F\).
(3) A submanifold \(M\) of a locally product Riemannian manifold \(\bar{M}\) is called semi-slant (see [23, 26]) if it is endowed with two orthogonal distributions \(D\) and \(D^{\theta}\), where \(D\) is invariant with respect to \(F\) and \(D^{\theta}\) is slant, i.e., the angle \(\theta(X)\) between \(FX\) and \(D_{p}^{\theta}\) is constant for any \(X\in D_{p}^{\theta}\) and \(p\in M\).
**Definition 2.1**.: A submanifold \(M\) of a locally product Riemannian manifold \(\bar{M}\) is called pointwise slant [19], if at each point \(p\in M\), the Wirtinger angle \(\theta(X)\) between \(FX\) and \(T_{p}M\) is independent of the choice of the non-zero vector \(X\in T_{p}M\). In this case, the Wirtinger angle gives rise to a real valued
function \(\theta:TM-\{0\}\to\mathbb{R}\), which is called the Wirtinger function or slant function of the pointwise slant submanifold.
We note that a pointwise slant submanifold of a locally product Riemannian manifold is called slant, in the sense of [2, 28], if its Wirtinger function \(\theta\) is globally constant. We also note that every slant submanifold is a pointwise slant submanifold.
From Chen's result (Lemma 2.1) of [12], we can easily show that \(M\) is a pointwise slant submanifold of a locally product Riemannian manifold \(\bar{M}\) if and only if
\[T^{2}=(\cos^{2}\theta)I, \tag{2.7}\]
for some real-valued function \(\theta\) defined on \(M\), where \(I\) denotes the identity transformation of the tangent bundle \(TM\) of \(M\). The following relations are immediate consequences of (2.7):
\[g(TX,TY)=(\cos^{2}\theta)g(X,Y),\ \ \ \ g(\omega X,\omega Y)=(\sin^{2} \theta)g(X,Y). \tag{2.8}\]
Also, for a pointwise slant submanifold of a locally product Riemannian manifold, (2.5) and (2.7) yield
\[B\omega X=(\sin^{2}\theta)X\ \ \ \ C\omega X=-\omega TX. \tag{2.9}\]
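For completeness, both (2.8) and (2.9) can be obtained directly from \(F^{2}=I\), (2.1) and (2.5). Indeed, for any \(X,Y\in\Gamma(TM)\), using (2.6) and (2.7) together with the orthogonality of tangential and normal parts,
\[g(TX,TY)=g(T^{2}X,Y)=(\cos^{2}\theta)g(X,Y),\qquad g(\omega X,\omega Y)=g(FX,FY)-g(TX,TY)=(\sin^{2}\theta)g(X,Y).\]
Moreover, applying \(F\) to \(FX=TX+\omega X\) and using (2.5) again gives
\[X=F^{2}X=T^{2}X+\omega TX+B\omega X+C\omega X,\]
and comparing the tangential and normal components yields \(B\omega X=(I-T^{2})X=(\sin^{2}\theta)X\) and \(C\omega X=-\omega TX\), which is precisely (2.9).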
## 3 Pointwise bi-slant submanifolds
In this section, we define and study pointwise bi-slant submanifolds of a locally product Riemannian manifold.
**Definition 3.1**.: Let \(\bar{M}\) be a locally product Riemannian manifold and \(M\) a real submanifold of \(\bar{M}\). Then we say that \(M\) is a pointwise bi-slant submanifold if there exists a pair of orthogonal distributions \(D_{1}\) and \(D_{2}\) of \(M\), at each point \(p\in M\), such that
(a) \(TM=D_{1}\oplus D_{2}\);
(b) \(FD_{1}\perp D_{2}\) and \(FD_{2}\perp D_{1}\);
(c) The distributions \(D_{1},D_{2}\) are pointwise slant with slant functions \(\theta_{1},\theta_{2}\), respectively.
The pair \(\{\theta_{1},\theta_{2}\}\) of slant functions is called the bi-slant function. A pointwise bi-slant submanifold \(M\) is called proper if its bi-slant function satisfies \(\theta_{1},\theta_{2}\neq 0,\frac{\pi}{2}\) and both \(\theta_{1},\theta_{2}\) are not constant on \(M\).
Note that (2.5) and condition \((b)\) in the Definition 3.1 imply that
\[T(D_{i})\subset D_{i},\ i=1,2. \tag{3.1}\]
Given a pointwise bi-slant submanifold \(M\) of locally product Riemannian manifold \(\bar{M}\), for any \(X\in\Gamma(TM)\), we put
\[X=P_{1}X+P_{2}X \tag{3.2}\]
where \(P_{i}\) is the projection from \(\Gamma(TM)\) onto \(D_{i}\). Clearly, \(P_{i}X\) is the components of \(X\) in \(D_{i},i=1,2\). In particular, if \(X\in D_{i}\), we have \(X=P_{i}X\). If we put \(T_{i}=P_{i}\circ T\), then we can find (3.2) that
\[FX=T_{1}X+T_{2}X+\omega X, \tag{3.3}\]
for any \(X\in\Gamma(TM)\).
From now on, we assume that the ambient manifold \(\bar{M}\) is a locally product Riemannian manifold and that \(M\) is a pointwise bi-slant submanifold of \(\bar{M}\).

We now give the following lemma, which will be useful later.
**Lemma 3.2**.: _Let \(M\) be a pointwise bi-slant submanifold of a locally product Riemannian manifold \(\bar{M}\) with pointwise slant distributions \(D_{1}\) and \(D_{2}\) with distinct slant functions \(\theta_{1}\) and \(\theta_{2}\), respectively. Then (i) For \(X,Y\in D_{1}\) and \(Z\in D_{2}\), we have_
\[(\sin^{2}\theta_{2}-\sin^{2}\theta_{1})g(\nabla_{X}Y,Z)=g(\sigma(X,Z),\omega T_{1}Y)+g(\sigma(X,T_{2}Z),\omega Y)+g(\sigma(X,Y),\omega T_{2}Z)+g(\sigma(X,T_{1}Y),\omega Z). \tag{3.4}\]
_(ii) For \(Z,W\in D_{2}\) and \(X\in D_{1}\), we have_
\[(\sin^{2}\theta_{1}-\sin^{2}\theta_{2})g(\nabla_{Z}W,X)=g(\sigma(X,Z),\omega T_{2}W)+g(\sigma(Z,T_{1}X),\omega W)+g(\sigma(Z,W),\omega T_{1}X)+g(\sigma(Z,T_{2}W),\omega X). \tag{3.5}\]
Proof.: For \(X,Y\in D_{1}\) and \(Z\in D_{2}\), we have
\[g(\nabla_{X}Y,Z)=g(\bar{\nabla}_{X}Y,Z)=g(F\bar{\nabla}_{X}Y,FZ).\]
Using the locally product structure and (2.5), we have
\[g(\nabla_{X}Y,Z) = g(\bar{\nabla}_{X}T_{1}Y,FZ)+g(\bar{\nabla}_{X}\omega Y,T_{2}Z) +g(\bar{\nabla}_{X}\omega Y,\omega Z).\] \[= g(\bar{\nabla}_{X}T_{1}^{2}Y,Z)+g(\bar{\nabla}_{X}\omega T_{1}Y,Z)-g(A_{\omega Y}X,T_{2}Z)\] \[-g(\bar{\nabla}_{X}\omega Z,\omega Y).\]
Again using (2.5) and (2.7), we obtain
\[g(\nabla_{X}Y,Z) = \cos^{2}\theta_{1}g(\bar{\nabla}_{X}Y,Z)-\sin 2(\theta_{1})X( \theta_{1})g(Y,Z)-g(A_{\omega T_{1}Y}X,Z)\] \[-g(A_{\omega Y}X,T_{2}Z)-g(\bar{\nabla}_{X}\omega Z,FY)+g(\bar{ \nabla}_{X}\omega Z,T_{1}Y).\]
By using the orthogonality of two distributions and the symmetry of shape operator, the above equation reduces to
\[\sin^{2}\theta_{1}g(\nabla_{X}Y,Z) = -g(\sigma(X,Z),\omega T_{1}Y)-g(\sigma(X,T_{2}Z),\omega Y)\] \[-g(\bar{\nabla}_{X}B\omega Z,Y)-g(\bar{\nabla}_{X}C\omega Z,Y)-g( A_{\omega Z}X,T_{1}Y).\]
Thus, from (2.9), we obtain
\[\sin^{2}\theta_{1}g(\nabla_{X}Y,Z) = -g(\sigma(X,Z),\omega T_{1}Y)-g(\sigma(X,T_{2}Z),\omega Y)\] \[-\sin^{2}\theta_{2}g(\bar{\nabla}_{X}Z,Y)-\sin 2(\theta_{2})X( \theta_{2})g(Y,Z)\] \[+g(\bar{\nabla}_{X}\omega T_{2}Z,Y)-g(A_{\omega Z}X,T_{1}Y).\]
Using (2.3) and the orthogonality of vector fields, we have
\[\sin^{2}\theta_{1}g(\nabla_{X}Y,Z) = -g(\sigma(X,Z),\omega T_{1}Y)-g(\sigma(X,T_{2}Z),\omega Y)\] \[+\sin^{2}\theta_{2}g(\bar{\nabla}_{X}Y,Z)-g(A_{\omega T_{2}Z}X,Y)\] \[-g(A_{\omega Z}T_{1}Y,X).\]
Now, part \((i)\) of the lemma follows from the above Equation by using (2.4). In the similar fashion, we can prove part \((ii)\).
The following corollary is the immediate consequence of the Lemma 1\((i)\)
**Corollary 3.3**.: _Let \(M\) be a pointwise semi-slant submanifold of a locally product Riemannian manifold \(\bar{M}\). Then,_
\[\sin^{2}\theta g(\nabla_{X}Y,Z)=g(\sigma(X,Y),\omega TZ)+g(\sigma(X,FY),\omega Z),\]
_for any \(X,Y\in D_{1}\) and \(Z\in D_{2}\)._
Proof.: If we put \(\theta_{1}=0\) and \(\theta_{2}=\theta\), a slant function, then the submanifold \(M\) of the locally product Riemannian manifold \(\bar{M}\) becomes a pointwise semi-slant submanifold. In this case, the first two terms on the right-hand side of (3.4) vanish identically. Thus the relation (3.4) reduces to
\[\sin^{2}\theta g(\nabla_{X}Y,Z)=g(\sigma(X,Y),\omega TZ)+g(\sigma(X,FY),\omega Z).\]
The same result has been proved in [35].
## 4 Warped product pointwise bi-slant submanifolds of locally product Riemannian manifolds
Let \((M_{1},g_{1})\) and \((M_{2},g_{2})\) be two Riemannian manifolds and let \(f>0\) be a positive differentiable function on \(M_{1}\). Consider the product manifold \(M_{1}\times M_{2}\) with its canonical projections \(\pi:M_{1}\times M_{2}\to M_{1}\) and \(\rho:M_{1}\times M_{2}\to M_{2}\). The warped product \(M=M_{1}\times_{f}M_{2}\) is the product manifold \(M_{1}\times M_{2}\) equipped with the Riemannian metric \(g\) such that
\[g(X,Y)=g_{1}(\pi_{*}(X),\pi_{*}(Y))+(f\circ\pi)^{2}g_{2}(\rho_{*}(X),\rho_{*}( Y))\]
for any tangent vector \(X,Y\in TM\), where \(*\) is the symbol for the tangent maps. It was proved in [4] that for any \(X\in TM_{1}\) and \(Z\in TM_{2}\), the following holds
\[\nabla_{X}Z=\nabla_{Z}X=(Xlnf)Z \tag{4.1}\]
where \(\nabla\) denotes the Levi-Civita connection of \(g\) on \(M\). A warped product manifold \(M=M_{1}\times_{f}M_{2}\) is said to be trivial if the warping function \(f\) is constant. If \(M=M_{1}\times_{f}M_{2}\) is a warped product manifold, then \(M_{1}\) is totally geodesic and \(M_{2}\) is totally umbilical in \(M\) (see [4, 9]).
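A familiar illustration of (4.1), stated here only as an example, is the punctured Euclidean plane in polar coordinates, viewed as the warped product \((0,\infty)\times_{r}\mathbb{S}^{1}\) with metric \(g=dr^{2}+r^{2}d\vartheta^{2}\) and warping function \(f(r)=r\). The only nonvanishing Christoffel symbols are \(\Gamma^{\vartheta}_{r\vartheta}=\frac{1}{r}\) and \(\Gamma^{r}_{\vartheta\vartheta}=-r\), so that
\[\nabla_{\partial_{r}}\partial_{\vartheta}=\frac{1}{r}\partial_{\vartheta}=(\partial_{r}\ln f)\,\partial_{\vartheta},\]
in agreement with (4.1).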
**Lemma 4.1**.: _Let \(M_{T}\times_{f}M_{\theta}\) be a warped product pointwise bi-slant submanifold of a locally product Riemannian manifold \(\bar{M}\) such that \(M_{T}\) and \(M_{\theta}\) are pointwise slant submanifolds with slant functions \(\theta_{1}\) and \(\theta_{2}\), respectively of \(\bar{M}\). Then we have the following_
\[g(\sigma(X,W),\omega T_{2}Z)+g(\sigma(X,T_{2}Z),\omega W)=-(\sin 2\theta_{2})X( \theta_{2})g(Z,W) \tag{4.2}\]
_for any \(X\in TM_{T}\) and \(Z,W\in TM_{\theta}\)._
Proof.: For any \(X\in TM_{T}\) and \(Z,W\in TM_{\theta}\), we have
\[g(\bar{\nabla}_{X}Z,W)=g(\nabla_{X}Z,W)=X(lnf)g(Z,W). \tag{4.3}\]
On the other hand, we also have
\[g(\bar{\nabla}_{X}Z,W)=g(F\bar{\nabla}_{X}Z,FW)=g(\bar{\nabla}_{X}FZ,FW).\]
Now for any \(X\in TM_{T}\) and \(Z,W\in TM_{\theta}\). Using (2.5), we obtain
\[g(\bar{\nabla}_{X}Z,W)=g(\bar{\nabla}_{X}T_{2}Z,T_{2}W)+g(\bar{\nabla}_{X}T_{ 2}Z,\omega W)+g(\bar{\nabla}_{X}\omega Z,FW).\]
Then from (2.1), (2.2), (4.1) and the locally product Riemannian structure, we derive
\[g(\bar{\nabla}_{X}Z,W) = \cos^{2}\theta_{2}X(lnf)g(Z,W)+g(\sigma(X,T_{2}Z),\omega W)+g( \bar{\nabla}_{X}F\omega Z,W)\] \[= \cos^{2}\theta_{2}X(lnf)g(Z,W)+g(\sigma(X,T_{2}Z),\omega W)+g( \bar{\nabla}_{X}B\omega Z,W)\] \[+g(\bar{\nabla}_{X}C\omega Z,W).\]
Using (2.9), we find
\[g(\bar{\nabla}_{X}Z,W) = \cos^{2}\theta_{2}X(lnf)g(Z,W)+g(\sigma(X,T_{2}Z),\omega W)\] \[+\sin^{2}\theta_{2}g(\bar{\nabla}_{X}Z,W)+\sin 2\theta_{2}X( \theta_{2})g(Z,W)\] \[-g(\bar{\nabla}_{X}\omega T_{2}Z,W).\]
Thus the lemma follows from (4.3) and (4.4) by using (2.3) and (4.1).
**Lemma 4.2**.: _Let \(M_{T}\times_{f}M_{\theta}\) be a warped product pointwise bi-slant submanifold of a locally product Riemannian manifold \(\bar{M}\) such that \(M_{T}\) and \(M_{\theta}\) are pointwise slant submanifolds with slant functions \(\theta_{1}\) and \(\theta_{2}\), respectively of \(\bar{M}\). Then we have the following_
\[g(\sigma(X,Z),\omega W)+g(\sigma(X,W),\omega Z)=-2(\tan\theta_{2})X(\theta_{2} )g(T_{2}Z,W) \tag{4.5}\]
_for any \(X\in TM_{T}\) and \(Z,W\in TM_{\theta}\)._
Proof.: The proof of this lemma follows by replacing \(Z\) with \(T_{2}Z\) for any \(Z\in TM_{2}\) in (4.2) and then using (2.7).
**Lemma 4.3**.: _Let \(M_{T}\times_{f}M_{\theta}\) be a warped product pointwise bi-slant submanifold of a locally product Riemannian manifold \(\bar{M}\) such that \(M_{T}\) and \(M_{\theta}\) are pointwise slant submanifolds with slant functions \(\theta_{1}\) and \(\theta_{2}\), respectively of \(\bar{M}\). Then_
\[(i) g(\sigma(X,Z),\omega W)=g(\sigma(X,W),\omega Z), \tag{4.6}\] \[(ii) g(\sigma(X,Z),\omega Y)=-g(\sigma(X,Y),\omega Z), \tag{4.7}\]
_for any \(X\in TM_{T}\) and \(Z,W\in TM_{\theta}\)._
Proof.: For any \(X\in TM_{T}\) and \(Z,W\in TM_{\theta}\), we have
\[g(\sigma(X,Z),\omega W) = g(\bar{\nabla}_{Z}X,\omega W)\] \[= g(\bar{\nabla}_{Z}X,FW)-g(\bar{\nabla}_{Z}X,T_{2}W).\]
Using (2.1),(2.5) and (4.1), we obtain
\[g(\sigma(X,Z),\omega W)=g(\bar{\nabla}_{Z}T_{1}X,W)+g(\bar{\nabla}_{Z}\omega X,W)-X(lnf)g(Z,T_{2}W).\]
On simplification and using (2.3), (2.4) and (4.1), we derive
\[g(\sigma(X,Z),\omega W)=T_{1}X(lnf)g(Z,W)-g(\sigma(Z,W),\omega X)-X(lnf)g(Z,T_{ 2}W). \tag{4.8}\]
Then from polarization, we get
\[g(\sigma(X,W),\omega Z)=T_{1}X(lnf)g(Z,W)-g(\sigma(Z,W),\omega X)-X(lnf)g(T_{2 }Z,W). \tag{4.9}\]
Subtracting (4.9) from (4.8) and using (2.6), we obtain
\[g(\sigma(X,Z),\omega W)-g(\sigma(X,W),\omega Z)=0.\]
Hence, the proof follows from the above relation.
The proof of part \((ii)\) follows in the same way as that of part \((i)\).
**Theorem 4.4**.: _Let \(M=M_{T}\times_{f}M_{\theta}\) be a warped product pointwise bi-slant submanifold of a locally product Riemannian manifold \(\bar{M}\) such that \(M_{T}\) and \(M_{\theta}\) are pointwise slant submanifolds with distinct slant functions \(\theta_{1}\) and \(\theta_{2}\), respectively of \(\bar{M}\). Then we have_
\[g(A_{\omega T_{1}X}W+A_{\omega X}T_{2}W,Z)+g(A_{\omega T_{2}W}X+ A_{\omega W}T_{1}X,Z)\] \[=(\sin^{2}\theta_{2}-\sin^{2}\theta_{1})X(lnf)g(Z,W). \tag{4.10}\]
_for any \(X,Y\in TM_{T}\) and \(Z,W\in TM_{\theta}\)._
Proof.: For any \(X,Y\in TM_{T}\) and \(Z,W\in TM_{\theta}\), we have
\[g(\bar{\nabla}_{Z}X,W)=g(\nabla_{Z}X,W)=X(lnf)g(Z,W). \tag{4.11}\]
On the other hand, for \(X,Y\in TM_{T}\) and \(Z,W\in TM_{\theta}\), we have
\[g(\bar{\nabla}_{Z}X,W)=g(F\bar{\nabla}_{Z}X,FW)=g(\bar{\nabla}_{Z}FX,FW).\]
Therefore, by using (2.5), we get
\[g(\bar{\nabla}_{Z}X,W)=g(\bar{\nabla}_{Z}T_{1}X,FW)+g(\bar{\nabla}_{Z}\omega X,T_{2}W)+g(\bar{\nabla}_{Z}\omega X,\omega W).\]
Using (2.1), (2.3) and definition of locally product Riemannian manifold, we obtain
\[g(\bar{\nabla}_{Z}X,W)=g(\bar{\nabla}_{Z}FT_{1}X,W)-g(A_{\omega X}Z,T_{2}W)-g (\bar{\nabla}_{Z}\omega W,\omega X).\]
From (2.5) and symmetry of shape operator, we derive
\[g(\bar{\nabla}_{Z}X,W) = g(\bar{\nabla}_{Z}T_{1}^{2}X,W)+g(\bar{\nabla}_{Z}\omega T_{1}X,W) -g(A_{\omega X}T_{2}W,Z)\] \[-g(F\bar{\nabla}_{Z}\omega W,X)+g(\bar{\nabla}_{Z}\omega W,T_{1}X)\] \[= \cos^{2}\theta_{1}g(\bar{\nabla}_{Z}X,W)-\sin 2\theta_{1}Z(\theta_ {1})g(X,W)-g(A_{\omega T_{1}X}Z,W)\] \[-g(A_{\omega X}T_{2}W,Z)-g(\bar{\nabla}_{Z}F\omega W,X)-g(A_{ \omega W}Z,T_{1}X).\]
Using (2.2), (2.5), (4.1), (4.11) and the orthogonality of vector fields and symmetry of shape operator, we get
\[\sin^{2}\theta_{1}X(lnf)g(Z,W) = -g(A_{\omega T_{1}X}W+A_{\omega X}T_{2}W,Z)-g(\bar{\nabla}_{Z}B \omega W,X)\] \[-g(\bar{\nabla}_{Z}C\omega W,X)-g(A_{\omega W}T_{1}X,Z).\]
Using (2.9), we arrive at
\[\sin^{2}\theta_{1}X(lnf)g(Z,W) = -g(A_{\omega T_{1}X}W+A_{\omega X}T_{2}W,Z)-\sin^{2}\theta_{2}g( \bar{\nabla}_{Z}W,X)\] \[-\sin 2\theta_{2}Z(\theta_{2})g(X,W)+g(\bar{\nabla}_{Z}\omega T_{2 }W,X)\] \[-g(A_{\omega W}T_{1}X,Z).\]
Further, using orthogonality of vector fields and the relation (2.2), (2.3) and (4.1), we obtain
\[\sin^{2}\theta_{1}X(lnf)g(Z,W) = -g(A_{\omega T_{1}X}W+A_{\omega X}T_{2}W,Z)\] \[+\sin^{2}\theta_{2}X(lnf)g(Z,W)-g(A_{\omega T_{2}W}Z,X)\] \[-g(A_{\omega W}T_{1}X,Z).\]
Again using the symmetry of shape operator, we obtain (4.10) from the above relation. Hence the proof is complete. \(\Box\)
## Characterization for warped product pointwise bi-slant submanifolds of locally product Riemannian manifolds
In this section, we will prove the characterization for warped product pointwise bi-slant submanifolds of locally product Riemannian manifolds. For this, we need the following well known theorem of Hiepko's.
**Theorem 5.1**.: [20] _Let \(D_{1}\) and \(D_{2}\) be two orthogonal distributions on a Riemannian manifold \(M\). Suppose that \(D_{1}\) and \(D_{2}\) are both involutive such that \(D_{1}\) is a totally geodesic foliation and \(D_{2}\) is a spherical foliation. Then \(M\) is locally isometric to a non-trivial warped product \(M_{1}\times_{f}M_{2}\), where \(M_{1}\) and \(M_{2}\) are integral manifolds of \(D_{1}\) and \(D_{2}\), respectively._
The following result provides a characterization for warped product pointwise bi-slant submanifolds of locally product Riemannian manifolds.
**Theorem 5.2**.: _Let \(M\) be a proper pointwise bi-slant submanifold of a locally product Riemannian manifold \(\bar{M}\) with pointwise slant distributions \(D_{1}\) and \(D_{2}\). Then \(M\) is locally a warped product pointwise bi-slant submanifold of the form \(M_{T}\times_{f}M_{\theta}\), where \(M_{T}\) and \(M_{\theta}\) are pointwise slant submanifolds with
distinct slant functions \(\theta_{1}\) and \(\theta_{2}\), respectively of \(\bar{M}\) if and only if the shape operator of \(M\) satisfies_
\[A_{\omega T_{1}X}Z+A_{\omega X}T_{2}Z+A_{\omega T_{2}Z}X+A_{\omega Z}T_{1}X=(\sin^ {2}\theta_{2}-\sin^{2}\theta_{1})X(\mu)Z \tag{5.1}\]
_for \(X\in D_{1}\), \(Z\in D_{2}\) and for a smooth function \(\mu\) on \(M\) satisfying \(W(\mu)=0\) for any \(W\in D_{2}\)._
Proof.: Let \(M=M_{T}\times_{f}M_{\theta}\) be a warped product pointwise bi-slant submanifold of a locally product Riemannian manifold \(\bar{M}\). Then from Lemma 4.2\((ii)\), we have
\[g(A_{\omega Y}Z+A_{\omega Z}Y,X)=0 \tag{5.2}\]
for any \(X,Y\in TM_{1}\) and \(Z\in TM_{2}\). Interchanging \(Y\) by \(T_{1}Y\) in (5.2), we obtain
\[g(A_{\omega T_{1}Y}Z+A_{\omega Z}T_{1}Y,X)=0. \tag{5.3}\]
Again interchanging \(Z\) by \(T_{2}Z\) in (5.2), we obtain
\[g(A_{\omega Y}T_{2}Z+A_{\omega T_{2}Z}Y,X)=0. \tag{5.4}\]
Adding equations (5.3) and (5.4), we get
\[g(A_{\omega T_{1}Y}Z+A_{\omega Z}T_{1}Y+A_{\omega Y}T_{2}Z+A_{ \omega T_{2}Z}Y,X)=0. \tag{5.5}\]
Then (5.1) follows from (4.10) by using the above fact.
Conversely, if \(M\) is a pointwise bi-slant submanifold of a locally product Riemannian manifold with pointwise slant distributions \(D_{1}\) and \(D_{2}\) such that (5.1) holds, then from Lemma 3.2\((i)\), we have
\[(\sin^{2}\theta_{2}-\sin^{2}\theta_{1})g(\nabla_{X}Y,Z) = g(A_{\omega T_{1}Y}Z+A_{\omega Z}T_{1}Y\] \[+A_{\omega Y}T_{2}Z+A_{\omega T_{2}Z}Y,X)\]
for any \(X,Y\in D_{1}\) and \(Z\in D_{2}\). Using the above condition (5.1), we have
\[g(\nabla_{X}Y,Z)=X(\mu)g(X,Z)=0\]
which indicates that the leaves of the distributions are totally geodesic in \(M\). On the other hand, from Lemma 3.2\((ii)\), we have
\[(\sin^{2}\theta_{1}-\sin^{2}\theta_{2})g(\nabla_{Z}W,X) = g(A_{\omega T_{2}W}X+A_{\omega W}T_{1}X\] \[+A_{\omega T_{1}X}W+A_{\omega X}T_{2}W,Z).\]
From the hypothesis of the theorem i.e., (5.1), we get
\[g(\nabla_{Z}W,X)=-X(\mu)g(Z,W). \tag{5.6}\]
By polarization, we arrive at
\[g(\nabla_{W}Z,X)=-X(\mu)g(Z,W). \tag{5.7}\]
On subtracting (5.7) from (5.6) and using the definition of the Lie bracket, we obtain \(g([Z,W],X)=0\), which shows that the distribution \(D_{2}\) is integrable. If we
consider a leaf \(M_{2}\) of \(D_{2}\) and the second fundamental form \(\sigma_{2}\) of \(M_{2}\) in \(M\), then from (5.6), we have
\[g(\sigma_{2}(Z,W),X)=g(\nabla_{Z}W,X)=-X(\mu)g(Z,W).\]
Now, by the definition of the gradient, we have \(\sigma_{2}(Z,W)=-\bar{\nabla}_{\mu}\,g(Z,W)\), where \(\bar{\nabla}_{\mu}\) is the gradient of \(\mu\). The above relation shows that the leaf \(M_{2}\) is totally umbilical in \(M\) with mean curvature vector \(H_{2}=-\bar{\nabla}_{\mu}\). Since \(W(\mu)=0\) for any \(W\in D_{2}\), the mean curvature vector is parallel. Thus, the spherical condition is satisfied. Then by Hiepko's Theorem, \(M\) is locally a warped product pointwise bi-slant submanifold. Hence the proof is complete. \(\Box\)
The following immediate consequences of the above theorem are given below:
1. In Theorem 5.2, if \(\theta_{1}=0\) and \(\theta_{2}=\theta\), a slant function, then the submanifold \(M\) of locally product Riemannian manifold \(\bar{M}\) becomes a pointwise semi-slant submanifold which has been studied in [35]. In this case, the first two terms in the left hand side of (5.1) vanish identically. Thus, the relation (5.1) is true for pointwise semi-slant warped product and it reduces to
\[A_{\omega TZ}X+A_{\omega Z}FX=(\sin^{2}\theta)X(\mu)Z\]
for \(X\in D_{1}\) and \(Z\in D_{2}\), where \(D_{1}\) and \(D_{2}\) are complex and proper pointwise slant distributions of \(M\). The same has been proved in [35].
2. In Theorem 5.2, if we consider \(\theta_{1}=\theta\) a constant slant angle and \(\theta_{2}=\frac{\pi}{2}\), then it is a case of hemi-slant warped products. In this case, the second and third term in the left hand side of (5.1) vanish identically. Thus the relation (5.1) is true for hemi-slant warped products and it reduces to
\[A_{\omega TX}Z+A_{FZ}TX=(\cos^{2}\theta)X(\mu)Z\]
for \(X\in D_{\theta}\) and \(Z\in D^{\perp}\), where \(D_{\theta}\) and \(D^{\perp}\) are proper slant and totally real distributions.
3. In Theorem 5.2, if \(\theta_{1}=0\) and \(\theta_{2}=\frac{\pi}{2}\), then it is a case of CR-warped product. In this case all the terms in the left hand side of(5.1) vanish identically. Thus the relation (5.1) is true for CR-warped products and it will be
\[A_{FZ}FX=X(\mu)Z\]
for \(X\in D\) and \(Z\in D^{\perp}\), where \(D\) and \(D^{\perp}\) are complex and totally real distributions of \(M\).
## Some examples of warped product pointwise bi-slant submanifolds of locally product Riemannian manifolds
_Example 1_.: Let \(\mathbb{R}^{4}\) be the Euclidean space with the cartesian coordinates given by \((x_{1},x_{2},y_{1},y_{2})\) and the almost product structure
\[F\biggl{(}\frac{\partial}{\partial x_{i}}\biggr{)}=\frac{\partial}{\partial x _{i}},\ \ \ \ F\biggl{(}\frac{\partial}{\partial y_{j}}\biggr{)}=-\frac{ \partial}{\partial y_{j}},1\leq i,j\leq 2.\]
A submanifold \(M\) of \(\mathbb{R}^{4}\) defined by
\[\chi(u,v,w)=(wu\cos v,wu\sin v,w\cos v,w\sin v).\]
It is easy to see that the tangent bundle \(TM\) of \(M\) is spanned by the following vectors
\[v_{1}=w\cos v\frac{\partial}{\partial x_{1}}+w\sin v\frac{\partial}{\partial x _{2}},\]
\[v_{2}=-wu\sin v\frac{\partial}{\partial x_{1}}+wu\cos v\frac{\partial}{ \partial x_{2}}-w\sin v\frac{\partial}{\partial y_{1}}+w\cos v\frac{\partial }{\partial y_{2}},\]
\[v_{3}=u\cos v\frac{\partial}{\partial x_{1}}+u\sin v\frac{\partial}{\partial x _{2}}+\cos v\frac{\partial}{\partial y_{1}}+\sin v\frac{\partial}{\partial y _{2}}.\]
Then, clearly we obtain
\[Fv_{1}=w\cos v\frac{\partial}{\partial x_{1}}+w\sin v\frac{\partial}{\partial x _{2}},\]
\[Fv_{2}=-wu\sin v\frac{\partial}{\partial x_{1}}+wu\cos v\frac{\partial}{ \partial x_{2}}+w\sin v\frac{\partial}{\partial y_{1}}-w\cos v\frac{\partial}{ \partial y_{2}},\]
\[Fv_{3}=u\cos v\frac{\partial}{\partial x_{1}}+u\sin v\frac{\partial}{\partial x _{2}}-\cos v\frac{\partial}{\partial y_{1}}-\sin v\frac{\partial}{\partial y_{ 2}}.\]
Then, we find that \(D_{1}=span\{v_{1},v_{3}\}\) is a proper pointwise slant distribution with slant function \(\theta_{1}=\cos^{-1}\left(\frac{u}{\sqrt{1+u^{2}}}\right)\) and \(D_{2}=span\{v_{2}\}\) is again a proper pointwise slant distribution with slant function \(\theta_{2}=\cos^{-1}\left(\frac{u^{2}-1}{u^{2}+1}\right)\). Thus, \(M\) is a proper pointwise bi-slant submanifold of \(\mathbb{R}^{4}\).
It is easy to verify that both the distributions \(D_{1}\) and \(D_{2}\) are integrable. Denote the integral manifolds of \(D_{1}\) and \(D_{2}\) by \(M_{T}\) and \(M_{\theta}\), respectively. Then the metric tensor \(g\) of the product manifold \(M\) is given by
\[g=g_{M_{T}}+w^{2}(1+u^{2})g_{M_{\theta}},\]
where,
\[g_{M_{T}}=w^{2}du^{2}+(1+u^{2})dw^{2}\ \ \ \ and\ \ \ \ g_{M_{\theta}}=dv^{2}.\]
Hence, \(M\) is a proper non-trivial warped product pointwise bi-slant submanifold of \(\mathbb{R}^{4}\) with warping function \(f=\sqrt{w^{2}(1+u^{2})}\) and whose bi-slant angles \(\theta_{1},\theta_{2}\neq 0,\frac{\pi}{2}\).
_Example 2_.: Let \(\mathbb{R}^{6}=\mathbb{R}^{3}\times\mathbb{R}^{3}\) be a locally product Riemannian manifold with cartesian coordinates \((x_{1},x_{2},x_{3},y_{1},y_{2},y_{3})\). Consider a submanifold \(M\) of \(\mathbb{R}^{6}\) defined by
\[\chi(u,v,w)=(v\cos u,v\sin u,-v+w,w\cos u,w\sin u,v+w),\]
with almost product structure \(F\) defined by
\[F\bigg{(}\frac{\partial}{\partial x_{i}}\bigg{)}=-\bigg{(}\frac{\partial}{ \partial x_{i}}\bigg{)},\ \ \ \ F\bigg{(}\frac{\partial}{\partial y_{j}}\bigg{)}=\bigg{(}\frac{\partial}{ \partial y_{j}}\bigg{)},1\leq i,j\leq 3.\]
It is easy to see that its tangent space \(TM\) of \(M\) is spanned by the following vectors
\[v_{1}=-v\sin u\frac{\partial}{\partial x_{1}}+v\cos u\frac{\partial}{\partial x _{2}}-w\sin u\frac{\partial}{\partial y_{1}}+w\cos u\frac{\partial}{\partial y _{2}},\]
\[v_{2}=\cos u\frac{\partial}{\partial x_{1}}+\sin u\frac{\partial}{\partial x _{2}}-\frac{\partial}{\partial x_{3}}+\frac{\partial}{\partial y_{3}},\]
\[v_{3}=\frac{\partial}{\partial x_{3}}+\cos u\frac{\partial}{\partial y_{1}}+ \sin u\frac{\partial}{\partial y_{2}}+\frac{\partial}{\partial y_{3}}.\]
Then, we have
\[Fv_{1}=v\sin u\frac{\partial}{\partial x_{1}}-v\cos u\frac{\partial}{\partial x _{2}}-w\sin u\frac{\partial}{\partial y_{1}}+w\cos u\frac{\partial}{\partial y _{2}},\]
\[Fv_{2}=-\cos u\frac{\partial}{\partial x_{1}}-\sin u\frac{\partial}{\partial x _{2}}+\frac{\partial}{\partial x_{3}}+\frac{\partial}{\partial y_{3}},\]
\[Fv_{3}=-\frac{\partial}{\partial x_{3}}+\cos u\frac{\partial}{\partial y_{1}} +\sin u\frac{\partial}{\partial y_{2}}+\frac{\partial}{\partial y_{3}}.\]
Let us put \(D_{1}=span\{v_{1}\}\), which is a proper pointwise slant distribution with slant function \(\theta_{1}=\cos^{-1}\left(\frac{w^{2}-v^{2}}{w^{2}+v^{2}}\right)\), and \(D_{2}=span\{v_{2},v_{3}\}\), which is a proper slant distribution with slant angle \(\theta_{2}=\cos^{-1}\left(\frac{2}{3}\right)\). Hence the submanifold \(M\) defined by \(\chi\) is a proper pointwise bi-slant submanifold.
It is easy to verify that both the distributions \(D_{1}\) and \(D_{2}\) are integrable. Denote the integral manifolds of \(D_{1}\) and \(D_{2}\) by \(M_{T}\) and \(M_{\theta}\), respectively. Then the metric tensor \(g\) of the product manifold \(M\) is given by
\[g=g_{M_{\theta}}+(v^{2}+w^{2})g_{M_{T}}\]
where
\[g_{M_{\theta}}=3(dv^{2}+dw^{2})\ \ \ \ and\ \ \ \ g_{M_{T}}=du^{2}.\]
Hence, \(M\) is a proper non-trivial warped product pointwise bi-slant submanifold of \(\mathbb{R}^{6}\) with warping function \(f=\sqrt{v^{2}+w^{2}}\) and whose bi-slant angles \(\theta_{1},\theta_{2}\neq 0,\frac{\pi}{2}\).
**Conflicts of Interest**: The authors declare no conflict of interest. |
2301.13456 | Weighted One-Deterministic-Counter Automata | We introduce weighted one-deterministic-counter automata (ODCA). These are
weighted one-counter automata (OCA) with the property of counter-determinacy,
meaning that all paths labelled by a given word starting from the initial
configuration have the same counter-effect. Weighted ODCAs are a strict
extension of weighted visibly OCAs, which are weighted OCAs where the input
alphabet determines the actions on the counter.
We present a novel problem called the co-VS (complement to a vector space)
reachability problem for weighted ODCAs over fields, which seeks to determine
if there exists a run from a given configuration of a weighted ODCA to another
configuration whose weight vector lies outside a given vector space. We
establish two significant properties of witnesses for co-VS reachability: they
satisfy a pseudo-pumping lemma, and the lexicographically minimal witness has a
special form. It follows that the co-VS reachability problem is in P.
These reachability problems help us to show that the equivalence problem of
weighted ODCAs over fields is in P by adapting the equivalence proof of
deterministic real-time OCAs by B\"ohm et al. This is a step towards resolving
the open question of the equivalence problem of weighted OCAs. Furthermore, we
demonstrate that the regularity problem, the problem of checking whether an
input weighted ODCA over a field is equivalent to some weighted automaton, is
in P. Finally, we show that the covering and coverable equivalence problems for
uninitialised weighted ODCAs are decidable in polynomial time. We also consider
boolean ODCAs and show that the equivalence problem for (non-deterministic)
boolean ODCAs is in PSPACE, whereas it is undecidable for (non-deterministic)
boolean OCAs. | Prince Mathew, Vincent Penelle, Prakash Saivasan, A. V. Sreejith | 2023-01-31T07:32:08Z | http://arxiv.org/abs/2301.13456v2 | # One deterministic-counter automata
###### Abstract.
We introduce one deterministic-counter automata (odca), which are one-counter automata where all runs labelled by a given word have the same counter effect, a property we call counter-determinacy. odcas are an extension of visibly one-counter automata - one-counter automata (oca) where the input alphabet determines the actions on the counter. They are a natural way to introduce non-determinism/weights to ocas while maintaining the decidability of crucial problems, that are undecidable on general ocas. For example, the equivalence problem is decidable for deterministic ocas whereas it is undecidable for non-deterministic ocas. We consider both non-deterministic and weighted odcas. This work shows that the equivalence problem is decidable in polynomial time for weighted odcas over a field and polynomial space for non-deterministic odcas. As a corollary, we get that the regularity problem, i.e., the problem of checking whether an input weighted odca is equivalent to some weighted automaton, is also in polynomial time. Furthermore, we show that the covering and coverable equivalence problems for uninitialised weighted odcas are decidable in polynomial time.
We also introduce a few reachability problems that are of independent interest and show that they are in P. These reachability problems later help in solving the equivalence problem.
Key words and phrases: One counter automata, Equivalence, Reachability, Weighted automata
## Introduction
Visibly pushdown automata (vpda) were introduced by Alur and Madhusudan in 2004 [2]. They have received a lot of attention as they are a strict subclass of pushdown automata suitable for program analysis. vpdas enjoy tractable decision problems which are undecidable in the general case. The visibly restriction, in essence, is that the stack operations are _input-driven_, i.e., they depend only on the letter read.
In this paper, we investigate a relaxation of the visibly constraint on one-counter automata (oca): the counter actions are no longer input-driven, but they are deterministic. We summarise this as: "any run on a given word has a fixed counter effect". We give a model satisfying this new restriction, which includes all visibly ocas.
Syntactically, one deterministic-counter automata (odca) contain two parts:
1. Counter structure: This is a deterministic oca without epsilon transitions. The transitions are deterministic, and the state transitions depend only on the current state, the alphabet, and whether the counter is zero.
2. Finite state machine: This machine has finite states and no counters. It can be deterministic, non-deterministic, or weighted. The transitions of
this machine depend on its current state, current counter structure state, input alphabet, and whether the counter value is zero.
An odca will be called deterministic, non-deterministic, or weighted depending on the type of the finite state machine. One can observe that the class of deterministic oca and the class of visibly oca are specific cases of odcas. In a visibly oca, the input alphabet determines the counter structure.
An odca represents a function that maps words (over a finite alphabet) to a weight. The run of a word over an odca determines its accepting weight. In the case of weighted odca, the weights come from a field, and in the case of deterministic and non-deterministic odca, the weights come from the boolean semiring. Hence a deterministic or non-deterministic odca represents a language, which is the set of all words whose weight is 1.
A non-deterministic odca can have a succinct representation compared to the deterministic odca recognising the same language. For example, for any \(k\in\mathbb{N}\), let \(\mathcal{L}_{k}\) denote the language \(\{a^{n}(b+c)^{m}b(b+c)^{k}\mid m,n\in\mathbb{N}\text{ and }m>n\}\). The non-deterministic odca recognising the language \(\mathcal{L}_{k}\) guesses whether a \(b\) encountered after reading the string \(a^{n}(b+c)^{n+1}\) for some \(n\in\mathbb{N}\) is at the \(k^{th}\) position from the end of the string. An example of a non-deterministic odca that recognises \(\mathcal{L}_{2}\) is shown in Figure 1. The deterministic odca that recognises the same language will have to check whether every \(b\) encountered after reading the string \(a^{n}(b+c)^{n+1}\) is at the \(k^{th}\) position from the end. This will require an additional \(2^{k}\) states.
### Our results
Two odcas are _equivalent_ if the functions they represent are equal. Observe that deterministic real-time ocas are deterministic odcas. We also note that deterministic odcas are deterministic real-time ocas. Bohm et al. [5] proved that the equivalence of deterministic oca is in non-deterministic log space. We show that a non-deterministic odca is equivalent to an exponentially sized deterministic odca. Therefore, unlike non-deterministic ocas, the equivalence of two non-deterministic odcas is decidable and can be determined by a \(\mathsf{PSPACE}\) machine.
This paper also presents a polynomial time algorithm for deciding the equivalence problem of two weighted odcas. If the two odcas are non-equivalent, we output a word (whose length is polynomial in the size of the two odcas) that the two weighted odcas accept with different weights. We dedicate Section 4 to prove Theorem 1.
**Theorem 1**.: _There exists a polynomial time algorithm that decides if two weighted odca are equivalent and outputs a word that distinguishes them, if they are non-equivalent._
To solve the equivalence problem for weighted odca, we introduce a few reachability problems. These problems are also of independent interest. The _complement to vector space (co-VS) reachability problem_ takes a weighted odca, a vector space, and an initial configuration as input. It asks whether it is possible, starting from the initial configuration, to reach a configuration in the complement of the given vector space. We develop novel ideas to show that the unary (resp. binary) co-VS reachability problem is in \(\mathsf{P}\) (resp. \(\mathsf{NP}\)). Let us call a word a _witness_ if the run of the word'reaches' a configuration desired by the reachability problem. Through a series of lemmas, we identify two interesting properties of witnesses.
1. The witnesses satisfy a small model property - a witness that is longer than a polynomial can be 'cut' to get a shorter witness. We remove parts of a
long run and join the remaining portions. The challenge is identifying cuts that preserve the counter actions during the run.
2. The lexicographically smallest word that witnesses the reachability is of the form \(uy_{1}^{r_{1}}vy_{2}^{r_{2}}w\) where \(u,v,w,y_{1}\) and \(y_{2}\) are'small' words and \(r_{1},r_{2}\in\mathbb{N}\).
The reachability problems, along with the ideas developed in the context of real-time oca by Bohm et al. [3] (also see [4][6]), and Valiant, Paterson [21] help us solve the equivalence problem for odcas.
Next, we consider the regularity problem - the problem of deciding whether a weighted odca is equivalent to some weighted automata. In Theorem 36, we show that regularity of odca is decidable in polynomial time. This is done by showing the existence of infinitely many equivalence classes by "pumping up" some parts of a run.
Next, we look at uninitialised odcas - odcas without an initial finite state distribution and an initial counter state. We show that the "equivalence" problem for uninitialised odcas is in polynomial time.
### Related work
Extensive studies have been conducted on weighted automata with weights from semirings. Tzeng [20] gave a polynomial time algorithm to decide the equivalence of two probabilistic automata. The result has been extended to weighted automata with weights over a field. On the other hand, the problem is undecidable if the weights are over the semiring \((\mathbb{N},\min,+)\)[14]. Unlike the extensive literature on weighted automata, the study on weighted versions of pushdown or one-counter machines is limited [11][12][16]. One of the major bottlenecks is the undecidability of many interesting problems.
Probabilistic Pushdown Automata (pPDA) are equivalent to probabilistic recursive state machines (RSMs) or recursive Markov chains [8][15]. These models have been studied extensively for the analysis and model checking of procedural programs [9]. pPDAs can model probabilistic sequential programs with recursive procedure calls. They are also a generalisation of stochastic context-free grammars [1] used in natural language processing, molecular biology, and many variants of one-dimensional random walks [7]. Kucera et al. [16] have looked at model-checking of probabilistic pushdown systems and Brazdil et al. [8] studied temporal properties of probabilistic pushdown automata. The equivalence problem of pPDA was examined by Forejt et al. [10] and they showed that it is equivalent to the multiplicity equivalence of context-free grammars. The decidability of the latter problem is open. Kiefer et al. [13] show that the equivalence of probabilistic vpda is logspace equivalent to polynomial identity testing. The latter problem is known to be in coRP.
The bisimilarity problem of probabilistic vpda (resp. probabilistic oca) was shown to be \(\mathsf{EXPTIME}\)-complete (resp. \(\mathsf{PSPACE}\)-complete) by Forejt et al. [11]. They also proved the decidability of the bisimilarity problem of pPDA. Etessami et al. [9] show that probabilistic oca and Quasi-Birth-Death processes are equivalent.
Moving on to the non-weighted models, for non-deterministic pushdown automata the equivalence problem is known to be undecidable. On the other hand, from the seminal result by Senizergues [17], we know that the equivalence problem for deterministic pushdown automata is decidable. The best known upper bound, though, is primitive recursive [18]. The equivalence problem for deterministic one-counter automata (with and without \(\epsilon\) transitions) is decidable in polynomial time. In fact, similar to that of deterministic finite automata, the problem is \(\mathsf{NL}\)-complete [5].
### Outline of the paper
The rest of this paper is organised as follows. Section 1 contains the basic definitions and some lemmas from linear algebra. We also give a formal definition of odca. In Section 2, we look at the special cases of non-deterministic and deterministic odcas and show the decidability of their equivalence problems. Section 3 analyses a few reachability problems of weighted odcas. In Section 4, we prove Theorem 1 and show that the equivalence of weighted odcas is in polynomial time. Section 5 gives a polynomial time algorithm for the regularity problem of weighted odca, and in Section 6, we prove that the covering problem for weighted odca is in polynomial time. Section 7 gives a short conclusion.
## 1. Preliminaries
### Basic notations
An alphabet is a non-empty finite set of letters. In this paper, we denote the alphabet by \(\Sigma\). We use \(\Sigma^{*}\) to denote the set of finite length words over \(\Sigma\), and for all \(l\in\mathbb{N}\), we use \(\Sigma^{\leq l}\) (resp. \(\Sigma^{l}\)) to denote the set of words over \(\Sigma\) having length less than or equal to \(l\) (resp. exactly equal to \(l\)). Given a
word \(w\in\Sigma^{*}\), we use \(|w|\) to denote the length of the word \(w\). We use the notation \([i,j]\) to denote the interval \(\{i,i+1,\ldots,j\}\). We say that a word \(u=a_{1}\cdots a_{k}\) is a subword of a word \(w\), if \(w=u_{0}a_{1}u_{1}a_{2}\cdots a_{k}u_{k}\), where \(a_{i}\in\Sigma\), \(u_{j}\in\Sigma^{*}\) for all \(i\in[1,k]\) and \(j\in[0,k]\). We call \(u\) a proper subword of \(w\) if \(u\neq w\). We say that a word \(u\) is a prefix of a word \(w\) if there exists \(v\in\Sigma^{*}\) such that \(w=uv\). Given a word \(w=a_{0}\cdots a_{n}\), we write \(w[i\cdots j]\) to denote the factor \(a_{i}\cdots a_{j}\). Given \(d\in\mathbb{N}\), \(sign(d)=0\) if \(d=0\) and is \(1\) otherwise.
### Linear algebra
A field \(\mathcal{F}=(S,+,\cdot,0,1)\) is a set \(S\) with operations \(+\) and \(\cdot\) and distinguished elements \(0\) and \(1\) such that \((S,+,0)\) and \((S\setminus\{0\},\cdot,1)\) are commutative groups and \(\cdot\) distributes over \(+\). In this paper, we use \(\mathbf{x},\mathbf{y},\mathbf{z}\) to denote row vectors over a field \(\mathcal{F}\), \(s,t,r\) to denote elements in a field \(\mathcal{F}\) and \(\mathbb{A},\mathbb{B},\mathbb{M}\) to denote matrices over a field \(\mathcal{F}\). We use \(\mathcal{U},\mathcal{V}\) to denote vector spaces. We recall the following facts.
**Lemma 2** ([19]).: _The following are true for a field \(\mathcal{F}\)._
1. _For any set_ \(X\) _of_ \(n\) _vectors in_ \(\mathcal{F}^{m}\) _with_ \(n>m\)_, there exists a vector_ \(\mathbf{x}\in X\) _that is a linear combination of the other vectors in_ \(X\)_._
2. _Given a set_ \(B\) _of_ \(n\) _vectors in_ \(\mathcal{F}^{m}\) _and a vector_ \(\mathbf{x}\in\mathcal{F}^{m}\)_, we can check if_ \(\mathbf{x}\) _is a linear combination of vectors in_ \(B\) _in time polynomial in_ \(m\) _and_ \(n\)_._
3. _Let_ \(k,r\in\mathbb{N}\) _and_ \(\mathbb{M}\in\mathcal{F}^{k\times k}\)_. The matrix_ \(\mathbb{M}^{r}\) _can be computed in time polynomial in_ \(k\) _and_ \(\log(r)\)_._ \(\square\)
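Items (2) and (3) of Lemma 2 are used throughout the algorithms in later sections. The sketch below is purely illustrative: it assumes the field is \(\mathbb{Q}\), represented exactly with Python's `Fraction`, and the helper names `in_span`, `mat_mul` and `mat_pow` are ours rather than the paper's. It checks membership of a vector in the span of a set of vectors by Gaussian elimination, and computes \(\mathbb{M}^{r}\) with \(O(\log r)\) multiplications by repeated squaring.

```python
from fractions import Fraction

def in_span(vectors, x):
    """Lemma 2(2): decide whether x lies in the span of `vectors`,
    by Gaussian elimination over the rationals (exact arithmetic)."""
    pivots = []                               # pairs (pivot column, reduced row)
    for vec in vectors:
        row = [Fraction(v) for v in vec]
        for col, prow in pivots:              # eliminate existing pivot columns
            if row[col] != 0:
                f = row[col] / prow[col]
                row = [a - f * b for a, b in zip(row, prow)]
        for col, val in enumerate(row):
            if val != 0:                      # first nonzero entry is a new pivot
                pivots.append((col, row))
                break
    x = [Fraction(v) for v in x]
    for col, prow in pivots:                  # reduce x against the pivot rows
        if x[col] != 0:
            f = x[col] / prow[col]
            x = [a - f * b for a, b in zip(x, prow)]
    return all(v == 0 for v in x)

def mat_mul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def mat_pow(M, r):
    """Lemma 2(3): compute M^r with O(log r) matrix multiplications."""
    k = len(M)
    result = [[Fraction(int(i == j)) for j in range(k)] for i in range(k)]
    base = [[Fraction(v) for v in row] for row in M]
    while r > 0:
        if r & 1:
            result = mat_mul(result, base)
        base = mat_mul(base, base)
        r >>= 1
    return result
```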
The following properties of vector spaces are important.
**Lemma 3**.: _Let \(\mathcal{V}\) be a vector space, \(k\in\mathbb{N}\) and for all \(r\in[0,k]\)\(\mathbf{z}_{r}\in\mathcal{F}^{k}\) and \(\mathbb{M}_{r}\in\mathcal{F}^{k\times k}\). Then, there exists an \(i\in[1,k]\) such that the following conditions are true:_
1. \(\mathbf{z}_{i}\) _is a linear combination of_ \(\mathbf{z}_{0},\ldots\mathbf{z}_{i-1}\)_, and_
2. _if_ \(\mathbf{z}_{i}\mathbb{M}_{i}\notin\mathcal{V}\)_, then there exists_ \(j<i\) _such that_ \(\mathbf{z}_{j}\mathbb{M}_{i}\notin\mathcal{V}\)_._
Proof.: Let \(k\in\mathbb{N}\) and, for all \(r\in[0,k]\), let \(\mathbf{z}_{r}\in\mathcal{F}^{k}\) be vectors and \(\mathbb{M}_{r}\in\mathcal{F}^{k\times k}\) be matrices over \(\mathcal{F}\), and let \(\mathcal{V}\) be a vector space.
**1**.: _Consider the set \(\{\mathbf{z}_{0},\mathbf{z}_{1},\ldots,\mathbf{z}_{k}\}\) of \(k+1\) vectors of dimension \(k\). It follows from Lemma 2 that there are at most \(k\) independent vectors of dimension \(k\), and hence not all elements of the set can be independent._
**2**.: _Let \(i\in[1,k]\) be such that \(\mathbf{z}_{i}\) is a linear combination of \(\mathbf{z}_{0},\ldots\mathbf{z}_{i-1}\) and \(\mathbf{z}_{i}\mathbb{M}_{i}\notin\mathcal{V}\). Let us assume for contradiction that \(\mathbf{z}_{j}\mathbb{M}_{i}\in\mathcal{V}\) for all \(j\in[0,i-1]\). Since \(\mathbf{z}_{i}\) is a linear combination on \(\mathbf{z}_{0},\ldots\mathbf{z}_{i-1}\), there exists \(s_{0},\ldots s_{i-1}\in\mathcal{F}\) such that_
\[\mathbf{z}_{i}=s_{0}\cdot\mathbf{z}_{0}+s_{1}\cdot\mathbf{z}_{1}+\cdots+s_{i- 1}\cdot\mathbf{z}_{i-1}\]
_Since \(\mathbf{z}_{i}\mathbb{M}_{i}=\sum_{j=0}^{i-1}s_{j}\cdot\mathbf{z}_{j}\mathbb{M} _{i}\) and \(\mathcal{V}\) is closed under linear combinations, we get that \(\mathbf{z}_{i}\mathbb{M}_{i}\in\mathcal{V}\) contradicting our initial assumption._
\(\square\)
**Lemma 4**.: _Let \(\mathcal{V}\) be a vector space, \(k\in\mathbb{N}\) and for all \(r\in[0,k^{2}]\)\(\mathbb{A}_{r},\mathbb{M}_{r},\mathbb{B}_{r}\in\mathcal{F}^{k\times k}\). Then, there exists an \(i\in[1,k^{2}]\) such that for all \(\mathbf{x}\in\mathcal{F}^{k}\) the following conditions are true:_
1. \(\mathbb{M}_{i}\) _is a linear combination of_ \(\mathbb{M}_{0},\ldots,\mathbb{M}_{i-1}\)_, and_
2. _if_ \(\mathbf{x}\mathbb{A}_{i}\mathbb{M}_{i}\mathbb{B}_{i}\notin\mathcal{V}\)_, then there exists a_ \(j<i\) _such that_ \(\mathbf{x}\mathbb{A}_{i}\mathbb{M}_{j}\mathbb{B}_{i}\notin\mathcal{V}\)_._
Proof.: Let \(\mathbb{A}_{r},\mathbb{M}_{r},\mathbb{B}_{r}\in\mathcal{F}^{k\times k}\) for \(r\in[0,k^{2}]\), be matrices over \(\mathcal{F}\) and \(\mathcal{V}\) a vector space.
**1.**_Consider the set \(\{\mathbb{M}_{0},\mathbb{M}_{1},\ldots,\mathbb{M}_{k^{2}}\}\) of \(k^{2}+1\) matrices of dimension \(k^{2}\). It follows from Lemma 2 that there are at most \(k^{2}\) independent vectors of dimension \(k^{2}\), and hence not all elements of this set can be independent._
**2.** Let \(i\in[1,k^{2}]\) be such that \(\mathbb{M}_{i}\) is a linear combination of \(\mathbb{M}_{0},\ldots,\mathbb{M}_{i-1}\) and \(\mathbf{x}\mathbb{A}_{i}\mathbb{M}_{i}\mathbb{B}_{i}\notin\mathcal{V}\). Since \(\mathbb{M}_{i}\) is dependent on \(\mathbb{M}_{0},\ldots,\mathbb{M}_{i-1}\), we prove that there exists \(j<i\) such that \(\mathbf{x}\mathbb{A}_{i}\mathbb{M}_{j}\mathbb{B}_{i}\notin\mathcal{V}\). Let us assume for contradiction that this is not the case. Since \(\mathbb{M}_{i}\) is a linear combination on \(\mathbb{M}_{0},\ldots\mathbb{M}_{i-1}\), there exists \(s_{0},\ldots s_{i-1}\in\mathcal{F}\) such that
\[\mathbb{M}_{i}=s_{0}\cdot\mathbb{M}_{0}+s_{1}\cdot\mathbb{M}_{1}+\cdots+s_{i- 1}\cdot\mathbb{M}_{i-1}\]
Since \(\mathbf{x}\mathbb{A}_{i}\mathbb{M}_{j}\mathbb{B}_{i}\in\mathcal{V}\) for all \(j\in[0,i-1]\) we get that \(\mathbf{x}\mathbb{A}_{i}\mathbb{M}_{i}\mathbb{B}_{i}=\sum_{j=0}^{i-1}s_{j} \cdot\mathbf{x}\mathbb{A}_{i}\mathbb{M}_{j}\mathbb{B}_{i}\in\mathcal{V}\), which is a contradiction.
**Lemma 5**.: _Let \(k\in\mathbb{N},\mathbb{A}\in\mathcal{F}^{k\times k}\) and \(\mathcal{V}\subseteq\mathcal{F}^{k}\) be a vector space. Then the following set is a vector space,_
\[\mathcal{U}=\{\mathbf{y}\in\mathcal{F}^{k}\mid\mathbf{y}\mathbb{A}\in\mathcal{ V}\}.\]
Proof.: To prove that \(\mathcal{U}\) is a vector space, it suffices to show that it is closed under vector addition and scalar multiplication. First, we prove that \(\mathcal{U}\) is closed under vector addition. Let \(\mathbf{z}_{1},\mathbf{z}_{2}\in\mathcal{U}\) be two vectors, since \(\mathbf{z}_{1}\mathbb{A},\mathbf{z}_{2}\mathbb{A}\in\mathcal{V}\), \((\mathbf{z}_{1}+\mathbf{z}_{2})\mathbb{A}=\mathbf{z}_{1}\mathbb{A}+\mathbf{z} _{2}\mathbb{A}\in\mathcal{V}\). Therefore, \(\mathbf{z}_{1}+\mathbf{z}_{2}\in\mathcal{U}\). Now we prove that \(\mathcal{U}\) is closed under scalar multiplication. For any vector \(\mathbf{z}_{1}\in\mathcal{U}\), we know that \(\mathbf{z}_{1}\mathbb{A}\in\mathcal{V}\). Since \(\mathcal{V}\) is a vector space, for any scalar \(r\in\mathcal{F}\), \((r\cdot\mathbf{z}_{1})\mathbb{A}\in\mathcal{V}\), and therefore \(r\cdot\mathbf{z}_{1}\in\mathcal{U}\). This concludes the proof.
In particular, the above lemma holds for the vector space \(\{\mathbf{0}\in\mathcal{F}^{k}\}\).
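Lemma 5 is used later (for instance in the proofs of Lemma 12 and Lemma 15) to pull a target vector space back through a weight-effect matrix. A basis of \(\mathcal{U}\) can be computed explicitly; the sketch below is illustrative only, assumes the field is \(\mathbb{Q}\), uses `sympy`, and `preimage_space` is our own helper name, not the paper's.

```python
from sympy import Matrix

def preimage_space(A, V_basis):
    """Basis (as row vectors) of U = { y : y*A lies in span(V_basis) }  (Lemma 5).
    A vector v lies in span(V_basis) iff v*c = 0 for every c in the right null
    space of the matrix whose rows are V_basis; hence U is the left null space
    of A*N, where the columns of N span that right null space."""
    A = Matrix(A)
    k = A.rows
    B = Matrix(V_basis) if V_basis else Matrix.zeros(1, k)
    null_cols = B.nullspace()
    if not null_cols:                  # V is all of F^k, so U = F^k
        return [Matrix.eye(k)[i, :] for i in range(k)]
    N = Matrix.hstack(*null_cols)
    return [c.T for c in (A * N).T.nullspace()]   # rows spanning U
```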
### One deterministic-counter automata
A one deterministic-counter automata (odca) consists of two parts: a finite state machine, which is a weighted automaton over a semiring, and a counter structure, which is a deterministic oca. An odca is defined as follows:
**Definition 6**.: _A one deterministic-counter automata (odca), \(\mathcal{A}\) over an alphabet \(\Sigma\) and a semiring \(\mathcal{S}\) is as defined below:_
\[\mathcal{A}=(C,\delta,p_{0};\ Q,\boldsymbol{\lambda},\Delta,\boldsymbol{\eta})\]
* \(C\) _is a non-empty finite set of counter states._
* \(\delta:C\times\Sigma\times\{0,1\}\to C\times\{-1,0,+1\}\) _is the deterministic counter transition._
* \(p_{0}\in C\) _is the start state for counter transition._
* \(Q\) _is a non-empty finite set of states of the finite state machine. We assume_ \(|C|=|Q|\) _and use_ \(\mathsf{K}\) _to denote_ \(|Q|\)_._
* \(\boldsymbol{\lambda}\in\mathcal{S}^{\mathsf{K}}\) _is the initial distribution where the_ \(i^{th}\) _component of_ \(\boldsymbol{\lambda}\) _indicates the initial weight on state_ \(q_{i}\in Q\)_._
* \(\Delta:C\times\Sigma\times\{0,1\}\to\mathcal{S}^{\mathsf{K}\times\mathsf{K}}\) _gives the transition matrix for all_ \(p\in C\)_,_ \(a\in\Sigma\) _and_ \(d\in\{0,1\}\)_. The component in the_ \(i^{th}\) _row and_ \(j^{th}\) _column of_ \(\Delta(p,a,d)\) _denotes the weight on the transition from state_ \(q_{i}\in Q\) _to state_ \(q_{j}\in Q\) _on reading symbol_ \(a\) _from counter state_ \(p\) _and counter value_ \(n\) _with sign_\((n)=d\)
* \(\boldsymbol{\eta}\in\mathcal{S}^{\mathsf{K}}\) _is the final distribution, where the_ \(i^{th}\) _component of_ \(\boldsymbol{\eta}\) _indicates the output weight on state_ \(q_{i}\in Q\)_._
A configuration \(\mathsf{c}\) of an odca is of the form \((\mathbf{x}_{\mathsf{c}},p_{\mathsf{c}},n_{\mathsf{c}})\in\mathcal{S}^{\mathsf{K}}\times C\times\mathbb{N}\). The configuration \((\boldsymbol{\lambda},p_{0},0)\) is the initial configuration of \(\mathcal{A}\). A _transition_ is a tuple \(\tau=(\iota_{\tau},d_{\tau},a_{\tau},\mathtt{ce}_{\tau},\mathbb{A}_{\tau},\theta_{\tau})\) where \(\iota_{\tau},\theta_{\tau}\in C\) are counter states, \(d_{\tau}\in\{0,1\}\) denotes whether the current counter value is zero or not, \(a_{\tau}\in\Sigma\), \(\mathtt{ce}_{\tau}\in\{-1,0,1\}\) is the _counter-effect_, \(\mathbb{A}_{\tau}\in\mathcal{S}^{\mathsf{K}\times\mathsf{K}}\) is such that \(\Delta(\iota_{\tau},a_{\tau},d_{\tau})=\mathbb{A}_{\tau}\), and \(\delta(\iota_{\tau},a_{\tau},d_{\tau})=(\theta_{\tau},\mathtt{ce}_{\tau})\).
Given a transition \(\tau\) and a configuration \(\mathsf{c}\), we denote the application of \(\tau\) to \(c\) as \(\tau(\mathsf{c})=(\mathbf{x}_{\mathsf{c}}\mathbb{A}_{\boldsymbol{\tau}},\theta _{\tau},n_{\mathsf{c}}+\mathtt{ce}_{\tau})\) if \(p_{\mathsf{c}}=\iota_{\tau}\) and \(d_{\tau}=0\) if and only if \(n_{\mathsf{c}}=0\), and is undefined otherwise. Note that the counter values always stay positive, implying that we cannot perform a decrement operation on the counter from a configuration with a counter value of zero.
Given a sequence of transitions \(T=\tau_{0}\cdots\tau_{\ell-1}\), we denote by \(\mathtt{word}(T)=a_{\tau_{0}}\cdots a_{\tau_{\ell-1}}\) the word labelling it, by \(\mathtt{we}(T)=\mathbb{A}_{\tau_{0}}\cdots\mathbb{A}_{\tau_{\ell-1}}\) its weight-effect matrix, and by \(\mathtt{ce}(T)=\mathtt{ce}_{\tau_{0}}+\cdots+\mathtt{ce}_{\tau_{\ell-1}}\) its counter-effect. For all \(0\leq i<j\leq\ell-1\), we use \(T_{i\cdots j}\) to denote the sequence of transitions \(\tau_{i}\cdots\tau_{j}\) and \(|T|\) to denote \(\ell\).
We call a sequence of transitions \(T=\tau_{0}\cdots\tau_{\ell-1}\) _floating_ if \(d_{\tau_{i}}=1\) for all \(i\in[0,\ell-1]\), and _non-floating_ otherwise. We denote by \(\min_{\mathtt{ce}}(T)=\min_{i}(\mathtt{ce}(\tau_{0}\cdots\tau_{i}))\) the minimal effect of its prefixes and call it its _decrease_; similarly, \(\max_{\mathtt{ce}}(T)=\max_{i}(\mathtt{ce}(\tau_{0}\cdots\tau_{i}))\) is the maximal effect of its prefixes. We say that the sequence of transitions \(T\) is _valid_ if \(\theta_{\tau_{i}}=\iota_{\tau_{i+1}}\) for every \(i\in[0,\ell-2]\). We will only consider valid sequences of transitions.
A _run_\(\pi\) is an alternate sequence of configurations and transitions denoted as \(\pi=\mathsf{c}_{0}\tau_{0}\mathsf{c}_{1}\cdots\tau_{\ell-1}\mathsf{c}_{\ell}\) such that for every \(i\), \(\mathsf{c}_{i+1}=\tau_{i}(\mathsf{c}_{i})\). Given a sequence of transition \(T\) and a configuration \(\mathsf{c}\), we denote \(T(\mathsf{c})\) the run obtained by applying \(T\) to \(\mathsf{c}\) sequentially (if it is defined). The word labelling it, its length, weight effect, and counter-effect are those of its underlying sequence of transitions.
Observe that, for a valid floating sequence of transitions, \(T(\mathsf{c})\) is defined if and only if \(n_{\mathsf{c}}>-\min_{\mathtt{ce}}(T)\), and for a valid non-floating sequence of transitions, \(T(\mathsf{c})\) is defined if and only if \(n_{\mathsf{c}}=-\min_{\mathtt{ce}}(T)\) and for every \(i\), \(d_{\tau_{i}}=0\) if and only if \(\mathtt{ce}(\tau_{0}\cdots\tau_{i-1})=\min_{\mathtt{ce}}(T)\). In particular, observe that if a valid floating sequence of transition \(T\) is applicable to a configuration \((\mathbf{x}_{\mathsf{c}},p_{\mathsf{c}},n_{\mathsf{c}})\), then for every \(n^{\prime}\geq n_{\mathsf{c}}\) and vector \(\mathbf{x}^{\prime}\in\mathcal{S}^{\mathsf{K}}\), it is applicable to \((\mathbf{x}^{\prime},p_{\mathsf{c}},n^{\prime})\).
For any word \(w\), there is at most one run labelled by \(w\) starting from a given configuration \(\mathsf{c}_{0}\). We denote this run by \(\pi(w,\mathsf{c}_{0})\). A run \(\pi(w,\mathsf{c}_{0})=\mathsf{c}_{0}\tau_{0}\mathsf{c}_{1}\cdots\tau_{\ell-1}\mathsf{c}_{\ell}\) is also represented as \(\mathsf{c}_{0}\xrightarrow{w}\mathsf{c}_{\ell}\). We use the notation \(\mathsf{c}_{0}\xrightarrow{*}\mathsf{c}_{\ell}\) to denote the existence of some word \(w\) such that \(\mathsf{c}_{0}\xrightarrow{w}\mathsf{c}_{\ell}\). The counter effect of a word \(w\) on a floating run \(\mathsf{c}_{0}\xrightarrow{w}\mathsf{c}_{\ell}\) is \(n_{\mathsf{c}_{\ell}}-n_{\mathsf{c}_{0}}\). The weight with which a word \(w\) is accepted by \(\mathcal{A}\) along the run \(\mathsf{c}_{0}\xrightarrow{w}\mathsf{c}_{\ell}\) is denoted by \(f_{\mathcal{A}}(w,\mathsf{c}_{0})=\mathbf{x}_{\mathsf{c}_{0}}\mathtt{we}(\pi(w,\mathsf{c}_{0}))\boldsymbol{\eta}^{\top}\). We use the notation \(f_{\mathcal{A}}(w)\) to denote \(f_{\mathcal{A}}(w,(\boldsymbol{\lambda},p_{0},0))\).
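To make the semantics concrete, the following sketch evaluates \(f_{\mathcal{A}}(w)\) by simulating the deterministic counter and accumulating the weight vector. It is illustrative only: the weights are assumed to be rationals, all class, field and function names are ours rather than the paper's, and an undefined run (for example, a decrement attempted at counter value zero) is treated as contributing weight \(0\), an assumption made only for this sketch.

```python
from fractions import Fraction

class ODCA:
    """A weighted ODCA over the rationals.
    delta[(p, a, d)] = (q, counter_effect)   -- deterministic counter structure
    Delta[(p, a, d)] = K x K weight matrix   -- weighted finite state machine"""
    def __init__(self, delta, p0, lam, Delta, eta):
        self.delta, self.p0 = delta, p0
        self.lam, self.Delta, self.eta = lam, Delta, eta

    def run(self, word, x=None, p=None, n=0):
        """Configuration reached from (x, p, n) on `word`, or None if undefined."""
        x = list(self.lam) if x is None else list(x)
        p = self.p0 if p is None else p
        for a in word:
            d = 0 if n == 0 else 1
            if (p, a, d) not in self.delta:
                return None
            q, ce = self.delta[(p, a, d)]
            A = self.Delta[(p, a, d)]
            x = [sum(x[i] * A[i][j] for i in range(len(x))) for j in range(len(x))]
            p, n = q, n + ce
            if n < 0:                      # counter values must stay non-negative
                return None
        return x, p, n

    def weight(self, word):
        """f_A(word): accepting weight of `word` from the initial configuration."""
        conf = self.run(word)
        if conf is None:
            return Fraction(0)
        x, _, _ = conf
        return sum(xi * ei for xi, ei in zip(x, self.eta))
```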
Let \(\mathcal{A}\) and \(\mathcal{B}\) be two odcas. Consider the configurations \(\mathsf{c}\) of \(\mathcal{A}\) and \(\mathsf{d}\) of \(\mathcal{B}\). We say that \(\mathsf{c}\equiv_{l}\mathsf{d}\) if and only if for all \(w\in\Sigma^{\leq l}\), \(f_{\mathcal{A}}(w,\mathsf{c})=f_{\mathcal{B}}(w,\mathsf{d})\) otherwise \(\mathsf{c}\not\equiv_{l}\mathsf{d}\). We say that the configurations \(\mathsf{c}\) and \(\mathsf{d}\) are equivalent if and only if \(\mathsf{c}\equiv_{l}\mathsf{d}\) for all \(l\in\mathbb{N}\) and we denote this by \(\mathsf{c}\equiv\mathsf{d}\). We say that \(\mathcal{A}\) and \(\mathcal{B}\) are equivalent if for all \(w\in\Sigma^{*}\), \(f_{\mathcal{A}}(w)=f_{\mathcal{B}}(w)\).
If the odca is defined over a semiring which is also a field, then we call the model a weighted odca, and if it is the boolean semiring then we call it a non-deterministic/deterministic odca. Note that the equivalence problem of odca
defined over an arbitrary semiring is undecidable because of the undecidability of equivalence of weighted automata over arbitrary semirings. The class of weighted odcas includes deterministic ocas, visibly weighted ocas, and deterministic weighted ocas.
Given a weighted odca \(\mathcal{A}\) over the alphabet \(\Sigma\) and a field \(\mathcal{F}\), we define its \(M\)_-unfolding_ weighted automaton \(\mathcal{A}^{M}\) as a finite state weighted automaton that recognises the same function as \(\mathcal{A}\) on all runs where the counter value does not exceed \(M\). A formal definition is given below.
**Definition 7** (\(M\)-unfolding weighted automata).: _Let \(\mathcal{A}=(C,\delta,p_{0};\ Q,\boldsymbol{\lambda},\Delta,\boldsymbol{ \eta})\) be a weighted odca over the alphabet \(\Sigma\) and a field \(\mathcal{F}\), let \(\mathsf{K}=|Q|=|C|\). For a given \(M\in\mathbb{N}\), we define an \(M\)-unfolding weighted automata \(\mathcal{A}^{M}\) of \(\mathcal{A}\) as follows, \(\mathcal{A}^{M}=(C^{\prime},\delta^{\prime},p^{\prime}_{0};\ Q^{\prime}, \boldsymbol{\lambda}^{\prime},\Delta^{\prime},\boldsymbol{\eta}^{\prime}_{F})\) where,_
* \(C^{\prime}=C\times[0,M]\) _is the finite set of counter states._
* \(\delta^{\prime}:C^{\prime}\times\Sigma\to C^{\prime}\) _is the deterministic counter transition. Let_ \(p,q\in C,m\in\mathbb{N}\)_,_ \(a\in\Sigma\) _and_ \(d\in\{-1,0,+1\}\)_._ \(\delta^{\prime}((p,m),a)=(q,m+d)\)_, if_ \(\delta(p,a,sign(m))=(q,d)\)_._
* \(p^{\prime}_{0}=(p_{0},0)\) _is the initial counter state._
* \(Q^{\prime}=Q\times[0,M]\) _is the finite set of states._
* \(\lambda^{\prime}\in\mathcal{F}^{|Q^{\prime}|}\) _is the initial distribution._
* \(\Delta^{\prime}:C^{\prime}\times\Sigma\to\mathcal{F}^{|Q^{\prime}|\times|Q^{\prime}|}\) _gives the transition matrix. For_ \(i,j\in[0,|Q^{\prime}|-1]\)_,_ \(p\in C\)_,_ \(m\in\mathbb{N}\) _and_ \(a\in\Sigma\)_,_ \[\Delta^{\prime}((p,m),a)[i][j]=\begin{cases}\Delta(p,a,0)[i][j],&\text{if }i,j<\mathsf{K}\\ \Delta(p,a,1)[i\bmod\mathsf{K}][j\bmod\mathsf{K}],&\text{if }\lfloor i/\mathsf{K}\rfloor=\lfloor j/\mathsf{K}\rfloor\geq 1\\ 0,&\text{otherwise}\end{cases}\]
* \(\boldsymbol{\eta}^{\prime}_{F}\in\mathcal{F}^{|Q^{\prime}|}\) _is the final distribution._ \[\boldsymbol{\eta}^{\prime}_{F}[i]=\boldsymbol{\eta}[i\bmod\mathsf{K}]\]
An uninitialised weighted odca\(\mathcal{A}\) is a weighted odca without an initial counter state and initial distribution. Formally, \(\mathcal{A}=(C,\delta;\ Q,\Delta,\boldsymbol{\eta})\). Given an uninitialised weighted odca\(\mathcal{A}\) and an initial configuration \(\mathsf{c}_{0}=(\mathbf{x},p,0)\), we define the weighted odca\(\mathcal{A}\langle\mathsf{c}_{0}\rangle=(C,\delta,p;\ Q,\mathbf{x},\Delta,\boldsymbol{\eta})\).
A weighted automaton (WA) is a restricted form of an odca where the counter value is fixed at zero. The above notions of transitions, runs, acceptance, etc. apply to WAs as well. We also use the classical notation and represent weighted automata as \(\mathcal{A}=(Q,\boldsymbol{\lambda},\Delta,\boldsymbol{\eta})\), without counter states.
## 2. Nondeterministic / deterministic odca
A deterministic/non-deterministic odca\(\mathcal{A}\) is an odca over the boolean semiring \(\mathcal{S}=(\{0,1\},\vee,\wedge)\). The language recognised by \(\mathcal{A}\) is given by \(\mathcal{L}(\mathcal{A})=\{w\mid f_{\mathcal{A}}(w)=1\}\). We say an odca\(\mathcal{A}=(C,\delta,p_{0};\ Q,\boldsymbol{\lambda},\Delta,\boldsymbol{\eta})\) is _deterministic_ if for every transition sequence \(T=\tau_{0}\cdots\tau_{\ell-1}\), the vector \(\boldsymbol{\lambda}\mathtt{we}(T)\) contains exactly one \(1\) and non-deterministic otherwise.
The following theorem is 'analogous' to the case of finite automata. The idea is a simple subset construction.
**Theorem 8**.: _For every language recognised by a non-deterministic odca, there is a deterministic odca of at most exponential size that recognise it._
Proof.: Let \(\mathcal{A}=(C,\delta,p_{0};\ Q,\boldsymbol{\lambda},\Delta,\boldsymbol{\eta})\) be a non-deterministic odca. Given a vector \(\mathbf{x}\in\mathcal{S}^{k}\) for some \(k\in\mathbb{N}\), we define the function \(\mathrm{IsDet}\): \(\mathcal{S}^{k}\to\{true,false\}\) as follows:
\[\mathrm{IsDet}(\mathbf{x})=\begin{cases}\text{true},&\text{if }\exists i<k\text{ s.t. }\mathbf{x}[i]=1\text{ and }\forall j\neq i,\ \mathbf{x}[j]=0\\ \text{false},&\text{otherwise.}\end{cases}\]
Given a transition matrix \(\mathbb{A}\) corresponding to the states \(Q\), we define its determinisation \(\det(\mathbb{A})\) as follows. There are rows and columns corresponding to each set in \(2^{Q}\). For any \(q_{i}\in Q\), let \(\mathcal{M}(q_{i},\mathbb{A})=\{q_{j}\mid\mathbb{A}[i][j]=1\}\) be the set of all states in the row of \(q_{i}\) whose entries are \(1\). With the notation that \(\det(\mathbb{A})[s][s^{\prime}]\) denotes the entry of the cell corresponding to the sets \(s,s^{\prime}\in 2^{Q}\), we let \(\det(\mathbb{A})[s][s^{\prime}]=1\) if and only if \(s^{\prime}=\bigcup_{q_{i}\in s}\mathcal{M}(q_{i},\mathbb{A})\). We claim that \(\mathcal{A}_{\det}=(C,\delta,p_{0};\ 2^{Q},\boldsymbol{\lambda}^{\prime},\Delta^{\prime},\boldsymbol{\eta}^{\prime})\) is deterministic and satisfies \(\mathcal{L}(\mathcal{A})=\mathcal{L}(\mathcal{A}_{\det})\), where \(\boldsymbol{\lambda}^{\prime}\) assigns weight \(1\) to the set \(\{q_{i}\mid\boldsymbol{\lambda}[i]=1\}\) and \(0\) to every other set, \(\boldsymbol{\eta}^{\prime}\) is such that for any \(S\in 2^{Q}\), \(\boldsymbol{\eta}^{\prime}[S]=\bigvee_{q_{i}\in S}\boldsymbol{\eta}[i]\), and for all \(p\in C\), \(a\in\Sigma\) and \(d\in\{0,1\}\), \(\Delta^{\prime}(p,a,d)=\det(\Delta(p,a,d))\).
For this, for any sequence of transitions \(T=\tau_{0}\cdots\tau_{\ell-1}\), let \(\mathbf{v}_{T}\) and \(\mathbf{v}_{T}^{\prime}\) be the vectors \(\boldsymbol{\lambda}\mathtt{we}(T)\) in \(\mathcal{A}\) and \(\boldsymbol{\lambda}^{\prime}\mathtt{we}(T)\) in \(\mathcal{A}_{\det}\), respectively. Then \(\mathrm{IsDet}(\mathbf{v}_{T}^{\prime})\) holds, and for any \(S\in 2^{Q}\), \(\mathbf{v}_{T}^{\prime}[S]=1\) if and only if \(S=\{q_{i}\mid\mathbf{v}_{T}[i]=1\}\).
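For concreteness, the determinisation of a single boolean transition matrix can be sketched as follows. This is an illustrative Python sketch, not the paper's construction verbatim: subsets of \(Q\) are represented as frozensets of state indices, and the helper names are ours.

```python
from itertools import combinations

def powerset(states):
    """All subsets of `states`, as frozensets."""
    s = list(states)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

def determinise(A, states):
    """det(A): boolean matrix indexed by subsets of `states` (state indices).
    Row S has exactly one 1, in the column of the set of all A-successors of S."""
    subsets = powerset(states)
    index = {T: i for i, T in enumerate(subsets)}
    det = [[0] * len(subsets) for _ in subsets]
    for T in subsets:
        successors = frozenset(j for i in T for j in states if A[i][j] == 1)
        det[index[T]][index[successors]] = 1
    return det, subsets
```

Applying `determinise` to each \(\Delta(p,a,d)\) yields the matrices \(\Delta^{\prime}(p,a,d)\) of \(\mathcal{A}_{\det}\); every row of the result contains exactly one \(1\), which is what makes \(\mathcal{A}_{\det}\) deterministic.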
The equivalence of deterministic odcas can be decided in non-deterministic log space [3]. From the above theorem, it follows that a \(\mathsf{PSPACE}\) machine can decide on the equivalence of non-deterministic odcas.
**Theorem 9**.: _Equivalence of non-deterministic odca is in \(\mathsf{PSPACE}\)._
As a corollary, we get the following.
**Corollary 10**.: _The emptiness and the universality problems of non-deterministic odca are in \(\mathsf{PSPACE}\)._
## 3. Reachability problems in weighted odca
In this section, we examine two reachability problems of weighted odcas. In the subsequent section, we develop the techniques that play a key role in proving the equivalence of weighted odca.
Fix a weighted odca \(\mathcal{A}=(C,\delta,p_{0};\ Q,\boldsymbol{\lambda},\Delta,\boldsymbol{\eta})\) for the rest of this section. Without loss of generality, assume \(|C|=|Q|\) and denote \(|Q|\) by \(\mathsf{K}\). We use \(\mathcal{V}\subseteq\mathcal{F}^{\mathsf{K}}\) to denote a vector space and \(\overline{\mathcal{V}}=\mathcal{F}^{\mathsf{K}}\setminus\mathcal{V}\) to denote the set complement of \(\mathcal{V}\). Let \(S\subseteq C\) be a subset of the set of counter states, \(X\subseteq\mathbb{N}\) a set of counter values and \(w\in\Sigma^{*}\). The notation \(\mathtt{c}\xrightarrow{w}\overline{\mathcal{V}}\times S\times X\) denotes the run \(\mathtt{c}\xrightarrow{w}\mathtt{d}\) where \(\mathtt{d}\in\overline{\mathcal{V}}\times S\times X\). We call \(z\in\Sigma^{*}\) a _reachability witness_ for \((\mathtt{c},\overline{\mathcal{V}},S,X)\) if \(\mathtt{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times X\). Moreover, we say \(z\) is a
minimal_ reachability witness for \((\mathsf{c},\overline{\mathcal{V}},S,X)\) if \(\mathsf{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times X\) and for all \(u\in\Sigma^{*}\) with \(\mathsf{c}\xrightarrow{u}\overline{\mathcal{V}}\times S\times X\), \(|u|\geq|z|\). We use \(\mathsf{c}\xrightarrow{*}\overline{\mathcal{V}}\times S\times X\) to denote that there exists a word \(u\in\Sigma^{*}\) such that \(\mathsf{c}\xrightarrow{u}\overline{\mathcal{V}}\times S\times X\).
We assume that the vector space \(\mathcal{V}\subseteq\mathcal{F}^{\mathsf{K}}\) will be provided by giving a suitable basis for \(\mathcal{V}\).
1. _co-VS reachability_ problem:
   * Input: a weighted odca \(\mathcal{A}\), an initial configuration \(\mathsf{c}\), a vector space \(\mathcal{V}\), a set of counter states \(S\), and a counter value \(m\).
   * Output: _Yes_, if there exists a run \(\mathsf{c}\xrightarrow{*}\overline{\mathcal{V}}\times S\times\{m\}\) in \(\mathcal{A}\); _No_, otherwise.
2. _co-VS coverability_ problem:
   * Input: a weighted odca \(\mathcal{A}\), an initial configuration \(\mathsf{c}\), a vector space \(\mathcal{V}\), and a set of counter states \(S\).
   * Output: _Yes_, if there exists a run \(\mathsf{c}\xrightarrow{*}\overline{\mathcal{V}}\times S\times\mathbb{N}\) in \(\mathcal{A}\); _No_, otherwise.
Note that in the second problem, the counter value of the final configuration is not part of the input. We consider the cases where the counter values of the initial configuration and the final counter value, if part of the input, are given in unary or in binary notation separately. Note that the size of the unary representation is exponentially larger than the binary representation for the same value.
First, we look at the particular case of co-VS reachability problem for weighted automata. Note that for weighted automata, the counter value is always zero. Given a weighted automata \(\mathcal{B}\), with \(k\) states, an initial configuration \(\bar{\mathsf{c}}\), a vector space \(\mathcal{U}\subseteq\mathcal{F}^{k}\) and a set of counter states \(S\), the co-VS reachability problem asks whether there exists a run \(\bar{\mathsf{c}}\xrightarrow{*}\overline{\mathcal{U}}\times S\times\{0\}\).
**Theorem 11**.: _There is a polynomial time algorithm that decides the co-VS reachability problem for weighted automata and outputs a minimal reachability witness if it exists._
Proof.: Tzeng [20] gives a polynomial time algorithm for the equivalence of two probabilistic automata by reducing the problem to the co-VS reachability problem where \(\mathcal{V}=\{\mathbf{0}\}\). The same algorithm can be modified to solve the general co-VS reachability problem.
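For intuition, a sketch of such a modified algorithm is given below (Python over the rationals, illustrative only; the function and parameter names are ours, letters are assumed to be one-character strings, and `in_span` is the membership test sketched after Lemma 2). It explores words in breadth-first order, keeps at most \(\mathsf{K}\) linearly independent weight vectors per counter state, and returns the first word whose reached configuration lies outside \(\mathcal{V}\) at a counter state in \(S\). The pruning is justified by the same linearity argument as Lemma 3: if a discarded vector could reach \(\overline{\mathcal{V}}\), then so could one of the independent vectors kept for the same counter state, so a shortest witness is not lost.

```python
from collections import deque
from fractions import Fraction

def co_vs_reach(delta, Delta, alphabet, x0, p0, V_basis, S):
    """co-VS reachability for a weighted automaton (counter fixed at zero).
    delta[(p, a)]: successor counter state; Delta[(p, a)]: weight matrix;
    V_basis: a basis of the vector space V; S: target counter states.
    Returns a shortest reachability witness, or None if none exists."""
    x0 = [Fraction(v) for v in x0]
    if p0 in S and not in_span(V_basis, x0):
        return ""                              # the empty word is already a witness
    kept = {p0: [x0]}                          # independent vectors seen per state
    queue = deque([(x0, p0, "")])
    while queue:
        x, p, w = queue.popleft()
        for a in alphabet:
            if (p, a) not in delta:
                continue
            q, A = delta[(p, a)], Delta[(p, a)]
            y = [sum(x[i] * A[i][j] for i in range(len(x))) for j in range(len(x))]
            if q in S and not in_span(V_basis, y):
                return w + a                   # configuration outside V reached
            if not in_span(kept.setdefault(q, []), y):
                kept[q].append(y)              # keep only independent vectors
                queue.append((y, q, w + a))
    return None
```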
The following lemma will help us break down both the reachability problems into smaller sub-problems.
**Lemma 12**.: _Let \(z\in\Sigma^{*}\) be a minimal reachability witness for \((\mathsf{c},\overline{\mathcal{V}},S,X)\). Consider arbitrary \(z_{1},z_{2}\in\Sigma^{*}\) such that \(z=z_{1}z_{2}\). Let \(\mathsf{d},\mathsf{e}\) be configurations such that \(\mathsf{c}\xrightarrow{z_{1}}\mathsf{d}\xrightarrow{z_{2}}\mathsf{e}\) and \(\mathbb{A}\in\mathcal{F}^{\mathsf{K}\times\mathsf{K}}\) be such that \(\mathbf{x}_{\mathsf{d}}\mathbb{A}=\mathbf{x}_{\mathsf{e}}\). Then \(z_{1}\) is a minimal reachability witness for \((\mathsf{c},\overline{\mathcal{U}},\{p_{\mathsf{d}}\},\{n_{\mathsf{d}}\})\), where \(\mathcal{U}=\{\mathbf{y}\in\mathcal{F}^{\mathsf{K}}\mid\mathbf{y}\mathbb{A}\in\mathcal{V}\}\)._
Proof.: Let \(z\in\Sigma^{*}\) be a minimal reachability witness for \((\mathsf{c},\overline{\mathcal{V}},S,X)\), \(\mathsf{d},\mathsf{e}\) be configurations such that \(\mathsf{c}\xrightarrow{z_{1}}\mathsf{d}\xrightarrow{z_{2}}\mathsf{e}\) where \(z_{1},z_{2}\in\Sigma^{*}\) with \(z=z_{1}z_{2}\) and \(\mathbb{A}\in\mathcal{F}^{\mathsf{K}\times\mathsf{K}}\) be such that \(\mathbf{x}_{\mathsf{d}}\mathbb{A}=\mathbf{x}_{\mathsf{e}}\). Let \(\mathcal{U}=\{\mathbf{y}\in\mathcal{F}^{\mathsf{K}}\mid\mathbf{y}\mathbb{A} \in\mathcal{V}\}\). Assume for contradiction that there exists \(z_{1}^{\prime}\in\Sigma^{*}\) smaller than \(z_{1}\) and \(\mathsf{c}\xrightarrow{z_{1}^{\prime}}\mathsf{f}\) for some configuration \(\mathsf{f}\in\overline{\mathcal{U}}\times\{p_{\mathsf{d}}\}\times\{n_{\mathsf{ d}}\}\). Note that for all \(\mathbf{y}\in\overline{\mathcal{U}}\), the vector \(\mathbf{y}\mathbb{A}\in\overline{\mathcal{V}}\). Since \(n_{\mathsf{f}}=n_{\mathsf{d}}\) and \(p_{\mathsf{f}}=p_{\mathsf{d}}\), the run \(\mathsf{c}\xrightarrow{z_{1}^{\prime}}\mathsf{f}\xrightarrow{z_{2}}\overline{ \mathcal{V}}\times\{p_{\mathsf{e}}\}\times\{n_{\mathsf{e}}\}\) is a valid run and the word \(z_{1}^{\prime}z_{2}\) contradicts the minimality of \(z\)
The following subsection shows that the unary version of co-VS reachability and coverability are in \(\mathsf{P}\). In the subsection after, we show that binary version of both problems are in \(\mathsf{NP}\).
### Unary reachability in \(\mathsf{P}\)
In this subsection, we show that both the reachability problems of weighted odca are solvable in polynomial time when the counter values are given in unary representation.
**Theorem 13**.: _Unary co-VS reachability and co-VS coverability problems are decidable in polynomial time._
The theorem is proved by showing a small model property, i.e., the length of a minimal reachability witness is bounded by a polynomial in the number of states \(\mathsf{K}\) and the input counter value(s). This is proved by showing that the maximum and minimum counter values encountered during the run of a minimal reachability witness do not exceed some bound. Assume this is not true. In this case, there are two sub-runs of the run which satisfy the following conditions: in the first part, the counter value increases and reaches a maximum counter value; in the second part, the counter value decreases. We show that in such a scenario, we can cut parts from both sub-runs while maintaining the reachability conditions. This is proved in Lemma 15.
Now, we prove that if the number of distinct counter values encountered during the run of a minimal reachability witness is polynomially bounded, then we can bound the length of that witness.
**Lemma 14**.: _Let \(z\in\Sigma^{*}\) be a minimal reachability witness for \((\mathsf{c},\overline{\mathcal{V}},S,X)\). If the number of distinct counter values encountered during the run \(\mathsf{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times X\) is \(t\), then \(|z|\leq\mathsf{K}^{2}\cdot t\)._
Proof.: Let \(\mathsf{c}=\mathsf{c}_{1}\) and \(T(\mathsf{c}_{1})=\mathsf{c}_{1}\tau_{1}\mathsf{c}_{2}\cdots\tau_{h-1}\mathsf{c}_{h}\) be the run on the word \(z\) from \(\mathsf{c}_{1}\) and \(T\) the corresponding sequence of transitions. Let \(t\) be the number of distinct counter values encountered during this run. Assume for contradiction that \(h>\mathsf{K}^{2}\cdot t\). By the Pigeonhole principle, there are \(\mathsf{K}+1\) configurations \(\mathsf{c}_{i_{0}},\mathsf{c}_{i_{1}},\ldots,\mathsf{c}_{i_{\mathsf{K}}}\), with \(i_{0}<i_{1}<\cdots<i_{\mathsf{K}}\), having the same counter state and counter value during this run. For all \(j\in[0,\mathsf{K}]\), let \(\mathbb{A}_{j}\) denote the matrix such that \(\mathbf{x}_{\mathsf{c}_{i_{j}}}\mathbb{A}_{j}=\mathbf{x}_{\mathsf{c}_{h}}\). Since \(z\) is a reachability witness, \(\mathbf{x}_{\mathsf{c}_{h}}\in\overline{\mathcal{V}}\), and hence \(\mathbf{x}_{\mathsf{c}_{i_{j}}}\mathbb{A}_{j}\in\overline{\mathcal{V}}\) for all \(j\). From Lemma 3 we get that there exist \(r\leq\mathsf{K}\) and \(s\in[0,r-1]\) such that \(\mathbf{x}_{\mathsf{c}_{i_{s}}}\mathbb{A}_{r}\in\overline{\mathcal{V}}\). Consider the sequence of transitions \(T^{\prime}=\tau_{1\cdots i_{s}-1}\tau_{i_{r}\cdots h-1}\) and \(v=\mathtt{word}(T^{\prime})\). The run \(\pi(v,\mathsf{c}_{1})=T^{\prime}(\mathsf{c}_{1})\) is a valid run since \(n_{\mathsf{c}_{i_{s}}}=n_{\mathsf{c}_{i_{r}}}\) and \(p_{\mathsf{c}_{i_{s}}}=p_{\mathsf{c}_{i_{r}}}\). This run is shorter than \(\pi(z,\mathsf{c}_{1})\) and \(\mathsf{c}_{1}\xrightarrow{v}\overline{\mathcal{V}}\times S\times X\). This is a contradiction since \(z\) was assumed to be minimal.
It now suffices to show that the number of distinct counter values encountered during the run of a minimal witness is polynomially bounded. We first show that if the run of a minimal reachability witness of \((\mathsf{c},\overline{\mathcal{V}},S,\{m\})\) is a floating run, then the maximum and minimum counter values encountered during this run are bounded by a polynomial in \(\mathsf{K}\) and the initial and final counter values.
**Lemma 15**.: _Let \(z\in\Sigma^{*}\) be a minimal reachability witness for \((\mathsf{c},\overline{\mathcal{V}},S,\{m\})\). If \(\mathsf{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\{m\}\) is a floating run, then the maximum counter value during this run is less than \(max(n_{c},m)+\mathsf{K}^{4}\)._
Proof.: Let \(z\in\Sigma^{*}\) be a minimal reachability witness for \((\mathsf{c},\overline{\mathcal{V}},S,\{m\})\) and \(\mathsf{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\{m\}\) be a floating run. Let \(\mathsf{f}\in\overline{\mathcal{V}}\times S\times\{m\}\) be such that \(\mathsf{c}\xrightarrow{z}\mathsf{f}\). We prove that the maximum counter value encountered during this run is bounded. Let us assume that \(max(n_{\mathtt{c}},m)=n_{\mathtt{c}}\); the case where \(max(n_{\mathtt{c}},m)=m\) can be proven analogously. Assume for contradiction that the maximum counter value encountered during this run is greater than \(n_{\mathtt{c}}+\mathsf{K}^{4}\). There exist \(z_{1},z_{2},z_{3}\in\Sigma^{*}\) such that \(z=z_{1}z_{2}z_{3}\) and configurations \(\mathtt{d},\mathtt{e}\) such that the run on \(z\) from \(\mathtt{c}\) can be written as follows:
\[\mathtt{c}\xrightarrow{z_{1}}\mathtt{d}\xrightarrow{z_{2}}\mathtt{e} \xrightarrow{z_{3}}\mathtt{f}\]
where \(n_{\mathtt{e}}=n_{\mathtt{c}}\) and \(n_{\mathtt{d}}=n_{\mathtt{c}}+max_{\mathtt{ce}}(\pi(z,\mathtt{c}))\) (see Figure 2). Let \(\mathbb{M}\in\mathcal{F}^{\mathsf{K}\times\mathsf{K}}\) such that \(\mathbf{x}_{\mathtt{f}}=\mathbf{x}_{\mathtt{e}}\mathbb{M}\). From Lemma 5 we know that the set \(\mathcal{U}=\{\mathbf{y}\in\mathcal{F}^{\mathsf{K}}\mid\mathbf{y}\mathbb{M} \in\mathcal{V}\}\) is a vector space and hence the vector \(\mathbf{x}_{\mathtt{e}}\in\overline{\mathcal{U}}\). From Lemma 12 we know that \(z_{1}z_{2}\) is a minimal reachability witness for \((\mathtt{c},\overline{\mathcal{U}},\{p_{\mathtt{e}}\},\{n_{\mathtt{e}}\})\). We contradict the minimality of \(z_{1}z_{2}\).
Let \(\mathtt{c}_{1}=\mathtt{c}\) and \(T(\mathtt{c}_{1})=\mathtt{c}_{1}\tau_{1}\mathtt{c}_{2}\cdots\tau_{\ell-1}\mathtt{c}_{\ell}\) denote the run on word \(z_{1}z_{2}\) from the configuration \(\mathtt{c}_{1}\) and \(T\) the corresponding sequence of transitions. Let \(M=max_{\mathtt{ce}}(\pi(z,\mathtt{c}))\). Note that \(M=n_{\mathtt{d}}-n_{\mathtt{c}}\). For any \(i\in[0,M]\), we denote by \(l_{i}\) and \(d_{i}\) the indices such that the counter value \(n_{\mathtt{c}_{1}}+i\) is encountered for the last (resp. first) time before (resp. after) reaching counter value \(n_{\mathtt{c}_{1}}+M\) in \(T(\mathtt{c}_{1})\). That is, \(\mathtt{ce}(T_{1\cdots l_{i}-1})=\mathtt{ce}(T_{1\cdots d_{i}-1})=i\), and for any \(j\in[l_{i},d_{i}-2]\), \(\mathtt{ce}(T_{1\cdots j})>i\). We call \(\mathtt{g}_{i}=T_{1\cdots l_{i}-1}(\mathtt{c}_{1})\) and \(\mathtt{g}^{\prime}_{i}=T_{1\cdots d_{i}-1}(\mathtt{c}_{1})\).
Let \(r=\mathsf{K}^{2}+1\). Since \(M>\mathsf{K}^{4}\), by the Pigeonhole principle, there exists a set of indices \(X=\{i_{1},i_{2},\cdots,i_{r}\}\subseteq[0,M]\) with \(i_{1}<i_{2}<\cdots<i_{r}\) such that for all \(h,j\in X\), \(p_{\mathtt{g}_{h}}=p_{\mathtt{g}^{\prime}_{h}}=p_{\mathtt{g}_{j}}=p_{\mathtt{g}^{\prime}_{j}}\). For all \(j\in X\), let \(u_{j},v_{j},w_{j}\) be words such that \(\mathtt{c}_{1}\xrightarrow{u_{j}}\mathtt{g}_{j}\xrightarrow{v_{j}}\mathtt{g}^{\prime}_{j}\xrightarrow{w_{j}}\mathtt{g}^{\prime}_{1}\) as depicted in Figure 2. For all \(j\in X\), let matrices \(\mathbb{A}_{j}\) and \(\mathbb{B}_{j}\) be such that \(\mathbf{x}_{\mathtt{g}^{\prime}_{j}}=\mathbf{x}_{\mathtt{g}_{j}}\mathbb{A}_{j}\) and \(\mathbf{x}_{\mathtt{g}^{\prime}_{1}}=\mathbf{x}_{\mathtt{g}^{\prime}_{j}}\mathbb{B}_{j}\). We know that for all \(j\in X\), \(\mathbf{x}_{\mathtt{g}_{j}}\mathbb{A}_{j}\mathbb{B}_{j}\in\overline{\mathcal{U}}\). Now we list the matrices in the following sequence
\(\mathbb{A}_{i_{r}},\mathbb{A}_{i_{r-1}},\ldots,\mathbb{A}_{i_{1}}\). From Lemma 4, it follows that there exist \(h,j\in X\) with \(h<j\) such that \(\mathbf{x}_{\mathtt{g}_{h}}\mathbb{A}_{j}\mathbb{B}_{h}\in\overline{\mathcal{U}}\).
Consider the sequence of transitions \(T^{\prime}=\tau_{1\ldots l_{h}-1}\tau_{l_{j}\ldots d_{j}-1}\tau_{d_{h}\ldots\ell-1}\). The word \(u_{h}v_{j}w_{h}=\mathtt{word}(T^{\prime})\) is a proper subword of \(z_{1}z_{2}\), and the run \(\pi(u_{h}v_{j}w_{h},\mathtt{c}_{1})=T^{\prime}(\mathtt{c}_{1})\) is a valid floating run shorter than \(\pi(z_{1}z_{2},\mathtt{c}_{1})\) with \(\mathtt{c}_{1}\xrightarrow{u_{h}v_{j}w_{h}}\mathtt{e}^{\prime}\) such that \(\mathtt{e}^{\prime}\in\overline{\mathcal{U}}\times\{p_{\mathtt{e}}\}\times\{n_{\mathtt{e}}\}\). This contradicts the minimality of \(z_{1}z_{2}\).
Now, we prove that for any run (which need not be a floating run) of a minimal reachability witness \(z\) for \((\mathtt{c},\overline{\mathcal{V}},S,\{m\})\), the maximum counter value encountered during the run \(\mathtt{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\{m\}\) is bounded by a polynomial in the number of states of the machine and the initial and final counter values. This is achieved by applying Lemma 15 multiple times on the run of the minimal witness (see Figure 3).
**Lemma 16**.: _If \(z\in\Sigma^{*}\) is a minimal reachability witness for \((\mathtt{c},\overline{\mathcal{V}},S,\{m\})\) then the maximum counter value encountered during the run \(\mathtt{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\{m\}\) is less than \(max(n_{\mathtt{c}},m)+\mathsf{K}^{4}\)._
Proof.: Let \(z\in\Sigma^{*}\) be a minimal reachability witness for \((\mathtt{c},\overline{\mathcal{V}},S,\{m\})\). Consider the run of word \(z\) from \(\mathtt{c}\). Let \(\mathtt{d}\in\overline{\mathcal{V}}\times S\times\{m\}\) such that \(\mathtt{c}\xrightarrow{z}\mathtt{d}\). Assume for contradiction that the maximum counter value encountered during the run \(\mathtt{c}\xrightarrow{z}\mathtt{d}\) is greater than \(max(n_{\mathtt{c}},m)+\mathsf{K}^{4}\). Let \(\mathtt{e}_{1},\mathtt{e}_{2},\cdots,\mathtt{e}_{t}\) be all the configurations in this run such that \(n_{\mathtt{e}_{i}}=0\) for all \(i\in[1,t]\). There exist words \(u_{1},u_{2},\cdots,u_{t+1}\in\Sigma^{*}\) such that \(z=u_{1}u_{2}\cdots u_{t+1}\) and
\[\mathtt{c}\xrightarrow{u_{1}}\mathtt{e}_{1}\xrightarrow{u_{2}}\mathtt{e}_{2 }\xrightarrow{u_{3}}\cdots\xrightarrow{u_{t}}\mathtt{e}_{t}\xrightarrow{u_{t+1 }}\mathtt{d}\]
Note that \(\mathtt{c}\xrightarrow{u_{1}}\mathtt{e}_{1}\), \(\mathtt{e}_{t}\xrightarrow{u_{t+1}}\mathtt{d}\) and \(\mathtt{e}_{i}\xrightarrow{u_{i+1}}\mathtt{e}_{i+1}\) for all \(i\in[1,t-1]\) are floating runs (see Figure 3).
Figure 3. The figure shows a run from configuration \(\mathtt{c}\) to \(\mathtt{d}\) such that \(\mathbf{x}_{\mathtt{d}}\in\overline{\mathcal{V}}\). Configurations \(\mathtt{e}_{1},\ldots,\mathtt{e}_{t}\) denote the configurations where counter value zero is encountered during the run. The dashed lines denote the portions that can be removed to get a shorter reachability witness for \((\mathtt{c},\overline{\mathcal{V}},\{p_{\mathtt{d}}\},\{n_{\mathtt{d}}\})\).
We show that the counter values are bounded during each of these floating runs. First, we consider the floating run \(\mathsf{c}\xrightarrow{u_{1}}\mathsf{e}_{1}\). Let \(\mathbb{A}\in\mathcal{F}^{\mathsf{K}\times\mathsf{K}}\) be such that \(\mathbf{x}_{\mathsf{d}}=\mathbf{x}_{\mathsf{e}_{1}}\mathbb{A}\). From Lemma 5 we know that the set \(\mathcal{U}=\{\mathbf{y}\in\mathcal{F}^{\mathsf{K}}\mid\mathbf{y}\mathbb{A} \in\mathcal{V}\}\) is a vector space and hence the vector \(\mathbf{x}_{\mathsf{e}_{1}}\in\overline{\mathcal{U}}\). From Lemma 12, we know that \(u_{1}\) is a minimal reachability witness for \((\mathsf{c},\overline{\mathcal{U}},\{p_{\mathsf{e}_{1}}\},\{0\})\) and therefore by Lemma 15 we know that the maximum counter value encountered during the run \(\pi(u_{1},\mathsf{c})\) is less than \(n_{\mathsf{c}}+\mathsf{K}^{4}\).
Similarly for the floating run \(\mathsf{e}_{t}\xrightarrow{u_{t+1}}\mathsf{d}\), the maximum counter value is bounded by \(n_{\mathsf{d}}+\mathsf{K}^{4}\). Now consider the floating runs \(\mathsf{e}_{i}\xrightarrow{u_{i+1}}\mathsf{e}_{i+1}\) for all \(i\in[1,t-1]\). Again by applying Lemma 15 we get that the maximum counter value encountered during each of these sub-runs is less than \(\mathsf{K}^{4}\). Therefore, the maximum counter value encountered during the run \(\mathsf{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\{m\}\) is less than \(max(n_{\mathsf{c}},m)+\mathsf{K}^{4}\).
We have shown that the counter values are polynomially bounded during the run of a minimal reachability witness for the co-VS reachability problem. Our next objective is to prove an analogous result for the co-VS coverability problem. The problem is similar to co-VS reachability, except that now we are not given a final counter value. A crucial ingredient in proving this is Lemma 17 where we prove that if the run of a minimal reachability witness \(z\) for \((\mathsf{c},\overline{\mathcal{V}},S,\mathbb{N})\) is a floating run, then the number of distinct counter values encountered during the run \(\mathsf{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\mathbb{N}\) is polynomially bounded in \(\mathsf{K}\) and \(n_{\mathsf{c}}\). Using this and the ideas presented earlier for co-VS reachability, we can prove the existence of a polynomial length witness for the co-VS coverability problem.
**Lemma 17**.: _If \(z\in\Sigma^{*}\) is a minimal reachability witness for \((\mathsf{c},\overline{\mathcal{V}},S,\mathbb{N})\) and \(\mathsf{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\{m\}\) is a floating run then \(|m-n_{\mathsf{c}}|\leq\mathsf{K}^{2}\)._
Proof.: Let \(z\in\Sigma^{*}\) be a minimal reachability witness for \((\mathsf{c},\overline{\mathcal{V}},S,\mathbb{N})\) such that \(\mathsf{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\{m\}\) is a floating run. Assume for contradiction that \(m>n_{\mathsf{c}}+\mathsf{K}^{2}\). The
case where \(n_{\mathsf{c}}>m+\mathsf{K}^{2}\) can be proven analogously. Let \(\mathsf{c}_{1}=\mathsf{c}\) and \(\pi(z,\mathsf{c}_{1})=\mathsf{c}_{1}\tau_{1}\mathsf{c}_{2}\cdots\tau_{\ell-1}\mathsf{c}_{\ell}\) be such that \(m=n_{\mathsf{c}_{\ell}}>n_{\mathsf{c}_{1}}+\mathsf{K}^{2}\). Consider the sequence of transitions \(T=\tau_{1}\tau_{2}\cdots\tau_{\ell-1}\) in \(\pi(z,\mathsf{c}_{1})\). Since there are only \(\mathsf{K}\) counter states, by the Pigeon-hole principle, there exists a strictly increasing sequence of indices \(0<i_{0}<i_{1}<\cdots<i_{\mathsf{K}}\leq\ell\) such that for all \(j,j^{\prime}\in I=\{i_{0},\ldots,i_{\mathsf{K}}\}\) (Condition 1) \(p_{\mathsf{c}_{j}}=p_{\mathsf{c}_{j^{\prime}}}\) and (Condition 2) if \(j<j^{\prime}\) then \(n_{\mathsf{c}_{j}}<n_{\mathsf{c}_{j^{\prime}}}\) and for all \(d\in[j+1,j^{\prime}-1]\), \(n_{\mathsf{c}_{j}}<n_{\mathsf{c}_{d}}<n_{\mathsf{c}_{j^{\prime}}}\).
Consider the set of configurations \(\mathsf{c}_{i_{0}},\mathsf{c}_{i_{1}},\ldots,\mathsf{c}_{i_{\mathsf{K}}}\). For any \(j\in[0,\mathsf{K}]\), let \(\mathbb{A}_{j}\) denote the matrix such that \(\mathbf{x}_{\mathsf{c}_{i_{j}}}\mathbb{A}_{j}=\mathbf{x}_{\mathsf{c}_{\ell}}\). Since \(\mathbf{x}_{\mathsf{c}_{i_{d}}}\mathbb{A}_{d}\in\overline{\mathcal{V}}\) for all \(d\in[0,\mathsf{K}]\), from Lemma 3 we get that there exists \(l,k\in[0,\mathsf{K}]\) with \(l<k\) such that \(\mathbf{x}_{\mathsf{c}_{i_{l}}}\mathbb{A}_{k}\in\overline{\mathcal{V}}\).
Consider a configuration \(\mathsf{e}=(\mathbf{x},p,n)\). If \(\pi(u,\mathsf{e})\) is a valid floating run with \(min_{\mathsf{ce}}(\pi(u,\mathsf{e}))>0\), then for all \(m\in\mathbb{N}\) and \(\mathbf{y}\in\mathcal{F}^{\mathsf{K}}\), \(\pi(u,(\mathbf{y},p,m))\) is a valid run. Consider the sequence of transitions \(T^{\prime}=\tau_{i_{k}\cdots\ell-1}\) and let \(u=\mathtt{word}(T^{\prime})\). Because of Condition 2, \(min_{\mathsf{ce}}(\pi(u,\mathsf{c}_{i_{k}}))>0\). Therefore the run \(T^{\prime\prime}(\mathsf{c}_{1})\) where \(T^{\prime\prime}=\tau_{1\cdots i_{l}-1}\tau_{i_{k}\cdots\ell-1}\) is a valid run shorter than \(\pi(z,\mathsf{c}_{1})\). This contradicts the minimality of \(z\).
Now we show that for any run (need not be floating) of a minimal reachability witness \(z\) for \((\mathsf{c},\overline{\mathcal{V}},S,\mathbb{N})\), the maximum counter value encountered during the run \(\mathsf{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\mathbb{N}\) is bounded by a polynomial in \(\mathsf{K}\) and the initial counter value.
**Lemma 18**.: _If \(z\in\Sigma^{*}\) is a minimal reachability witness for \((\mathsf{c},\overline{\mathcal{V}},S,\mathbb{N})\) then the maximum counter value encountered during the run \(\mathsf{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\mathbb{N}\) is less than \(max(n_{\mathsf{c}},\mathsf{K}^{2})+\mathsf{K}^{4}\)._
Proof.: Let \(z\in\Sigma^{*}\) be a minimal reachability witness for \((\mathsf{c},\overline{\mathcal{V}},S,\mathbb{N})\). Consider the run of word \(z\) from \(\mathsf{c}\). Let \(\mathsf{d}\in\overline{\mathcal{V}}\times S\times\mathbb{N}\) such that \(\mathsf{c}\xrightarrow{z}\mathsf{d}\). If \(\mathsf{c}\xrightarrow{z}\mathsf{d}\) is a floating run, then by Lemma 17 the maximum counter value encountered during this run will be less than \(n_{\mathsf{c}}+\mathsf{K}^{2}\). Now if \(\mathsf{c}\xrightarrow{z}\mathsf{d}\) is not a floating run, then there exist \(u_{1},u_{2}\in\Sigma^{*}\) such that \(z=u_{1}u_{2}\) and \(\mathsf{c}\xrightarrow{u_{1}}\mathsf{e}\xrightarrow{u_{2}}\mathsf{d}\) where \(n_{\mathsf{e}}=0\) and \(\mathsf{e}\xrightarrow{u_{2}}\mathsf{d}\) is a floating run.
Let \(\mathbb{A}\in\mathcal{F}^{\mathsf{K}\times\mathsf{K}}\) be such that \(\mathbf{x}_{\mathsf{d}}=\mathbf{x}_{\mathsf{e}}\mathbb{A}\). From Lemma 5, we know that the set \(\mathcal{U}=\{\mathbf{y}\in\mathcal{F}^{\mathsf{K}}\mid\mathbf{y}\mathbb{A}\in\mathcal{V}\}\) is a vector space and hence the vector \(\mathbf{x}_{\mathsf{e}}\in\overline{\mathcal{U}}\). Note that for all \(\mathbf{y}\in\overline{\mathcal{U}}\), the vector \(\mathbf{y}\mathbb{A}\in\overline{\mathcal{V}}\). From Lemma 12, we know that \(u_{1}\) is a minimal reachability witness for \((\mathsf{c},\overline{\mathcal{U}},\{p_{\mathsf{e}}\},\{0\})\) and therefore by Lemma 16, we know that the maximum counter value encountered during the run \(\pi(u_{1},\mathsf{c})\) is less than \(n_{\mathsf{c}}+\mathsf{K}^{4}\). Now since \(\mathsf{e}\xrightarrow{u_{2}}\mathsf{d}\) is a floating run and \(u_{2}\) is the minimal such word, from Lemma 17, we get that \(n_{\mathsf{d}}\leq\mathsf{K}^{2}\), and by Lemma 15, we know that the maximum counter value encountered during this run is less than \(\mathsf{K}^{2}+\mathsf{K}^{4}\). Therefore, we get that the maximum counter value encountered during the run \(\mathsf{c}\xrightarrow{z}\mathsf{d}\) is less than \(max(n_{\mathsf{c}},\mathsf{K}^{2})+\mathsf{K}^{4}\).
Proof of Theorem 13.: For solving the co-VS reachability problem when the weighted odca \(\mathcal{A}=(C,\delta,p_{0};\ Q,\boldsymbol{\lambda},\Delta,\boldsymbol{ \eta})\) with \(\mathsf{K}=|Q|=|C|\) states, initial configuration \(\mathsf{c}\), vector space \(\mathcal{V}\), set of counter states \(S\) and counter value \(m\) are given as inputs, we first consider the \(max(n_{\mathsf{c}},m)+\mathsf{K}^{4}\)-unfolding weighted automata \(\mathcal{A}^{max(n_{\mathsf{c}},m)+\mathsf{K}^{4}}=(C^{\prime},\delta^{\prime},p _{0}^{\prime};\ Q^{\prime},\boldsymbol{\lambda}^{\prime},\boldsymbol{\mu}^{ \prime}_{F})\) of \(\mathcal{A}\) as described in Definition 7. From Lemma 16, we know that the maximum counter value encountered during the run of the minimal reachability witness \(z\) for \((\mathsf{c},\overline{\mathcal{V}},S,\{m\})\) is less than \(max(n_{\mathsf{c}},m)+\mathsf{K}^{4}\). We define a vector space \(\mathcal{U}\subseteq\mathcal{F}^{|Q^{\prime}|}\) as follows: A vector \(\mathbf{x}\in\mathcal{F}^{|Q^{\prime}|}\) is in \(\mathcal{U}\) if there
exists \(\mathbf{y}\in\mathcal{V}\) such that for all \(i\in[0,\mathsf{K}-1]\), \(\mathbf{x}[\mathsf{K}\cdot m+i]=\mathbf{y}[i]\) and for all \(n\neq m\) and \(i\in[0,\mathsf{K}-1]\), \(\mathbf{x}[\mathsf{K}\cdot n+i]=0\).
Given a configuration \(\mathsf{c}\) of a weighted odca, we define the vector \(\mathbf{z}_{\mathsf{c}}\in\mathcal{F}^{|Q^{\prime}|}\).
\[\mathbf{z}_{\mathsf{c}}[i]=\begin{cases}\mathbf{x}_{\mathsf{c}}[i\bmod\mathsf{K}],&\text{ if }\lfloor i/\mathsf{K}\rfloor=n_{\mathsf{c}}\\ 0,&\text{ otherwise}\end{cases}\]
Now, consider the configuration \(\bar{\mathsf{c}}=(\mathbf{z}_{\mathsf{c}},(p_{\mathsf{c}},n_{\mathsf{c}}))\) of \(\mathcal{A}^{max(n_{\mathsf{c}},m)+\mathsf{K}^{4}}\) and check whether \(\bar{\mathsf{c}}\xrightarrow{*}\overline{\mathcal{U}}\times S\times\{0\}\). This is a co-VS reachability problem of weighted automata. Using Theorem 11, this can be solved in polynomial time.
For solving the co-VS coverability problem when the weighted odca\(\mathcal{A}\) with \(\mathsf{K}\) states, an initial configuration \(\mathsf{c}\), a vector space \(\mathcal{V}\) and a set of counter states \(S\) are given as inputs, we consider the \(max(n_{\mathsf{c}},\mathsf{K}^{2})+\mathsf{K}^{4}\)-unfolding weighted automata \(\mathcal{A}^{max(n_{\mathsf{c}},\mathsf{K}^{2})+\mathsf{K}^{4}}=(C^{\prime}, \delta^{\prime},p_{0}^{\prime};\ Q^{\prime},\boldsymbol{\lambda}^{\prime}, \boldsymbol{\mu}^{\prime},\boldsymbol{\eta}^{\prime}_{F})\) of \(\mathcal{A}\). From Lemma 18, we know that the maximum counter value encountered during the run of a minimal reachability witness \(z\) for \((\mathsf{c},\overline{\mathcal{V}},S,\mathbb{N})\) is less than \(max(n_{\mathsf{c}},\mathsf{K}^{2})+\mathsf{K}^{4}\). We define a vector space \(\mathcal{U}\subseteq\mathcal{F}^{|Q^{\prime}|}\) as follows: A vector \(\mathbf{x}\in\mathcal{F}^{|Q^{\prime}|}\) is in \(\mathcal{U}\) if there exists \(\mathbf{y}\in\mathcal{V}\) and \(m\in\mathbb{N}\) such that for all \(i\in[0,\mathsf{K}-1]\), \(\mathbf{x}[\mathsf{K}\cdot m+i]=\mathbf{y}[i]\) and for all \(n\neq m\) and \(i\in[0,\mathsf{K}-1]\), \(\mathbf{x}[\mathsf{K}\cdot n+i]=0\). Now, consider the configuration \(\bar{\mathsf{c}}=(\mathbf{z}_{\mathsf{c}},(p_{\mathsf{c}},n_{\mathsf{c}}))\) of \(\mathcal{A}^{max(n_{\mathsf{c}},\mathsf{K}^{2})+\mathsf{K}^{4}}\) and check whether \(\bar{\mathsf{c}}\xrightarrow{*}\overline{\mathcal{U}}\times S\times\{0\}\). This is a co-VS reachability problem of a weighted automaton. From Theorem 11, we know that this can be solved in polynomial time.
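The reduction above embeds an odca configuration into the unfolded weighted automaton. The following sketch shows this embedding over the rationals; the function names, the dense representation, and the convention that the unfolding carries blocks \(0,\ldots,N\) are our own illustrative choices.

```python
from fractions import Fraction

def embed_config(x_c, n_c, K, N):
    """Embed the weight vector x_c of a configuration with counter value n_c
    into the K*(N+1)-dimensional vector z_c of the N-unfolding: block n_c
    carries x_c and every other block is zero."""
    assert len(x_c) == K and 0 <= n_c <= N
    z = [Fraction(0)] * (K * (N + 1))
    for i in range(K):
        z[K * n_c + i] = Fraction(x_c[i])
    return z

def lift_space(V_basis, m, K, N):
    """Lift a basis of the vector space V (a subspace of F^K) to a basis of
    the space U used above: every basis vector is placed in block m."""
    return [embed_config(y, m, K, N) for y in V_basis]
```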
### Binary reachability in NP
Consider the case where the counter values are specified in binary. Theorem 13 can still be applied to get an algorithm whose running time is polynomial in the input counter values. Since the counter values are represented in binary, their values can be exponentially large compared to their size. Therefore, we only get an exponential time algorithm for reachability from Theorem 13. This section shows that co-VS reachability can be tested in NP. The technically challenging part of the proof is proved in Lemma 22. It shows that the "lexicographically minimal" reachability witness \(z\) is of the form \(uy_{1}^{r_{1}}vy_{2}^{r_{2}}w\), where the lengths of the words \(u,y_{1},y_{2},v\) and \(w\) are polynomially bounded in \(\mathsf{K}\) and \(r_{1},r_{2}\) are values polynomially bounded in \(\mathsf{K}\) and the input counter values. This is a polynomial-sized representation of the witness (\(r_{1},r_{2}\) in binary) whose reachability can be verified in polynomial time. A non-deterministic machine guesses the words \(u,y_{1},y_{2},v\), and \(w\) and verifies reachability in polynomial time.
**Theorem 19**.: _Binary co-VS reachability and co-VS coverability problems are in NP._
We aim to show that there is an "encoding" of a minimal reachability witness of polynomial size with respect to the input size. The following lemma shows that the length of a minimal reachability witness is bounded by a polynomial in the input counter values. Note that this can be exponential in size with respect to the input size when the counter values are represented in binary.
**Lemma 20**.:
1. _If_ \(z\) _is a minimal reachability witness for_ \((\mathsf{c},\overline{\mathcal{V}},S,\{m\})\) _then_ \(|z|\leq\mathsf{K}^{2}\cdot(max(n_{\mathsf{c}},m)+\mathsf{K}^{4})\)_._
2. _If_ \(z\) _is a minimal reachability witness for_ \((\mathsf{c},\overline{\mathcal{V}},S,\mathbb{N})\) _then_ \(|z|\leq\mathsf{K}^{2}\cdot(max(n_{\mathsf{c}},\mathsf{K}^{2})+\mathsf{K}^{4})\)_._
Proof.: **1.** Let \(z\in\Sigma^{*}\) be a minimal reachability witness for \((\mathtt{c},\overline{\mathcal{V}},S,\{m\})\). From Lemma 16, we know that the maximum counter value encountered during the run \(\mathtt{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\{m\}\) is less than \(max(n_{\mathtt{c}},m)+\mathsf{K}^{4}\). Therefore, there are at most \(max(n_{\mathtt{c}},m)+\mathsf{K}^{4}\) many distinct counter values encountered during this run. Now from Lemma 14 we get that \(|z|\leq\mathsf{K}^{2}\cdot(max(n_{\mathtt{c}},m)+\mathsf{K}^{4})\).
**2.** Let \(z\in\Sigma^{*}\) be a minimal reachability witness for \((\mathtt{c},\overline{\mathcal{V}},S,\mathbb{N})\). From Lemma 18, we know that the maximum counter value encountered during the run \(\mathtt{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\mathbb{N}\) is less than \(max(n_{\mathtt{c}},\mathsf{K}^{2})+\mathsf{K}^{4}\). Therefore, there are at most \(max(n_{\mathtt{c}},\mathsf{K}^{2})+\mathsf{K}^{4}\) many distinct counter values encountered during this run. Now from Lemma 14 we get that \(|z|\leq\mathsf{K}^{2}\cdot(max(n_{\mathtt{c}},\mathsf{K}^{2})+\mathsf{K}^{4})\).
We define the counter effect of a word \(w\) with respect to a counter state \(q\in C\) as \(\mathtt{ce}(\pi(w,\mathtt{c}))\) where \(\mathtt{c}\) is any configuration with \(n_{\mathtt{c}}=|w|\) and \(p_{\mathtt{c}}=q\). Note that for any two configurations \(\mathtt{c},\mathtt{d}\), \(\mathtt{ce}(\pi(w,\mathtt{c}))=\mathtt{ce}(\pi(w,\mathtt{d}))\) if \(n_{\mathtt{c}}=n_{\mathtt{d}}=|w|\) and \(p_{\mathtt{c}}=p_{\mathtt{d}}\). First, we consider the case of the run of a minimal reachability witness from \(\mathtt{c}\), which is a floating run. The following lemma is required for the special case where \(|n_{\mathtt{c}}-m|\) is bounded by a polynomial in \(\mathsf{K}\).
**Lemma 21**.: _If \(z\in\Sigma^{*}\) is a minimal reachability witness for \((\mathtt{c},\overline{\mathcal{V}},S,\{m\})\) and \(\mathtt{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\{m\}\) is a floating run, then_
1. _the minimum counter value during this run is greater than_ \(min(n_{\mathtt{c}},m)-\mathsf{K}^{4}\)_, and_
2. \(|z|\leq\mathsf{K}^{2}\cdot(|n_{\mathtt{c}}-m|+2\mathsf{K}^{4})\)_._
Proof.: Let \(z\in\Sigma^{*}\) be a minimal reachability witness for \((\mathtt{c},\overline{\mathcal{V}},S,\{m\})\) such that \(\mathtt{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\{m\}\) is a floating run. We prove the claims in the lemma one by one.
**1.**_This case is symmetric to that of Lemma 15 and can be proven analogously._
**2.** From Lemma 15 and Point 1, we get that the counter values encountered during the run \(\mathtt{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\{m\}\) lie between \(min(n_{\mathtt{c}},m)-\mathsf{K}^{4}\) and \(max(n_{\mathtt{c}},m)+\mathsf{K}^{4}\). Let \(t=|n_{\mathtt{c}}-m|\). There are at most \(t+2\cdot\mathsf{K}^{4}\) distinct counter values during this run. Now from Lemma 14 we get that \(|z|\leq\mathsf{K}^{2}\cdot(t+2\cdot\mathsf{K}^{4})\).
We assume a total order on the symbols in \(\Sigma\). Given two words \(u,v\in\Sigma^{*}\), we say that \(u\) precedes \(v\) in the _lexicographical ordering_ if \(|u|<|v|\) or if \(|u|=|v|\) and there exists an \(i\in[0,|u|-1]\) such that \(u[0,i-1]=v[0,i-1]\) and \(u[i]\) precedes \(v[i]\) in the total ordering assumed on \(\Sigma\). A word \(z\in\Sigma^{*}\) is called the lexicographically minimal reachability witness for \((\mathtt{c},\overline{\mathcal{V}},S,\{m\})\), if \(\mathtt{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\{m\}\) and for all \(u\in\Sigma^{*}\) with \(\mathtt{c}\xrightarrow{u}\overline{\mathcal{V}}\times S\times\{m\}\), \(z\) precedes \(u\) in the lexicographical ordering. We show that the lexicographically minimal reachability witness \(z\) for \((\mathtt{c},\overline{\mathcal{V}},S,\{m\})\) has a special form. First, we consider the case of floating runs.
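A minimal sketch of this ordering, assuming the Python ordering on symbols coincides with the total order fixed on \(\Sigma\):

```python
def precedes(u, v):
    """Return True iff u strictly precedes v in the length-lexicographic
    ordering used here: shorter words come first, and words of equal length
    are compared symbol by symbol at the first position where they differ."""
    if len(u) != len(v):
        return len(u) < len(v)
    for a, b in zip(u, v):
        if a != b:
            return a < b
    return False  # u equals v
```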
**Lemma 22**.: _If \(z\in\Sigma^{*}\) is the lexicographically minimal reachability witness for \((\mathtt{c},\overline{\mathcal{V}},S,\{m\})\) and \(\mathtt{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\{m\}\) is a floating run, then there exists \(u,y,w\in\Sigma^{*}\) and \(r\in\mathbb{N}\) such that \(z=uy^{r}w\) and for all configurations \(\mathtt{d}_{k}\), \(k\in[0,r]\), where \(\mathtt{c}\xrightarrow{uy^{k}}\mathtt{d}_{k}\) the following conditions hold:_
1. \(|u|,|y|\leq 3\mathsf{K}^{7}\) _and_ \(|w|<6\mathsf{K}^{7}\)_,_
2. _either_ \(n_{\mathtt{d}_{i}}>n_{\mathtt{d}_{j}}\) _for all_ \(i,j\) _such that_ \(0\leq i<j\leq r\) _or_ \(n_{\mathtt{d}_{i}}<n_{\mathtt{d}_{j}}\) _for all_ \(i,j\) _such that_ \(0\leq i<j\leq r\)_,_
3. _for all_ \(i,j\in[0,r]\)_,_ \(p_{\mathtt{d}_{i}}=p_{\mathtt{d}_{j}}\)_, and_
4. \(r\in[0,\mathsf{K}^{2}\cdot|n_{\mathsf{c}}-m|+\mathsf{K}^{6}]\).
Proof.: Let \(z\) be the lexicographically minimal reachability witness for \((\mathsf{c},\overline{\mathcal{V}},S,\{m\})\) such that \(\mathsf{c}\xrightarrow{z}\overline{\mathcal{V}}\times S\times\{m\}\) is a floating run. Let \(t=|n_{\mathsf{c}}-m|\). If \(t\leq\mathsf{K}^{4}\), then from Lemma 21, Item 2, we get that \(|z|\leq 3\mathsf{K}^{6}\) and the claim is trivially true. Consider the case where \(n_{\mathsf{c}}>m\). The case where \(m>n_{\mathsf{c}}\) can be proven analogously. Let us assume \(t>\mathsf{K}^{4}\) and let \(d\in\mathbb{Z}\) be such that \(d=-t+\mathsf{K}^{4}+1\).
Let \(\mathsf{c}=\mathsf{c}_{1}\) and \(T(\mathsf{c}_{1})=\mathsf{c}_{1}\tau_{1}\mathsf{c}_{2}\cdots\tau_{\ell-1} \mathsf{c}_{\ell}\) denote the run on word \(z\) from the configuration \(\mathsf{c}_{1}\). For any \(i\in[0,\mathsf{K}^{4}]\), we denote by \(l_{i}\) the index such that the counter value \(n_{\mathsf{c}_{1}}-i\) is encountered for the first time and \(r_{i}\) the index such that the counter value \(n_{\mathsf{c}_{1}}-i+d\) is encountered for the last time in \(T(\mathsf{c}_{1})\) (see Figure 5). Since \(t>\mathsf{K}^{4}\), there are at least \(\mathsf{K}^{4}+1\) pairs of positions \((l_{i},r_{i}),i\in[0,\mathsf{K}^{4}]\) such that for all \(i\in[0,\mathsf{K}^{4}]\) the factor \(z[l_{i},r_{i}]\) has counter effect \(d\) with respect to counter state \(p_{c_{l_{i}}}\). Note that these factors need not be all distinct. Let \(X=\{(l_{i},r_{i})\}_{i\in[0,\mathsf{K}^{4}]}\) be the set containing these pairs of positions and \(W=\{z[l,r]\mid(l,r)\in X\}\) be the set containing the corresponding factors. Note that \(|X|>\mathsf{K}^{4}\).
**Claim 1**.: \(|W|\leq\mathsf{K}^{4}\)_._
Proof: Assume for contradiction that \(|W|>\mathsf{K}^{4}\). Let \(\mathsf{g}\in\overline{\mathcal{V}}\times S\times\{m\}\) be such that \(\mathsf{c}\xrightarrow{z}\mathsf{g}\). Since the number of counter states is \(\mathsf{K}\), by the Pigeon-hole principle there exists \(Y\subseteq X\) with \(|Y|=\mathsf{K}^{2}+1\) such that for all \((l,r),(l^{\prime},r^{\prime})\in Y\), \(p_{\mathsf{c}_{l}}=p_{\mathsf{c}_{l^{\prime}}}\), \(p_{\mathsf{c}_{r}}=p_{\mathsf{c}_{r^{\prime}}}\), and \(z[l,r]\neq z[l^{\prime},r^{\prime}]\). We say \((l,r)<(l^{\prime},r^{\prime})\) if \(z[l,r]\) precedes \(z[l^{\prime},r^{\prime}]\) in the lexicographical order. Therefore, the elements in \(Y\) have an ordering as follows: \((l_{0},r_{0})<(l_{1},r_{1})<\cdots<(l_{\mathsf{K}^{2}},r_{\mathsf{K}^{2}})\). For all \(i\in[0,\mathsf{K}^{2}]\), let \(u_{i}=z[1,l_{i}],x_{i}=z[l_{i},r_{i}],w_{i}=z[r_{i},\ell]\), configurations \(\mathsf{e}_{i},\mathsf{f}_{i}\) be such that \(\mathsf{c}\xrightarrow{u_{i}}\mathsf{e}_{i}\xrightarrow{x_{i}}\mathsf{f}_{i}\xrightarrow{w_{i}}\mathsf{g}\) and matrices \(\mathbb{A}_{i},\mathbb{M}_{i},\mathbb{B}_{i}\) be such that \(\mathbf{x}_{\mathsf{e}_{i}}=\mathbf{x}_{\mathsf{c}}\mathbb{A}_{i}\), \(\mathbf{x}_{\mathsf{f}_{i}}=\mathbf{x}_{\mathsf{e}_{i}}\mathbb{M}_{i}\), \(\mathbf{x}_{\mathsf{g}}=\mathbf{x}_{\mathsf{f}_{i}}\mathbb{B}_{i}\).
We know that for all \(i\in[0,\mathsf{K}^{2}]\), \(\mathbf{x}_{\mathsf{c}}\mathbb{A}_{i}\mathbb{M}_{i}\mathbb{B}_{i}\in\overline{\mathcal{V}}\). Consider the sequence of matrices \(\mathbb{M}_{0},\mathbb{M}_{1},\cdots,\mathbb{M}_{\mathsf{K}^{2}}\). From Lemma 4 we know that there exist \(i,j\in[0,\mathsf{K}^{2}]\) with \(j<i\) such that \(\mathbf{x}_{\mathsf{c}}\mathbb{A}_{i}\mathbb{M}_{j}\mathbb{B}_{i}\in\overline{\mathcal{V}}\). Note that the word \(u_{i}x_{j}w_{i}\) precedes \(z\) in the lexicographical ordering. Therefore the run \(\mathsf{c}\xrightarrow{u_{i}x_{j}w_{i}}\overline{\mathcal{V}}\times S\times\{m\}\) contradicts the minimality of \(z\). \(\square_{Claim:1}\)
Since \(|W|\leq\mathsf{K}^{4}\) and \(|X|>\mathsf{K}^{4}\), there exist \(i,j\in[0,\mathsf{K}^{4}]\) with \(i<j\) and \(x\in\Sigma^{*}\) such that \((l_{i},r_{i})\in X,(l_{j},r_{j})\in X\) and \(x=z[l_{i},r_{i}]=z[l_{j},r_{j}]\). Let \(u_{1},w_{1},u_{2},w_{2}\in\Sigma^{*}\) be such that \(z=u_{1}xw_{1}=u_{2}xw_{2}\). Since \(u_{1}\neq u_{2}\), either \(u_{1}\) is a prefix of \(u_{2}\) or \(u_{2}\) is a prefix of \(u_{1}\). Without loss of generality, let us assume \(u_{1}\) is a prefix of \(u_{2}\). Therefore, there exists \(v\in\Sigma^{*}\) such that \(u_{2}=u_{1}v\). Let \(\mathsf{e}\) be a configuration such that \(\mathsf{c}\xrightarrow{u_{1}}\mathsf{e}\).
**Claim 2**.: \(|u_{1}|,|v|,|w_{1}|\leq 3\mathsf{K}^{6}\)_._
Proof: Consider the set \(X\). For any \(i,j\in[0,\mathsf{K}^{4}]\) with \(i<j\), \(n_{\mathsf{c}_{l_{i}}}-n_{\mathsf{c}_{l_{j}}}\leq\mathsf{K}^{4}+1\) and \(n_{\mathsf{c}_{r_{j}}}-n_{\mathsf{c}_{r_{i}}}\leq\mathsf{K}^{4}+1\). Therefore the counter-effect of \(u_{2}\) and \(w_{2}\) can be at most \(\mathsf{K}^{4}\). So the counter-effect of \(v\) with respect to counter state \(p_{\mathsf{e}}\) can be at most \(\mathsf{K}^{4}\). Since it is a minimal floating run, from Lemma 21 we get that \(|v|\leq 3\mathsf{K}^{6}\). By similar arguments, the counter-effect of \(u_{1}\) and \(w_{1}\) can be at most \(\mathsf{K}^{4}\), and again by Lemma 21, we get that their lengths are at most \(3\mathsf{K}^{6}\). \(\square_{Claim:2}\)
**Claim 3**.: _There exists \(v^{\prime}\in\Sigma^{*}\) and \(r\in[0,\mathsf{K}^{2}\cdot|n_{\mathsf{c}}-m|+\mathsf{K}^{6}]\) such that \(x=v^{r}v^{\prime}\) with \(|v^{\prime}|\leq|v|\)._
Proof: Let \(r\in\mathbb{N}\) be the largest number such that \(x\) is of the form \(v^{r}v^{\prime}\) for some \(v^{\prime}\in\Sigma^{*}\) (see Figure 6). We know that \(z=u_{2}xw_{2}\) and \(u_{2}=u_{1}v\). Therefore, \(z=u_{1}vxw_{2}=u_{1}vv^{r}v^{\prime}w_{2}=u_{1}v^{r}vv^{\prime}w_{2}\). Furthermore, \(z=u_{1}xw_{1}=u_{1}v^{r}v^{\prime}w_{1}\). Now since \(u_{1}v^{r}vv^{\prime}w_{2}=u_{1}v^{r}v^{\prime}w_{1}\), we get that \(vv^{\prime}w_{2}=v^{\prime}w_{1}\). Hence, if \(|v^{\prime}|\geq|v|\), then \(v\) is a prefix of \(v^{\prime}\). This is a contradiction since \(r\) was chosen to be the largest number such that \(x\) is of the form \(v^{r}v^{\prime}\).
In order to show the bound on the value \(r\), we observe the following. We know that the counter effect of the run \(\pi(x,\mathsf{e})\) is \(d\). Therefore from Lemma 21 Point 2, we get that \(|x|\leq\mathsf{K}^{2}\cdot(|d|+2\mathsf{K}^{4})\). Therefore, the value of \(r\) is less than or equal to \(\mathsf{K}^{2}\cdot(|d|+2\mathsf{K}^{4})\). \(\square_{Claim:3}\)
From Claim 3 and Claim 2, we get that \(|u_{1}v^{\prime}w_{1}|\leq 9\mathsf{K}^{6}\) and \(z=u_{1}v^{r}v^{\prime}w_{1}\) for some \(r\in[0,\mathsf{K}^{2}\cdot(|d|+2\mathsf{K}^{4})]\). Note that the factor \(v\) might start and end in different counter states during the run and therefore need not always have a negative counter effect. However, we also know that the word \(v^{r}\) has a negative counter effect. For \(i\in[1,2\mathsf{K}]\), let \(\mathsf{g}_{i}\) be the configuration such that \(\mathsf{e}\xrightarrow{v^{i}}\mathsf{g}_{i}\). By the Pigeon-hole principle there exist \(j,k\in[1,2\mathsf{K}]\) with \(j<\mathsf{K}\) and \(k-j\leq\mathsf{K}\) such that \(p_{\mathtt{g}_{j}}=p_{\mathtt{g}_{k}}\). Also, note that the word \(y=v^{k-j}\) has a negative counter-effect from the counter state \(p_{\mathtt{g}_{j}}\). Let \(r^{\prime}=\lfloor\frac{r-j}{k-j}\rfloor\) and \(j^{\prime}=(r-j)\pmod{(k-j)}\). Now consider the word \(z=u_{1}v^{j}y^{r^{\prime}}v^{j^{\prime}}v^{\prime}w_{1}\). Since \(|u_{1}|,|w_{1}|,|v|\leq 3\mathsf{K}^{6}\), \(j<\mathsf{K}\) and \(k-j\leq\mathsf{K}\), we get that \(|u_{1}v^{j}|\leq 3\mathsf{K}^{7}\), \(|v^{j^{\prime}}v^{\prime}w_{1}|<6\mathsf{K}^{7}\), \(|y|\leq 3\mathsf{K}^{7}\) and \(r^{\prime}\in[0,\mathsf{K}^{2}\cdot(|d|+2\mathsf{K}^{4})]\).
Figure 6. The figure shows the factorisation of a word \(z=u_{1}xw_{1}=u_{2}xw_{2}\), where \(x\) is an overlapping factor. The factor \(v\) is a prefix of \(x\) such that \(u_{2}=u_{1}v\). The word \(z\) can be written as \(u_{1}v^{i}v^{\prime}w_{2}\) for some \(i\in\mathbb{N}\) and \(v^{\prime}\) a prefix of \(v\).
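The regrouping at the end of this proof is purely arithmetic: writing \(r-j=r^{\prime}(k-j)+j^{\prime}\) turns \(v^{r}\) into \(v^{j}(v^{k-j})^{r^{\prime}}v^{j^{\prime}}\). The sketch below, with illustrative values, only checks this word identity; the counter and vector conditions are handled in the proof itself.

```python
def regroup(v, r, j, k):
    """Rewrite v^r as v^j * (v^(k-j))^r' * v^j' where
    r' = (r - j) // (k - j) and j' = (r - j) % (k - j)."""
    assert 0 <= j < k and j <= r
    y = v * (k - j)
    r_prime, j_prime = divmod(r - j, k - j)
    return v * j + y * r_prime + v * j_prime

v, r, j, k = "ab", 17, 2, 5
assert regroup(v, r, j, k) == v * r  # both factorisations spell the same word
```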
**Lemma 23**.: _If \(z\in\Sigma^{*}\) is the lexicographically minimal reachability witness for \((\mathtt{c},\overline{\mathcal{V}},S,\{m\})\), then there exists \(u,y_{1},v_{1},v_{2},v_{3},y_{2},w\in\Sigma^{*}\) and \(r_{1},r_{2}\in\mathbb{N}\) such that_
1. \(z=uy_{1}^{r_{1}}v_{1}v_{2}v_{3}y_{2}^{r_{2}}w\)_,_
2. \(|uy_{1}v_{1}v_{2}v_{3}y_{2}w|\leq 25\mathsf{K}^{7}\)_,_
3. \(r_{1},r_{2}\leq max\{m,n_{\mathtt{c}}\}\cdot\mathsf{K}^{2}+\mathsf{K}^{6}\)_,_
4. \(\pi(uy_{1}^{r_{1}}v_{1},\mathtt{c})\) _and_ \(\pi(v_{3}y_{2}^{r_{2}}w,\mathtt{d})\) _are floating runs for the configuration_ \(\mathtt{d}\) _where_ \(\mathtt{c}\xrightarrow{uy_{1}^{r_{1}}v_{1}v_{2}}\mathtt{d}\)_, and_
5. \(\mathtt{ce}(\pi(uy_{1}^{r_{1}}v_{1},\mathtt{c}))=\mathtt{ce}(\pi(uy_{1}^{r_{1} }v_{1}v_{2},\mathtt{c}))=-n_{\mathtt{c}}\)_._
Proof.: Let \(z\in\Sigma^{*}\) be the lexicographically minimal reachability witness for \((\mathtt{c},\overline{\mathcal{V}},S,\{m\})\). Consider the run of word \(z\) from \(\mathtt{c}\). Let \(\mathtt{d}\in\overline{\mathcal{V}}\times S\times\{m\}\) such that \(\mathtt{c}\xrightarrow{z}\mathtt{d}\). Let \(\mathtt{c}=\mathtt{c}_{1}\) and \(T(\mathtt{c}_{1})=\mathtt{c}_{1}\tau_{1}\mathtt{c}_{2}\cdots\tau_{\ell-1} \mathtt{c}_{\ell}\) denote the run on word \(z\) from the configuration \(\mathtt{c}_{1}\) and \(T\) the corresponding sequence of transitions. Let \(\mathtt{e}_{1}\) be the first configuration with counter value zero and \(\mathtt{e}_{2}\) be the last configuration with counter value zero during this run. Let \(z_{1},z_{2},z_{3}\in\Sigma^{*}\) be such that \(\mathtt{c}\xrightarrow{z_{1}}\mathtt{e}_{1}\xrightarrow{z_{2}}\mathtt{e}_{2} \xrightarrow{z_{3}}\mathtt{c}_{\ell}\) and \(z=z_{1}z_{2}z_{3}\). Observe that \(\mathtt{c}\xrightarrow{z_{1}}\mathtt{e}_{1}\) and \(\mathtt{e}_{2}\xrightarrow{z_{3}}\mathtt{c}_{\ell}\) are floating runs.
From Lemma 22, we know that there exists \(u_{1},u_{3},v_{1},v_{3},y_{1},y_{3}\in\Sigma^{*}\) and \(r_{1},r_{3}\in\mathbb{N}\) such that \(z_{1}=u_{1}y_{1}^{r_{1}}v_{1}\), \(z_{3}=u_{3}y_{3}^{r_{3}}v_{3}\), \(|u_{1}|,|u_{3}|\leq 3\mathsf{K}^{7}\), \(|v_{1}|,|v_{3}|\leq 6\mathsf{K}^{7}\), \(|y_{1}|,|y_{3}|\leq 3\mathsf{K}^{7}\), \(r_{1}\in[0,n_{\mathtt{c}}\cdot\mathsf{K}^{2}+\mathsf{K}^{6}]\) and \(r_{3}\in[0,m\cdot\mathsf{K}^{2}+\mathsf{K}^{6}]\). Also, from Lemma 20 we get that \(|z_{2}|\leq\mathsf{K}^{6}\). Taking \(u=u_{1}\), \(v_{2}=z_{2}\), \(v_{3}=u_{3}\), \(y_{2}=y_{3}\), \(r_{2}=r_{3}\) and \(w\) equal to the last factor of \(z_{3}\) gives the decomposition \(z=uy_{1}^{r_{1}}v_{1}v_{2}v_{3}y_{2}^{r_{2}}w\) with \(|uy_{1}v_{1}v_{2}v_{3}y_{2}w|\leq 24\mathsf{K}^{7}+\mathsf{K}^{6}\leq 25\mathsf{K}^{7}\). Items 4 and 5 hold because \(\mathtt{d}=\mathtt{e}_{2}\), the runs \(\pi(z_{1},\mathtt{c})\) and \(\pi(z_{3},\mathtt{e}_{2})\) are floating, and the configurations \(\mathtt{e}_{1}\) and \(\mathtt{e}_{2}\) have counter value zero.
We now prove that the binary co-VS reachability and co-VS coverability problems are in \(\mathsf{NP}\). From Lemma 23 we observe that there is a polynomial-size encoding of the lexicographically minimal word (where \(r_{1}\) and \(r_{2}\) are in binary). A non-deterministic machine can guess this encoding and verify the reachability in polynomial time, since \(\mathbb{M}^{r}\) can be computed in time polynomial in \(\log(r)\) (see Lemma 2). A detailed proof is given below.
Proof of Theorem 19.: Let us first look at the binary co-VS reachability problem. Let \(z\in\Sigma^{*}\) be the lexicographically minimal reachability witness for \((\mathtt{c},\overline{\mathcal{V}},S,\{m\})\). Consider the run of word \(z\) from \(\mathtt{c}\). Let \(\mathtt{d}\in\overline{\mathcal{V}}\times S\times\{m\}\) such that \(\mathtt{c}\xrightarrow{z}\mathtt{d}\). Let \(\mathtt{c}=\mathtt{c}_{1}\) and \(T(\mathtt{c}_{1})=\mathtt{c}_{1}\tau_{1}\mathtt{c}_{2}\cdots\tau_{\ell-1} \mathtt{c}_{\ell}\) denote the run on word \(z\) from the configuration \(\mathtt{c}_{1}\) and \(T\) the corresponding sequence of transitions. Let \(\mathtt{e}_{1}\) be the first configuration with counter value zero and \(\mathtt{e}_{2}\) be the last configuration with counter value zero during this run. Let \(z_{1},z_{2},z_{3}\in\Sigma^{*}\) be such that \(\mathtt{c}\xrightarrow{z_{1}}\mathtt{e}_{1}\xrightarrow{z_{2}}\mathtt{e}_{2} \xrightarrow{z_{3}}\mathtt{c}_{\ell}\) and \(z=z_{1}z_{2}z_{3}\). Observe that \(\mathtt{c}\xrightarrow{z_{1}}\mathtt{e}_{1}\) and \(\mathtt{e}_{2}\xrightarrow{z_{3}}\mathtt{c}_{\ell}\) are floating runs.
From Lemma 22, we know that there exists \(u_{1},u_{3},v_{1},v_{3},y_{1},y_{3}\in\Sigma^{*}\) and \(r_{1},r_{3}\in\mathbb{N}\) such that \(z_{1}=u_{1}y_{1}^{r_{1}}v_{1}\), \(z_{3}=u_{3}y_{3}^{r_{3}}v_{3}\), \(|u_{1}|,|u_{3}|\leq 3\mathsf{K}^{7}\), \(|v_{1}|,|v_{3}|\leq 6\mathsf{K}^{7}\), \(|y_{1}|,|y_{3}|\leq 3\mathsf{K}^{7}\), \(r_{1}\in[0,n_{\mathtt{c}}\cdot\mathsf{K}^{2}+\mathsf{K}^{6}]\) and \(r_{3}\in[0,m\cdot\mathsf{K}^{2}+\mathsf{K}^{6}]\). Also, from Lemma 20 we get that \(|z_{2}|\leq\mathsf{K}^{6}\).
Our \(\mathsf{NP}\) algorithm starts by guessing the words \(u_{1},y_{1},v_{1},z_{2},u_{3},y_{3},v_{3}\), the values \(r_{1},r_{3}\), and the configurations \(\mathtt{e}_{1}\) and \(\mathtt{e}_{2}\). We first show how to verify whether \(\mathtt{c}\xrightarrow{u_{1}y_{1}^{r_{1}}v_{1}}\mathtt{e}_{1}\).
The algorithm computes the configuration \(\mathtt{f}_{0}\) such that \(\mathtt{c}\xrightarrow{u_{1}}\mathtt{f}_{0}\). Now it constructs the matrix \(\mathbb{M}_{y_{1}}\) and computes the configuration \(\mathtt{f}_{1}\) such that \(\mathtt{f}_{0}\xrightarrow{y_{1}}\mathtt{f}_{1}\) and \(\mathbf{x}_{\mathtt{f}_{1}}=\mathbf{x}_{\mathtt{f}_{0}}\mathbb{M}_{y_{1}}\). From Lemma 2, we know that \((\mathbb{M}_{y_{1}})^{r_{1}}\) can be computed by repeated powering in time polynomial in \(\log(r_{1})\) and \(\mathsf{K}\). Let \(\mathtt{f}_{r_{1}}\) be a configuration such that \(\mathtt{f}_{0}\xrightarrow{y_{1}^{r_{1}}}\mathtt{f}_{r_{1}}\). From Lemma 22, we know that \(p_{\mathtt{f}_{0}}=p_{\mathtt{f}_{r_{1}}}\) and \(n_{\mathtt{f}_{r_{1}}}=n_{\mathtt{f}_{0}}-r_{1}\cdot(n_{\mathtt{f}_{0}}-n_{\mathtt{f}_{1}})\). Since \(\mathbf{x}_{\mathtt{f}_{r_{1}}}=\mathbf{x}_{\mathtt{f}_{0}}(\mathbb{M}_{y_{1}})^{r_{1}}\), we can construct the configuration \(\mathtt{f}_{r_{1}}\) in polynomial time. We now verify in polynomial time whether \(\mathtt{f}_{r_{1}}\xrightarrow{v_{1}}\mathtt{e}_{1}\) or not. We can verify whether \(\mathtt{e}_{2}\xrightarrow{u_{3}y_{3}^{r_{3}}v_{3}}\mathtt{d}\) in a similar manner. The algorithm can also check whether \(\mathtt{e}_{1}\xrightarrow{z_{2}}\mathtt{e}_{2}\) in polynomial time since \(|z_{2}|\leq\mathsf{K}^{6}\). It finally checks whether \(\mathtt{d}\in\overline{\mathcal{V}}\times S\times\{m\}\). Hence the binary co-VS reachability problem is decidable in \(\mathsf{NP}\).
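A sketch of this verification step over the rationals; all helper names are ours. `mat_pow` performs the repeated squaring referred to in Lemma 2, and `apply_loop` reconstructs the configuration reached after \(y_{1}^{r_{1}}\), using the fact that the loop returns to the same counter state, so the counter value changes linearly in the number of iterations.

```python
from fractions import Fraction

def mat_mul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def mat_pow(M, r):
    """Compute M^r with O(log r) matrix multiplications (repeated squaring)."""
    n = len(M)
    R = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    while r:
        if r & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        r >>= 1
    return R

def vec_mat(x, M):
    return [sum(x[i] * M[i][j] for i in range(len(x))) for j in range(len(M[0]))]

def apply_loop(x0, p0, n0, M_y, p1, n1, r):
    """Configuration reached from (x0, p0, n0) after r iterations of a loop
    word y, where one iteration has matrix M_y, counter state p1 and counter
    value n1.  Requires p1 == p0, i.e. the loop returns to the same state."""
    assert p1 == p0, "the loop must bring the counter state back to p0"
    x_r = vec_mat(x0, mat_pow(M_y, r))
    return x_r, p0, n0 - r * (n0 - n1)
```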
As for the binary co-VS coverability problem, either the run of a minimal witness is a floating run or it is not. In the former case, where the run is floating, from Lemma 17, we know that the difference between the final and initial counter values is at most \(\mathsf{K}^{2}\). In the latter case, where the run is grounded, by Lemma 17, we get that the final counter value is at most \(\mathsf{K}^{2}\). In both cases, the algorithm guesses the final counter value, and the problem is reduced to the co-VS reachability problem, which is in \(\mathsf{NP}\). Hence the binary co-VS coverability problem is decidable in \(\mathsf{NP}\).
## 4. Equivalence of weighted odca
In this section, we give a polynomial time algorithm to check the equivalence of two weighted odcas (Theorem 1). The algorithm returns a minimal distinguishing word if the odcas are non-equivalent. We use the reachability results presented in the previous section to show that the length of a minimal distinguishing word is short. The idea here is to prove that the maximum counter value encountered during the run of a minimal witness is polynomially bounded. We use this to reduce the equivalence problem to that of weighted automata.
In the remainder of this section, we fix two weighted odcas\(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) over an alphabet \(\Sigma\) and a field \(\mathcal{F}\). For \(i\in\{1,2\}\),
\[\mathcal{A}_{i}=(C_{i},\delta_{i},p_{0_{i}};\ Q_{i},\boldsymbol{\lambda}_{i}, \Delta_{i},\boldsymbol{\eta}_{i}).\]
Without loss of generality assume \(\mathsf{K}=|C_{1}|=|Q_{1}|=|C_{2}|=|Q_{2}|\). We will reason on the synchronised runs on pairs of configurations. Given two weighted odcas, \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) and \(i\in\mathbb{N}\), we denote a _configuration pair_ as \(\mathtt{h}_{i}=\langle\mathtt{c}_{i},\mathtt{d}_{i}\rangle\) where \(\mathtt{c}_{i}\) is a configuration of \(\mathcal{A}_{1}\) and \(\mathtt{d}_{i}\) is a configuration of \(\mathcal{A}_{2}\). We similarly consider _transition pairs_ of \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\), and consider _synchronised runs_ as the application of a sequence of transition pairs to a configuration pair. We fix a minimal word \(z\) (also called witness) that distinguishes \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) and \(\ell=|z|\). Henceforth we will denote by
\[\Pi=\mathtt{h}_{0}\tau_{0}\mathtt{h}_{1}\cdots\tau_{\ell-1}\mathtt{h}_{\ell}\]
the run pair of \(z\) from the initial configuration pair. We denote by \(T=\tau_{0}\cdots\tau_{\ell-1}\) the sequence of transition pairs of this run pair. To prove Theorem 1, we use the following lemma, which states that the counter values in \(\Pi\) are bounded by a polynomial \(\operatorname{poly}_{0}(\mathsf{K})\).
**Lemma 24**.: _There is a polynomial \(\operatorname{poly}_{0}:\mathbb{N}\to\mathbb{N}\) such that if two weighted odcas\(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) are not equivalent, then there exists a witness \(z\) such that the counter values encountered during the run of \(z\) are less than \(\operatorname{poly}_{0}(\mathsf{K})\)._
We use Lemma 24 to show that the length of the witness \(z\) is bounded by a polynomial \(\operatorname{poly}_{1}(\mathsf{K})=2\mathsf{K}^{5}\operatorname{poly}_{0}( \mathsf{K})\).
**Lemma 25**.: _There is a polynomial \(\operatorname{poly}_{1}:\mathbb{N}\to\mathbb{N}\) such that if two weighted \(\operatorname{\textsc{odca}}s\)\(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) are not equivalent, then there exists a witness \(z\) such that \(|z|\) is less than or equal to \(\operatorname{poly}_{1}(\mathsf{K})\)._
Proof.: Assume for contradiction that the length of a minimal witness \(z\) is greater than \(\operatorname{poly}_{1}(\mathsf{K})\). From Lemma 24, we know that the counter values encountered during the run \(\Pi\) are less than \(\operatorname{poly}_{0}(\mathsf{K})\). Since \(|z|>\operatorname{poly}_{1}(\mathsf{K})\), by the Pigeonhole principle, we get that there exist indices \(0\leq i_{0}<i_{1}<\cdots<i_{2\mathsf{K}}\leq\ell\) such that for all configuration pairs \(\mathtt{h}_{i_{j}},j\in[1,2\mathsf{K}]\), \(n_{\mathtt{c}_{i_{j}}}=n_{\mathtt{c}_{i_{j-1}}}\), \(n_{\mathtt{d}_{i_{j}}}=n_{\mathtt{d}_{i_{j-1}}}\), \(p_{\mathtt{c}_{i_{j}}}=p_{\mathtt{c}_{i_{j-1}}}\) and \(p_{\mathtt{d}_{i_{j}}}=p_{\mathtt{d}_{i_{j-1}}}\).
For all \(j\in[0,2\mathsf{K}]\), we define the vector \(\mathbf{x}_{j}\in\mathcal{F}^{2\mathsf{K}}\) by \(\mathbf{x}_{j}[r]=\mathbf{x}_{\mathtt{c}_{i_{j}}}[r]\) if \(r<\mathsf{K}\), and \(\mathbf{x}_{j}[r]=\mathbf{x}_{\mathtt{d}_{i_{j}}}[r-\mathsf{K}]\) otherwise. We also define the vector \(\boldsymbol{\eta}\in\mathcal{F}^{2\mathsf{K}}\) by \(\boldsymbol{\eta}[r]=\boldsymbol{\eta}_{1}[r]\) if \(r<\mathsf{K}\), and \(\boldsymbol{\eta}[r]=\boldsymbol{\eta}_{2}[r-\mathsf{K}]\) otherwise. Let \(\mathbf{x}_{\ell}\) denote the analogous vector of the final configuration pair \(\mathtt{h}_{\ell}\). For all \(j\in[0,2\mathsf{K}]\), let \(\mathbb{A}_{j}\) denote the matrix such that \(\mathbf{x}_{j}\mathbb{A}_{j}=\mathbf{x}_{\ell}\). Since \(z\) is a minimal witness, we know that for all \(j\in[0,2\mathsf{K}]\), \(\mathbf{x}_{j}\mathbb{A}_{j}\boldsymbol{\eta}^{\top}\neq 0\). From Lemma 4, we get that there exist \(r,r^{\prime}\in[0,2\mathsf{K}]\) with \(r^{\prime}<r\) such that \(\mathbf{x}_{r^{\prime}}\mathbb{A}_{r}\boldsymbol{\eta}^{\top}\neq 0\). The sequence of transitions \(\tau_{i_{r}}\cdots\tau_{\ell-1}\) can be taken from \(\mathtt{h}_{i_{r^{\prime}}}\) since the counter values and counter states are the same for both configuration pairs. Consider the sequence of transitions \(T^{\prime}=\tau_{0}\cdots\tau_{i_{r^{\prime}}-1}\tau_{i_{r}}\cdots\tau_{\ell-1}\) and let \(w=\mathtt{word}(T^{\prime})\). The word \(w\) is a shorter witness than \(z\) and contradicts its minimality.
Lemma 25 helps us to reduce the equivalence problem of weighted \(\operatorname{\textsc{odca}}\) to that of weighted automata by "simulating" the runs of weighted \(\operatorname{\textsc{odca}}\) up to length \(\operatorname{poly}_{1}(\mathsf{K})\) by two weighted automata. The naive algorithm will only give us a \(\operatorname{\mathsf{PSPACE}}\) procedure, but there is a polynomial time procedure to do this, and the proof is given below.
Proof of Theorem 1.: We consider the two weighted \(\operatorname{\textsc{odca}}\)s \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\). From Lemma 25, we know that the length of the minimal witness \(z\) is less than \(\operatorname{poly}_{1}(\mathsf{K})\). Let \(M=\operatorname{poly}_{1}(\mathsf{K})\). We construct the \(M\)-unfolding weighted automata \(\mathcal{A}_{1}^{M}\) and \(\mathcal{A}_{2}^{M}\) as described in Definition 7. It follows that, \(\mathcal{A}_{1}\) is non-equivalent to \(\mathcal{A}_{2}\) if and only if there exists a word \(w\in\Sigma^{\leq M}\) such that \(f_{\mathcal{A}_{1}^{M}}(w)\neq f_{\mathcal{A}_{2}^{M}}(w)\). Tzeng [20, Lemma 3.4] gives a polynomial time algorithm to output a minimal word that distinguishes two probabilistic automata. We conclude the proof by noting that the algorithm can be extended to the case of weighted automata.
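For completeness, here is a sketch of the equivalence test for weighted automata that this proof relies on, in the spirit of Tzeng's algorithm; it is our own rendering over the rationals, not the pseudocode of [20]. It explores combined configuration vectors of the two automata in breadth-first order, keeps only linearly independent ones, and returns the first word whose vector evaluates to a non-zero difference.

```python
from fractions import Fraction
from collections import deque

def _insert(vec, basis):
    """Reduce vec against the basis (a list of (pivot column, row) pairs kept
    in reduced echelon form); add it and return True if it was independent."""
    v = list(vec)
    for pivot, row in basis:
        if v[pivot]:
            c = v[pivot]
            v = [vi - c * ri for vi, ri in zip(v, row)]
    for col, x in enumerate(v):
        if x:
            v = [vi / x for vi in v]
            for idx, (p, row) in enumerate(basis):   # keep the basis reduced
                if row[col]:
                    c = row[col]
                    basis[idx] = (p, [ri - c * vi for ri, vi in zip(row, v)])
            basis.append((col, v))
            return True
    return False

def distinguishing_word(alphabet, lam1, M1, eta1, lam2, M2, eta2):
    """Return a shortest word on which the two weighted automata differ,
    or None if they are equivalent.  M1[a] and M2[a] are square transition
    matrices; lam*/eta* are initial/final vectors (convertible to Fraction)."""
    n1, n2 = len(lam1), len(lam2)

    def value(v):  # f_1(w) - f_2(w) for the word whose combined vector is v
        return (sum(v[i] * Fraction(eta1[i]) for i in range(n1))
                - sum(v[n1 + i] * Fraction(eta2[i]) for i in range(n2)))

    start = [Fraction(x) for x in list(lam1) + list(lam2)]
    basis, queue = [], deque([("", start)])
    _insert(start, basis)
    while queue:
        w, v = queue.popleft()
        if value(v) != 0:
            return w
        for a in alphabet:
            left = [sum(v[i] * Fraction(M1[a][i][j]) for i in range(n1))
                    for j in range(n1)]
            right = [sum(v[n1 + i] * Fraction(M2[a][i][j]) for i in range(n2))
                     for j in range(n2)]
            nv = left + right
            if _insert(nv, basis):
                queue.append((w + a, nv))
    return None
```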
The rest of this section is dedicated to proving Lemma 24. We adapt techniques developed by Bohm et al. [3] for \(\operatorname{\textsc{ocas}}\). We start by labelling some configuration pairs as background points (see Figure 7). Consider the case where there is no background point in \(\Pi\). By reducing the problem to co-VS reachability/coverability we show that the counter values in \(\Pi\) are polynomially bounded. Now consider the case where there is a background point \(\mathtt{h}_{j}\) in \(\Pi\). We show that the counter values encountered during the run of \(\Pi\) up to \(\mathtt{h}_{j}\) are polynomially bounded. This is shown by Lemma 29 and Lemma 35. We conclude by arguing that the length of the run from
\(\mathtt{h}_{j}\) is polynomially bounded.
### Configuration Space
Each configuration pair \(\mathtt{h}=\langle\mathtt{c},\mathtt{d}\rangle\) is mapped to a point in the space \(\mathbb{N}\times\mathbb{N}\times(C_{1}\times C_{2})\times\mathcal{F}^{\mathsf{K}}\times\mathcal{F}^{\mathsf{K}}\), henceforth referred to as the _configuration space_. Here, the first two dimensions represent the two counter values, the third dimension \(C_{1}\times C_{2}\) corresponds to the pair of counter states, and the remaining dimensions represent the weight vector. The projection of the configuration space onto the first two dimensions is depicted in Figure 7. We partition the configuration space into three: initial space, belt space, and background space. The size of the initial space, as well as the thickness and the number of belts, will be polynomially bounded in \(\mathsf{K}\). This partition is indexed by two polynomials, \(\operatorname{poly}_{2}:\mathbb{N}\to\mathbb{N}\) and \(\operatorname{poly}_{3}:\mathbb{N}\to\mathbb{N}\), chosen so that all belts are disjoint outside the initial space. We use some properties of these partitions to show that the length of a minimal witness is bounded. We assume \(\operatorname{poly}_{2}(\mathsf{K})=516\mathsf{K}^{21}\) and \(\operatorname{poly}_{3}(\mathsf{K})=42\mathsf{K}^{14}\). The precise polynomials are required in the proofs of Lemma 26 and Lemma 32.
* _initial space_: All configuration pairs \(\langle\mathtt{c},\mathtt{d}\rangle\) such that \(n_{\mathtt{c}},n_{\mathtt{d}}<\operatorname{poly}_{2}(\mathsf{K})\).
* _belt space_: Let \(\alpha,\beta\in[1,3\mathsf{K}^{7}]\) be co-prime. A belt of slope \(\frac{\alpha}{\beta}\) consists of those configuration pairs \(\langle\mathtt{c},\mathtt{d}\rangle\) outside the initial space that satisfy \(|\alpha\cdot n_{\mathtt{c}}-\beta\cdot n_{\mathtt{d}}|\leq\operatorname{poly}_{3}(\mathsf{K})\). The belt space contains all configuration pairs \(\langle\mathtt{c},\mathtt{d}\rangle\) that lie inside some belt of slope \(\frac{\alpha}{\beta}\) (a small classification sketch is given after this list).
* _background space_: All remaining configuration pairs.
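A small sketch of this three-way partition as a decision procedure; the return labels and the brute-force search over slopes are our own illustrative choices, and for realistic \(\mathsf{K}\) the bounds are of course astronomically large.

```python
from math import gcd

def classify(n_c, n_d, K):
    """Classify a configuration pair by its counter values into the initial
    space, a belt (returning the slope alpha/beta), or the background space."""
    poly2 = 516 * K**21
    poly3 = 42 * K**14
    if n_c < poly2 and n_d < poly2:
        return ("initial", None)
    bound = 3 * K**7
    for alpha in range(1, bound + 1):
        for beta in range(1, bound + 1):
            if gcd(alpha, beta) == 1 and abs(alpha * n_c - beta * n_d) <= poly3:
                return ("belt", (alpha, beta))
    return ("background", None)
```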
The proof of the following lemma is similar to that of the non-weighted case presented in [3].
**Lemma 26**.: _If \(\langle\mathtt{c},\mathtt{d}\rangle\) and \(\langle\mathtt{e},\mathsf{f}\rangle\) are configuration pairs inside two distinct belts and lie outside the initial space, then there is no \(a\in\Sigma\) such that \(\langle\mathtt{c},\mathtt{d}\rangle\xrightarrow{a}\langle\mathtt{e},\mathsf{f}\rangle\)._
Figure 7. Projection of configuration space
Proof.: Recall \(\operatorname{poly}_{2}(\mathsf{K})=516\mathsf{K}^{21}\) and \(\operatorname{poly}_{3}(\mathsf{K})=42\mathsf{K}^{14}\). Let \(B\) and \(B^{\prime}\) be two distinct belts with \(\mu\) being the slope of the belt \(B\) and \(\mu^{\prime}\) the slope of the belt \(B^{\prime}\). Hence \(\mu\neq\mu^{\prime}\). Without loss of generality, let us assume that \(\mu^{\prime}>\mu\). It suffices to show that for all \(x>\operatorname{poly}_{2}(\mathsf{K})\), we have
\[\mu x+\operatorname{poly}_{3}(\mathsf{K})+1<\mu^{\prime}x-\operatorname{poly }_{3}(\mathsf{K})-1.\]
We know that \(\mu^{\prime}-\mu\geq\frac{1}{3\mathsf{K}^{7}}\) and \(x>516\mathsf{K}^{21}\).
Therefore, \(\frac{516\mathsf{K}^{21}}{6\mathsf{K}^{7}}<(\mu^{\prime}-\mu)\cdot x\).
\[\Longrightarrow\mu x+\frac{86\mathsf{K}^{14}}{2}<\mu^{\prime}x- \frac{86\mathsf{K}^{14}}{2}\] \[\Longrightarrow\mu x+42\mathsf{K}^{14}+\mathsf{K}^{14}<\mu^{ \prime}x-42\mathsf{K}^{14}-\mathsf{K}^{14}\] \[\Longrightarrow\mu x+42\mathsf{K}^{14}+1<\mu^{\prime}x-42\mathsf{ K}^{14}-1\]
Lemma 26 ensures that the belts are disjoint outside the initial space and that no run can go from one belt to another without passing through the initial space or background space.
### Belt space
We look at two scenarios in this section. First, we show that if a sub-run of the run of a minimal witness enters and exits a belt from the initial space, then the counter values encountered during this sub-run are polynomially bounded in \(\mathsf{K}\). Secondly, we show that if the run of a minimal witness never enters the background space, then the counter values encountered during this run are polynomially bounded in \(\mathsf{K}\). This is shown by reducing to co-VS reachability of an odca.
Let \(\Pi_{b}=\mathsf{h}_{i}\tau_{i}\mathsf{h}_{i+1}\cdots\tau_{j-1}\mathsf{h}_{j}\) be a sub-run of the run of \(z\) inside a belt with slope \(\frac{\alpha}{\beta}\). Similar to the technique mentioned in [5], each configuration pair \(\mathsf{h}_{r}\), where \(r\in[i,j]\), can alternatively be represented as \(((\mathbf{x}_{\mathsf{c}_{r}},\mathbf{x}_{\mathsf{d}_{r}}),p_{\mathsf{c}_{r}},p_{\mathsf{d}_{r}},l_{r})\) where \(l_{r}\) denotes a line with slope \(\frac{\alpha}{\beta}\) inside the given belt that contains the point \((n_{\mathsf{c}_{r}},n_{\mathsf{d}_{r}})\). Let \(L\) be the set of all lines with slope \(\frac{\alpha}{\beta}\) inside the given belt. Note that \(|L|=\operatorname{poly}_{3}(\mathsf{K})\). The run \(\Pi_{b}\) is similar to the run of a weighted odca \(\mathcal{D}\) that has the tuple \((p_{\mathsf{c}_{r}},p_{\mathsf{d}_{r}},l_{r})\) as its counter state and \(\mathbf{x}_{r}\in\mathcal{F}^{2\mathsf{K}}\) as its weight vector, where \(\mathbf{x}_{r}[i]=\mathbf{x}_{\mathsf{c}_{r}}[i]\) if \(i<\mathsf{K}\), and \(\mathbf{x}_{r}[i]=\mathbf{x}_{\mathsf{d}_{r}}[i-\mathsf{K}]\) otherwise. A formal definition of the odca \(\mathcal{D}\) is given below, followed by a small sketch of its counter transition.
**Definition 27**.: _Let \(\mathcal{A}_{i}=(C_{i},\delta_{i},p_{0_{i}};\ Q_{i},\boldsymbol{\lambda}_{i},\Delta_{i},\boldsymbol{\eta}_{i})\) for \(i\in\{1,2\}\) be the two given weighted odcas. Let \(L\) be the set of all lines with slope \(\frac{\alpha}{\beta}\) inside the given belt. We define the odca \(\mathcal{D}=(C,\delta,p_{0};\ Q,\boldsymbol{\lambda},\Delta,\boldsymbol{\eta})\), where the initial state \(p_{0}\) and the initial distribution \(\boldsymbol{\lambda}\) are arbitrarily chosen._
* \(C=C_{1}\times C_{2}\times L\) _is a non-empty finite set of counter states._
* \(\delta:C\times\Sigma\to C\times\{-1,0,+1\}\) _is the deterministic counter transition. Let_ \(p_{1},q_{1}\in C_{1},p_{2},q_{2}\in C_{2}\)_,_ \(a\in\Sigma\) _and_ \(d_{1},d_{2}\in\{-1,0,+1\}\)_. Let_ \(l_{1},l_{2}\in L\) _and_ \(m_{1},m_{2}\in\mathbb{N}\)_, such that the point_ \((m_{1},m_{2})\) _lies on the line_ \(l_{1}\)_._ \(\delta((p_{1},p_{2},l_{1}),a)=((q_{1},q_{2},l_{2}),d_{1})\)_, if_ \(\delta_{1}(p_{1},a,1)=(q_{1},d_{1})\) _and_ \(\delta_{2}(p_{2},a,1)=(q_{2},d_{2})\) _and the point_ \((m_{1}+d_{1},m_{2}+d_{2})\) _lies on the line_ \(l_{2}\)_. It is undefined otherwise._
* \(Q=Q_{1}\cup Q_{2}\) _is a non-empty finite set of states of the finite state machine._
* \(\Delta:C\times\Sigma\times\{0,1\}\to\mathcal{F}^{2\mathsf{K}\times 2\mathsf{K}}\) _gives the transition matrix for all_ \(p\in C\)_,_ \(a\in\Sigma\) _and_ \(d\in\{0,1\}\)_. For_ \(p_{1}\in C_{1},p_{2}\in C_{2},l\in L,m\in\mathbb{N}\) _and_ \(a\in\Sigma\)_,_ \[\Delta((p_{1},p_{2},l),a)[i][j]=\begin{cases}\Delta_{1}(p_{1},a,1)[i][j],&\text{if }i,j<\mathsf{K}\\ \Delta_{2}(p_{2},a,1)[i-\mathsf{K}][j-\mathsf{K}],&\text{if }i,j\geq\mathsf{K}\\ 0,&\text{otherwise}\end{cases}\]
* \(\boldsymbol{\eta}\in\mathcal{F}^{2\mathsf{K}}\) _is the final distribution._ \[\boldsymbol{\eta}[i]=\begin{cases}\boldsymbol{\eta}_{1}[i],\text{ if }i< \mathsf{K}\\ \boldsymbol{\eta}_{2}[i-\mathsf{K}],\text{ otherwise}\end{cases}\]
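One convenient way to realise the set \(L\) concretely is to identify a line of slope \(\frac{\alpha}{\beta}\) with the offset \(c=\alpha\cdot n_{\mathsf{c}}-\beta\cdot n_{\mathsf{d}}\in[-\operatorname{poly}_{3}(\mathsf{K}),\operatorname{poly}_{3}(\mathsf{K})]\) that all of its points share. Under this encoding, which is our own choice and is not spelled out in the definition above, the counter transition \(\delta\) of \(\mathcal{D}\) can be sketched as follows; the sketch only covers the positive-counter transitions used for floating runs.

```python
def delta_D(delta1, delta2, alpha, beta, poly3):
    """Counter transition of the product ODCA D for a belt of slope alpha/beta.
    A line is encoded by its offset c = alpha*n_c - beta*n_d.  delta1 and
    delta2 map (counter state, letter) to (next counter state, counter change)."""
    def delta(state, a):
        p1, p2, c = state
        q1, d1 = delta1[(p1, a)]
        q2, d2 = delta2[(p2, a)]
        c_new = c + alpha * d1 - beta * d2   # offset of the line through (n_c+d1, n_d+d2)
        if abs(c_new) > poly3:
            return None                       # the step leaves the belt: delta is undefined
        return ((q1, q2, c_new), d1)          # D's counter mirrors the counter of A_1
    return delta
```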
The sub-run \(\Pi_{b}\) can now be seen as a floating run of a weighted odca \(\mathcal{D}\). If the run \(\Pi\) ends inside a belt, then \(\Pi_{b}=\mathtt{h}_{i}\tau_{i}\cdots\tau_{\ell-1}\mathtt{h}_{\ell}\). In this case, we show that the difference between the counter values of the first and last configuration pairs is smaller than a polynomial in \(\mathsf{K}\).
**Lemma 28**.: _There is a polynomial \(poly:\mathbb{N}\to\mathbb{N}\), such that if \(\Pi_{b}=\mathtt{h}_{i}\tau_{i}\cdots\tau_{\ell-1}\mathtt{h}_{\ell}\) lies inside a belt, then \(|n_{\mathtt{c}_{\ell}}-n_{\mathtt{c}_{i}}|\leq poly(\mathsf{K})\) and \(|n_{\mathtt{d}_{\ell}}-n_{\mathtt{d}_{i}}|\leq poly(\mathsf{K})\)._
Proof.: Let \(\Pi_{b}=\mathtt{h}_{i}\tau_{i}\mathtt{h}_{i+1}\cdots\tau_{\ell-1}\mathtt{h}_{\ell}\) be a sub-run of the run of a minimal witness that lies inside a belt and ends in the belt. As mentioned in Definition 27, we consider this as the run of the weighted odca \(\mathcal{D}\). Since it is the run of a witness, \(\mathbf{x}_{\ell}\boldsymbol{\eta}^{\top}\neq 0\). Consider the vector space \(\mathcal{U}=\{\mathbf{y}\in\mathcal{F}^{2\mathsf{K}}\mid\mathbf{y}\boldsymbol{\eta}^{\top}=0\}\). Our problem now reduces to the co-VS coverability problem in the machine \(\mathcal{D}\) and asks whether \((\mathbf{x}_{i},(p_{\mathtt{c}_{i}},p_{\mathtt{d}_{i}},l_{i}),n_{\mathtt{c}_{i}})\xrightarrow{*}\overline{\mathcal{U}}\times\{(p_{\mathtt{c}_{\ell}},p_{\mathtt{d}_{\ell}},l_{\ell})\}\times\mathbb{N}\). From Lemma 20, we know that the length of a minimal reachability witness for \(((\mathbf{x}_{i},(p_{\mathtt{c}_{i}},p_{\mathtt{d}_{i}},l_{i}),n_{\mathtt{c}_{i}}),\overline{\mathcal{U}},\{(p_{\mathtt{c}_{\ell}},p_{\mathtt{d}_{\ell}},l_{\ell})\},\mathbb{N})\) is polynomially bounded in \(n_{\mathtt{c}_{i}}\) and \(\mathsf{K}\). Hence proved.
In the following lemma, we show that if \(\Pi_{b}=\mathtt{h}_{i}\tau_{i}\mathtt{h}_{i+1}\cdots\tau_{j-1}\mathtt{h}_{j}\) is a sub-run of \(\Pi\) inside a belt and either \(n_{\mathtt{c}_{i}}=n_{\mathtt{c}_{j}}\) or \(n_{\mathtt{d}_{i}}=n_{\mathtt{d}_{j}}\), then the counter values in \(\Pi_{b}\) cannot increase more than a polynomial in \(\mathsf{K}\) from \(n_{\mathtt{c}_{i}}\) and \(n_{\mathtt{d}_{i}}\).
**Lemma 29**.: _There is a polynomial \(poly:\mathbb{N}\to\mathbb{N}\) such that, if \(\Pi_{b}=\mathtt{h}_{i}\tau_{i}\mathtt{h}_{i+1}\cdots\tau_{j-1}\mathtt{h}_{j}\) is a run inside a belt with \(n_{\mathtt{c}_{i}}=n_{\mathtt{c}_{j}}\) or \(n_{\mathtt{d}_{i}}=n_{\mathtt{d}_{j}}\), then the counter effect of any sub-run of \(\Pi_{b}\) is less than or equal to \(poly(\mathsf{K})\)._
Proof.: Let \(\Pi_{b}=\mathtt{h}_{i}\tau_{i}\mathtt{h}_{i+1}\cdots\tau_{j-1}\mathtt{h}_{j}\) be a sub-run of the run of a minimal witness inside a belt such that \(n_{\mathtt{c}_{i}}=n_{\mathtt{c}_{j}}\). We consider this as the run of the weighted odca \(\mathcal{D}\) as mentioned in Definition 27. Since it is the run of a witness, we know that there exists \(\mathbb{A}\in\mathcal{F}^{2\mathsf{K}\times 2\mathsf{K}}\) such that \(\mathbf{x}_{j}\mathbb{A}\boldsymbol{\eta}^{\top}\neq 0\). Consider the vector space \(\mathcal{U}=\{\mathbf{y}\in\mathcal{F}^{2\mathsf{K}}\mid\mathbf{y}\mathbb{A}\boldsymbol{\eta}^{\top}=0\}\).
Our problem now reduces to the co-VS reachability problem in machine \(\mathcal{D}\) and asks whether \((\mathbf{x}_{i},(p_{\mathtt{e}_{i}},p_{\mathtt{d}_{i}},l_{i}),n_{\mathtt{e}_{ i}})\xrightarrow{*}\overline{\mathcal{U}}\times\{(p_{\mathtt{e}_{j}},p_{ \mathtt{d}_{j}},l_{j})\}\times\{n_{\mathtt{e}_{i}}\}\). From Lemma 20, length of a minimal reachability witness for \(((\mathbf{x}_{i},(p_{\mathtt{e}_{i}},p_{\mathtt{d}_{i}},l_{i}),n_{\mathtt{e}_{ i}}),\overline{\mathcal{U}},(p_{\mathtt{e}_{j}},p_{\mathtt{d}_{j}},l_{j})\), \(\{n_{\mathtt{e}_{i}}\})\) is bounded by a polynomial in \(n_{\mathtt{e}_{i}}\) and \(\mathsf{K}\). Hence proved.
We have now shown that if the run of a minimal witness does not enter the background space, then the counter values in this run are polynomially bounded in \(\mathsf{K}\). Now we look at the case where the run enters the background space.
### Background space
In this subsection, we consider the case where the run of a minimal witness enters the background space. We show that, during the run of the minimal witness, the counter values of the first configuration pair in the background space and the remaining length of the run are polynomially bounded in \(\mathsf{K}\).
Floating runs of a weighted ODCA are isomorphic to runs of a weighted automaton obtained by ignoring counter values. In order to bound the length of the run of a minimal witness in the background space, we introduce the notion of an _underlying uninitialised weighted automaton_.
**Definition 30**.: _For \(l\in\{1,2\}\), the underlying uninitialised weighted automaton of \(\mathcal{A}_{l}\) is the uninitialised weighted automaton \(\mathrm{U}(\mathcal{A}_{l})=(Q^{\prime}_{l},\Delta^{\prime}_{l},\mathbf{\eta}^{\prime}_{l})\), where \(Q^{\prime}_{l}=C_{l}\times Q_{l}\) and \(\mathbf{\eta}^{\prime}_{l}\in\mathcal{F}^{\mathsf{K}^{2}}\) is the final distribution. For \(i<\mathsf{K}^{2}\), \(\mathbf{\eta}^{\prime}_{l}[i]=\mathbf{\eta}_{l}[i\bmod\mathsf{K}]\). The transition matrix is given by \(\Delta^{\prime}_{l}:\Sigma\to\mathcal{F}^{\mathsf{K}^{2}\times\mathsf{K}^{2}}\). Let \(a\in\Sigma\), \(d\in\{-1,0,+1\},i,j<\mathsf{K}^{2}\),_

\[\Delta^{\prime}_{l}(a)[i][j]=\begin{cases}\Delta_{l}(p_{\lfloor i/\mathsf{K}\rfloor},a,1)[i\bmod\mathsf{K}][j\bmod\mathsf{K}],&\text{if }\delta_{l}(p_{\lfloor i/\mathsf{K}\rfloor},a,1)=(p_{\lfloor j/\mathsf{K}\rfloor},d)\\ 0,&\text{otherwise}\end{cases}\]
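A direct transcription of this construction may help fix the indexing; in the sketch below, \(\delta_{l}\) and \(\Delta_{l}\) are assumed to be given as Python dictionaries keyed by (counter state, letter, counter sign), counter states are identified with \(0,\dots,\mathsf{K}-1\), and field elements are approximated by floats:

```python
import numpy as np

def underlying_matrix(delta_l, Delta_l, K, a):
    """Transition matrix Delta'_l(a) of U(A_l); index i encodes the pair
    (counter state i // K, weighted state i % K)."""
    M = np.zeros((K * K, K * K))
    for i in range(K * K):
        p_i = i // K                          # counter state encoded in index i
        p_next, _d = delta_l[(p_i, a, 1)]     # deterministic counter-state successor
        for j in range(K * K):
            if j // K == p_next:              # only transitions consistent with delta_l
                M[i, j] = Delta_l[(p_i, a, 1)][i % K, j % K]
    return M
```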
Note that a configuration of \(\mathrm{U}(\mathcal{A}_{l})\) is a vector of dimension \(\mathsf{K}^{2}\). A configuration \(\mathsf{c}\) of a weighted odca\(\mathcal{A}\) is said to be \(k\)-equivalent to a configuration \(\bar{\mathsf{c}}\) of an uninitialised weighted automata \(\mathcal{B}\), denoted \(\mathsf{c}\sim_{k}\bar{\mathsf{c}}\), if for all \(w\in\Sigma^{\leq k},f_{\mathcal{A}}(w,\mathsf{c})=f_{\mathcal{B}}(w,\bar{ \mathsf{c}})\). We say that \(\mathsf{c}\) is not \(k\)-equivalent to \(\bar{\mathsf{c}}\) otherwise and denote this as \(\mathsf{c}\not\sim_{k}\bar{\mathsf{c}}\).
As we need to test the equivalence of configurations from \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\), we consider the uninitialised weighted automata \(\mathcal{B}\), which is a disjoint union of \(\mathrm{U}(\mathcal{A}_{1})\) and \(\mathrm{U}(\mathcal{A}_{2})\). This gives us a single automaton with which we can compare their configurations. Let \(i\in\{1,2\}\) and \(\mathsf{c}\) be a configuration of \(\mathcal{A}_{i}\). For all \(p\in C_{i}\) and \(m<2\mathsf{K}^{2}\), we define the sets \(\mathcal{W}^{p,m}_{i}\). The set \(\mathcal{W}^{p,m}_{i}\) contains vectors \(\mathbf{x}\in\mathcal{F}^{\mathsf{K}}\) such that the configuration \((\mathbf{x},p,m)\) is \(2\mathsf{K}^{2}\)-equivalent to some configuration of \(\mathcal{B}\). The set \(\overline{\mathcal{W}}^{p,m}_{i}\) is the set \(\mathcal{F}^{\mathsf{K}}\setminus\mathcal{W}^{p,m}_{i}\). Formally,
\[\mathcal{W}^{p,m}_{i}=\{\mathbf{x}\in\mathcal{F}^{\mathsf{K}}|\exists\bar{ \mathsf{c}}\in\mathcal{F}^{2\mathsf{K}^{2}},\mathsf{c}=(\mathbf{x},p,m)\sim_{ 2\mathsf{K}^{2}}\bar{\mathsf{c}}\}\]
**Lemma 31**.: _For any \(i\in\{1,2\}\), \(p\in C_{i}\) and \(m<2\mathsf{K}^{2}\), the set \(\mathcal{W}^{p,m}_{i}\) is a vector space._
Proof.: To prove this, it suffices to show that it is closed under vector addition and scalar multiplication. We fix a set \(\mathcal{W}^{p,m}_{i}\). First, we prove that it is closed under scalar multiplication. For any vector \(\mathbf{z}_{1}\in\mathcal{W}^{p,m}_{i}\), we know that there exists a configuration \(\mathsf{c}=(\mathbf{z}_{1},p,m)\) and \(\bar{\mathsf{c}}\in\mathcal{F}^{2\mathsf{K}^{2}}\) such that \(\mathsf{c}\sim_{2\mathsf{K}^{2}}\bar{\mathsf{c}}\). Now, for any scalar \(r\in\mathcal{F}\), the configuration \((r.\mathbf{z}_{1},p,m)\sim_{2\mathsf{K}^{2}}r\cdot\bar{\mathsf{c}}\). Therefore \(r\cdot\mathbf{z}_{1}\in\mathcal{W}^{p,m}_{i}\). Now, we show that it is closed under vector addition. Let \(\mathbf{z}_{1},\mathbf{z}_{2}\in\mathcal{W}^{p,m}_{i}\) be two vectors. Therefore, there exists configurations \(\mathsf{c}_{1}=(\mathbf{z}_{1},p,m)\), \(\mathsf{c}_{2}=(\mathbf{z}_{2},p,m)\), \(\bar{\mathsf{c}}_{1}\in\mathcal{F}^{2\mathsf{K}^{2}}\) and \(\bar{\mathsf{c}}_{2}\in\mathcal{F}^{2\mathsf{K}^{2}}\), such that \(\mathsf{c}_{1}\sim_{2\mathsf{K}^{2}}\bar{\mathsf{c}}_{1}\) and \(\mathsf{c}_{2}\sim_{2\mathsf{K}^{2}}\bar{\mathsf{c}}_{2}\). Consider the configuration \(\mathsf{c}_{3}=(\mathbf{z}_{1}+\mathbf{z}_{2},p,m)\), \(\mathsf{c}_{3}\sim_{2\mathsf{K}^{2}}\bar{\mathsf{c}}_{1}+\bar{\mathsf{c}}_{2}\). Therefore, \(\mathbf{z}_{1}+\mathbf{z}_{2}\in\mathcal{W}^{p,m}_{i}\).
The distance of a configuration \(\mathsf{c}\) of \(\mathcal{A}_{i}\) is the length of a minimal word that takes you from \(\mathsf{c}\) to a configuration \((\mathbf{x},p,m)\) for some \(m<2\mathsf{K}^{2}\) and \(p\in C_{i}\) such that \(\mathbf{x}\in\overline{\mathcal{W}}^{p,m}_{i}\). We define \(\mathrm{dist}_{\mathcal{A}_{i}}(\mathsf{c})\) as:
\[\min\{|w|\mid\mathsf{c}\xrightarrow{w}(\mathbf{x},p,m)\ \exists p\in C_{i},m<2\mathsf{K}^{2}, \mathbf{x}\in\overline{\mathcal{W}}^{p,m}_{i}\}\]
The notion of distance plays a key role in determining which parts of the run of a witness can be pumped out if it is not minimal. The following lemma is a crucial component in proving the equivalence of weighted odcas. By Lemma 22, the lexicographically minimal reachability witness has a special form, and this plays a crucial role in proving Lemma 32.
**Lemma 32**.: _Let \(\mathsf{c}\) be a configuration of weighted odca \(\mathcal{A}\). If \(\operatorname{dist}_{\mathcal{A}}(\mathsf{c})<\infty\) then, \(\operatorname{dist}_{\mathcal{A}}(\mathsf{c})=\frac{a}{b}n_{\mathsf{c}}+d\) where \(a,b\in[0,3\mathsf{K}^{7}]\) and \(|d|<42\mathsf{K}^{14}\)._
Proof.: Without loss of generality, let us consider the weighted odca \(\mathcal{A}_{1}\) and a configuration \(\mathsf{c}\) of \(\mathcal{A}_{1}\). Let us assume that \(\operatorname{dist}_{\mathcal{A}_{1}}(\mathsf{c})<\infty\). This means that \(\mathsf{c}\to^{*}\mathsf{d}\) with \(\mathbf{x_{d}}\in\overline{\mathcal{W}}_{1}^{p,m}\) for some \(p\in C_{1}\) and \(m<2\mathsf{K}^{2}\). Since \(n_{\mathsf{d}}=m\), by Lemma 22, we know that there is a word \(u=u_{1}u_{2}^{r}u_{3}\) (with \(r\geq 0\)) such that \(\mathsf{c}\xrightarrow{u}\mathsf{d}\) where \(|u|=\operatorname{dist}_{\mathcal{A}_{1}}(\mathsf{c})\), \(|u_{1}u_{3}|\leq 9\mathsf{K}^{7}\), \(|u_{2}|\leq 3\mathsf{K}^{7}\) and \(u_{2}\) has a negative counter effect \(\ell\). Let \(g\) be the combined counter effect of \(u_{1},u_{3}\) and \(\alpha=\frac{|u_{2}|}{\ell}\). Since \(|u_{1}u_{3}|\leq 9\mathsf{K}^{7}\), we have \(|g|\leq 9\mathsf{K}^{7}\).

\[\operatorname{dist}_{\mathcal{A}_{1}}(\mathsf{c})=\frac{n_{\mathsf{c}}-n_{\mathsf{d}}-g}{\ell}|u_{2}|+|u_{1}u_{3}|=\alpha n_{\mathsf{c}}+\underbrace{|u_{1}u_{3}|-\alpha(n_{\mathsf{d}}+g)}_{d}\]

Since \(1\leq\alpha\leq 3\mathsf{K}^{7}\), \(n_{\mathsf{d}}<2\mathsf{K}^{2}\), \(|g|\leq 9\mathsf{K}^{7}\) and \(|u_{1}u_{3}|\leq 9\mathsf{K}^{7}\), it follows that \(|d|<27\mathsf{K}^{14}+6\mathsf{K}^{9}+9\mathsf{K}^{7}\leq 42\mathsf{K}^{14}\), that is, \(-42\mathsf{K}^{14}<d<42\mathsf{K}^{14}\). Hence proved.
The polynomials \(\operatorname{poly}_{1}\) and \(\operatorname{poly}_{2}\) were picked so that the configuration pairs with equal distance always lie in the belt space. Therefore, the background space points either have unequal or infinite distances.
**Lemma 33**.: _For any configuration pair \(\langle\mathsf{c},\mathsf{d}\rangle\), in the background space, either \(\operatorname{dist}_{\mathcal{A}_{1}}(\mathsf{c})\neq\operatorname{dist}_{ \mathcal{A}_{2}}(\mathsf{d})\) or \(\operatorname{dist}_{\mathcal{A}_{1}}(\mathsf{c})=\operatorname{dist}_{ \mathcal{A}_{2}}(\mathsf{d})=\infty\)._
Proof.: Assume for contradiction that there is a configuration pair \(\langle\mathsf{c},\mathsf{d}\rangle\) in the background space such that \(\operatorname{dist}_{\mathcal{A}_{1}}(\mathsf{c})=\operatorname{dist}_{\mathcal{A}_{2}}(\mathsf{d})<\infty\). Since \(\operatorname{dist}_{\mathcal{A}_{1}}(\mathsf{c})=\operatorname{dist}_{\mathcal{A}_{2}}(\mathsf{d})\), by Lemma 32 there exist \(a_{1},b_{1},a_{2},b_{2}\in[0,3\mathsf{K}^{7}]\) and \(d_{1},d_{2}\) with \(|d_{1}|,|d_{2}|<42\mathsf{K}^{14}\) such that

\[\frac{a_{1}}{b_{1}}n_{\mathsf{c}}+d_{1}=\operatorname{dist}_{\mathcal{A}_{1}}(\mathsf{c})=\operatorname{dist}_{\mathcal{A}_{2}}(\mathsf{d})=\frac{a_{2}}{b_{2}}n_{\mathsf{d}}+d_{2}\]
Therefore \(|\frac{a_{1}}{b_{1}}n_{\mathsf{c}}-\frac{a_{2}}{b_{2}}n_{\mathsf{d}}|\leq|d_{2 }-d_{1}|<42\mathsf{K}^{14}\). This satisfies the belt condition and is a configuration pair in the belt space. This contradicts our initial assumptions.
The following lemma shows that the length of the run \(\Pi\) in the background space is polynomially bounded in \(\mathsf{K}\) and the counter values of the first background point in \(\Pi\). The proof is similar to that in [3] and is given below.
**Lemma 34**.: _If \(\mathsf{h}_{j}=\langle\mathsf{c}_{j},\mathsf{d}_{j}\rangle\) is the first configuration pair in the background space during \(\Pi\), then \(\ell-j\) is bounded by a polynomial in \(n_{\mathsf{c}_{j}},n_{\mathsf{d}_{j}}\) and \(\mathsf{K}\)._
Proof.: Let \(\mathsf{h}_{j}=\langle\mathsf{c}_{j},\mathsf{d}_{j}\rangle\) be the first configuration pair in the background space during the run \(\Pi\), then from Lemma 33, either \(\operatorname{dist}_{\mathcal{A}_{1}}(\mathsf{c}_{j})=\operatorname{dist}_{ \mathcal{A}_{2}}(\mathsf{d}_{j})=\infty\) or \(\operatorname{dist}_{\mathcal{A}_{1}}(\mathsf{c}_{j})\neq\operatorname{ dist}_{\mathcal{A}_{2}}(\mathsf{d}_{j})\). We separately consider the two cases.
_Case-1, \(\operatorname{dist}_{\mathcal{A}_{1}}(\mathsf{c}_{j})=\operatorname{dist}_{\mathcal{A}_{2}}(\mathsf{d}_{j})=\infty\)_: then we prove that the remaining length of the witness from \(\langle\mathsf{c}_{j},\mathsf{d}_{j}\rangle\) is bounded by \(2\mathsf{K}^{2}\). Assume for contradiction that this is not the case, so that \(\mathsf{c}_{j}\equiv_{2\mathsf{K}^{2}}\mathsf{d}_{j}\) but \(\mathsf{c}_{j}\not\equiv\mathsf{d}_{j}\). Let \(v\in\Sigma^{>2\mathsf{K}^{2}}\) be a shortest word which distinguishes \(\mathtt{c}_{j}\) and \(\mathtt{d}_{j}\). Therefore, there exists a prefix of \(v\), \(u\in\Sigma^{|v|-2\mathsf{K}^{2}}\), and \(i=\ell-2\mathsf{K}^{2}\) such that \(\langle\mathtt{c}_{j},\mathtt{d}_{j}\rangle\xrightarrow{u}\langle\mathtt{c}_{i},\mathtt{d}_{i}\rangle\) and \(\mathtt{c}_{i}\not\equiv_{2\mathsf{K}^{2}}\mathtt{d}_{i}\). Since \(v\) is a minimal witness, \(\mathtt{c}_{i}\equiv_{2\mathsf{K}^{2}-1}\mathtt{d}_{i}\) and \(\mathtt{c}_{i}\not\equiv_{2\mathsf{K}^{2}}\mathtt{d}_{i}\). Since \(\operatorname{dist}_{\mathcal{A}_{1}}(\mathtt{c}_{j})=\operatorname{dist}_{\mathcal{A}_{2}}(\mathtt{d}_{j})=\infty\), there exist configurations \(\bar{\mathtt{c}}_{i}\) and \(\bar{\mathtt{d}}_{i}\) in the underlying automaton \(\mathcal{B}\) such that \(\mathtt{c}_{i}\sim_{2\mathsf{K}^{2}}\bar{\mathtt{c}}_{i}\) and \(\mathtt{d}_{i}\sim_{2\mathsf{K}^{2}}\bar{\mathtt{d}}_{i}\). Since \(\mathtt{c}_{i}\equiv_{2\mathsf{K}^{2}-1}\mathtt{d}_{i}\), it follows that \(\bar{\mathtt{c}}_{i}\sim_{2\mathsf{K}^{2}-1}\bar{\mathtt{d}}_{i}\). From the equivalence result of weighted automata, we know that if two configurations of a weighted automaton with \(k\) states are non-equivalent, then there is a word of length less than \(k\) which distinguishes them. Therefore, this is sufficient to prove that the underlying weighted automata with \(\bar{\mathtt{c}}_{i}\) and \(\bar{\mathtt{d}}_{i}\) as initial distributions are equivalent, and thus \(\bar{\mathtt{c}}_{i}\sim_{2\mathsf{K}^{2}}\bar{\mathtt{d}}_{i}\). This allows us to deduce that \(\mathtt{c}_{i}\equiv_{2\mathsf{K}^{2}}\mathtt{d}_{i}\), which is a contradiction. Therefore, the remaining length of the witness from \(\langle\mathtt{c}_{j},\mathtt{d}_{j}\rangle\) is bounded by \(2\mathsf{K}^{2}\).
_Case-2, \(\operatorname{dist}_{\mathcal{A}_{1}}(\mathtt{c}_{j})\neq\operatorname{dist}_{ \mathcal{A}_{2}}(\mathtt{d}_{j})\)_: Without loss of generality, we suppose \(\operatorname{dist}_{\mathcal{A}_{1}}(\mathtt{c}_{j})>\operatorname{dist}_{ \mathcal{A}_{2}}(\mathtt{d}_{j})\). By definition of \(\operatorname{dist}_{\mathcal{A}_{2}}\), there exists \(u\in\Sigma^{\operatorname{dist}_{\mathcal{A}_{2}}(\mathtt{d}_{j})}\), \(i>j\) and a configuration \(\mathtt{\bar{c}}\) of the underlying automaton \(\mathcal{B}\) such that \(\mathtt{c}_{j}\xrightarrow{u}\mathtt{c}_{i}\), \(\mathtt{d}_{j}\xrightarrow{u}\mathtt{d}_{i}\), \(\mathtt{c}_{i}\sim_{2\mathsf{K}^{2}}\bar{\mathtt{c}}_{i}\) and \(\mathtt{d}_{i}\not\sim_{2\mathsf{K}^{2}}\bar{\mathtt{c}}_{i}\). Therefore \(\mathtt{c}_{i}\not\equiv_{2\mathsf{K}^{2}}\mathtt{d}_{i}\). By definition, there exists \(v\in\Sigma^{\leq 2\mathsf{K}^{2}}\) such that \(f_{\mathcal{A}_{1}}(v,\mathtt{c}_{i})\neq f_{\mathcal{A}_{2}}(v,\mathtt{d}_{i})\) and hence \(f_{\mathcal{A}_{1}}(uv,\mathtt{c}_{j})\neq f_{\mathcal{A}_{2}}(uv,\mathtt{d}_ {j})\). As \(uv\in\Sigma^{\operatorname{dist}_{\mathcal{A}_{2}}(\mathtt{d}_{j})+2\mathsf{K }^{2}}\), we get that \(\mathtt{c}_{j}\not\equiv_{\operatorname{dist}_{\mathcal{A}_{2}}(\mathtt{d}_{j })+2\mathsf{K}^{2}}\mathtt{d}_{j}\). Therefore, there is \(w\in\Sigma^{\leq\min\{\operatorname{dist}_{\mathcal{A}_{1}}(\mathtt{c}_{j}), \operatorname{dist}_{\mathcal{A}_{2}}(\mathtt{d}_{j})\}+2\mathsf{K}^{2}}\) that distinguishes \(\mathtt{c}_{j}\) and \(\mathtt{d}_{j}\).
Let \(\alpha,\beta\in[1,3\mathsf{K}^{7}]\) be co-prime. We say configuration pairs \(\langle\mathtt{c},\mathtt{d}\rangle\) and \(\langle\mathtt{e},\mathtt{f}\rangle\) are \(\alpha\)-\(\beta\)_related_ if \(\mathtt{p}_{\mathtt{c}}=p_{\mathtt{e}}\), \(p_{\mathtt{d}}=p_{\mathtt{f}}\) and \(\alpha\cdot n_{\mathtt{c}}-\beta\cdot n_{\mathtt{d}}=\alpha\cdot n_{\mathtt{e} }-\beta\cdot n_{\mathtt{f}}\). Roughly speaking, two configuration pairs are \(\alpha\)-\(\beta\) related if they have the same state pairs and lie on a line with slope \(\frac{\alpha}{\beta}\). An \(\alpha\)-\(\beta\)_repetition_ is a run \(\bar{\pi}_{1}=c_{i}\tau_{i}c_{i+1}\tau_{i+1}\cdots\tau_{j-1}c_{j}\) that lies inside a belt with slope \(\frac{\alpha}{\beta}\) such that \(c_{i}\) and \(c_{j}\) are \(\alpha\)-\(\beta\) related. The following lemma bounds the counter values of the first configuration in the background space, if it exists, during the run \(\Pi\).
**Lemma 35**.: _If \(\mathtt{h}_{j}\) is the first background point in \(\Pi\) then, counter values of \(\mathtt{h}_{j}\) are less than \(\mathsf{K}^{5}\cdot 42\mathsf{K}^{14}\)._
Proof.: Let \(\mathtt{h}_{j}\) be the first point in the background space during the run \(\Pi\). Assume for contradiction that \(n_{\mathtt{c}_{j}}\) is greater than \(\mathsf{K}^{5}\cdot 42\mathsf{K}^{14}\). Let \(\Pi=\mathtt{h}_{0}\tau_{0}\cdots\mathtt{h}_{j-1}\tau_{j-1}\mathtt{h}_{j} \cdots\mathtt{h}_{\ell}\) be a run of a minimal witness. Since \(\mathtt{h}_{j}\) is the first point in the background space
in this run and \(n_{\mathfrak{c}_{j}}>\mathsf{K}^{5}\cdot 42\mathsf{K}^{14}\), there exists \(0<i<j\) such that the sub-run \(\Pi_{b}=\mathtt{h}_{i}\tau_{i}\mathtt{h}_{i+1}\cdots\tau_{j-2}\mathtt{h}_{j-1}\) lies inside a belt \(B\) with slope \(\frac{\alpha}{\beta}\) for some \(\alpha,\beta\in[1,3\mathsf{K}^{7}]\). Since we are looking at the run of a minimal witness, from Lemma 33 either \(\mathfrak{c}_{j}\not\equiv_{2\mathsf{K}^{2}}\mathtt{d}_{j}\) or \(\operatorname{dist}(\mathfrak{c}_{j})\neq\operatorname{dist}(\mathfrak{d}_{j})\). We separately consider the two cases.
_Case-1:_\(\operatorname{dist}_{\mathcal{A}_{1}}(\mathfrak{c}_{j})\neq\operatorname{ dist}_{\mathcal{A}_{2}}(\mathtt{d}_{j})\): Without loss of generality, let us assume \(\operatorname{dist}_{\mathcal{A}_{1}}(\mathfrak{c}_{j})<\operatorname{dist}_{ \mathcal{A}_{2}}(\mathtt{d}_{j})\). Therefore there exists \(t\in\mathbb{N}\) with \(j<t\leq\ell\) and configuration pair \(\mathtt{h}_{t}\) such that \(m=n_{\mathfrak{c}_{t}}<2\mathsf{K}^{2}\), \(p=p_{\mathfrak{c}_{t}}\) and \(\mathbf{x}_{\mathfrak{c}_{t}}\in\overline{\mathcal{W}}_{1}^{p,m}\). We show that we can pump some portion out from \(\Pi_{b}\) to reach a configuration in the background space with unequal distance and smaller counter values.
Since \(n_{\mathfrak{c}_{j}}>\mathsf{K}^{5}\cdot 42\mathsf{K}^{14}\), by the Pigeonhole principle, there exist indices \(i_{0}<i_{1}<i_{2}<\cdots<i_{\mathsf{K}^{2}}<i_{0}^{\prime}<i_{1}^{\prime}<i_{2}^{\prime}<\cdots<i_{\mathsf{K}^{2}}^{\prime}\) such that for all \(r\in[1,\mathsf{K}^{2}]\), (1) \(\mathtt{h}_{i_{r-1}}\) and \(\mathtt{h}_{i_{r}}\) are \(\alpha\)-\(\beta\) related and lie in belt \(B\), (2) \(n_{\mathfrak{c}_{i_{r-1}}}<n_{\mathfrak{c}_{i_{r}}}=n_{\mathfrak{c}_{i_{r}^{\prime}}}\), (3) \(p_{\mathfrak{c}_{i_{r}^{\prime}}}=p_{\mathfrak{c}_{i_{r-1}^{\prime}}}\), (4) for all \(t\in\mathbb{N}\) with \(i_{r}<t<j\), \(n_{\mathfrak{c}_{t}}>n_{\mathfrak{c}_{i_{r}}}\), and (5) for all \(t\in\mathbb{N}\) with \(j<t<i_{r}^{\prime}\), \(n_{\mathfrak{c}_{t}}<n_{\mathfrak{c}_{i_{r}^{\prime}}}\).
For \(r\in[0,\mathsf{K}^{2}]\) let \(\mathbb{A}_{r}\in\mathcal{F}^{\mathsf{K}\times\mathsf{K}}\) denote the matrix such that \(\mathbf{x}_{\mathfrak{c}_{i_{r}}}\mathbb{A}_{r}=\mathbf{x}_{\mathfrak{c}_{i_{r}^{\prime}}}\) and \(\mathbb{B}_{r}\in\mathcal{F}^{\mathsf{K}\times\mathsf{K}}\) denote the matrix such that \(\mathbf{x}_{\mathfrak{c}_{i_{r}^{\prime}}}\mathbb{B}_{r}=\mathbf{x}_{\mathfrak{c}_{t}}\in\overline{\mathcal{W}}_{1}^{p,m}\). Therefore for all \(r\in[0,\mathsf{K}^{2}]\), we have \(\mathbf{x}_{\mathfrak{c}_{i_{r}}}\mathbb{A}_{r}\mathbb{B}_{r}\in\overline{\mathcal{W}}_{1}^{p,m}\). From Lemma 4, we have that there exist \(s,r\in[0,\mathsf{K}^{2}]\) with \(s<r\) such that \(\mathbf{x}_{\mathfrak{c}_{i_{s}}}\mathbb{A}_{r}\mathbb{B}_{s}\in\overline{\mathcal{W}}_{1}^{p,m}\). Consider the sequence of transitions \(T^{\prime}=\tau_{0},\cdots,\tau_{i_{s}-1}\tau_{i_{r}},\cdots,\tau_{j-1}\) and let \(w=\mathtt{word}(T^{\prime})\). Let \(\mathtt{h}_{j^{\prime}}\) be the configuration pair such that \(\mathtt{h}_{0}\xrightarrow{w}\mathtt{h}_{j^{\prime}}\). The pair \(\mathtt{h}_{j^{\prime}}\) lies in the background space with unequal distances, and its counter values are strictly smaller than those of \(\mathtt{h}_{j}\); extending \(w\) by a distinguishing word from \(\mathtt{h}_{j^{\prime}}\) gives a witness shorter than \(z\), contradicting its minimality.

_Case-2, \(\mathfrak{c}_{j}\not\equiv_{2\mathsf{K}^{2}}\mathtt{d}_{j}\)_: Since \(n_{\mathfrak{c}_{j}}>\mathsf{K}^{5}\cdot 42\mathsf{K}^{14}\), by the Pigeonhole principle, there exist indices \(i_{0}<i_{1}<\cdots<i_{\mathsf{K}^{2}}\) such that for all \(r\in\)
\([1,\mathsf{K}^{2}]\), \(\mathtt{h}_{i_{r-1}}\) and \(\mathtt{h}_{i_{r}}\) are \(\alpha\)-\(\beta\) related with \(n_{\mathfrak{c}_{i_{r-1}}}<n_{\mathfrak{c}_{i_{r}}}\) and for all \(t\in\mathbb{N}\) with \(i_{r}<t<j\), \(n_{\mathfrak{c}_{t}}>n_{\mathfrak{c}_{i_{r}}}\).
Since it is the run of a minimal witness, we know that there exists \(\mathbb{A}\in\mathcal{F}^{2\mathsf{K}\times 2\mathsf{K}}\) such that \(\mathtt{x}_{j-1}\mathbb{A}\boldsymbol{\eta}^{\top}\neq 0\). Consider the vector space \(\mathcal{U}=\{\mathtt{y}\in\mathcal{F}^{2\mathsf{K}}\mid\mathtt{y}\mathbb{A}\boldsymbol{\eta}^{\top}=0\}\). For \(r\in[0,\mathsf{K}^{2}]\), let \(\mathbb{A}_{r}\) denote the matrices such that \(\mathtt{x}_{i_{r}}\mathbb{A}_{r}=\mathtt{x}_{j-1}\in\overline{\mathcal{U}}\). Since \(\mathtt{x}_{i_{r}}\mathbb{A}_{r}\in\overline{\mathcal{U}}\) for all \(r\in[0,\mathsf{K}^{2}]\), from Lemma 4, we get that there exist \(r^{\prime},r\in[0,\mathsf{K}^{2}]\) with \(r^{\prime}<r\) such that \(\mathtt{x}_{\mathfrak{c}_{i_{r^{\prime}}}}\mathbb{A}_{r}\in\overline{\mathcal{U}}\). The sequence of transitions \(\tau_{i_{r}+1}\cdots\tau_{\ell}\) can be taken from \(\mathtt{h}_{i_{r^{\prime}}}\) since the counter values always stay positive. Consider the sequence of transitions \(T^{\prime}=\tau_{0}\cdots\tau_{i_{r^{\prime}}}\tau_{i_{r}+1}\cdots\tau_{\ell}\) and let \(w=\mathtt{word}(T^{\prime})\). The word \(w\) is a shorter witness than \(z\) and contradicts its minimality.
Finally, using the above lemmas, we prove that the counter values encountered during the run \(\Pi\) are polynomially bounded in \(\mathsf{K}\).

Proof of Lemma 24.: Consider the run \(\Pi\). From Lemma 28, Lemma 29 and Lemma 35, we get that the counter values of configuration pairs inside the belt space during this run are polynomially bounded in \(\mathsf{K}\). Therefore, if it exists, the first background point in \(\Pi\) has polynomially bounded counter values. From Lemma 34, the length of \(\Pi\) after the first background point is polynomially bounded in \(\mathsf{K}\). Since the initial space is already bounded by a polynomial in \(\mathsf{K}\), the maximum counter value in \(\Pi\) is polynomially bounded in \(\mathsf{K}\).
## 5. Regularity of ODCA is in \(\mathsf{P}\)
We say that a weighted odca \(\mathcal{A}=(C,\delta,p_{0};\ Q,\boldsymbol{\lambda},\Delta,\boldsymbol{ \eta})\) is regular if there is a weighted automaton \(\mathcal{B}\) that is equivalent to it. We look at the regularity problem - the problem of deciding whether a weighted odca is regular. We fix a weighted odca \(\mathcal{A}=(C,\delta,p_{0};\ Q,\boldsymbol{\lambda},\Delta,\boldsymbol{\eta})\) and use \(\mathsf{N}\) to denote \(|C|\cdot|Q|\).
The proof technique is adapted from the ideas developed by Bohm et al. [6] in the context of real-time oca. The crucial idea in proving regularity is to check for the existence of infinitely many equivalence classes. The proof relies on the notion of distance of configurations. The distance of a configuration is the length of a minimal word to be read to reach a configuration that does not have an \(\mathsf{N}\)-equivalent configuration in the underlying automaton. The challenge is to find infinitely many configurations reachable from the initial configuration, so that no two of them have the same distance. Our main contribution is in designing a "pumping"-like argument to show this.
**Theorem 36**.: _There is a polynomial time algorithm to decide whether a weighted odca is equivalent to some weighted automaton._
Recall the definition of \(\mathrm{U}(\mathcal{A})\) from Definition 30. We use \(\mathsf{c}\) to denote some configuration of \(\mathcal{A}\) and \(\bar{\mathsf{c}}\) to denote some configuration of \(\mathrm{U}(\mathcal{A})\). For a \(p\in C\) and \(m\in\mathbb{N}\), we define
\[\mathcal{W}^{p,m}=\{\mathbf{x}\in\mathcal{F}^{|Q|}|\exists\bar{\mathsf{c}}\in \mathcal{F}^{\mathsf{N}},\mathsf{c}=(\mathbf{x},p,m)\sim_{\mathsf{N}}\bar{ \mathsf{c}}\}\.\]
The set \(\overline{\mathcal{W}}^{p,m}\) is \(\mathcal{F}^{|Q|}\setminus\mathcal{W}^{p,m}\). The distance of a configuration \(\mathsf{c}\) (denoted by \(\mathrm{dist}(\mathsf{c})\)) is
\[\min\{|w|\mid\mathsf{c}\xrightarrow{w}(\mathbf{x},p,m)\ \exists p\in C,m< \mathsf{N},\,\text{and}\,\,\mathbf{x}\in\overline{\mathcal{W}}^{p,m}\}\.\]
The following lemma shows when \(\mathcal{A}\) is not regular.
**Lemma 37**.: _Let \(\mathsf{c}\) be an initial configuration of an odca\(\mathcal{A}\). Then the following are equivalent._
1. \(\mathcal{A}\) _is not regular._
2. _for all_ \(t\in\mathbb{N}\)_, there exist configurations_ \(\mathsf{d},\mathsf{e}\) _s.t._ \(n_{\mathsf{e}}<\mathsf{N},\mathsf{c}\xrightarrow{*}\mathsf{d}\xrightarrow{*}\mathsf{e}\)_,_ \(\mathsf{x}_{\mathsf{e}}\in\overline{\mathcal{W}}^{\mathsf{p}_{\mathsf{e}},n_{\mathsf{e}}}\) _and_ \(t<\operatorname{dist}(\mathsf{d})<\infty\)_._
3. _there exists configurations_ \(\mathsf{d},\mathsf{e}\) _and a run_ \(\mathsf{c}\xrightarrow{*}\mathsf{d}\xrightarrow{*}\mathsf{e}\) _s.t._ \(\mathsf{N}^{2}+\mathsf{N}\leq n_{\mathsf{d}}\leq 2\mathsf{N}^{2}+\mathsf{N}\)_,_ \(\mathsf{x}_{\mathsf{e}}\in\overline{\mathcal{W}}^{\mathsf{p}_{\mathsf{e}},n_{ \mathsf{e}}}\) _with_ \(n_{\mathsf{e}}<\mathsf{N}\)_._
Proof.: \(3\to 2\): Consider an arbitrary \(q\in C\), \(m<\mathsf{N}\) and vector space \(\mathcal{V}=\mathcal{W}^{q,m}\). Let us assume for contradiction the complement of Point 2. That is, there exists a \(t\in\mathbb{N}\) such that for all configurations \(\mathsf{d}^{\prime}\) where \(\mathsf{c}\xrightarrow{*}\mathsf{d}^{\prime}\xrightarrow{*}\overline{ \mathcal{V}}\times\{q\}\times\{m\}\), \(\operatorname{dist}(\mathsf{d}^{\prime})\leq t\). Note that for all \(\mathsf{d}^{\prime}\) where \(n_{\mathsf{d}^{\prime}}>\mathsf{N}\), \(\operatorname{dist}(\mathsf{d}^{\prime})\geq n_{\mathsf{d}^{\prime}}-\mathsf{N}\). Hence there exists an \(M\in\mathbb{N}\) such that for all \(\mathsf{d}^{\prime}\) where \(\mathsf{c}\xrightarrow{*}\mathsf{d}^{\prime}\xrightarrow{*}\overline{ \mathcal{V}}\times\{q\}\times\{m\}\), \(n_{\mathsf{d}^{\prime}}\leq M\).
Consider a configuration \(\mathsf{d}\) where \(n_{\mathsf{d}}>\mathsf{N}^{2}+\mathsf{N}\) and a run \(\mathsf{c}\xrightarrow{*}\mathsf{d}\xrightarrow{*}\overline{\mathcal{V}} \times\{q\}\times\{m\}\). Point 3 shows the existence of such a run. For contradiction, it suffices to show there exists a \(\mathsf{d}^{\prime}\) such that \(\mathsf{c}\xrightarrow{*}\mathsf{d}^{\prime}\xrightarrow{*}\overline{ \mathcal{V}}\times\{q\}\times\{m\}\) and \(n_{\mathsf{d}^{\prime}}>n_{\mathsf{d}}\).
Let \(m^{\prime}=|Q|^{2}+1\). Since \(n_{\mathsf{d}}>\mathsf{N}^{2}+\mathsf{N}\), by the Pigeonhole principle, there exists a set of indices \(X=\{i_{1},i_{2},\cdots,i_{m^{\prime}}\}\) such that for \(k,l\in[1,m^{\prime}]\) with \(k<l\), we have \(i_{k}<i_{l}\), and for all \(h,j\in X\), \(p_{\mathsf{c}_{h}}=p_{\mathsf{c}^{\prime}_{h}}=p_{\mathsf{c}_{j}}=p_{\mathsf{c}^{\prime}_{j}}\). Let \(u_{j},v_{j},w_{j}\) be words such that for all \(j\in X\), \(\mathsf{c}\xrightarrow{u_{j}}\mathsf{c}_{j}\xrightarrow{v_{j}}\mathsf{c}^{\prime}_{j}\xrightarrow{w_{j}}\mathsf{e}\). For all \(j\in X\), let matrices \(\mathbb{A}_{j}\) and \(\mathbb{B}_{j}\) be such that \(\mathbf{x}_{\mathsf{c}^{\prime}_{j}}=\mathbf{x}_{\mathsf{c}_{j}}\mathbb{A}_{j}\) and \(\mathbf{x}_{\mathsf{e}}=\mathbf{x}_{\mathsf{c}^{\prime}_{j}}\mathbb{B}_{j}\). We know that for all \(j\in X\), \(\mathbf{x}_{\mathsf{c}_{j}}\mathbb{A}_{j}\mathbb{B}_{j}\in\overline{\mathcal{V}}\). List the matrices \(\mathbb{A}_{i_{1}},\mathbb{A}_{i_{2}},\ldots,\mathbb{A}_{i_{m^{\prime}}}\) in sequence. From Lemma 4, it follows that there exist \(i,j\in X\) with \(i<j\) such that \(\mathbf{x}_{\mathsf{c}_{j}}\mathbb{A}_{i}\mathbb{B}_{j}\in\overline{\mathcal{V}}\). Consider the run \(\pi(u_{j}v_{i}w_{j},\mathsf{c})\). It contains a configuration \(\mathsf{d}^{\prime}\) where \(n_{\mathsf{d}^{\prime}}>n_{\mathsf{d}}\).
\(2\to 1\): Assume for contradiction that for all \(t\in\mathbb{N}\), there exist configurations \(\mathsf{d},\mathsf{e}\) such that \(\mathsf{c}\xrightarrow{*}\mathsf{d}\xrightarrow{*}\mathsf{e}\), \(\mathbf{x}_{\mathsf{e}}\in\overline{\mathcal{W}}^{\mathsf{p}_{\mathsf{e}},n_{\mathsf{e}}},n_{\mathsf{e}}<\mathsf{N}\) and \(t<\operatorname{dist}(\mathsf{d})<\infty\) but \(\mathcal{A}\) is regular. Let \(\mathcal{B}\) be the weighted automaton equivalent to \(\mathcal{A}\). We use \(|\mathcal{B}|\) to represent the number of states of \(\mathcal{B}\).
Let \(t_{1},t_{2},\ldots,t_{|\mathcal{B}|+1}\in\mathbb{N}\) be such that for \(i\in[1,|\mathcal{B}|]\), \(t_{i}<t_{i+1}\), and let \(\mathsf{d}_{t_{i}}\) be such that \(\mathsf{c}\xrightarrow{*}\mathsf{d}_{t_{i}}\xrightarrow{*}(\mathbf{x}_{i},p_{\mathsf{e}},n_{\mathsf{e}})\), \(\mathbf{x}_{i}\in\overline{\mathcal{W}}^{\mathsf{p}_{\mathsf{e}},n_{\mathsf{e}}}\) and \(t_{i}<\operatorname{dist}(\mathsf{d}_{t_{i}})<t_{i+1}<\infty\). Clearly \(\mathsf{d}_{t_{i}}\not\equiv\mathsf{d}_{t_{j}}\) for \(i\neq j\), and hence they correspond to different states of \(\mathcal{B}\). Since we can find more than \(|\mathcal{B}|\) pairwise non-equivalent configurations, it contradicts the assumption that \(\mathcal{B}\) is equivalent to \(\mathcal{A}\).
\(1\to 3\): We prove the contrapositive of the statement. Let us assume that there are no configurations \(\mathsf{d},\mathsf{e}\) and a run \(\mathsf{c}\xrightarrow{*}\mathsf{d}\xrightarrow{*}\mathsf{e}\) such that \(\mathsf{N}^{2}+\mathsf{N}\leq n_{\mathsf{d}}\leq 2\mathsf{N}^{2}+\mathsf{N}\), \(\mathbf{x}_{\mathsf{e}}\in\overline{\mathcal{W}}^{\mathsf{p}_{\mathsf{e}},n_{\mathsf{e}}}\) with \(n_{\mathsf{e}}<\mathsf{N}\). This implies that there does not exist a configuration \(\mathsf{d}^{\prime}\) such that \(n_{\mathsf{d}^{\prime}}>2\mathsf{N}^{2}\), \(\mathsf{c}\xrightarrow{*}\mathsf{d}^{\prime}\xrightarrow{*}(\mathbf{y},p_{\mathsf{e}},n_{\mathsf{e}})\) for some \(\mathbf{y}\in\overline{\mathcal{W}}^{\mathsf{p}_{\mathsf{e}},n_{\mathsf{e}}}\). Assume for contradiction that there is such a run; then there exists a portion in this run that can be "pumped down" to get a run \(\mathsf{c}\xrightarrow{*}\mathsf{d}^{\prime\prime}\xrightarrow{*}(\mathbf{y}^{\prime},p_{\mathsf{e}},n_{\mathsf{e}})\) for some configuration \(\mathsf{d}^{\prime\prime}\) such that \(\mathsf{N}^{2}+\mathsf{N}\leq n_{\mathsf{d}^{\prime\prime}}\leq 2\mathsf{N}^{2}+\mathsf{N}\) and \(\mathbf{y}^{\prime}\in\overline{\mathcal{W}}^{\mathsf{p}_{\mathsf{e}},n_{\mathsf{e}}}\). This is a contradiction. Therefore, all runs starting from a configuration with counter value greater than or equal to \(\mathsf{N}^{2}+\mathsf{N}\) "look" similar to a run on a weighted automaton. This allows us to simulate the runs of \(\mathcal{A}\) using a weighted automaton.
We now prove that the regularity problem for weighted odca is decidable in polynomial time.
Proof of Theorem 36.: Let \(\mathcal{A}\) be a weighted odca. Lemma 37 shows that if \(\mathcal{A}\) is not regular, then there are words \(u,v\in\Sigma^{*}\) and configurations \(\mathsf{d},\mathsf{e}\) such that there
is a run of the form \(\mathsf{c}\xrightarrow{u}\mathsf{d}\xrightarrow{v}\mathsf{e}\) such that \(\mathsf{N}^{2}+\mathsf{N}\leq n_{\mathsf{d}}\leq 2\mathsf{N}^{2}+\mathsf{N}\), \(\mathbf{x}_{\mathsf{e}}\in\overline{\mathcal{W}}^{p_{\mathsf{e}},n_{\mathsf{e}}}\) with \(n_{\mathsf{e}}<\mathsf{N}\). The existence of such words \(u\) and \(v\) can be decided in polynomial time since the minimal length of such a path, if it exists, is polynomially bounded in the number of states of the weighted odca by Lemma 20. This concludes the proof.
## 6. Covering
Let \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) be two uninitialised weighted odcas. We say \(\mathcal{A}_{2}\)_covers_\(\mathcal{A}_{1}\) if for all initial configurations \(\mathsf{c}_{0}\) of \(\mathcal{A}_{1}\) there exists an initial configuration \(\mathsf{d}_{0}\) of \(\mathcal{A}_{2}\) such that \(\mathcal{A}_{1}\langle\mathsf{c}_{0}\rangle\) and \(\mathcal{A}_{2}\langle\mathsf{d}_{0}\rangle\) are equivalent. We say \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\) are _coverable equivalent_ if \(\mathcal{A}_{1}\) covers \(\mathcal{A}_{2}\) and \(\mathcal{A}_{2}\) covers \(\mathcal{A}_{1}\). We show that the covering and coverable equivalence problems for uninitialised weighted odcas are decidable in polynomial time. The proof relies on the algorithm to check the equivalence of two weighted odcas and is given below.
**Theorem 38**.: _Covering and coverable equivalence problems of uninitialised weighted odcas are in \(\mathsf{P}\)._
Proof.: We fix two uninitialised weighted odcas\(\mathcal{A}_{1}=(C_{1},\delta_{1};\ Q_{1},\Delta_{1},\boldsymbol{\eta}_{1})\) and \(\mathcal{A}_{2}=(C_{2},\delta_{2};\ Q_{2},\Delta_{2},\boldsymbol{\eta}_{2})\) for this section. Without loss of generality, assume \(\mathsf{K}=|C_{1}|=|Q_{1}|=|C_{2}|=|Q_{2}|\). For \(i\in[1,\mathsf{K}]\) we define the vector \(\mathbf{e}_{i}\in\mathcal{F}^{\mathsf{K}}\) as follows:
\[\mathbf{e}_{i}[j]=\begin{cases}1,\text{ if }i=j\\ 0,\text{ otherwise}\end{cases}\]
For \(j\in[1,\mathsf{K}]\), \(q\in C_{1}\), we use \(\mathtt{h}_{j,q}\) to denote the configuration \((\mathbf{e}_{j},q,0)\) of \(\mathcal{A}_{1}\) and for \(i\in[1,\mathsf{K}]\), \(p\in C_{2}\), we use \(\mathsf{g}_{i,p}\) to denote the configuration \((\mathbf{e}_{i},p,0)\) of \(\mathcal{A}_{2}\).
**Claim 1**.: _There is a polynomial time algorithm to decide whether \(\mathcal{A}_{2}\) covers \(\mathcal{A}_{1}\langle\mathtt{h}_{j,q}\rangle\) for any \(j\in[1,\mathsf{K}]\) and \(q\in C_{1}\)._
Proof: First, we check, in polynomial time (equivalence with a zero machine), whether \(\mathcal{A}_{1}\langle\mathtt{h}_{j,q}\rangle\) accepts all words with weight \(0\in\mathcal{F}\). If that is the case, then \(\mathcal{A}_{1}\langle\mathtt{h}_{j,q}\rangle\) and \(\mathcal{A}_{2}\langle\mathsf{g}_{0}\rangle\) are equivalent for the configuration \(\mathsf{g}_{0}=(\{0\}^{\mathsf{K}},p,0)\), for any \(p\in C_{2}\). Otherwise, there is some word \(w_{0}\) accepted by \(\mathcal{A}_{1}\langle\mathtt{h}_{j,q}\rangle\) with non-zero weight \(s\) (returned by the previous equivalence check). Without loss of generality, we consider the smallest one, whose size is polynomial in \(\mathsf{K}\).
We pick a \(p\in C_{2}\) and check whether there exists an initial distribution from counter state \(p\) that makes the two machines equivalent. Assume that such an initial distribution exists and for all \(i\in[1,\mathsf{K}]\), let \(\alpha_{i}\) denote the initial weight on state \(q_{i}\in Q_{2}\). We use \(\boldsymbol{\alpha}\) to denote the resultant initial distribution. We initialise an empty set \(B\) to store a system of linear equations.
The following steps will be repeated at most \(\mathsf{K}\) times to check the existence of an initial distribution with initial state \(p\in C_{2}\). Let \(w\) be the counter-example returned by the equivalence query in the previous step. For all \(i\in[1,\mathsf{K}]\), we compute \(f_{\mathcal{A}_{2}\langle\mathsf{g}_{i,p}\rangle}(w)\). We add the linear equation \(\sum_{i=1}^{\mathsf{K}}\alpha_{i}\cdot f_{\mathcal{A}_{2}\langle\mathsf{g}_{i, p}\rangle}(w)=f_{\mathcal{A}_{1}\langle\mathtt{h}_{j,q}\rangle}(w)\) to \(B\) and compute values for \(\alpha_{i}\), \(i\in[1,\mathsf{K}]\), such that it satisfies the system of linear equations in \(B\). We check whether \(\mathcal{A}_{1}\langle\mathtt{h}_{j,q}\rangle\) and \(\mathcal{A}_{2}\langle(\alpha,p,0)\rangle\) are equivalent or not. If they are not equivalent, we get a new counter example that distinguishes them. Now we repeat the procedure to compute a new initial distribution.
Note that the above procedure is executed at most \(\mathsf{K}\) times to find an initial distribution if it exists. This is because we can find only \(\mathsf{K}\) many linearly independent linear equations in \(\mathsf{K}\) variables. Suppose the above procedure fails to find an initial distribution for which the machines are equivalent. In that case, there is an initial distribution of \(\mathcal{A}_{1}\) with initial counter state \(q\), for which \(\mathcal{A}_{2}\) with initial counter state \(p\) does not have an equivalent initial distribution. We now pick a different counter state of \(C_{2}\) and repeat the process until we find a \(p\in C_{2}\) for which the algorithm finds an equivalent initial distribution. If for all \(p\in C_{2}\), the algorithm returns false, then \(\mathcal{A}_{2}\) does not cover \(\mathcal{A}_{1}\langle\mathsf{h}_{j,q}\rangle\). \(\square_{Claim:1}\)
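A compact sketch of the search loop of Claim 1, for one basis configuration \(\mathtt{h}_{j,q}\) and one candidate counter state \(p\), is given below. The helpers `counterexample` (an equivalence check that returns a distinguishing word or `None`), `weight` (the weight of a word from a given configuration) and `basis` (the unit vector \(\mathbf{e}_{i}\)) are assumed oracles introduced only for this sketch, and field arithmetic is approximated with floats:

```python
import numpy as np

def initial_distribution_from(p, A1, h_jq, A2, K):
    """Search for alpha such that A1<h_jq> and A2<(alpha, p, 0)> are equivalent."""
    rows, rhs = [], []                  # the growing system of linear equations B
    alpha = np.zeros(K)                 # current candidate initial distribution
    while True:
        w = counterexample(A1, h_jq, A2, (alpha, p, 0))    # assumed equivalence oracle
        if w is None:
            return alpha                # equivalent: alpha is the required distribution
        if len(rows) == K:              # at most K independent equations exist
            return None                 # no distribution from counter state p works
        # add the equation  sum_i alpha_i * f_{A2<(e_i, p, 0)>}(w) = f_{A1<h_jq>}(w)
        rows.append([weight(A2, (basis(i, K), p, 0), w) for i in range(K)])
        rhs.append(weight(A1, h_jq, w))
        alpha = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
```

Running this search for every basis configuration \(\mathtt{h}_{j,q}\) and every \(p\in C_{2}\) yields the covering procedure described next.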
First, we show the existence of a polynomial time procedure to check whether \(\mathcal{A}_{2}\) covers \(\mathcal{A}_{1}\). For all \(j\in[1,\mathsf{K}]\), we check whether there exists an initial state \(p\in C_{2}\) such that \(\mathcal{A}_{2}\) with initial counter state \(p\) has an initial distribution that makes it equivalent to \(\mathcal{A}_{1}\langle\mathsf{h}_{j,q}\rangle\) using Claim 1. If we fail to find such a state in \(C_{2}\) then we return false. We repeat this procedure for all \(q\in C_{1}\). If for all \(q\in C_{1}\) there exists a \(p\in C_{2}\) such that \(\mathcal{A}_{2}\) with initial counter state \(p\) has an initial distribution that makes it equivalent to \(\mathcal{A}_{1}\langle\mathsf{h}_{j,q}\rangle\) for all \(j\in[1,K]\), then we say that \(\mathcal{A}_{2}\) covers \(\mathcal{A}_{1}\) otherwise we say that \(\mathcal{A}_{2}\) does not cover \(\mathcal{A}_{1}\). Let us see why this is true. For simplifying the arguments we fix a \(q\in C_{1}\). Assume that for all \(j\in[1,\mathsf{K}]\), there exits \(p\in C_{2}\) such that \(\mathcal{A}_{1}\langle\mathsf{h}_{j,q}\rangle\) is equivalent to the configuration \((\mathbf{x}_{j,q},p,0)\) for some \(\mathbf{x}_{j,q}\in\mathcal{F}^{\mathsf{K}}\). Now, any initial configuration \((\boldsymbol{\lambda},q,0)\) of \(\mathcal{A}_{1}\) is equivalent to the configuration \((\sum_{j=1}^{\mathsf{K}}\boldsymbol{\lambda}[j]\cdot\mathbf{x}_{j,q},p,0)\) of \(\mathcal{A}_{2}\).
The coverable equivalence problem can now be solved by checking whether \(\mathcal{A}_{1}\) covers \(\mathcal{A}_{2}\) and \(\mathcal{A}_{2}\) covers \(\mathcal{A}_{1}\), which can be done in time polynomial in \(\mathsf{K}\). \(\square\)
## 7. Conclusion
We introduced a new model called odca. The equivalence problem for non-deterministic odcas is in PSPACE. This is in contrast to non-deterministic oca, where the equivalence problem is undecidable. We observe that undecidability is a consequence of non-determinism occurring in counter actions. In the case of weighted odcas, we show that the reachability, equivalence, regularity, and covering problems are in \(\mathsf{P}\).
The natural way to extend the work is to consider epsilon transitions in the odca. We conjecture that the equivalence, regularity, and covering problems will be polynomial time decidable. Another possible direction is to look at pushdown systems partitioned into a deterministic stack and a finite state machine. In this case, a non-deterministic model can be determinized (similar to Theorem 8). Even though all our algorithms are in polynomial time, they may not be 'practical'. Considerable effort is required to find faster algorithms. We also leave open questions on learning and approximate equivalence of odcas.
## Acknowledgment
The authors would like to thank Dr. Rahul C S, School of Mathematics and Computer Science, IIT Goa, for his valuable and intuitive suggestions which helped us in solving the binary Co-VS reachability problem. Sreejith would like to thank DST Matrics grant for the project "Probabilistic Pushdown Automata". |
2309.15605 | Insights into bubble droplet interactions in evaporating polymeric
droplets | Polymer droplets subjected to a heated environment have significance in
several fields ranging from spray drying and powder formation to surface
coating. In the present work, we investigate the evaporation of a high
viscoelastic modulus aqueous polymeric droplet in an acoustically levitated
environment. Depending on the laser irradiation intensity, we observe
nucleation of a bubble in the dilute regime of polymer concentration, contrary
to the previously observed bubble nucleation in a semi-dilute entangled regime
for low viscoelastic modulus polymer droplets. After the bubble nucleation, a
quasi steady bubble growth occurs depending on the laser irradiation intensity
and concentrations. Our scaling analysis reveals that bubble growth follows
Plesset-Zwick criteria independent of the viscoelastic properties of the
polymer solution. Further, we establish that the onset of bubble growth has an
inverse nonlinear dependence on the laser irradiation intensity. At high
concentrations and laser irradiation intensities, we report the expansion and
collapse of polymer membrane without rupture, indicating the formation of an
interfacial skin with significant strength. The droplet oscillations are
primarily driven by the presence of multiple bubbles and, to some extent, by
the rotational motion of the droplet. Finally, depending on the nature of
bubble growth, different types of precipitate form contrary to the different
modes of atomization observed in low viscoelastic modulus polymer droplets. | Gannena K S Raghuram, Durbar Roy, D Chaitanya Kumar Rao, Aloke Kumar, Saptarshi Basu | 2023-09-27T12:08:26Z | http://arxiv.org/abs/2309.15605v1 | #### Insights into bubble droplet interactions in evaporating polymeric droplets
###### Abstract
Polymer droplets subjected to a heated environment have significance in several fields ranging from spray drying and powder formation to surface coating. In the present work, we investigate the evaporation of a high viscoelastic modulus aqueous polymeric droplet in an acoustically levitated environment. Depending on the laser irradiation intensity, we observe nucleation of a bubble in the dilute regime of polymer concentration, contrary to the previously observed bubble nucleation in a semi-dilute entangled regime for low viscoelastic modulus polymer droplets. After the bubble nucleation, a quasi steady bubble growth occurs depending on the laser irradiation intensity and concentrations. Our scaling analysis reveals that bubble growth follows Plesset-Zwick criteria independent of the viscoelastic properties of the polymer solution. Further, we establish that the onset of bubble growth has an inverse nonlinear dependence on the laser irradiation intensity. At high concentrations and laser irradiation intensities, we report the expansion and collapse of polymer membrane without rupture, indicating the formation of an interfacial skin with significant strength. The droplet oscillations are primarily driven by the presence of multiple bubbles and, to some extent, by the rotational motion of the droplet. Finally, depending on the nature of bubble growth, different types of precipitate form contrary to the different modes of atomization observed in low viscoelastic modulus polymer droplets.
Keywords: Drops and bubbles
## 1 Introduction
Polymer droplet and thin polymer film evaporation continue to evoke scientific curiosity in various applications ranging from targeted drug delivery, thin films, and coatings to surface patterning (Wilms 2005; Pathak and Basu 2016a). Understanding the kinetics and dynamics of evaporating polymer droplets is crucial for practical applications. Polymer droplet evaporation involves complex events such as solvent evaporation, subsequent build-up of polymer concentration at the air-liquid interface, and precipitate formation (Littringer et al. 2012; Al Zaitone et al. 2020). Depending on the initial polymer concentration and solvent evaporation rate, the accumulation of solute (polymer) at the surface aids in forming a gel
type layer, also called the skin layer (Okuzono et al. 2006; Pauchard and Allain 2003b; Pauchard and Allain 2003c). Based on the properties of the polymer and drying kinetics of droplets, the final morphology of polymer residue can be in the form of a wrinkled pattern (Pauchard and Allain 2003b), buckled structure (Pauchard and Allain 2003c), smooth solid precipitate (Raghuram et al. 2021), or ring pattern (Raghuram et al. 2021). Investigations on the evaporating polymer droplet have been performed in a contact environment (hydrophilic substrates) under natural drying conditions (Baldwin and Fairhurst 2014; Mamalis et al. 2015; Baldwin et al. 2011; Baldwin et al. 2012; Pauchard and Allain 2003a). Pauchard et al.(Pauchard and Allain 2003a) revealed the skin layer formation near the vapor/drop interface in evaporating sessile droplets. It was demonstrated that as the enclosed liquid volume decreases, the skin layer deforms, leading to buckling instability in the droplet. Depending on the experimental conditions, different shape instabilities have been reported, from buckled structure to the wrinkled pattern on the droplet surface. Baldwin et al. (Baldwin and Fairhurst 2014; Baldwin et al. 2011; Baldwin et al. 2012) and Mamalis et al.(Mamalis et al. 2015) explored the influence of molecular weight and concentration on final deposit formation. Depending on the competing effect between advective polymer build-up and diffusive flux near the three-phase contact line, pillars and puddle-like deposits on glass surfaces have been observed.
The bubble dynamics in multi-component droplets through external heating are extensively reported across various experimental configurations (Mura et al. 2014; Rao et al. 2018; Pathak and Basu 2016b; Rao et al. 2017; Miglani et al. 2014; Antonov and Strizhak 2019; Restrepo-Cano et al. 2022). In particular, Rao et al.(Rao et al. 2018; Rao et al. 2017) studied bubble dynamics and breakup mechanisms dynamics in burning multi-component miscible droplets. Different modes of bubble-induced droplet shape oscillations were reported depending on the size of the bubble, volatility differential, and concentration of the components. It was shown that a significantly larger volatility difference leads to severe shape oscillations induced by the breakup of a large bubble, whereas lower volatility differential results in mild shape oscillations caused by the breakup of a relatively small bubble.
Most experimental studies have been performed primarily to understand the dynamics of evaporating polymer droplets in a contact environment (hydrophilic substrates), and the literature on the evaporation of isolated polymer droplets (non-contact environment) of broad viscoelastic natures is scarce. The contact-free environment is provided by a relatively simplistic methodology of acoustic levitation (Rao et al. 2022; Gannena et al. 2022), which allows one to accurately capture the droplet shape oscillations and short spatio-temporal instabilities at the vapor-liquid interface (Gonzalez Avila and Ohl 2016).
In the context of acoustic levitation, Rao et al. (Rao and Basu 2020; Rao et al. 2020) investigated the dynamics of levitated emulsion droplets under external radiative heating. They reported that droplet breakup is categorized into three types: breakup through bubble growth, sheet breakup, and catastrophic breakup, depending on the onset of vapor bubble nucleation. It is also shown that the size of secondary droplets depends on the mode of droplet breakup. In the case of nanoparticle-laden droplets, Pathak et al. (Pathak and Basu
2016b) studied how nanoparticles could affect the dynamics of fuel droplets under external radiative heating. During evaporation, the accumulation of nanoparticles through orthokinetic aggregation leads to the formation of nanoparticle aggregates. These aggregates act as nucleation sites leading to heterogeneous boiling inside the droplet and subsequent breakup of parent droplets.
Previously, we investigated the coupled effect of the skin layer and bubble in evaporating low viscoelastic modulus polymer (Polyacrylamide) droplets under a heated environment. During evaporation, bubble nucleation in the semi-dilute entangled regime of polymer concentration results in membrane growth, followed by its rupture at low to medium irradiation intensities. At high irradiation intensities, the PAM droplets undergo universally observed ligament-mediated and sheet breakup (Gannena et al. 2022). However, it is essential to understand how the bulk viscoelasticity of polymer droplets can affect the underlying bubble and droplet dynamics. The current study explores the nature of skin layer and bubble interaction on the droplet dynamics in evaporating high viscoelastic modulus polymer (PEO) droplets. Note that the motivation of the present theoretical framework is to provide the approximate scales for various physical quantities observed during the experiments and explore the physics of the phenomenon. The exact analytical or numerical solutions of the coupled governing equations of momentum, heat and mass transfer is outside the scope of the present study.
This paper is organized as follows. Section 2 details materials and methods involving polymer solution preparation and its properties (§ 2.1), and experimental methodology (§ 2.2). The results and discussion involve global observations (§ 3.1), evaporation (§ 3.2), steady bubble growth, membrane dynamics (§ 3.3), droplet shape oscillations and precipitate formation (§ 3.4). The conclusions of the present study are provided in § 4.
## 2 Materials and methods
### Polymer solution preparation and properties
Various concentrations of PEO (Sigma-Aldrich) solutions ranging from 0.06 to 2% (w/w) of molecular weight \(M_{W}\) of 4\(\times\)10\({}^{6}\) g mol\({}^{-1}\) are prepared by dissolving Polyethylene oxide (PEO) powder in DI water. PEO solutions are stirred at 600 rpm for 24 hours to ensure proper mixing. The preparation methodology of PAM solutions can be found in (Gannena et al. 2022). Throughout the current experimental study, the concentration of the polymer solution (\(c\)) is normalized with the overlap concentration (\(c^{*}\)). The value of the critical overlap concentration of PEO is \(c^{*}=0.071\%\) (\(w/w\)). It is obtained by applying the Flory relation \(c^{*}=\frac{1}{[\eta]}\), where \([\eta]=0.072M_{W}^{0.65}\) is obtained from the Mark-Houwink-Sakurada correlation (Tiratatandaja et al. 2006). The entanglement concentration for the current polymer is obtained as \(c_{e}=0.42\%\) w/w by using the relation \(c_{e}=6c^{*}\). The solutions having concentration ratios \(c/c^{*}<1\), \(1<c/c^{*}<c_{e}/c^{*}\) and \(c/c^{*}>c_{e}/c^{*}\) are in the dilute regime, the semi-dilute unentangled regime, and the semi-dilute entangled regime, respectively. More information on the definition of regimes of polymer concentrations and non-dimensionalization of concentrations for PAM can be found in (Gannena et al. 2022). To confirm the viscoelastic nature of PEO and PAM solutions, rheological tests are performed on a rheometer (Anton Paar, MCR702) with cone and plate geometry. The diameter and angle of the cone and plate are 50 mm and 1\({}^{\circ}\), respectively. Figure 1 shows the inherent viscoelastic nature of PAM and PEO bulk solutions. Here \(G^{\prime}\) and \(G^{\prime\prime}\) represent storage and loss modulus, respectively. It can be seen that the storage and loss modulus of the PAM solution are of \(O\left(10^{-2}-10^{-1}\right)\) Pa, whereas for PEO the storage and loss modulus are of \(O\left(10^{1}\right)\) Pa. The viscoelastic modulus of the polymer solutions depends on the entanglements present in the solution. A larger number of entanglements corresponds to a higher viscoelastic modulus of the polymer solution. The quantitative criterion governing the entanglements in each polymer solution is the entanglement density \(N_{e}\). It is given as
\[N_{e}\ =\ (M_{W}/M_{e})(c/c^{*}) \tag{1}\]
Here \(M_{W}\) and \(M_{e}\) represent the molecular weight and entanglement molecular weight of the polymer, respectively. The \(M_{e}\) values for PEO and PAM are 2000 g mol\({}^{-1}\) (Rubinstein and Colby 2003) and 9000-23000 g mol\({}^{-1}\) (Plastics Technology), respectively. Although \(M_{W}\) and \(c/c^{*}\) are in the same range for PAM and PEO, the lower value of \(M_{e}\) for PEO gives a higher value of \(N_{e}\) for PEO compared to PAM. However, the current study focuses on exploring the effect of laser heating in high viscoelastic modulus polymer droplets without dwelling too much on the rheological differences between PAM and PEO solutions.
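As a quick consistency check of the values quoted above, the sketch below evaluates the Flory/Mark-Houwink-Sakurada relations and Eq. (1); the \(0.072\) prefactor (in mL g\({}^{-1}\)), the \(0.65\) exponent, \(M_{W}\) and the \(M_{e}\) values are taken from the text, while the conversion to % w/w (solution density \(\approx 1\) g mL\({}^{-1}\)) and the use of the PEO \(M_{W}\) as a stand-in for PAM are assumptions made only for illustration.

```python
# Overlap and entanglement concentrations from c* = 1/[eta], [eta] = 0.072 * M_W^0.65,
# and entanglement densities from Eq. (1), N_e = (M_W / M_e)(c / c*).
M_w = 4e6                                   # molecular weight of PEO, g/mol
intrinsic_visc = 0.072 * M_w**0.65          # [eta] in mL/g
c_star = 100.0 / intrinsic_visc             # overlap concentration in % w/w (density ~1 g/mL)
c_e = 6.0 * c_star                          # entanglement concentration, % w/w
print(f"c* = {c_star:.3f} % w/w, c_e = {c_e:.2f} % w/w")       # ~0.071 and ~0.43

c_ratio = 2.8                               # an illustrative concentration ratio c/c*
for label, M_e in [("PEO", 2000.0), ("PAM (lower M_e bound)", 9000.0)]:
    print(label, "N_e =", (M_w / M_e) * c_ratio)               # PEO roughly 4.5x larger
```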
Figure 1: Behaviour of storage and loss modulus with shear strain. Here \(G^{\prime}\) and \(G^{\prime\prime}\) represent storage and loss modulus, respectively.
### Experimental methodology
Figure 2 depicts the experimental setup used in the current study. The droplets of PEO solutions comprising different concentrations are levitated using a single-axis acoustic levitator (Tec5) with 100 kHz frequency. The droplets are externally heated with a tunable continuous CO\({}_{2}\) laser (Synrad 48, wavelength \(\sim\)10.6 \(\upmu\)m, max power (P\({}_{\text{max}}\)) \(\sim\) 10 W) with a beam diameter of 3.5 mm. A high-speed camera (Photron SA5) and a high-speed laser for illumination (CAVILUX(r) Smart UHS, 640 nm) are used to capture the droplet evaporation and oscillation processes. The high-speed images are recorded at 10000 fps, and the spatial resolution of the recorded images is 6.7 \(\upmu\)m/pixel. The recorded grey scale images were contrast enhanced and converted to binary images. The droplet shape was then reconstructed using an edge detection methodology to obtain its maximum horizontal and vertical lengths \(D_{H}\) and \(D_{V}\), respectively. The equivalent diameter of the droplet is calculated using the relation, \(D=\sqrt[3]{D_{H}^{2}D_{V}}\). The above-mentioned measurement is performed using "Analyze particle" plugin in the "ImageJ (version 2.0)" software. The approximate diameter of the droplets used in the current study is \(0.95\pm 0.05\) mm. After evaporation, the precipitates are examined using a scanning electron microscope (SEM) (VEGA3, TESCAN) at EHT of 5KV, using a secondary electron detector. In the current study, the irradiation intensity from the laser is non-dimensionalized with the enthalpy of vaporization. More details on non-dimensionalization can be found in Gannena et al.(Gannena et al., 2022). An Infrared camera is used to measure the evolving droplet surface temperature with time. The IR camera (FLIR SC5200: pre-calibrated for a standard emissivity of 1 with an accuracy of \(\pm\)1 \({}^{\circ}\)C) is operated at 50 frames per second (fps) with a spatial resolution of 6.42 \(\upmu\)m/pixel. It has been reported that the emissivity for water is between 0.95 and 0.98 (Mikael'A, 2013; Wolfe and Zissis, 1978). The change in temperature due to the change in emissivity is 0.03 \({}^{\circ}\)C, which is assumed to be negligible. The captured images are processed using ALTAIR software (FLIR Systems, version 5.91.10.797) to extract the droplet temperature during the heating process. The temperature information is gained by defining a linear region of interest along the droplet diameter in each IR frame, and the maximum temperature on the surface of the droplet is calculated. The temperature at the droplet's core is anticipated to be higher than at the surface due to the volumetric nature of the absorption process (Abramzon and Sazhin, 2006). Further, the temperature at the interface is lower due to evaporative cooling effects.
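As a small illustration of the diameter extraction described above (the pixel extents below are hypothetical placeholders; only the 6.7 \(\upmu\)m/pixel scale and the relation \(D=\sqrt[3]{D_{H}^{2}D_{V}}\) come from the text):

```python
# Equivalent diameter from the horizontal and vertical extents of the binarized droplet image.
PIXEL_SIZE = 6.7e-3                 # mm per pixel (stated spatial resolution)
D_H_px, D_V_px = 142, 138           # hypothetical extents in pixels
D_H, D_V = D_H_px * PIXEL_SIZE, D_V_px * PIXEL_SIZE
D = (D_H**2 * D_V) ** (1.0 / 3.0)   # D = cube root of (D_H^2 * D_V)
print(f"D_H = {D_H:.3f} mm, D_V = {D_V:.3f} mm, D = {D:.3f} mm")   # ~0.94 mm
```

The result falls within the \(0.95\pm 0.05\) mm range quoted above.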
Figure 2: Schematic representing (a) Side view and (b) top view of the experimental setup. The droplet is levitated using a single-axis acoustic levitator and evaporated using a continuous CO\({}_{2}\) laser. The droplet evaporation and bubble dynamics are captured with a high-speed camera, and a pulsed laser light source provides the backlighting. The surface temperature of the droplet is captured using an IR camera. \(D_{H}\) and \(D_{V}\) are the maximum horizontal and vertical lengths of droplets, respectively.
## 3 Results and discussions
### Global observations
A global overview summarizing the interaction between a continuous laser and an acoustically levitated droplet for a range of polymer concentrations and irradiation intensity is shown in figure 3. For concentrations above the entangled regime (\(c/c^{*}=28.2\)) and at a high irradiation intensity \(I^{*}=3.5\), after a period of smooth evaporation (Phase A), we observe bubble growth (Phase B) followed by shape oscillations and precipitate formation (Phases C and D). The bubble growth observed in Phase B can be attributed to asymmetric membrane growth (Gannena et al., 2022), which occurs in the early stages of evaporation with growth scales of \(O\) (\(10^{-4}\)) \(s\). Phases C and D are dominant in the later stages of evaporation. Similar phases are observed at \(c/c^{*}=28.2\) for \(I^{*}=1.5\) and \(c/c^{*}=2.8\) for all the irradiation intensities. The observations indicate that the dynamics in phases B, C and D are
Figure 3: (a) Global observations of the droplet evaporation and bubble dynamics associated with PEO aqueous solutions. The influence of polymer concentration and laser irradiation intensity is shown. Here, O () symbol indicates the order of time scale of occurrence. The scale bar represents 1 mm. (b) Temporal variation of normalized diameter of polymer droplet at \(I^{*}=1.5\) (c) Power spectrum density of diameter of polymer droplet at \(I^{*}=1.5\).
significantly affected by bubble growth depending on irradiation intensity and polymer concentration. The observed bubble dynamics differ from the previously reported membrane development, rupture, and breakup in evaporating low viscoelastic modulus (PAM) droplets (Gannena et al., 2022). Figure 3(b) shows the variation of normalized diameter (\(D/D_{0}\)) with normalized time (\(t/t_{d}\)) at \(I^{*}=1.5\) for PAM and PEO. Here, \(t_{d}\) denotes the time scale for thermal diffusion and is defined as
\[t_{d}=\frac{R_{0}^{2}}{\alpha_{l}}=\frac{\rho_{l}c_{\mathrm{p}}R_{0}^{2}}{k} \tag{3.1}\]
Where \(R_{0}\) is the initial radius of the droplet, \(\alpha_{l}\) is the thermal diffusion coefficient, \(\rho_{l}\) is the density of liquid, \(c_{\mathrm{p}}\) is the specific heat capacity, and \(k\) is the thermal conductivity of the liquid. The interaction of a typical PAM droplet with an IR laser in an acoustically levitated field consists of smooth evaporation, nucleation of bubble, bubble expansion, rupture of viscoelastic membrane and subsequent fragmentation of the polymeric droplet through various pathways (see inset figure 3(b)). The dynamics is significantly different from the PEO droplets. The regression data for \(c/c^{*}=28.2\) and \(I^{*}=1.5\) (PEO droplet) encapsulates phenomena like droplet evaporation, bubble growth, and droplet rotation, which is visible due to the entrapped bubble. A power spectrum is obtained for the evaporating polymer droplet to understand whether we can decompose the evaporation curve into its constituent physical components (see figure 3(c)). For evaporating PEO droplets at \(c/c^{*}=28.2\) and \(I^{*}=1.5\), we observe two frequency bands of \(O\) (\(10^{0}\)) Hz and \(O\) (\(10^{1}\)) Hz. These frequency bands indicate the evaporation and rotational frequencies of the droplet, respectively. Note that the high-frequency band of \(O\) (\(10^{1}\)) Hz is absent for evaporating PAM droplets, confirming that rotational dynamics are absent (see inset figure 3(c)). Furthermore, bubble nucleation occurs for \(c/c^{*}>10\) in a semi-dilute entangled regime for evaporating PAM droplets (Gannena et al., 2022). However, bubble nucleation occurs even in the dilute regime for evaporating PEO droplets. The essential physical mechanisms and their theoretical scales will be elucidated in subsequent sections.
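A minimal Python sketch of the thermal diffusion time scale in equation (3.1) and of the power-spectrum decomposition of the diameter signal is given below; the material properties and the synthetic diameter trace are assumed, illustrative values only, not the measured inputs.

```python
import numpy as np
from scipy.signal import welch

# Thermal diffusion time scale, eq. (3.1), using nominal water properties (assumed).
rho_l, c_p, k = 998.0, 4182.0, 0.60          # kg/m^3, J/(kg K), W/(m K)
R0 = 0.5 * 0.95e-3                            # m, from D0 ~ 0.95 mm
t_d = rho_l * c_p * R0**2 / k
print(f"t_d ~ {t_d:.2f} s")

# Power spectral density of a diameter signal D(t); here D is a synthetic placeholder
# sampled at fs (Hz) from the high-speed image sequence.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
D = 0.95e-3 * (1 - 0.05 * t / 10) + 5e-6 * np.sin(2 * np.pi * 20 * t)
f, Pxx = welch(D - D.mean(), fs=fs, nperseg=2048)
print(f[np.argmax(Pxx)])  # dominant frequency of the fluctuating part (rotation band here)
```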
The formation and subsequent dynamics of the vapor bubble during the evaporation of the high viscoelastic modulus polymeric droplet (PEO) is shown schematically in figure 4. As the polymeric droplet evaporates, the accumulated polymer along the interface forms an interconnected network, which undergoes a phase transition into a skin layer at a critical gelation polymeric concentration. This is followed by bubble nucleation as evaporation proceeds (Gannena et al. 2022). At low irradiation intensities, we observe the growth of a stationary bubble that protrudes from the droplet (see figure 4 and supplementary Movie2). At higher irradiation intensities, membrane expansion and collapse are observed (see figure 4 and supplementary Movie1). The membrane expansion and collapse without rupture signifies the creation of a skin layer of significantly higher strength compared to the previously observed membrane growth, rupture and breakup in evaporating polyacrylamide (PAM) droplets (Gannena et al. 2022). The quantitative criterion for the formation of the skin layer is given by the Peclet number (\(Pe\)). The Peclet number is defined as \(Pe=t_{dp}/t\), where \(t_{dp}\) represents the diffusion time scale of polymer molecules inside the droplet and \(t\) indicates the evaporation time scale of the droplet. Here, \(t_{dp}=D_{0}^{2}/D_{P}\), where \(D_{0}\) represents the initial diameter of the polymer droplet and \(D_{P}\) represents the self-diffusion coefficient of the polymer molecule. The self-diffusion coefficient of the polymer is defined as
\[D_{P}=\frac{K_{B}T_{R}}{6\pi\mu\varepsilon} \tag{3.2}\]
where correlation length is defined as
\[\varepsilon=R_{g}\left(\frac{c}{c^{*}}\right)^{\frac{\theta}{1-3\theta}} \tag{3.3}\]
Here \(K_{B}\), \(T_{R}\), and \(\mu\) denote the Boltzmann constant, room temperature, and dynamic viscosity of the solvent, respectively. The excluded volume exponent (\(\theta\)) is set to 0.588 (Raghuram et al. 2021). In the present experimental investigation, \(Pe\gg 1\) for all the irradiation intensities and
Figure 4: Illustration of skin layer formation and bubble dynamics in a typical evaporating polymeric droplet
it has been previously demonstrated that \(Pe\gg 1\) results in the creation of a skin layer (Gannena et al., 2022). The skin layer thickness \(h_{0}\) is calculated using the conservation of mass of polymer in the liquid droplet. It is defined as
\[h_{0}=0.5\left(D-\left(D_{0}\left(\frac{\beta^{3}\varphi_{g}-\varphi_{p}}{ \varphi_{g}\varphi_{g}-\varphi_{p}}\right)^{\!\!1/3}\right)\right) \tag{3.4}\]
where \(\varphi_{p}\) and \(\varphi_{g}\) represent the initial polymer mass fraction and the gelation mass fraction, respectively, and \(\beta\) indicates the ratio of polymer density to liquid density. \(\varphi_{g}\) is assumed to be approximately equal to 1, where the polymer concentration is the highest throughout the droplet. More details on equation (3.4) can be found in our previous work (Gannena et al., 2022). The skin layer thickness (\(h_{0}\)) is used for explaining the membrane dynamics (see § 3.3). Depending on the concentration regime, the bubble nucleation differs significantly for low viscoelastic modulus (PAM) and high viscoelastic modulus (PEO) droplets. Figure 5 compares high-speed images of evaporating PEO and PAM droplets for \(c/c^{*}=2.8\) and \(c/c^{*}=3.3\) (at \(I^{*}=2.2\)), respectively. Here, both the chosen concentrations fall inside the semi-dilute unentangled regime of polymer solutions. At \(c/c^{*}=3.3\), for a fluid with a low viscoelastic modulus (PAM), the droplet evaporates without bubble nucleation (figure 5(b)). However, for high viscoelastic modulus (PEO) droplets, after a period of evaporation, a bubble starts to grow at around \(t=1500\;ms\) (figure 5(a)). This can be further corroborated by the temporal variation of non-dimensional diameter (\(D/D_{o}\)) with non-dimensional time (\(t/t_{d}\)) (see figure 5(c)). The pronounced oscillatory behaviour in the evaporation curve for PEO at \(c/c^{*}=2.8\) and \(I^{*}=2.2\) indicates droplet rotation. However, the evaporation curve is smooth for PAM at \(c/c^{*}=3.3\). The entanglement density \(N_{e}\) increases more steeply for PEO than PAM (see figure 5(d)). PEO leads to more entanglements than PAM, even in the dilute and semi-dilute limits. This perhaps leads to skin layer formation in a much earlier concentration regime (dilute and semi-dilute unentangled regime) and subsequent bubble nucleation at a given irradiation intensity. See figure S3 for further evidence of bubble nucleation in dilute and semi-dilute unentangled regimes of polymer concentrations in PEO fluid droplets at a specific irradiation intensity.
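The Peclet-number estimate of equations (3.2)-(3.3) can be sketched numerically as follows; the radius of gyration, solvent viscosity, and evaporation time used here are assumed placeholder values, not the calibrated inputs of the present experiments.

```python
import numpy as np

kB = 1.380649e-23      # J/K, Boltzmann constant
T_R = 298.0            # K, room temperature
mu = 1.0e-3            # Pa s, solvent (water) viscosity
R_g = 50e-9            # m, radius of gyration of PEO (assumed placeholder)
theta = 0.588          # excluded volume exponent
D0 = 0.95e-3           # m, initial droplet diameter
c_by_cstar = 14.1
t_evap = 5.0           # s, representative evaporation time scale (assumed)

eps = R_g * c_by_cstar ** (theta / (1 - 3 * theta))   # correlation length, eq. (3.3)
D_P = kB * T_R / (6 * np.pi * mu * eps)               # polymer self-diffusion, eq. (3.2)
t_dp = D0**2 / D_P                                    # polymer diffusion time scale
Pe = t_dp / t_evap
print(f"Pe ~ {Pe:.1e}")   # >> 1, consistent with skin-layer formation
```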
The experimental temporal surface temperature evolution of a polymer droplet and the experimental evidence of bubble expansion on the illuminating face of the droplet (towards the laser direction) are provided through IR thermography. Figure 6 displays the time evolution of the surface temperature of evaporating PEO droplets under varying irradiation intensities and concentrations. At a concentration of \(c/c^{*}=14.1\) and irradiation intensities \(I^{*}\) ranging from
Figure 5: (a) High-speed snapshots of PEO droplet at \(c/c^{*}=2.8\) and \(I^{*}=2.2\) (b) High-speed snapshots of PAM droplet at \(c/c^{*}=3.3\) and \(I^{*}=2.2\). The scale bar indicates 1 mm. (c) Temporal variation of drop diameter at \(I^{*}=2.2\) (d) Variation of entanglement density with non-dimensional concentration.
Figure 6: Temporal evolution of the droplet surface temperature corresponding to (a) \(c/c^{*}=14.1\) (b) \(I^{*}=~{}3\). The dotted circle represents the bubble expansion after bubble nucleation. In figure (a), the inset represents high-speed snapshots of bubble expansion after bubble nucleation. The inset in figure (b) represents experimental IR images of evaporating polymer droplets at \(c/c^{*}=14.1\) and \(I^{*}=~{}3\).
0.7-3 (see figure 6(a)), the surface temperature of the polymer droplet rises with time, reaches a saturation limit, and then abruptly peaks. The peak temperature corroborates the bubble expansion close to the skin layer of the evaporating polymer droplet (see inset figure 6(a)). At higher irradiation intensities the surface temperature of the polymer droplet increases rapidly, whereas at lower irradiation intensities it increases more slowly. Also, the peak temperature is attained much earlier for higher irradiation intensities compared to lower irradiation intensities, indicating that bubble nucleation occurs significantly earlier in the droplet's lifetime (assuming a negligible time difference between nucleation and a discernible bubble).
At an irradiation intensity of \(I^{*}=3\), irrespective of the concentration (\(c/c^{*}\) ranging from 0.84-14.1), the initial increase in surface temperature of the polymer droplet remains nearly constant. However, the peak temperature is attained much more quickly for higher concentrations compared to lower concentrations, indicating that bubble nucleation occurs much earlier (see figure 6(b)). Further, evidence of bubble nucleation close to the illuminating face of the droplet is provided by the experimental IR images (see inset figure 6(b)). We can observe a high-temperature zone (\(>100\,^{\circ}\)C), further confirming bubble nucleation and expansion on the illuminating face.
Figure 7(a) shows the variation of normalized diameter (\(D/D_{0}\)) with normalized time (\(t/t_{d}\)) at different irradiation intensities for \(c/c^{*}=14.1\). As expected, the diameter reduction is fastest at the highest irradiation intensity and slowest at the lowest. The time-varying amplitude of the evaporation curve represents bubble growth, whereas the time-varying oscillations represent the droplet's rotational motion. The exact extraction of the bubble growth scales, their variation with irradiation intensity and concentration, and their theoretical comparison will be elucidated in § 3.3. With regard to different concentrations at a particular irradiation intensity (see supplementary figure S4), the diameter reduction remains the same, implying that the evaporation time scales remain similar irrespective of polymer concentration. This enables a comparison of theoretical and experimental evaporation time scales for a particular concentration \(c/c^{*}=14.1\) at different irradiation intensities (see
Figure 7: (a) Temporal evolution of droplet diameter at different laser irradiation intensities corresponding to \(c/c^{*}=14.1\) (b) Comparison of experimental and theoretical evaporation time scales at different irradiation intensities for \(c/c^{*}=14.1\).
figure 7(b)). The theoretical evaporation time scales can be established from a diffusive law proposed by Sobac et al. (2019). The differential equation for the evolving drop diameter can be written as
\[\frac{dD}{dt}=\frac{4}{D}\left(\frac{c_{g}d_{va}}{c_{l}}\right)\ln\left(\frac{1-X_{l}}{1-X_{\infty}}\right) \tag{3.5}\]
Using the initial condition \(D(t=0)=D_{0}\), the integration of the above equation gives
\[D(t)^{2}=D_{0}^{2}-8d_{va}\left(\frac{c_{g}}{c_{l}}\right)\ln\left(\frac{1-X_{\infty}}{1-X_{l}}\right)t \tag{3.6}\]
where, \(c_{g}\) represents gas molar concentration, \(d_{v\textit{a}}\) represents the diffusion coefficient of vapor in air, \(D\) represents the diameter of the drop, \(X_{l}\) and \(X_{\infty}\) represent the mole fraction of vapor in the gas phase at the interface and far field, respectively.
Using the slope of the above equation, an approximate theoretical evaporation time scale can be written as
\[t_{e,theoretical}\sim \frac{D_{0}^{2}}{8d_{va}\left(\frac{c_{g}}{c_{l}}\right)\ln\left(\frac{1-X_{\infty}}{1-X_{l}}\right)} \tag{3.7}\]
where \(X_{l}\) can be written as
\[X_{l}=\exp\left(\frac{-L^{*}}{R_{g}}\left(\frac{1}{T_{l}}-\frac{1}{T_{b}}\right)\right) \tag{3.8}\]
where \(T_{b}\) is the boiling temperature of the liquid and \(T_{l}\) is the interface temperature. \(L^{*}\) is the molar latent heat of vaporization of the liquid and \(R_{g}\) represents the universal gas constant. Here \(X_{l}\) is calculated using the saturation interface temperature obtained from infrared thermography. The experimental and theoretical evaporation time scales are in good agreement with each other, especially at higher irradiation intensities (see figure 7(b)). Note that the effects of acoustic streaming and skin layer formation will have an impact on evaporation time scales for \(t_{e}>t_{d}\). However, for \(t_{e}\sim t_{d}\), the evaporation time scales are governed by the diffusion time scales, perhaps indicating the close agreement of experimental and theoretical evaporation time scales at high irradiation intensities. Further, the characteristic frequency of evaporation is given by \(f_{e}\sim 1/t_{e,theoretical}\), which is of \(O(10^{0})\) Hz. It closely matches the lower frequency band observed in figure 3(c), confirming that it represents evaporation.
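The theoretical evaporation time scale of equations (3.7)-(3.8) can be evaluated with the short Python sketch below; all property values and the interface temperature are representative assumptions rather than the exact experimental inputs.

```python
import numpy as np

# Assumed property values for water vapour in air near the interface temperature.
D0 = 0.95e-3          # m, initial droplet diameter
d_va = 2.5e-5         # m^2/s, vapour-in-air diffusion coefficient
c_g = 101325 / (8.314 * 300.0)     # mol/m^3, gas molar concentration
c_l = 998.0 / 0.018                # mol/m^3, liquid water molar concentration
L_star = 40.65e3      # J/mol, molar latent heat of vaporisation
R_gas = 8.314         # J/(mol K), universal gas constant
T_b = 373.0           # K, boiling point
T_l = 350.0           # K, saturated interface temperature from IR thermography (assumed)
X_inf = 0.01          # far-field vapour mole fraction (assumed)

X_l = np.exp(-(L_star / R_gas) * (1.0 / T_l - 1.0 / T_b))                  # eq. (3.8)
t_e = D0**2 / (8 * d_va * (c_g / c_l) * np.log((1 - X_inf) / (1 - X_l)))   # eq. (3.7)
print(f"t_e,theoretical ~ {t_e:.1f} s, f_e ~ {1.0 / t_e:.2f} Hz")
```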
### 3.3 Bubble growth (Phase B)
The energy interaction between the aqueous polymer droplet and the infrared laser is electromagnetic in origin. The droplet-beam interaction can be understood based on the Mie-size parameter \(2\pi R_{0}/\lambda_{0}\) (Gannena et al. 2022). For small values of the Mie-size parameter, the source function (distribution of electromagnetic energy) throughout the droplet volume is uniform, corresponding to a uniform thermal energy evolution throughout the droplet. For large droplets, the Mie-size parameter increases, resulting in the source function developing peaks (hot spots), and the thermal energy evolution shows inhomogeneities. Different hydrodynamic and thermodynamic processes are observed depending on the internal thermal energy evolution inside the droplet; bubble nucleation is one such process. The aqueous polymer, being infrared opaque, interacts with the incident laser beam's electromagnetic field and absorbs most of the incident photons. The main energy interactions that occur during the droplet-beam interaction are radiative heating of the droplet, thermal diffusion throughout the droplet, and evaporative and diffusive cooling at the interface. However, the diffusive process occurring over the droplet length scale is typically slower than the radiative heating of the droplet. Therefore, at intermediate and high intensities (where the evaporation time is shorter than the thermal diffusive time scale), the diffusive cooling process is too slow, and the temperature at the hotspots keeps increasing until nucleation occurs once the skin layer forms at the droplet interface.
The average temperature of the droplet can be modelled using the energy equation with a volumetric source term. The volumetric source term represents the absorption process (Gannena et al. 2022). Droplets irradiated by lasers are typically in a metastable state. Therefore, the phase transition from liquid water to vapor does not occur at \(100\,^{\circ}\)C. In the case of rapid heating, the maximum temperature liquid water can achieve (i.e. the superheat limit)
Figure 8: (a) High-speed images of bubble growth for \(c/c^{*}=~{}14.1\) and \(I^{*}=0.7\) (b) Effect of irradiation intensity on bubble growth scales and its theoretical comparison (c) Effect of irradiation intensity on the onset time of bubble growth at different irradiation intensities and its theoretical comparison (d) Effect of polymer concentration on the bubble growth at \(I^{*}=0.7\).
preserving its liquid state is close to \(305\,^{\circ}\)C, beyond which spontaneous nucleation to the vapor phase begins. The droplet can hence remain liquid beyond the boiling point at standard atmospheric pressure. As the temperature inside the droplet increases beyond the boiling point, and once the skin layer forms, bubble nucleation occurs well before the superheat limit, while the liquid water is still in a metastable state. The bubble growth in a viscoelastic medium is governed by the modified Rayleigh-Plesset equation (Gaudron et al. 2015).
\[R_{b}\ddot{R_{b}}+\frac{3}{2}\dot{R_{b}}^{2}+\frac{4\nu_{l}\dot{R}_{b}}{R_{b}}+\frac{2\gamma}{\rho_{l}R_{b}}+\frac{E}{\rho_{l}}=\frac{p_{B}-p_{\infty}}{\rho_{l}} \tag{3.9}\]
where \(R_{b}\) is the bubble radius, \(\nu_{l}\) is the kinematic viscosity of the surrounding liquid around the bubble, \(\gamma\) is the surface tension at the vapor-liquid interface, \(\rho_{l}\) is the density of the liquid, and E is the elastic stress of the polymeric material. Usually, modelling the elastic stress for polymers mainly focuses on Maxwell-based models. However, systems that relax back to their original configuration are better modelled by Kelvin-Voigt polymeric models. Due to large deformations in polymeric systems, the infinitesimal strain assumption in most real-world scenarios is mainly invalid. This necessitates replacing the linear elasticity models with nonlinear finite strain elasticity models. Various nonlinear strain energy functions like neo-Hookean and Mooney-Rivlin approximations could be used. For the current experiments where the material relaxes back to its original state (PEO droplets, see supplementary Movie1 ) neo-Hookean models are much better. The elastic stress \(E\) therefore, can be written as
\[E=\frac{\eta}{2}\left[5-4\left(\frac{R_{b*}}{R_{b}}\right)-\left(\frac{R_{b*}} {R_{b}}\right)^{4}\right] \tag{3.10}\]
Notice that \(E\) is directly proportional to the shear modulus \(\eta\). \(R_{b*}\) is the bubble size at the instant of nucleation and is typically of the order of the radius of gyration of the polymer, i.e., \(O(\mathrm{nm})\).
In addition, for the bubble initiation phase, when the ratio \(R_{b*}/R_{b}\) is close to unity, the correction terms are essential and the elastic stress becomes time-varying and nonlinear. However, when the bubble has already grown to a size large enough that the ratio \(R_{b*}/R_{b}\) is much smaller than unity, the elastic stress does not depend on the bubble radius and becomes a constant. Therefore, the elastic term becomes important only during the initial stage of the nucleation process. Further, owing to the very high ratio of the driving pressure \(O\) (\(10^{5}\) Pa) compared to the shear modulus \(O\) (\(10^{1}\) Pa), the elastic term in the Rayleigh-Plesset equation can be neglected (for low shear modulus). The right-hand side of the Rayleigh-Plesset equation is the driving pressure difference across the bubble and is of the order of the atmospheric pressure. \(p_{B}\) is the pressure inside the bubble and \(p_{\infty}\) is the pressure in the liquid phase (polymeric droplet). For an initial bubble size of \(R_{b0}\gg R_{b*}\), the vapor pressure \(p_{\nu 0}\) inside the bubble is related to the bubble size \(R_{b0}\) through the ideal gas equation (assuming the vapor inside the bubble to be an ideal gas) given by
\[p_{\nu 0}R_{b0}^{3}\propto\rho R_{g}T_{\infty} \tag{3.11}\]
where \(\rho\) is the vapor density of water vapor, \(T_{\infty}\) is the temperature of the liquid phase far away from the bubble, and \(R_{g}\) is the gas constant. As the bubble expands, the bubble pressure
varies with its size and gets coupled to the bubble temperature \(T_{B}\) through the ideal gas law as given by
\[p_{B}R_{b}^{3}\propto\rho R_{g}T_{B} \tag{3.12}\]
Dividing equation (3.12) by equation (3.11), we have
\[p_{B}=p_{\nu 0}\left(\frac{T_{B}}{T_{\infty}}\right)\left(\frac{R_{b 0}}{R_{b}}\right)^{3} \tag{3.13}\]
The total pressure inside the bubble at any given time is given by
\[p_{B}(t)=p_{\nu}(T_{B})+p_{\nu 0}\left(\frac{T_{B}}{T_{\infty}}\right)\left( \frac{R_{b0}}{R_{b}}\right)^{3} \tag{3.14}\]
Using equation (3.14) in equation (3.9), the right-hand side of the Rayleigh Plesset equation can be split into three terms as shown below
\[\frac{p_{B}(t)-p_{\infty}}{\rho_{l}}=I+II+III \tag{3.15}\]
where
\[I=\frac{p_{\nu}(T_{\infty})-p_{\infty}}{\rho_{l}} \tag{3.16}\] \[II=\frac{p_{\nu}(T_{B})-p_{\nu}(T_{\infty})}{\rho_{l}}\] (3.17) \[III=\frac{p_{\nu 0}}{\rho_{l}}\left(\frac{T_{B}}{T_{\infty}} \right)\left(\frac{R_{b0}}{R_{b}}\right)^{3} \tag{3.18}\]
Term I denotes the driving term in the far field region (far away from the bubble), term II represents the thermal term, and term III indicates the pressure inside the bubble.
If the liquid temperature is known, term I can be evaluated.
When the temperature difference between the bubble and the liquid temperature farther from the bubble \((T_{B}-T_{\infty})\) is small, we can use Taylor series expansion to estimate term II. Keeping first-order quantities in \((T_{B}-T_{\infty})\), term II becomes
\[\frac{p_{\nu}(T_{B})-p_{\nu}(T_{\infty})}{\rho_{l}}=B(T_{B}-T_{ \infty}) \tag{3.19}\]
where the coefficient \(B\) could be computed from the Clausius-Clapeyron equation.
The Clausius-Clapeyron equation is given by
\[\frac{dp}{dT}=\frac{L}{T\nu} \tag{3.20}\]
where \(L\) is the latent heat of vaporization, \(T\) is the temperature of phase change, \(\nu\) is the specific volume. Integrating equation (3.20) and dividing by \(\rho_{l}\) we have
\[\frac{p_{\nu}(T_{B})-p_{\nu}(T_{\infty})}{\rho_{l}}=\frac{\rho_{ \nu}(T_{\infty})L(T_{\infty})}{\rho_{l}T_{\infty}}(T_{B}-T_{\infty}) \tag{3.21}\]
Comparing equation (3.21) with equation (3.19), we have
\[B=\frac{L}{\rho_{l}T_{\infty}}\rho_{\nu}(T_{\infty}) \tag{3.22}\]
During phase change at approximately atmospheric pressure, all the properties can be evaluated at the saturation temperature corresponding to atmospheric pressure (the boiling point). The scale for \((T_{B}-T_{\infty})\) is estimated using the thermal energy equations. In the current context, the liquid temperature is mainly governed by radiative heating at time scales smaller than the diffusive scales, as shown in our previous work, which is valid for most bubble growth processes in the present context (Gannena et al. 2022). The average liquid temperature is therefore given by
\[T_{l}=T_{0}+A\int_{0}^{t}G\left(R(t),\mu\right)dt \tag{3.23}\]
where
\[A=\frac{3\alpha I_{0}}{2\mu^{3}\rho_{l}c_{l}} \tag{3.24}\]
and
\[G(R(t),\mu)=\frac{(\mu R+(\mu R-1)e^{2\mu R}+1)e^{-2\mu R}}{R^{3}} \tag{3.25}\]
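A numerical sketch of equations (3.23)-(3.25) is given below; the absorbed fraction, intensity, and absorption coefficient are placeholder values chosen only to exercise the formulae (they are not the calibrated inputs of this study), and the droplet radius is taken constant for simplicity, so the printed magnitude is illustrative only.

```python
import numpy as np
from scipy.integrate import cumulative_trapezoid

# Placeholder parameters (assumed, not calibrated).
alpha_abs = 0.95      # absorbed fraction of the incident beam
I0 = 1.0e5            # W/m^2, incident laser intensity
mu = 8.0e4            # 1/m, absorption coefficient of water at 10.6 um
rho_l, c_l = 998.0, 4182.0
R0 = 0.5 * 0.95e-3    # m
T0 = 298.0            # K

A = 3 * alpha_abs * I0 / (2 * mu**3 * rho_l * c_l)          # eq. (3.24)

def G(R, mu):
    # Source-function factor, eq. (3.25)
    x = mu * R
    return (x + (x - 1.0) * np.exp(2 * x) + 1.0) * np.exp(-2 * x) / R**3

t = np.linspace(0, 2.0, 2001)            # s
R_t = np.full_like(t, R0)                # droplet radius history (constant here)
T_l = T0 + A * cumulative_trapezoid(G(R_t, mu), t, initial=0.0)   # eq. (3.23)
print(T_l[-1] - T0)   # temperature rise; not physically meaningful with these placeholders
```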
Using equation (3.23), \((T_{B}-T_{\infty})\) could be computed and can be approximated by the degree of superheat \((T_{l}-T_{b})=\Delta T\) where \(T_{b}\) is the boiling point of the liquid at atmospheric pressure. The energy balance at the bubble boundary \(r=R_{b}\) is given by
\[4\pi R_{b}^{2}L\rho_{\nu}(T_{B})\dot{R_{b}}=4\pi R_{b}^{2}k_{l}\left(\frac{\partial T}{\partial r}\right)_{r=R_{b}} \tag{3.26}\]
where \(k_{l}\) is the thermal conductivity of the liquid. The total energy equation is essentially nonlinear due to the bubble growth and the coupling of \((T_{B}-T_{\infty})\) and \(R_{b}(t)\). Using the Plesset-Zwick approach (Plesset and Zwick 1954), the nonlinear terms are modelled based on the assumption that the thermal boundary layer is smaller than the bubble radius, i.e., \(\delta_{T}\ll R_{b}(t)\). With the thin thermal boundary layer approximation, the unsteady thermal energy equation inside the liquid droplet can be solved with the Rayleigh-Plesset equation simultaneously in a coupled manner. On integrating the energy equation with respect to time and using the Rayleigh-Plesset equation, the temperature difference \(T_{\infty}-T_{B}(t)\), evaluated based on the Plesset-Zwick criterion, is given as
\[T_{\infty}-T_{B}(t)=\left(\frac{\alpha_{l}}{\pi}\right)^{1/2}\int_{0}^{t}\frac{R_{b}^{2}(x)\left(\frac{\partial T}{\partial r}\right)_{r=R_{b}(x)}}{\left(\int_{x}^{t}R_{b}^{4}(y)\,dy\right)^{1/2}}\,dx \tag{3.27}\]
where \(\alpha_{l}\) is the thermal diffusivity of the liquid. Further, \(x\) and \(y\) are dummy time variables in the above definite integral. We should notice that the capillary pressure term \((2\gamma/\rho_{l}R_{b})\) does not appear in equation (3.27). This is because the capillary term is approximately three orders of magnitude smaller than the right-hand side driving pressure in the Rayleigh-Plesset equation. For a bubble size \(R_{b}\gtrsim R_{b0}\), that is, a bubble of the order of 10 \(\upmu\)m, the capillary pressure term is one order smaller than the driving term. Further, due to the inverse dependence of the capillary pressure term on the bubble radius, the contribution of capillary pressure to the Rayleigh-Plesset equation decreases even further as the bubble expands. Using the boundary condition at the bubble interface from equation (3.26), equation (3.27) can be rewritten as
\[T_{\infty}-T_{B}(t)=\frac{L\rho_{\nu}}{\rho_{l}c_{l}\alpha_{l}^{1/2}}\left(\frac{1}{\pi}\right)^{1/2}\int_{0}^{t}\frac{R_{b}^{2}(x)\left(\frac{dR_{b}}{dt}\right)}{\left(\int_{x}^{t}R_{b}^{4}(y)\,dy\right)^{1/2}}dx \tag{3.28}\]
In general, in most real-world scenarios, bubble growth can be approximated by a power law of the form
\[R_{b}=R^{*}t^{n} \tag{3.29}\] \[T_{\infty}-T_{B}(t)=\frac{L\rho_{\nu}}{\rho_{l}c_{l}\alpha_{l}^{1/2}}R^{*}t^{n-1/2}C(n) \tag{3.30}\]
where
\[C(n)=n\left(\frac{4n+1}{\pi}\right)^{1/2}\int_{0}^{1}\frac{x^{3n-1}dx}{(1-x^{4n+1})^{1/2}} \tag{3.31}\]
where \(0<n<1\). The Plesset-Zwick bubble growth curve is given by \(n=1/2\). Note for \(n=1/2\), \((T_{\infty}-T_{B}(t))\) is constant, indicating that both \(T_{\infty}\) and \(T_{B}\) change at the same rate. The approximate bubble growth length scale can also be obtained as a scaling consequence of equation (3.26) and employing the thin boundary layer assumption due to the Plesset-Zwick approximation. Equation (3.26) can be rewritten in terms of dominant scales as
\[4\pi R_{b}^{2}L\rho_{\nu}(T_{B})\dot{R_{b}}=4\pi R_{b}^{2}k_{l}\left(\frac{\partial T}{\partial r}\right)_{r=R_{b}}\sim 4\pi R_{b}^{2}k_{l}\frac{\Delta T}{\delta_{T}} \tag{3.32}\]
The thermal boundary layer scales as \(\delta_{T}\sim\sqrt{\alpha_{l}t}\) in general. Using the time-varying thermal boundary length scale, the boundary condition at the wall can be written in terms of dominant scales as
\[L\rho_{\nu}(T_{B})\frac{dR_{b}}{dt}\sim k_{l}\frac{\Delta T}{\sqrt{\alpha_{l}t}} \tag{3.33}\]
Simplifying further, we have
\[dR_{b}\sim\frac{k_{l}\Delta T\,dt}{L\rho_{\nu}(T_{B})\sqrt{\alpha_{l}t}} \tag{3.34}\]
Integrating equation (3.34), we have
\[\int dR_{b}\sim\int\frac{k_{l}\Delta T}{L\rho_{\nu}(T_{B})\sqrt{\alpha_{l}}}t^{-1/2}dt \tag{3.35}\] \[R_{b}-R_{b0}\sim\frac{2k_{l}\Delta T}{\rho_{\nu}(T_{B})L\sqrt{\alpha_{l}}}t^{1/2} \tag{3.36}\]
Dividing by \(R_{0}\) and using \(\alpha_{l}=k_{l}/\rho_{l}c_{l}\) we have
\[\frac{R_{b}-R_{b0}}{R_{0}}\sim\frac{2k_{l}\Delta T}{\rho_{v}(T_{B})L\alpha_{l}} \Big{(}\frac{\alpha_{l}t}{R_{0}^{2}}\Big{)}^{1/2} \tag{3.37}\]
Simplifying further and recognizing the diffusion time scale \(t_{d}\) we have
\[\frac{\Delta R_{b}}{R_{0}}\sim\frac{\rho_{l}c_{l}\Delta T}{\rho_{v}L}\Big{(} \frac{t}{t_{d}}\Big{)}^{1/2} \tag{3.38}\]
The bubble radius scale can also be obtained using equation (3.29) and equation (3.30)
\[R_{b}\sim\frac{1}{C(1/2)}\frac{\rho_{l}c_{l}\Delta T}{\rho_{v}L}(\alpha_{l}t)^{1/2} \tag{3.39}\]
where \(C(1/2)\) could be computed from equation (3.31).
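The coefficient \(C(n)\) and the growth scale of equation (3.38) can be evaluated numerically as in the following Python sketch; the degree of superheat is an assumed representative value rather than a measured one.

```python
import numpy as np
from scipy.integrate import quad

def C(n):
    # Plesset-Zwick coefficient, eq. (3.31); the integrand has an integrable
    # singularity at x = 1 which quad handles.
    integrand = lambda x: x ** (3 * n - 1) / np.sqrt(1.0 - x ** (4 * n + 1))
    val, _ = quad(integrand, 0.0, 1.0)
    return n * np.sqrt((4 * n + 1) / np.pi) * val

print(C(0.5))   # ~0.51, i.e. 1/C(1/2) ~ (12/pi)^(1/2), the classical Plesset-Zwick factor

# Quasi-steady bubble growth scale, eq. (3.38), with water properties and an
# assumed degree of superheat.
rho_l, c_l, k_l = 998.0, 4182.0, 0.60
rho_v, L = 0.6, 2.26e6                  # kg/m^3, J/kg
dT = 0.5                                # K, assumed degree of superheat
alpha_l = k_l / (rho_l * c_l)
R0 = 0.5 * 0.95e-3
t_d = R0**2 / alpha_l
t = 0.5 * t_d
print((rho_l * c_l * dT / (rho_v * L)) * np.sqrt(t / t_d))   # Delta R_b / R_0
```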
After the initial fast bubble growth phase near the nucleation point, the bubble growth rate relatively slows down and grows as \(R_{b}\sim(t)^{1/2}\) (refer to eq 3.39). As the bubble grows to a size comparable to the droplet radius, a noticeable bump can be observed on the illumination face of the droplet. The bubble always appears on the illumination face of the droplet because the temperature of the droplet is higher on the illumination face compared to the shadow face. The temperature decreases exponentially as the laser beam traverses from the illumination to the shadow face due to the Beer-Lambert law (Gannena et al., 2022). The time it takes to observe a detectable bump in the droplet is known as the onset time (\(t_{onset}\)) and the corresponding droplet length scale is known as the onset radius (\(R_{onset}\)) or diameter (\(D_{onset}\)). The approximate length scale of the bubble could be computed using a unique feature of the experimental configuration. Generally, the droplet in a levitated field rotates about its center of mass axis due to the conservation of angular momentum. Figure 8(a) shows the droplet and the bump caused by the bubble growing inside during various phases of its rotation. Once the bubble nucleates and grows, we observe a bump on the illumination face of the droplet (\(t=2000\ ms\) here) and measure its horizontal length scale \(D_{2}\). As the droplet rotates, we see the view as it appeared at \(t=0\ ms\). Thus, an approximate bubble length scale is estimated as \(D_{2}-D_{1}\). Note that the bubble remains stationary once it nucleates due to the low Reynolds number (Re) flow inside the droplet. The approximate liquid Reynolds number can be determined as follows. From the mass conservation boundary condition at the bubble-liquid interface, the approximate liquid velocity (\(V_{l}\)) can be written as \(V_{l}{\sim}V_{b}(\rho_{v}/\rho_{l})\), where \(V_{b}{\sim}\frac{dR_{b}}{dt}\) and \(\rho_{v}\) represent the bubble interface velocity and vapor density, respectively. The Reynolds number (\(Re{\sim}\rho_{l}V_{l}D_{onset}/\mu_{l}\)) comes out to be \(O\) (\(10^{-7}\)), implying negligible flow inside the droplet. The experimental bubble length scale is plotted in figure 8(b). The theoretical scale of bubble growth according to equation (3.38) is plotted, and it agrees well with the experimental data within the experimental uncertainty range for various irradiation intensities. Higher values of \(I^{*}\) correspond to a higher degree of superheat. The degree of superheat enters the bubble growth equation in the coefficient of \((t)^{1/2}\). The onset times (\(t_{onset}\)) can also be related to the irradiation intensity \(I^{*}\) through the bubble growth equation (3.38) and through the degree of superheat \(\Delta T\). The degree of superheating is related to \(I^{*}\). This can be understood from the equation of the liquid temperature (3.23).
\[\Delta T\propto A\propto I_{0} \tag{3.40}\]
From equation (3.38), we have,
\[R_{b}\propto\Delta Tt^{1/2} \tag{3.41}\]
Using equation (3.40) in equation (3.41) and realizing that at the onset time \(R_{b}\propto R_{\text{onset}}\) we have
\[t_{onset}^{1/2}\propto\frac{R_{onset}}{I_{0}} \tag{3.42}\]
Simplifying further, we have
\[t_{onset}\propto\frac{R_{onset}^{2}}{I_{0}^{2}} \tag{3.43}\]
The irradiation intensity can be expressed in its non-dimensional form as
\[I^{*}=\frac{1.5D_{0}I_{0}}{h_{\nu}\alpha_{l}\rho_{l}} \tag{3.44}\]
where \(D_{0}\) is the initial diameter of the droplet and \(h_{\nu}\) represents the latent heat of vaporization; \(\alpha_{l}\) and \(\rho_{l}\) are the thermal diffusivity and density of the liquid defined earlier.
\[t_{onset}\propto(R_{onset}/I^{*})^{2} \tag{3.45}\]
The theoretical \(t_{onset}\) curves obtained for various non-dimensional irradiation intensities and concentrations pass closely through the experimentally observed \(t_{onset}\) values (see figure 8(c)). The constant of proportionality for equation (3.45) is obtained by calibrating the onset radius and onset time scale appearing in equation (3.45) at the lowest value of \(I^{*}\). Furthermore, at a specific irradiation intensity, the bubble growth scales are higher for larger \(c/c^{*}\) compared to a lower \(c/c^{*}\) (see figure 8(d)). The critical gelation concentration \(\varphi_{g}\) required to form a skin layer is reached much earlier in the evaporation time scale, thus leading to earlier bubble nucleation and subsequently larger bubble growth scales for high \(c/c^{*}\) compared to lower \(c/c^{*}\) at a particular irradiation intensity.
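The calibration procedure for equation (3.45) amounts to fixing the proportionality constant at the lowest \(I^{*}\), as in the sketch below; the onset radii and reference onset time are illustrative placeholders standing in for the measured values.

```python
import numpy as np

# Calibrate the proportionality constant in eq. (3.45) at the lowest irradiation
# intensity and predict t_onset at the others (placeholder data, not measurements).
I_star = np.array([0.7, 1.5, 2.2, 3.0])
R_onset = np.array([0.45e-3, 0.44e-3, 0.43e-3, 0.42e-3])   # m (assumed)
t_onset_ref = 2.0                                          # s, at I* = 0.7 (assumed)

K = t_onset_ref / (R_onset[0] / I_star[0]) ** 2            # calibration at lowest I*
t_onset_pred = K * (R_onset / I_star) ** 2                 # eq. (3.45)
print(t_onset_pred)
```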
We model the membrane growth dynamics (see figure 9(a)) using a spring, mass, and damper model with external forcing from the pressure difference across the membrane interface coupled to the maximum pressure inside the bubble through the Clausius-Clapeyron equation. The governing dynamical law for the viscoelastic membrane is given as (Gannena et al. 2022)
\[\ddot{L}_{m}+2\zeta\omega_{n}\dot{L}_{m}+\omega_{n}^{2}L_{m}=\frac{C}{R_{m}} \tag{3.46}\]
where
\[C=\frac{3m_{in}R_{g}T_{B}}{16\pi\rho_{m}h_{0}R_{onset}^{2}} \tag{3.47}\]
\[\omega_{n}\sim\sqrt{\frac{2E}{\rho_{m}R_{onset}^{2}}} \tag{3.48}\]
The parameters \(C\) and \(\omega_{n}\) are computed using the onset radius (\(R_{onset}\)), membrane thickness (\(h_{0}\)), and the temperature inside the bubble (\(T_{B}\)). The bubble temperature is approximately the same as the liquid temperature (\(T_{l}\)) owing to the Plesset Zwick criteria given by equation (3.30) for \(n=1/2\), and hence the liquid temperature computed by using equations (3.23, 3.24, and 3.25) is used. For most parametric values of the irradiation intensities (\(I^{*}=~{}0.7-2.2\)) quasi steady bubble growth is observed for all values of polymeric concentration
Figure 9: (a) High-speed images of membrane growth for \(c/c^{*}=28.2\) and \(I^{*}=3.5\) (b) Membrane growth dynamics at \(c/c^{*}=~{}28.2\), \(I^{*}=~{}3.5\) and its theoretical comparison. The scale bar represents 1 mm.
considered in this study. However, for extreme irradiation intensities (\(I^{*}=3, 3.5\)), the bubble pressure rises rapidly due to the rapid temperature rise. Owing to the bubble's rapid expansion rate and pressure, the polymeric membrane expansion becomes highly transient, in contrast to the quasi-steady nature of bubble growth for low irradiation intensities. Using \(E\sim O(10^{9})\) Pa (Jee et al. 2013) and \(\zeta\sim O(10^{2})\) in equation (3.46) and solving for the growth length scale, we get a growth time scale of \(0.1\) ms, which is of the same order of magnitude as the experimental growth time of the polymeric membrane. Note that \(E>\Delta P\) (where \(\Delta P\) represents the pressure difference across the membrane, which is in \(O(10^{5})\) Pa) leads to membrane expansion and collapse without rupture (see supplementary Movie1). Figure 9(b) shows the comparison of the theoretical membrane growth scale according to equation (3.46) with the experimental membrane growth length scale (\(L_{m}\)), which is in reasonable agreement within the experimental uncertainty. Note that due to the complicated shape of the membrane expansion, the experimental length scales are extracted manually.
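A minimal numerical integration of the membrane model of equations (3.46)-(3.48) is sketched below; the membrane density, onset radius, and forcing amplitude are assumed placeholder values (the forcing only sets the amplitude), and the quantity of interest is the recovered growth time scale.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Spring-mass-damper membrane model, eq. (3.46), with the order-of-magnitude
# estimates quoted in the text (E ~ 1e9 Pa, zeta ~ 1e2). F stands for C/R_m.
E = 1.0e9                      # Pa, membrane elastic modulus
rho_m = 1.0e3                  # kg/m^3, membrane density (assumed)
R_onset = 0.4e-3               # m, droplet radius at bubble onset (assumed)
zeta = 1.0e2
omega_n = np.sqrt(2 * E / (rho_m * R_onset**2))   # eq. (3.48)
F = 1.0e9                      # forcing C/R_m, placeholder magnitude

def rhs(t, y):
    Lm, dLm = y
    return [dLm, F - 2 * zeta * omega_n * dLm - omega_n**2 * Lm]

sol = solve_ivp(rhs, (0.0, 1.0e-3), [0.0, 0.0], method="Radau",
                t_eval=np.linspace(0.0, 1.0e-3, 2001))

# Time to reach ~95% of the steady membrane extension F/omega_n^2
L_inf = F / omega_n**2
t95 = sol.t[np.argmax(sol.y[0] >= 0.95 * L_inf)]
print(f"growth time scale ~ {t95:.1e} s")   # expected O(1e-4) s, as quoted in the text
```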
### Shape oscillations and precipitate formation (Phases C and D)
The shape oscillations are mainly driven by the presence of a bubble inside the droplet and, to an extent, by the rotational motion of the droplet.
The bubble growth close to the illuminating face of the droplet causes the evaporating polymer droplet to become asymmetric. This enabled us to quantify the rotational motion of the evaporating polymer droplet in a levitation field. The time-varying frequencies of the rotational motion of the droplet are obtained by performing power spectrum operation on the diameter within short time intervals. Once we know the temporal variation of diameter and frequency of droplet rotation, the relation between them in a levitated experimental configuration can be obtained as follows. From the principle of angular momentum conservation of the droplet (assuming negligible losses due to air resistance at the air-droplet interface)
\[I\omega=k \tag{3.49}\]
where \(I=mD^{2}/10\) represents the moment of inertia of the droplet, m is the mass of the droplet, D is the diameter of the droplet, and \(\omega=2\pi f\) represents the frequency of rotation of the droplet.
\[\frac{mD^{2}}{10}2\pi f=k \tag{3.50}\]
\[\frac{\rho_{l}\pi^{2}fD^{5}}{30}=k \tag{3.51}\]
\[\frac{D^{5}(t)}{D^{5}(t_{ref})}=\frac{f(t_{ref})}{f(t)} \tag{3.52}\]
\[f(t)=f(t_{ref})\left(\frac{D(t_{ref})}{D(t)}\right)^{5} \tag{3.53}\]
\[f(t)\propto\frac{1}{D^{5}(t)} \tag{3.54}\]
Here \(f(t_{ref})\) and \(D(t_{ref})\) represent the frequency and diameter at the reference initial time in the evaporation process of the droplet. The relation between rotation frequency and droplet diameter is implicitly independent of concentration and irradiation intensity. Hence the experimental and theoretical comparison of diameter variation with the rotational frequency of the droplet is carried out for \(c/c^{*}=~{}14.1\) and \(I^{*}=~{}1.5\). The experimental and theoretical values show reasonable agreement within the experimental uncertainty (see figure 10). Further, the theoretical frequency scale closely matches the higher frequency band in the power spectrum of diameter regression (see figure 3(c)).
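Equation (3.53) can be applied directly to the measured diameter history, as in the short Python sketch below; the reference frequency and the diameter values are placeholders standing in for the experimental data.

```python
import numpy as np

# Rotation frequency from angular momentum conservation, eq. (3.53).
f_ref = 30.0          # Hz, rotation frequency at the reference time (assumed)
D_ref = 0.90e-3       # m, diameter at the reference time (assumed)
D_t = np.array([0.90e-3, 0.85e-3, 0.80e-3, 0.75e-3])   # diameter history (placeholder)

f_t = f_ref * (D_ref / D_t) ** 5
print(f_t)    # rotation frequency increases steeply as the droplet shrinks
```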
Further, we observe vigorous droplet shape oscillations primarily due to the presence of the bubble. The droplet shape oscillations are quantified by estimating the change in the droplet position (x-coordinate and y-coordinate) with time. The shape oscillations over a wide range of irradiation intensities and concentrations are characterized through a non-dimensional parameter referred to as the bubble growth index (\(\alpha\)), which is defined as \(\alpha=\left(D_{b,max}/D_{onset}\right)^{3}\), where \(D_{b,max}\) represents the maximum expansion diameter of the polymer droplet due to the bubble growth and \(D_{onset}\) represents the diameter of the polymer droplet at the onset of nucleation. It is observed that the major shape oscillations are characterized by \(\alpha\geq 1\); these are predominant at high concentrations and high irradiation intensities (\(c/c^{*}=14.1, 28.2\) and \(I^{*}=3, 3.5\)). Similarly, mild shape oscillations are characterized by \(0.5\leq\alpha<1\). These small-scale oscillations are dominant at high concentrations and low irradiation intensities (\(c/c^{*}=14.1, 28.2\) and \(I^{*}=0.7-2.2\)). Finally, \(\alpha<0.5\) signifies minor shape oscillations, observed at low concentrations and all irradiation intensities (\(c/c^{*}<10\) and \(I^{*}=0.7-3.5\)).
Figure 10: Experimental and theoretical comparison of variation of rotational frequency of the droplet with diameter for \(c/c^{*}=~{}14.1\) and \(I^{*}=~{}1.5\).
Figure 11(a) shows a typical major shape oscillation event. The oscillations start when the bubble/membrane expands and collapses into the polymer droplet. The major shape oscillations are characterized by volumetric shape distortions, stretching and reorientation of the polymer droplet (see inset figure 11(a)). Figure 11(a) depicts the typical centroid trajectory of the polymer droplet when the membrane (bubble) collapses back into the parent droplet. The normalized centroid coordinates of the polymer droplet are defined as \(X^{*}=(X-X_{onset})/X_{onset}\) and \(Y^{*}=(Y-Y_{onset})/Y_{onset}\), where \(X_{onset}\) and \(Y_{onset}\) represent the centroid coordinates of the polymer droplet when the membrane (bubble) collapse occurs. As evident from figure 11(a), once the membrane (bubble) collapses into the parent droplet, the droplet oscillates vigorously along both the horizontal and vertical directions. The maximum non-dimensional displacement in the centroid's positive x-coordinate and y-coordinate position is \(2.4\) and \(0.4\), respectively, while the non-dimensional displacement in the negative coordinates is \(-2\) and \(-0.6\), respectively. In contrast, the centroid displacement for mild to minor shape oscillations is comparatively smaller than for major shape oscillations. For mild shape oscillations, the maximum non-dimensional displacement in the positive x-coordinate and y-coordinate position of the centroid is \(0.1\) and \(0.15\), respectively, while the non-dimensional displacement in the negative coordinates is \(-1.5\) and \(-0.3\), respectively (see supplementary figure S5). For very mild shape oscillations, the maximum non-dimensional displacement in the centroid's positive x-coordinate and y-coordinate position is \(1.6\) and \(0.15\), respectively, while the non-dimensional displacement in the negative coordinates is \(-1.3\) and \(-0.15\), respectively (see figure 11(b)). In the case of \(\alpha>1\), nucleation of multiple bubbles and their subsequent coalescence leads to a complex 'Dumbbell' shape. This leads to droplets experiencing vigorous motion in both horizontal and vertical directions. In the regime \(\alpha<1\), the expanding bubble remains stationary close to the illumination face of the droplet for most of the droplet evaporation lifetime, resulting in only minor shape oscillations. A power spectral density operation on the X-center of mass (X-CM) and Y-center of mass (Y-CM) revealed that the dominant frequencies of droplet oscillations are in \(O\)\((10^{1}-10^{2})\) Hz. The frequencies remain independent of concentration and irradiation intensity, implying that these oscillations are driven by levitation system parameters, contrary to the amplitude of the oscillations (trajectories), which is guided by the bubble growth and evaporation dynamics.
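The bubble growth index and the centroid spectra discussed above can be computed as in the following sketch; the diameters, sampling rate, and synthetic centroid signal are assumed placeholder values rather than tracked data.

```python
import numpy as np
from scipy.signal import welch

# Bubble growth index (placeholder diameters).
D_b_max, D_onset = 1.3e-3, 1.0e-3
alpha = (D_b_max / D_onset) ** 3
print(f"alpha = {alpha:.2f}")        # alpha > 1 -> major shape oscillations

# Power spectral density of a synthetic centroid trace X-CM(t).
fs = 1000.0                          # Hz, centroid tracking rate (assumed)
t = np.arange(0, 5, 1 / fs)
X_cm = 1e-5 * np.sin(2 * np.pi * 60 * t) + 2e-6 * np.random.randn(t.size)
f, Pxx = welch(X_cm, fs=fs, nperseg=1024)
print(f[np.argmax(Pxx)])             # dominant oscillation frequency, O(10^1-10^2) Hz
```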
Figure 11: (a) Typical trajectory of the centroid of levitated polymer droplet for \(\alpha>1\). (b) Typical trajectory of the centroid of levitated polymer droplet for \(0<\alpha<0.5\). (c) Power spectrum density of X-center of mass (X-CM) of evaporating polymer droplet at \(I^{*}=1.5\) and \(c/c^{*}=14.1\) (d) Power spectrum density of Y–center of mass (Y-CM) of evaporating polymer droplet at \(I^{*}=1.5\) and \(c/c^{*}=14.1\). The frequencies of centroid motion remain the same irrespective of concentration and irradiation intensity.
Figure 12: Summary depicting the effect of nucleated bubbles on underlying polymer droplet dynamics and final precipitates in different regimes.
When the concentration of PEO is large (\(c/c^{*}>10\)) and at high irradiation intensities, a significant number of nucleation sites are formed, resulting in large bubble formation. The expanding bubble eventually stretches the skin layer and expands as a viscoelastic membrane (Gannena et al. 2022). Due to the significantly higher elasticity, the membrane stretches and collapses into the parent droplet. These events are observed intermittently throughout the evaporation phase in the regime \(\alpha>1\). For \(\alpha<1\), due to smaller bubble growth, the unique membrane expansion and collapse are not observed. The bubble size substantially affects the polymer droplet's final morphology. At high concentrations and irradiation intensities, where the bubble diameter is significantly large (\(\alpha>1\)), the final structure of the polymer droplet resembles a shell (figure 12). For \(\alpha<0.5\), where the size of the bubble is significantly smaller owing to fewer nucleation sites, the final structure is a smooth solid precipitate. Bubble nucleation in multi-component polymeric droplets with low viscoelastic modulus (PAM) results in different modes (ligament-mediated, catastrophic, micro-explosion) of droplet atomization, which have not been observed in polymeric droplets with high viscoelastic modulus (PEO) even at significantly high concentrations and irradiation intensities. The significantly higher strength of the skin layer hampers the droplet atomization process.
## 4 Conclusions
A comprehensive experimental investigation is performed to understand the bubble dynamics and droplet shape oscillations in acoustically levitated polymer droplets under external radiative heating. The conclusions derived from the present work are as follows:
1. High viscoelastic modulus droplets experience evaporation and droplet shape oscillations without nucleation-induced atomization, in contrast to the occurrence of previously reported distinct modes of atomization in low viscoelastic modulus polymer droplets. This is attributed to the increased entanglement density at the skin layer, which directly correlates with the higher skin layer strength. The steeper increase in entanglement density (\(N_{e}\)) for high viscoelastic modulus (PEO) droplets leads to the nucleation of vapor bubbles in the dilute and semi-dilute unentangled regimes, contrary to the occurrence of bubble nucleation in low viscoelastic modulus (PAM) droplets in the semi-dilute entangled regime.
2. Depending on laser irradiation intensity and polymer concentration, four temporal phases are observed: droplet evaporation (phase A), vapor bubble nucleation followed by bubble/ membrane growth (phase B), shape oscillations and precipitate formation (phases C and D). The time scale for the droplet evaporation phase is in \(O(10^{0}\)-\(10^{1})\) s. The theoretical time scale obtained from a diffusive evaporative law predicts the experimental evaporation time scale. This subsequently predicts a low-frequency band observed in the power spectrum of diameter regression.
3. The scaling analysis shows that the quasi-steady bubble length scale \(R_{b}\) varies temporally as \((t)^{1/2}\) for low irradiation intensities. The membrane growth dynamics are modelled at high irradiation intensities using a spring mass damper system. There is good agreement between the theoretical (\(O(10^{-4})s\)) and experimental (\(O(10^{-4})s\)) growth time scales. Further, it is
observed that the onset time (\(t_{onset}\)) of bubble growth varies with irradiation intensity as \(t_{onset}\propto(R_{onset}/I^{*})^{2}\).
4. From the principle of conservation of angular momentum, it is shown that the frequency of rotation varies with droplet diameter as \(f\propto 1/D^{5}\). In addition, the theoretical rotation frequency agrees well with the high-frequency band observed in the power spectrum of diameter regression.
5. Finally, a bubble growth index \(\alpha\) is defined, which characterizes the final precipitates formed after evaporation. \(\alpha>1\) and \(0.5<\alpha<1\) are characterized by shell-like precipitates, whereas for \(\alpha<0.5\), solid precipitates are observed. These findings are contrary to the different atomization modes observed for low viscoelastic modulus polymer droplets in similar concentration regimes, further deciphering the role played by the skin layer in the dynamics of evaporating polymeric droplets.
### Declaration of Interests
The authors declare no conflict of interest.
|
2309.04708 | UnitModule: A Lightweight Joint Image Enhancement Module for Underwater
Object Detection | Underwater object detection faces the problem of underwater image
degradation, which affects the performance of the detector. Underwater object
detection methods based on noise reduction and image enhancement usually do not
provide images preferred by the detector or require additional datasets. In
this paper, we propose a plug-and-play Underwater joint image enhancement
Module (UnitModule) that provides the input image preferred by the detector. We
design an unsupervised learning loss for the joint training of UnitModule with
the detector without additional datasets to improve the interaction between
UnitModule and the detector. Furthermore, a color cast predictor with the
assisting color cast loss and a data augmentation called Underwater Color
Random Transfer (UCRT) are designed to improve the performance of UnitModule on
underwater images with different color casts. Extensive experiments are
conducted on DUO for different object detection models, where UnitModule
achieves the highest performance improvement of 2.6 AP for YOLOv5-S and gains
the improvement of 3.3 AP on the brand-new test set (URPCtest). And UnitModule
significantly improves the performance of all object detection models we test,
especially for models with a small number of parameters. In addition,
UnitModule with a small number of parameters of 31K has little effect on the
inference speed of the original object detection model. Our quantitative and
visual analysis also demonstrates the effectiveness of UnitModule in enhancing
the input image and improving the perception ability of the detector for object
features. | Zhuoyan Liu, Bo Wang, Ye Li, Jiaxian He, Yunfeng Li | 2023-09-09T07:30:20Z | http://arxiv.org/abs/2309.04708v1 | # UnitModule: A Lightweight Joint Image Enhancement Module for Underwater Object Detection
###### Abstract
Underwater object detection faces the problem of underwater image degradation, which affects the performance of the detector. Underwater object detection methods based on noise reduction and image enhancement usually do not provide images preferred by the detector or require additional datasets. In this paper, we propose a plug-and-play Underwater joint image enhancement **Module** (UnitModule) that provides the input image preferred by the detector. We design an unsupervised learning loss for the joint training of UnitModule with the detector without additional datasets to improve the interaction between UnitModule and the detector. Furthermore, a color cast predictor with the assisting color cast loss and a data augmentation called Underwater Color Random Transfer (UCRT) are designed to improve the performance of UnitModule on underwater images with different color casts. Extensive experiments are conducted on DUO for different object detection models, where UnitModule achieves the highest performance improvement of 2.6 AP for YOLOv5-S and gains the improvement of 3.3 AP on the brand-new test set (\(\text{URPC}_{test}\)). And UnitModule significantly improves the performance of all object detection models we test, especially for models with a small number of parameters. In addition, UnitModule with a small number of parameters of 31K has little effect on the inference speed of the original object detection model. Our quantitative and visual analysis also demonstrates the effectiveness of UnitModule in enhancing the input image and improving the perception ability of the detector for object features.
## 1 Introduction
Underwater object detection faces significant challenges. Due to the absorption of light of different wavelengths in the water medium and the suspended particles in water, underwater images usually suffer from degradation such as color cast [50, 2, 1], blurring, _etc_. We argue that such degradation introduces noise to the image, making it difficult for the object detection network to learn the original features of the object. Some works do not consider the effect of noise on underwater object detection [47, 65, 30, 7, 33, 12, 9, 44]. The degradation leads to poor performance and the generalization of the detector on different underwater datasets or underwater environments [49, 68].
Underwater object detection algorithms are usually deployed on embedded devices where processing power is limited and real-time processing is required. Therefore, lightweight detection models are required underwater. Research shows that the generalization performance of lightweight models is limited [4]. This limitation makes it difficult for lightweight models to learn about different underwater noises. We believe that the underwater lightweight detection model devotes some of its attention to generalizing noise, which reduces the performance of the model on detection. In this work, we focus on the way of noise reduction that improves the attention of the model to the detection.
For the mentioned problems, some works [75, 77, 25, 70, 11, 19, 78] provide enhanced images for the detector by pre
Figure 1: The inference flow for the detector with UnitModule. UnitModule is jointly trained with the detector. The enhanced image is only visualized for display, it is actually an intermediate tensor in forward propagation. |
2307.00149 | Hierarchical Neural Coding for Controllable CAD Model Generation | This paper presents a novel generative model for Computer Aided Design (CAD)
that 1) represents high-level design concepts of a CAD model as a three-level
hierarchical tree of neural codes, from global part arrangement down to local
curve geometry; and 2) controls the generation or completion of CAD models by
specifying the target design using a code tree. Concretely, a novel variant of
a vector quantized VAE with "masked skip connection" extracts design variations
as neural codebooks at three levels. Two-stage cascaded auto-regressive
transformers learn to generate code trees from incomplete CAD models and then
complete CAD models following the intended design. Extensive experiments
demonstrate superior performance on conventional tasks such as random
generation while enabling novel interaction capabilities on conditional
generation tasks. The code is available at
https://github.com/samxuxiang/hnc-cad. | Xiang Xu, Pradeep Kumar Jayaraman, Joseph G. Lambourne, Karl D. D. Willis, Yasutaka Furukawa | 2023-06-30T21:49:41Z | http://arxiv.org/abs/2307.00149v1 | # Hierarchical Neural Coding for Controllable CAD Model Generation
###### Abstract
This paper presents a novel generative model for Computer Aided Design (CAD) that 1) represents high-level design concepts of a CAD model as a three-level hierarchical tree of neural codes, from global part arrangement down to local curve geometry; and 2) controls the generation or completion of CAD models by specifying the target design using a code tree. Concretely, a novel variant of a vector quantized VAE with "masked skip connection" extracts design variations as neural codebooks at three levels. Two-stage cascaded auto-regressive transformers learn to generate code trees from incomplete CAD models and then complete CAD models following the intended design. Extensive experiments demonstrate superior performance on conventional tasks such as unconditional generation while enabling novel interaction capabilities on conditional generation tasks. The code is available at [https://github.com/samxuxiang/hnc-cad](https://github.com/samxuxiang/hnc-cad).
## 1 Introduction
From automobiles to airplanes, excavators to elevators, manade objects are created using Computer Aided Design (CAD) software. Most modern CAD design tools employ the "Sketch and Extrude" style workflow (Camba et al., 2016; Shahin, 2008), where designers 1) draw loops of 2D curves as outer and inner boundaries to create 2D profiles; 2) extrude the 2D profiles to 3D shapes; and 3) add or subtract 3D shapes to build complex CAD models.
CAD models created in this way have a natural tree structure which supports local edits. The curves at the leaves of the tree can be adjusted and the extrusions regenerated to update the final shape. For designers, it is also important that edits preserve "design intent". Otey et al (Otey et al., 2018) defines design intent as "a CAD model's anticipated behavior when altered" while Martin (Martin, 2023) describe it as "relationships between objects, so that a change to one can propagate automatically to others". Although "Sketch and Extrude" allows local changes, it does not provide the relationships required to give the anticipated behavior when the model is edited. A computational system with understanding of design intent would revolutionize the practice of CAD. The system would help designers in 1) generating a diverse set of CAD models given high-level design concepts; 2)
Figure 1: We propose three-level hierarchical neural coding for controllable CAD model generation. Our system learns high-level design concepts as discrete codes at different levels, enabling more diverse and higher-quality generation (top); novel user controls while specifying design intent (bottom-left); and autocompleting a partial CAD model under construction (bottom-right).
modifying existing CAD models while constraining certain model properties or 3) auto-completing designs interactively (See Figure 1).
Unfortunately, such a system is not yet available for designers. A current industry standard is to manually specify parameters and equations which define the positions and sizes of profiles, and constraints to align geometry. This process, known as Parametric CAD, requires specialized skills (Yares, 2013) and easily breaks with unanticipated edits. Figure 2 illustrates examples, where editing the geometry of a poorly constrained CAD model breaks the original design intent. State-of-the-art research employs machine learning techniques to automatically generate CAD models, e.g. Wu et al. (2021). However, existing works do not make use of the hierarchical nature of CAD designs to provide effective design control.
This paper presents a novel generative network that captures the design intent of a CAD model as a three-level tree of neural codes, from local geometric features to global part arrangement; and controls the generation or completion of CAD models subject to the design intent specified by the code tree or an incomplete CAD model. CAD models are generated as sequences of modeling operations, then converted into the industry standard boundary representation (B-Rep) format for editing in mechanical CAD software.
Concretely, a novel variant of the vector quantized VAE (Van Den Oord et al., 2017) with "masked skip connection" learns design variations as three neural codebooks from a large-scale sketch-and-extrude CAD dataset (Wu et al., 2021). The masked skip connection is simple yet effective at extracting well-abstracted codebooks, making the relationships of codes and generated geometry intuitive. Then, two-stage cascaded auto-regressive transformers learn to generate 1) a three-level code tree given an incomplete CAD model, and 2) a complete CAD model given the code tree and the incomplete data. Designers can also directly provide a code tree for model generation.
Qualitative and quantitative evaluations against other generative baselines show that our system generates more realistic and complicated models in a random generation task. In user-controlled conditional generation tasks, our system demonstrates flexible and superior geometry control, enabled by the hierarchical code tree representation, over the current state-of-the-art deep learning-based generative models (i.e., SkexGen (Xu et al., 2022), DeepCAD (Wu et al., 2021)). In summary, we make the following contributions:
\(\bullet\) A neural code tree representation encoding hierarchical design concepts that enables generation of high quality and complex models, design intent aware user editing, and design auto-completion.
\(\bullet\) A novel variant of VQ-VAE with a masked skip connection for enhanced codebook learning.
\(\bullet\) State-of-the-art performance in CAD model generation over the previous SOTA methods.
## 2 Related Work
**Constructive Solid Geometry (CSG)**: CSG builds complex shapes as Boolean combinations of simple primitives. Recent works utilized this representation for reconstructing CAD shapes with program synthesis (Du et al., 2018; Nandi et al., 2017, 2018; Sharma et al., 2018; Ellis et al., 2019; Tian et al., 2019), and unsupervised learning (Kania et al., 2020; Ren et al., 2021; Chen et al., 2020; Yu et al., 2022; 2023). Although a CSG tree can be converted into B-rep by building equivalent primitives and applying Boolean operations with a solid modeling kernel, parametric CAD (Camba et al., 2016), where a sequence of 2D sketches are built and extruded to 3D, is the dominant paradigm for designing mechanical parts and supports easy parametric editing.
**Direct CAD Generation**: Some recent works focused on directly generating CAD models without any supervision from CAD modeling sequences, by building the geometry of parametric curves (Wang et al., 2020) and surfaces (Sharma et al., 2020) with fixed (Smirnov et al., 2021) or arbitrary topology for sketches (Willis et al., 2021) and solid models(Wang et al., 2022; Guo et al., 2022; Jayaraman et al., 2022). We focus more on controllable generation of parametric CAD in the form of sketch and extrude sequences.
**Sketch and Extrude CAD Generation**: Recent availability of large-scale datasets for parametric CAD has enabled learning based methods to leverage the CAD modeling sequence history (Willis et al., 2021; Wu et al., 2021; Xu et al., 2022) and sketch constraints (Seff et al., 2020) to generate engineering sketches and solid models. The generated sequences can be parsed with a solid modeling kernel to obtain editable parametric CAD files containing 2D engineering sketches (Willis et al., 2021; Para et al., 2021; Ganin et al., 2021; Seff et al., 2021) or 3D CAD shapes (Wu
Figure 2: Example failures of parametric CAD, editing a design (a) by shortening or extending (green) the table. Inconsistent areas are highlighted in red.
et al., 2021; Xu et al., 2022). Additionally, the generation can be influenced by a target B-rep (Willis et al., 2021; Xu et al., 2021), sketches (Li et al., 2020; Seff et al., 2021), images (Ganin et al., 2021), voxel grids (Lambourne et al., 2022) or point clouds with (Uy et al., 2021) and without sequence guidance (Ren et al., 2022; Li et al., 2023). But this kind of control is on a global level, while we aim for hierarchical control on both global and local levels to support applications like design preserved edits and autocomplete.
**User-Controlled CAD Generation**: Providing user control over the generation process, while preserving design intent, is key for adoption of generative models in real world CAD software. Although previous approaches can produce diverse shapes based on high level guidance, enabling user control over the generation process is more challenging. In the Sketch2CAD framework (Li et al., 2020), a network is trained to predict CAD operations that correspond with segmented sketch strokes, enabling a user interface for sketch based CAD modeling. Free2CAD (Li et al., 2022) generalizes this system by additionally learning how to segment a complete sketch into groups that can be mapped to CAD operations. These works focus on localized control over the design process, and require significant user input. Recent works also leverage text prompts (Wu et al., 2023; Kodnongbua et al., 2023) and user-specified guidelines (Cheng et al., 2023). SkexGen (Xu et al., 2022) allows users to explore design variations with disentangled global control over the topology and geometry of CAD shapes. However, their approach simply aids in creating a new design from scratch and cannot be easily modified to provide an interactive experience that users expect for smartly editing CAD models or autocompleting their next steps to save effort. Different from existing works, our method leverages the natural hierarchies which exist inside the CAD models to provide both global and local control over the generation process.
## 3 Hierarchical CAD Properties
A sketch and extrude CAD model is naturally hierarchical (see Figure 3) with a _loop_ defining a closed path of connected curves, a _profile_ defining a closed area in the sketch plane bounded by one outer loop and some inner loops, and a _solid_ representing a set of extruded profiles that are combined to form the entire model. Our goal is to enable local and global control in the generation of CAD models where users edit any of these entities and expect the rest to be updated automatically in a sensible way. To achieve this, we capture this hierarchy in the latent space of our neural networks. At higher levels of the hierarchy, the network learns the relative positions of lower level geometric entities, that is, the bounding boxes of the profiles and extrusions which make up the model. Concretely, we consider a CAD model as a (S)olid-(P)rofile-(L)oop tree:
**Loop (\(L\))**: At the leaf of the tree, we have loops. Each loop consists of a set of lines and arcs or a circle. The properties of a loop (\(L\)) is defined as a series of x-y coordinates separated by special <SEP> tokens:
\[L=\{(x_{1},y_{1}),(x_{2},y_{2}),\texttt{<SEP>},(x_{3},y_{3}),\ \ldots\}. \tag{1}\]
Lines are represented by the xy-coordinates of _two_ points. Here we use the start and end of the curve. Arcs are represented by _three_ points including start, middle and end point. Circles are represented by _four_ equally spaced points lying on the curve. With this representation, the curve types can be identified by the number of points as in (Willis et al., 2021). We sort the curves in a loop so that the initial curve is the one with the smallest starting point coordinate, and the next one is its connected curve in counterclockwise order.
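As an illustration of this serialization, the sketch below flattens a loop into the token sequence of Equation 1, assuming curves are already given as connected point lists in counterclockwise order (two points for a line, three for an arc, four for a circle); the helper name and data layout are ours, not from the released code.

```python
SEP = "<SEP>"

def serialize_loop(curves):
    """Flatten a loop into the coordinate/<SEP> sequence of Equation 1.

    `curves` is a list of point lists: 2 points = line, 3 = arc, 4 = circle,
    so the curve type is implied by the number of points.
    """
    # Start from the curve whose first point has the smallest coordinate,
    # then keep the connected counterclockwise order given by the caller.
    start = min(range(len(curves)), key=lambda i: curves[i][0])
    ordered = curves[start:] + curves[:start]

    tokens = []
    for k, points in enumerate(ordered):
        if k > 0:
            tokens.append(SEP)      # separator between consecutive curves
        tokens.extend(points)       # (x, y) coordinate tokens
    return tokens

# Example: a line followed by an arc.
loop = [[(0, 0), (4, 0)], [(4, 0), (5, 2), (4, 4)]]
print(serialize_loop(loop))
```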
**Profile (\(P\))**: The profile is above the leaf level. Since the loop geometry is captured at the leaf level, the properties of a profile node is defined as a series of 2D bounding box parameters of the loops within the sketch plane:
\[P=\{(x_{i},y_{i},w_{i},h_{i})\}_{i=1}^{N_{i}^{\text{loop}}}. \tag{2}\]
\(i\) is the index of the \(N_{i}^{\text{loop}}\) loops within a profile. \((x_{i},y_{i})\) is the bottom-left corner of the bounding box. \((w_{i},h_{i})\) is the width and height. We determine the order of bounding box parameters in profile \(P\) by sorting the bottom-left corner of all the 2D bounding boxes in ascending order.
**Solid (\(S\))**: Above the profile level, we have the 3D solid model formed by extruding one or more profiles. The properties of a solid node captures the arrangement of extruded profiles using a series of 3D bounding box parameters:
\[S=\{(x_{j},y_{j},z_{j},w_{j},h_{j},d_{j})\}_{j=1}^{N_{j}^{\text{profile}}}. \tag{3}\]
\(j\) is the index of the \(N_{j}^{\text{profile}}\) extruded profiles within a model. \((x_{j},y_{j},z_{j})\) is the bottom-left corner of the bounding box and \((w_{j},h_{j},d_{j})\) is its dimension. Likewise, the parameters in \(S\) are sorted by the bottom-left corner of all the extruded 3D bounding boxes in ascending order.
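Putting the three levels together, a minimal sketch of how the profile and solid properties could be assembled from loop geometry and extrusion extents is shown below; the function names and input layout are illustrative assumptions rather than the paper's released implementation.

```python
def loop_bbox(points):
    """2D bounding box (x, y, w, h) of one loop, i.e. one entry of Equation 2."""
    xs, ys = zip(*points)
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))

def profile_property(loops):
    """Profile node: loop bounding boxes sorted by their bottom-left corner."""
    return sorted(loop_bbox(points) for points in loops)

def solid_property(extrusion_boxes):
    """Solid node: 3D boxes (x, y, z, w, h, d) sorted by their bottom-left corner."""
    return sorted(extrusion_boxes)

# Example: a profile with an outer square and an inner hole.
outer = [(0, 0), (4, 0), (4, 4), (0, 4)]
inner = [(1, 1), (2, 1), (2, 2), (1, 2)]
print(profile_property([outer, inner]))
print(solid_property([(0, 0, 0, 4, 4, 1), (0, 0, 1, 2, 2, 3)]))
```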
Figure 3: Our hierarchical tree representation of a CAD model, with which a novel VQ-VAE learns codebooks at the levels of solid, profile, and loop.
## 4 Three-Level Codebook Learning
Given a dataset of sketch and extrude CAD models in the (S)olid-(P)rofile-(L)oop tree format, a novel variant of the vector quantized VAE (VQ-VAE) (Van Den Oord et al., 2017; Razavi et al., 2019) learns their latent patterns as three discrete codebooks, which encode a CAD model into a tree of neural codes for downstream applications.
Following SkexGen (Xu et al., 2022), the foundation of our architecture for learning codebooks is a VQ-VAE, consisting of a Transformer encoder \(E\) and decoder \(D\) (see Figure 4). We learn (L)oop, (P)profile, and (S)olid codebooks independently. Different from SkexGen and previous work on masked learning (He et al., 2022), we apply masking on a skip-connection from the encoder input to the decoder input. Intuitively, a standard VQ-VAE (i.e., without skip connection) is trained to recover instance-specific input details, which would be a challenge for the quantized code if it is learning instance-agnostic design patterns. A naive skip connection allows the decoder to cheat by directly copying the input. Masking the skip connection forces the decoder to relate partial details from unmasked elements and fill-in missing ones, where the relation is guided by design patterns encoded in the code.
**Encoder**: Consider a (L)oop node \(L\) (Equation 1), containing a series of x-y coordinates and special <SEP> tokens. We use a \(65\)D one-hot vector to represent a token, where a coordinate is quantized to 6 bits (i.e., 64 values) (Xu et al., 2022; Seff et al., 2021) and <SEP> requires one extra dimension. Let \(T_{t}^{E}\) denote the 256D embedding of the \(t^{\text{th}}\) token for the Transformer encoder. The embedding is initialized as:
\[T_{t}^{E}\leftarrow\begin{cases}\text{MLP}(W_{\text{emb}}x_{t}\parallel W_{ \text{emb}}y_{t})+\gamma_{t}\ \ \text{ (for x-y)},\\ \text{MLP}(W_{\text{emb}}\text{<SEP>}\parallel W_{\text{emb}}\text{<SEP>})+ \gamma_{t}.\end{cases} \tag{4}\]
\(W_{\text{emb}}\) is a \(65\times 32\) token embedding matrix. \(\parallel\) is the concatenation operator. MLP is a 2-layer multilayer perceptron. \(\gamma_{t}\) is a learnable 256D positional embedding. The second case is for <SEP>, where the <SEP> embedding is repeated twice. For (P)rofile and (S)olid codebooks, we process each of the 2D or 3D bounding box parameters the same way as \(x_{t}\), \(y_{t}\) coordinates, except with no <SEP> tokens.
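A rough PyTorch sketch of the embedding in Equation 4 follows; the dimensions (65-way vocabulary, 32D token embedding, 256D output) come from the text, while the exact layer sizes inside the 2-layer MLP and the maximum sequence length are assumptions.

```python
import torch
import torch.nn as nn

class LoopTokenEmbedding(nn.Module):
    """Token embedding of Equation 4: MLP(W_emb x_t || W_emb y_t) + gamma_t."""
    def __init__(self, vocab=65, tok_dim=32, hid=256, max_len=128):
        super().__init__()
        self.w_emb = nn.Embedding(vocab, tok_dim)          # W_emb
        self.mlp = nn.Sequential(nn.Linear(2 * tok_dim, hid), nn.ReLU(),
                                 nn.Linear(hid, hid))       # 2-layer MLP
        self.pos = nn.Embedding(max_len, hid)                # gamma_t

    def forward(self, x_tok, y_tok):
        """x_tok, y_tok: (batch, T) integer tokens; a <SEP> repeats its id twice."""
        pair = torch.cat([self.w_emb(x_tok), self.w_emb(y_tok)], dim=-1)
        positions = torch.arange(x_tok.size(1), device=x_tok.device)
        return self.mlp(pair) + self.pos(positions)          # (batch, T, 256)

emb = LoopTokenEmbedding()
tokens = torch.randint(0, 65, (2, 10))
print(emb(tokens, tokens).shape)                             # torch.Size([2, 10, 256])
```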
**Vector Quantization**: The outputs of the encoder (\(E\)), with sequence length \(T\), are first average pooled, forming \(\overline{E}(T^{E})\). The standard vector quantization procedure is then applied to obtain a 256D codebook vector \(\mathbf{c}\). More specifically, we compare the Euclidean distance between each codebook vector \(\mathbf{b}_{i}\) and the encoded \(\overline{E}(T^{E})\) and perform a nearest neighbor lookup.
\[\mathbf{c}\leftarrow\mathbf{b}_{k},\quad\text{where }k=\text{argmin}_{i} \left|\left|\overline{E}(T^{E})-\mathbf{b}_{i}\right|\right|^{2}. \tag{5}\]
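The lookup in Equation 5 is the standard nearest-neighbour quantization; a minimal sketch (straight-through gradients and the EMA codebook updates mentioned later are omitted):

```python
import torch

def quantize(pooled, codebook):
    """pooled: (batch, 256) mean-pooled encoder outputs.
    codebook: (K, 256) code vectors b_i.
    Returns the nearest code c for each sample and its index k (Equation 5)."""
    distances = torch.cdist(pooled, codebook) ** 2   # squared Euclidean, (batch, K)
    k = distances.argmin(dim=1)                      # argmin_i ||E - b_i||^2
    return codebook[k], k

pooled = torch.randn(4, 256)
codebook = torch.randn(2500, 256)                    # e.g. the loop codebook size
codes, indices = quantize(pooled, codebook)
print(codes.shape, indices.shape)                    # torch.Size([4, 256]) torch.Size([4])
```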
**Decoder with Masked Skip Connection**: The decoder takes the quantized code \(\mathbf{c}\) and the input series of x-y coordinates and <SEP> tokens with masking, and predicts the masked tokens. For example, in the case of a loop node, any of the \(x_{t}\), \(y_{t}\) and <SEP> tokens could be masked (concretely, 30% to 70% of the tokens per model are masked at random). Let \(T_{t}^{D}\) denote the embedding of the \(t^{\text{th}}\) token as an input to the decoder. Each token is embedded exactly as in Equation 4, except that embeddings of masked tokens are replaced with a learnable shared 32D mask token embedding \(m\).
The 256D codebook vector \(\mathbf{c}\) from the encoder is concatenated together with \(\{T_{t}^{D}\}\) and passed to the decoder (\(D\)), which has 4 self-attention layers. The idea here is to force the encoder to learn useful latent features that can help the decoder to predict the masked tokens. Finally, an MLP is applied to each token embedding (except the codebook vector) after the decoder to produce \((2\times 65)\)D logits, a pair of probability values over the 65 class labels for predicting the
Figure 4: Left: VQ-VAE with masked skip connection for codebook learning. Given a CAD model as a construction sequence (e.g., x, y, S), an MLP and a Transformer encoder convert the input to latent codes (\(T_{t}^{E}\)), and a vector quantization extracts a code (\(c\)) after average pooling. A Transformer decoder recovers the input sequence, conditioned on the vector-quantized code (\(c\)) and the masked input sequence (\(T_{t}^{D}\)). Grey color represents input tokens that were masked out. Right: Controllable CAD generation module with two-stage auto-regressive generators. Given a partial CAD model, a model encoder converts it to latent embeddings (\(T_{t}^{E}\)). The first auto-regressive Transformer generates hierarchical neural codes (\(T_{t}^{C}\)) conditioned on the encoded embeddings. The second auto-regressive Transformer generates a new CAD model.
xy-coordinates or the <SEP> token.
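A sketch of the masking on the skip connection is given below. For brevity it replaces tokens at the final embedding level, whereas the text replaces the 32D \(W_{\text{emb}}\) embedding before the MLP; the per-model sampling of the mask ratio is also an assumption.

```python
import torch

def mask_decoder_input(token_emb, mask_emb, lo=0.3, hi=0.7):
    """token_emb: (batch, T, D) embedded decoder input tokens.
    mask_emb: (D,) learnable shared mask embedding m.
    Replaces a random 30%-70% of each sequence's tokens with mask_emb."""
    batch, T, _ = token_emb.shape
    ratio = torch.empty(batch, 1).uniform_(lo, hi)        # mask ratio per model
    keep = torch.rand(batch, T) >= ratio                   # False = masked position
    return torch.where(keep.unsqueeze(-1), token_emb, mask_emb)

tokens = torch.randn(2, 12, 256)
mask_token = torch.zeros(256)
print(mask_decoder_input(tokens, mask_token).shape)        # torch.Size([2, 12, 256])
```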
**Loss Function**: The training loss consists of three terms:
\[\sum_{t}\text{EMD}\Big(D(\mathbf{c},\{T_{t}^{D}\}),\ \mathbbm{1}_{T_{t}}\Big)+\left|\left|sg[\overline{E}(T^{E})]-\mathbf{c}\right|\right|_{2}^{2}+\beta\left|\left|\overline{E}(T^{E})-sg[\mathbf{c}]\right|\right|_{2}^{2}. \tag{6}\]
The first term is the squared Earth Mover's Distance Loss between the decoder output probability and the corresponding data property's one hot encoding \(\mathbbm{1}_{T_{t}}\). The loss is only applied at masked tokens. We use the EMD loss function from (Hou et al., 2016) which assumes ordinal class labels and penalizes predictions closer to the ground-truth less than those far away. This works better than a cross-entropy loss since x-y coordinates carry distance relations, allowing the loss to focus on predictions far away from the ground-truth. Note that we treat the <SEP> token in loop data properties differently by applying the standard cross-entropy loss on it as this is not an ordinal class label.
The second and third terms are the codebook and commitment losses used in VQ-VAE (Van Den Oord et al., 2017; Razavi et al., 2019). \(sg\) denotes the stop-gradient operation, which is the identity function in forward pass but blocks gradients in backward pass. \(\beta\) scales the commitment loss and is set to 0.25. We use the exponential moving average updates with a decay rate of 0.99 (Razavi et al., 2019).
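The sketch below illustrates the three terms under common conventions: the squared EMD of Hou et al. is written here as the squared difference of cumulative distributions, and the codebook/commitment terms follow the standard VQ-VAE recipe with stop-gradients (in practice the codebook term is handled by the EMA updates mentioned above); treat this as an illustration rather than the exact training code.

```python
import torch
import torch.nn.functional as F

def squared_emd(logits, target_idx):
    """Squared Earth Mover's Distance for ordinal labels at masked positions.
    logits: (N, 65) decoder outputs; target_idx: (N,) ground-truth token ids."""
    probs = logits.softmax(dim=-1)
    onehot = F.one_hot(target_idx, logits.size(-1)).float()
    return ((probs.cumsum(-1) - onehot.cumsum(-1)) ** 2).sum(-1).mean()

def vq_terms(pooled, code, beta=0.25):
    """Codebook and commitment losses of Equation 6 (sg = detach)."""
    codebook_loss = F.mse_loss(code, pooled.detach())
    commitment_loss = F.mse_loss(pooled, code.detach())
    return codebook_loss + beta * commitment_loss

logits = torch.randn(8, 65)
targets = torch.randint(0, 65, (8,))
loss = squared_emd(logits, targets) + vq_terms(torch.randn(8, 256), torch.randn(8, 256))
print(float(loss))
```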
## 5 Controllable CAD Generation
Loop, profile, and solid codebooks allow us to express the design concepts of a CAD model as hierarchical neural codes, enabling diverse and high-quality generation, novel user controls specifying design intent, and autocompletion of incomplete CAD models. Concretely, given an incomplete CAD model as a sketch and extrude construction sequence: 1) A model encoder turns the input sequence into latent embeddings; 2) An auto-regressive Transformer generates a code tree, conditioned on the embedded input sequence; and 3) The second auto-regressive Transformer generates the full CAD model, conditioned on the embedded input sequence and a code tree.
**Model Encoder:** The model encoder backbone is the standard Transformer encoder module with 6 self-attention layers. We borrow the format used in SkexGen (Xu et al., 2022) and represent a model as a sequence of tokens, each of which is a one-hot vector, uniquely determining a curve type, quantized curve parameter and quantized extrusion parameter. The encoder converts the one-hot vectors into a series of 256D latent embeddings \(\{T_{t}^{E}\}\). 1
Footnote 1: As in SkexGen, we encode “geometry” and “extrusion” sequence separately and concatenate the embeddings to get \(T_{t}^{E}\). For experiments with 2D sketches, only the geometry encoder is used.
**Code Tree Generator:** \(G_{\text{code}}\) is an autoregressive decoder which generates a hierarchy of codes \(\{T_{t}^{C}\}\). A code is assigned to each (S)olid, (P)rofile, or (L)oop from a corresponding codebook, conditioned on the encoded embeddings \(\{T_{t}^{E}\}\). Similar to the hierarchical property representation (section 3), hierarchical codes are represented as a series of feature vectors indicating either a code or a separator token. Concretely, a feature is a one-hot vector whose size is the total number of codes in the three codebooks plus one for the separator. For example, consider the code tree in Figure 3, consisting of a model with one solid and two profiles, which contain two and four loops respectively. This tree is represented as features in the following order [S, <SEP>, P, L, L, <SEP>, P, L, L, L, L]. Here we perform depth-first traversal of the neural code tree and the boundary command <SEP> is used to indicate a new grouping of profile and loop codes.
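A small sketch of this depth-first flattening, using a nested-dict tree layout of our own choosing for illustration:

```python
SEP = "<SEP>"

def flatten_code_tree(solids):
    """Flatten a (S)olid-(P)rofile-(L)oop code tree into the sequence
    [S, <SEP>, P, L, ..., <SEP>, P, L, ...] fed to the code generator."""
    sequence = []
    for solid in solids:
        sequence.append(solid["code"])
        for profile in solid["profiles"]:
            sequence.append(SEP)              # new profile/loop grouping
            sequence.append(profile["code"])
            sequence.extend(profile["loops"])
    return sequence

tree = [{"code": "S0",
         "profiles": [{"code": "P3", "loops": ["L7", "L1"]},
                      {"code": "P5", "loops": ["L2", "L2", "L9", "L4"]}]}]
print(flatten_code_tree(tree))
# ['S0', '<SEP>', 'P3', 'L7', 'L1', '<SEP>', 'P5', 'L2', 'L2', 'L9', 'L4']
```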
\(G_{\text{code}}\) has 6 self-attention (SA) layers interleaved with 6 cross-attention (CA) layers. The first SA layer is over the query tokens \(\{T_{t}^{C}\}\), each of which is initialized by a position encoding \(\gamma_{t}\) and autoregressively estimated. The input to each of the CA layers is \(\{T_{t}^{E}\}\). Each SA or CA layer has 8-head attention, followed by an Add-Norm layer. A query token \(\{T_{t}^{C}\}\) will have a generated code index, which is converted to a code \(T_{t}^{C}\). A separator is replaced by a learnable embedding.
\[T_{t}^{C}\leftarrow\begin{cases}\text{Codebook}(T_{t}^{C})+\gamma_{t}&\text{(for code)},\\ W_{\text{emb}}\text{<SEP>}+\gamma_{t}&\text{(for <SEP>)}.\end{cases} \tag{7}\]
Codebook denotes the mapping from a code index to the code. We train \(G_{\text{code}}\) with the standard cross-entropy loss. Note that for unconditional generation, we remove the partial CAD model encoder and train SA layers with query tokens (\(\{T_{t}^{C}\}\)) only, without cross-attention layers and \(\{T_{t}^{E}\}\).
**Model Generator:** The model generator is the second auto-regressive decoder \(G_{\text{cad}}\), generating a sketch-and-extrude CAD model. \(G_{\text{cad}}\) is the same as the SkexGen decoder (Xu et al., 2022) except that partial CAD model embeddings \(\{T_{t}^{E}\}\) and the hierarchical neural codes \(\{T_{t}^{C}\}\) control the generation via the cross-attention layers, while SkexGen only allows the specification of global codes. The architecture specification is the same as the first decoder. The query tokens (\(T_{t}^{out}\)) contain the generated CAD command sequences as one-hot vectors (Xu et al., 2022), where we use the same standard cross entropy loss.
## 6 Evaluation
This section presents unconditional and conditional generation results, which demonstrate 1) Higher quality, diversity, and complexity compared to current state-of-the-art; 2) Controllable generation via hierarchical neural codes; and 3) Two important applications, user-edit and auto-completion.
### Experiment Setup
**Dataset**: We use the large-scale DeepCAD dataset (Wu et al., 2021) with ground-truth sketch-and-extrude models. DeepCAD contains 178,238 sketch-and-extrude models with a split of \(90\%\) train, \(5\%\) validation, and \(5\%\) test samples. We detect and remove duplicate models from the training set as in prior works (Willis et al., 2021; Xu et al., 2022). After extracting the hierarchical properties for (L)oop, (P)rofile, and (S)olid (section 3), we also remove duplicate properties for each level. Lastly, we use a CAD model for training only when the number of solids is at most 5, the number of loops is at most 20 for every profile, the number of curves is at most 60 for every loop, and the total number of commands in the sketch-and-extrude sequence is at most 200. After the duplicate removal and filtering, the training set contains 102,114 solids, 60,584 profiles, 150,158 loops for codebook learning, and 124,451 sketch-and-extrude sequences for CAD model generation training. For CAD engineering drawings, we follow SkexGen (Xu et al., 2022) and extract sketches from DeepCAD. A total of 99,650 sketches are used for training after duplicate removal.
**Implementation Details**: Models are trained on an Nvidia RTX A6000 GPU with a batch size of 256. The codebook module and the generation module are trained for 250 and 350 epochs, respectively. We use the improved Transformer backbone with pre-layer normalization as in (Wu et al., 2021; Xu et al., 2022). Input embedding dimension is \(256\). Feed-forward dimension is \(512\). Dropout rate is \(0.1\). Each Transformer network in the generation module has 6 layers with 8 attention heads. The codebook learning networks have 4 layers. We use the AdamW (Loshchilov and Hutter, 2018) optimizer with a learning rate of \(0.001\) after linear warm-up for 2000 steps. At test time, we use nucleus sampling (Holtzman et al., 2020) to autoregressively generate the codes and CAD tokens. To reduce overfitting, we follow (Xu et al., 2022) and augment the training data by adding a small random noise to the input curve coordinates.
VQ-VAEs suffer from codebook collapse and we employ an approach from Jukebox (Dhariwal et al., 2020) that reinitializes under-utilized codes (fewer than 7 mapped samples). To identify the optimal codebook size, we trained our model using different codebook sizes and evaluated the unconditional generation results using the \(5\%\) validation set in DeepCAD. Our analysis revealed that the model performance was best for codebook sizes ranging from 2,000 to 4,000, with larger codebooks not providing noticeable improvements. Our final codebook size is around 3,500 for profile and solid, and 2,500 for loop. The compression ratio of dividing the number of unique data by the codebook size is approximately 60x for loop, 17x for profile, and 29x for solid.
### Metrics
Five established metrics quantitatively evaluate random generation. Three metrics are based on point clouds sampled on the model surfaces. Two metrics are based on generated tokens of sketch and extrude construction sequence.
**Point-cloud** metrics measure generation diversity and quality by sampling 2,000 points on the surface of each generated or ground-truth model and comparing the two sets of point clouds (Achlioptas et al., 2018; Wu et al., 2021; Xu et al., 2022).
\(\bullet\)_Coverage_ (COV) is the percentage of ground-truth models that have at least one matched generated sample. The matching process assigns every generated sample to its closest neighbor in the ground-truth set based on Chamfer Distance
Figure 5: Unconditional generation results by (a) DeepCAD, (b) SkexGen and (c) our method. The bottom three rows (red color) show complex samples with three or more sketch-extrude steps.
(CD) or Earth Mover's Distance (EMD). COV measures the diversity of generated shapes. If CAD generation suffers from mode collapse, generated shapes would only match a few ground-truth models, leading to low coverage scores.
\(\bullet\)_Minimum Matching Distance_ (MMD) reports the average minimum matching distance between the ground-truth set and the generated set.
\(\bullet\)_Jensen-Shannon Divergence_ (JSD) is the similarity between two probability distributions, measuring how often the ground-truth point clouds occupy similar locations as the generated point clouds. We voxelize the 3D space and count the number of points in each voxel. This gives us occupancy distributions for computing the JSD score.
**Token** metrics measure uniqueness (Willis et al., 2021). Numeric fields are quantized to 6-bit.
\(\bullet\)_Novel_ is the percentage of generated CAD sequences that do not appear in the training set.
\(\bullet\)_Unique_ is the percentage of generated data that appears once in the generated set.
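As a rough sketch of how the point-cloud metrics above can be computed, the snippet below evaluates COV and MMD with a Chamfer distance over small toy point clouds; the JSD voxelization is omitted, and the brute-force distance computation is only meant for illustration.

```python
import numpy as np

def chamfer(a, b):
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def coverage_and_mmd(generated, ground_truth):
    """COV: fraction of ground-truth shapes matched by at least one generation,
    where each generated shape is assigned to its closest ground-truth shape.
    MMD: average distance from each ground-truth shape to its closest generation."""
    dists = np.array([[chamfer(g, r) for r in ground_truth] for g in generated])
    cov = len(set(dists.argmin(axis=1))) / len(ground_truth)
    mmd = dists.min(axis=0).mean()
    return cov, mmd

generated = [np.random.rand(200, 3) for _ in range(5)]
ground_truth = [np.random.rand(200, 3) for _ in range(4)]
print(coverage_and_mmd(generated, ground_truth))
```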
### Unconditional Generation
We compare with two sketch-and-extrude baselines, DeepCAD (Wu et al., 2021) and SkexGen (Xu et al., 2022), for the unconditional generation task. Our cascaded auto-regressive system generates a code tree and then a CAD model. Each method generates 10,000 CAD models, which are compared with randomly selected 2,500 ground truth models from the test set.
**Quantitative Evaluation**: Table 1 reports the average scores across 3 different runs. Our method outperforms baselines on all three point cloud evaluation metrics, demonstrating great improvements in quality and diversity. The _Unique_ score of our method matches SkexGen and is significantly better than DeepCAD. For the _Novel_ score, our method is slightly worse than SkexGen, while still significantly better than DeepCAD, which is caused by the smaller training set that lacks diversity and has only a few complex shapes. SkexGen suffers less from this issue since it fails to generate very complex CAD models (see Figure 5).
**Qualitative Evaluation**: Figure 5 provides side-by-side qualitative comparisons at different steps of sketch-and-extrude. The figure shows that our approach generates well-structured CAD models, reminiscent of real-world examples. Generated solids have more complicated shape geometries and part arrangements. Additional qualitative results are available in Figure 15 and Figure 16. Also see Appendix C for the sketch generation results.
**Human Evaluation**: To evaluate the perceived quality of our generation results, we run a human evaluation following the methodology in (Jayaraman et al., 2022). As our hierarchical technique excels at generating complex models, we choose to perform the human evaluation on models with three or more extrusions. For the DeepCAD and SkexGen benchmarks, where control over the complexity of the generated models is not possible, we randomly sample models that have three or more extrusions from a larger pool of unconditional generation results. For each model created by a generative method, we randomly select another ground-truth model from DeepCAD and display renderings of the two
| Method | COV % ↑ | MMD ↓ | JSD ↓ | Novel % ↑ | Unique % ↑ | Realism % ↑ |
| --- | --- | --- | --- | --- | --- | --- |
| DeepCAD | 80.62 | 1.10 | 3.29 | 91.7 | 85.8 | 38.7 |
| SkexGen | 84.74 | 1.02 | 0.90 | 99.1 | 99.8 | 46.9 |
| Ours | 87.73 | 0.96 | 0.68 | 93.9 | 99.7 | 49.2 |

Table 1: Quantitative evaluations on the CAD generation task based on the _Coverage_ (COV) percentage, _Minimum Matching Distance_ (MMD), _Jensen-Shannon Divergence_ (JSD), the percentage of _Unique_ and _Novel_ scores and _Realism_ as perceived by human evaluators.
Figure 6: Distribution of votes by 7 human evaluators comparing the realism of complex samples produced by the three methods with the training set.
Figure 7: Generated results from hierarchical code tree editing. Code is edited in the (L)oop level of the tree in the first row, the (P)profile level of the tree in the second row, and the (S)olid level of the tree in the third row. The code tree corresponding to each result is inset below.
side by side. The image pairs were presented to crowd workers from the Amazon Mechanical Turk workforce (Mishra, 2019), who were asked to evaluate which of the two is more "realistic". To assist the crowd workers with this task, we provide carefully chosen examples of complex CAD models and low quality generations. See Appendix A for details.
Each image pair was rated independently by 7 crowd workers and we record the number of times generated data was selected as more realistic than training data, giving us a "realism" score from 0 to 7. Figure 6 shows the distribution of the "realism" scores. We see that for our method the distribution is symmetric, indicating the crowd workers are unable to distinguish the generated models from the training data. DeepCAD and SkexGen distributions are skewed towards "less realistic", indicating crowd workers were able to identify models generated by them as simplistic or malformed. We consider a generated model as more "realistic" than the training data if 4 or more of the 7 raters selected it. For our method, 49.2% of the generated models were more "realistic" than complex examples from the training data, compared to 46.9% for SkexGen and 38.7% for DeepCAD.
### Controllable Generation
We demonstrate controllable generation in two "editing" scenarios and one "auto-completion" application scenario.
**Code Tree Editing**: Given a code tree, a user can edit the code nodes at three different levels, achieving local and global modifications across the CAD hierarchy. This hierarchical control over the generation is unavailable in previous methods (Wu et al., 2021; Xu et al., 2022). Figure 7 illustrates the diverse and well-controlled generated results from editing of the code tree. We see that loop codes control the shape geometry, profile codes control the loop dimension and positioning in 2D, and finally solid code controls the height of extruded sketches and their 3D combination.
**Design-Preserving Editing**: With the code tree fixed, a user can preserve the current design while making local edits to the model parameters to iteratively refine it. Treating the user-edited parameters as partial input and reusing the previous neural codes, the model generator outputs a new CAD model following both the current design and the user-edited values. Figure 8 demonstrates that after a user edit to the horizontal length of a local part, the bottom part on the left and the two supporting arms on the right adjust their size accordingly to accommodate the user edit. This automatic process is the result of keeping the code tree that encodes the part connectivity and dimension relations.
**Autocompletion from User Input**: We consider partial user input in the format of one or multiple extruded profiles or loops. Our code tree generator can predict a set of likely codes from partial input and use the generated code together with the partial input to autocomplete the full CAD model. Figure 9 shows the sketch autocomplete results when a user provides partial loops and the model completes the full
Figure 8: CAD parameter edits with fixed code trees. Red arrows indicate the individual parts edited by the user. Other parts automatically got modified.
Figure 10: Autocompleted CAD models (blue) from partial extruded profiles (gray).
Figure 9: Autocompleted sketches (column \(2\sim 4\)) from partial loops (column 1).
sketch. Likewise, Figure 10 shows the CAD autocomplete results from partial extruded profiles. Each row contains multiple generated results from different generated codes. Here we use top-1 sampling in the model generator to limit the generation diversity to code only. See Appendix B and Appendix C for additional results.
For comparison, we also implemented a nearest-neighbor search baseline. Partial CAD solids built from intermediate steps of the sketch-and-extrude formed a ground-truth incomplete CAD database. User input is the query and we compute its Chamfer distance to all shapes in the database. The k-nearest shapes are considered to be geometrically similar and we retrieve their corresponding ground-truth complete CAD models as the completed result. Figure 11 compares our generated results with the top-3 nearest neighbor results. Nearest-neighbor results have less diversity and fail to closely match the user input. In contrast, our results correctly auto-complete the user input with high diversity.
### Instance-Agnostic Design Pattern
To better understand the unsupervised features learned by the codebook, Figure 12 shows the data and code mapping after encoding. We see that data assigned to the same code share similar instance-agnostic design patterns, such as the oscillating pattern in the first row, while effectively ignoring data-specific details like the exact number of curves or its type. More visualization is in Appendix D.
## 7 Limitations
A primary failure mode of our current system is the generation of invalid CAD models with self-intersecting edges or solids. Our loss functions do not explicitly penalize invalid geometries; a direction for future work is to add a loss function that explicitly penalizes CAD model invalidity using domain knowledge. Another direction is to learn to recover from such failures, which currently poses a challenge due to the lack of an "invalid CAD model dataset". Lastly, another limitation of our approach is the use of the sketch-and-extrude CAD format that excludes other popular modeling operations such as revolve, mirror, and sweep.
## 8 Conclusion
We introduce a novel generative model for controllable CAD generation. A key to our approach is a three-level neural coding that captures design patterns and intent at different levels of the modeling hierarchy. This paper makes another step towards intelligent generative design with users in the loop. Extensive evaluations demonstrate major boosts in generation quality and promising applications of our hierarchical neural coding such as intent-aware editing or auto-completion.
## Acknowledgements
This research is partially supported by NSERC Discovery Grants with Accelerator Supplements and DND/NSERC Discovery Grant Supplement, NSERC Alliance Grants, and John R. Evans Leaders Fund (JELF).
Figure 11: Qualitative comparison between our method (center) and the nearest neighbor search baseline (right). Given the same partial user input (left), our method autocompletes the CAD model with better diversity and fidelity.
Figure 12: Loops (row 1,2) and profiles (row 3) encoded to the same code. Profiles shown as bounding boxes. |
2309.16287 | Predicting performance difficulty from piano sheet music images | Estimating the performance difficulty of a musical score is crucial in music
education for adequately designing the learning curriculum of the students.
Although the Music Information Retrieval community has recently shown interest
in this task, existing approaches mainly use machine-readable scores, leaving
the broader case of sheet music images unaddressed. Based on previous works
involving sheet music images, we use a mid-level representation, bootleg score,
describing notehead positions relative to staff lines coupled with a
transformer model. This architecture is adapted to our task by introducing an
encoding scheme that reduces the encoded sequence length to one-eighth of the
original size. In terms of evaluation, we consider five datasets -- more than
7500 scores with up to 9 difficulty levels -- , two of them particularly
compiled for this work. The results obtained when pretraining the scheme on the
IMSLP corpus and fine-tuning it on the considered datasets prove the proposal's
validity, achieving the best-performing model with a balanced accuracy of
40.34\% and a mean square error of 1.33. Finally, we provide access to our
code, data, and models for transparency and reproducibility. | Pedro Ramoneda, Jose J. Valero-Mas, Dasaem Jeong, Xavier Serra | 2023-09-28T09:33:47Z | http://arxiv.org/abs/2309.16287v1 | # Predicting Performance Difficulty From
###### Abstract
Estimating the performance difficulty of a musical score is crucial in music education for adequately designing the learning curriculum of the students. Although the Music Information Retrieval community has recently shown interest in this task, existing approaches mainly use machine-readable scores, leaving the broader case of sheet music images unaddressed. Based on previous works involving sheet music images, we use a mid-level representation, bootleg score, describing notehead positions relative to staff lines coupled with a transformer model. This architecture is adapted to our task by introducing an encoding scheme that reduces the encoded sequence length to one-eighth of the original size. In terms of evaluation, we consider five datasets--more than 7500 scores with up to 9 difficulty levels--, two of them particularly compiled for this work. The results obtained when pretraining the scheme on the IMSLP corpus and fine-tuning it on the considered datasets prove the proposal's validity, achieving the best-performing model with a balanced accuracy of 40.34% and a mean square error of 1.33. Finally, we provide access to our code, data, and models for transparency and reproducibility.
## 1 Introduction
Estimating the difficulty of a piece is crucial for music education, as it enables the effective structuring of music collections to attend to the student's needs. This has led to a growing research interest [1, 2, 3, 4], as well as the development of automatic systems for exploring difficulties by major industry players such as Muse Group [5, 6] and Yousician [7].
Previous research on predicting piano difficulty has primarily focused on symbolic machine-readable scores [1, 2, 4, 8, 9, 10]. Early studies explored feature engineering descriptors [1, 2] and the relationship between piano fingering and difficulty [8, 9, 10]. A recent study [4] used stacked recurrent neural networks and context attention for difficulty classification on machine-readable scores, employing embeddings from automatic piano fingering, piano expressive generation [11], and score information. This study found that modeling the score difficulty classification task as an ordinal regression problem [12] was advantageous, and using entire pieces for training, rather than fragments, was essential to avoid degraded performance.
Although symbolic machine-readable scores offer more interpretability [10], with all the music information completely accessible, their limited availability compared to sheet music images restricts the practical use of difficulty prediction tools for librarians, teachers, and students. Focusing on sheet music image analysis expands the range of available music, has the potential to preserve the cultural heritage of symbolic-untranscribed scores, and addresses the lack of diversity in Western classical piano curricula. By analyzing image-based sheet music, we aim
Figure 1: We consider the bootleg score mid-representation with a multi-task GPT-based recognition framework to predict the performance difficulty associated to a piano score directly from sheet images from multiple annotated collections with varied difficulty levels.
to create technology for highlighting historically under-represented communities like female composers [13, 14] and promoting diversity in piano education. This promotion is crucial since the piano teaching repertoire has remained mostly unchanged for decades [15], containing around 3,300 pieces [16], while projects such as IMSLP house remarkably larger databases.
One of the main challenges in working with sheet music is attaining a symbolic music-based representation for direct analysis. Although Optical Music Recognition (OMR) literature has considerably improved in creating such representations over the past 30 years, it remains an unsolved task [17]. Bootleg score [18] is an alternative to symbolic scores obtained with OMR. This mid-level symbolic representation keeps the most relevant primitives of the music content in a music sheet, which has shown remarkable success in several tasks [19, 20, 21, 22], especially in classification, such as piano composer classification [23, 24, 19] or instrument recognition [25].
We build on this literature, employing the GPT model [26] and bootleg score in our analysis. More precisely, we consider the approach by Tsai et al. [18], in which a GPT model pretrained on the IMSLP piano collection is finetuned for specific recognition tasks. With adequate adaptations, we hypothesize that this framework may also succeed in estimating performance difficulty on music sheet images.
As aforementioned, difficulty estimation benefits from the use of entire music pieces rather than excerpts to obtain adequate success rates. However, processing long sequences stands as a remarkable challenge in music processing, especially when addressing bootleg representations due to their considerable verbosity. While some recent mechanisms address this issue in general learning frameworks (_e.g._, Flash Attention [27]), we extend the original proposal by Tsai et al. [18] with a multi-hot optimization target for GPT pretraining, and replace the categorical encoding with causal convolutional or feedforward projection layers to enhance performance and reduce costs.
Moreover, addressing data scarcity is crucial for promoting and establishing this task within the Music Information Retrieval community. As of now, the _Mikrokosmos-difficulty_ (MK) [10] and _Can I Play It?_ (CIPI) [4] symbolic datasets stand as the only available annotated collections, out of which music sheet images can be obtained by engraving mechanisms. To enhance data availability and encourage further research, we have collected additional datasets from existing collections, namely _Pianostreet-difficulty_ (PS), _Freescore-difficulty_ (FS), and the _Hidden Voices_ (HV) collection of works by black female composers. This results in more than 7500 music pieces, spanning up to 9 difficulty levels and each annotated with a difficulty classification system. Although difficulty prediction contains a subjective element, global trends may emerge when examining multiple difficulty classification systems simultaneously. To our knowledge, no previous research has explored this aspect. Consequently, we propose a multitask approach to training simultaneously on the CIPI, PS, and FS datasets. Finally, we also analyze the generalization of our proposed methodologies with the MK and HV benchmark datasets.
Considering all above, our precise contributions are: (i) we adopt the previous bootleg-representation literature [23, 24], pretraining a GPT model on IMSLP and finetuning it for our task, adapting the encoding scheme accordingly, as presented in Figure 1; (ii) we evaluate our proposal using a novel sheet music image collection of five datasets with more than 7,500 pieces with difficulty levels ranging up to 9; (iii) we propose a multi-task strategy for combining multiple difficulty classification systems from the datasets; (iv) we conduct extensive experiments to assess the proposed methodologies, including a zero-shot scenario for testing generalization and comparisons with previous proposals on the CIPI dataset; and (v) to promote the task, code, and models 1, and datasets 2 are publicly available.
Footnote 1: [https://github.com/PRamoneda/pdf-difficulty](https://github.com/PRamoneda/pdf-difficulty)
Footnote 2: [https://zenodo.com/record/8126801](https://zenodo.com/record/8126801)
## 2 Music Sheet Image Datasets
Due to the relative recentness of the field, the lack of annotated corpora has severely constrained the performance difficulty assessment. The earliest data assortments may be found in the works by Sebastian et al. [1] and Chiu et al. [2], which respectively collected 50 and 300 MIDI scores from different score repositories. However, these datasets were never publicly released.
To our best knowledge, the _Mikrokosmos difficulty_ (MK) set by Ramoneda et al. [10], which comprises 147 piano pieces by Bela Bartok in a symbolic format graded by the actual composer, represents the first publicly available collection for the task at hand. More recently, the authors introduced the _Can I Play It?_ (CIPI) dataset [4], a collection of 652 piano works in different symbolic formats annotated with 9 different difficulty levels. Note that, while sheet music scores can be obtained by resorting to engraving mechanisms, the insights obtained may not apply to real-world scenarios.
To address this limitation, we compiled a set of real sheet music images of piano works together with their performance difficulty annotations from different music education and score-sharing platforms on the Internet. More
| Dataset | Pieces | Classes | AIR | Noteheads | Composers |
| --- | --- | --- | --- | --- | --- |
| MK [10] | 147 | 147 | .78 | 49.2k | 1 |
| CIPI [4] | 652 | 9 | .33 | 1.1M | 29 |
| PS | 2816 | 9 | .24 | 7.2M | 92 |
| FS | 4193 | 5 | .37 | 5.8M | 747 |
| HV | 17 | 4 | 1 | 21.5k | 10 |

Table 1: Description of existing collections for performance difficulty estimation based on the number of pieces, classes, average imbalance ratio (AIR), noteheads, and composers. The first two rows (MK, CIPI) are based on symbolic scores; the remaining rows are based on sheet music images.
precisely, we arranged three different collections attending to the source: (i) the _Pianostreet-difficulty_ (PS) set retrieved from [28] that depicts 2,816 works with 9 difficulty levels annotated by the Pianostreet team; (ii) the _Freescores-difficulty_ (FS) assortment from [29] that contains 4,193 pieces with 5 difficulty levels comprising a variety of compositions and annotations by the users of the platform; and (iii) the _Hidden Voices_ (HV) collection [30, 31], a set of 17 pieces by black female composers annotated with 4-level difficulty labels by musicologists of the Colorado Boulder Music Department.
Table 1 summarizes the main characteristics of commented publicly-available collections. The _average imbalance ratio_ (AIR), measured as the mean of the individual ratios between each difficulty class and the majority label in each collection, is also provided for reference purposes.
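For reference, a small sketch of how the AIR figure can be computed from a list of per-piece difficulty labels, following the definition above (the label aggregation details are our assumption):

```python
from collections import Counter

def average_imbalance_ratio(labels):
    """Mean ratio between each difficulty class count and the majority class count."""
    counts = Counter(labels)
    majority = max(counts.values())
    return sum(count / majority for count in counts.values()) / len(counts)

# Toy example with three difficulty levels of sizes 4, 2 and 1.
print(average_imbalance_ratio([1, 1, 1, 1, 2, 2, 3]))  # ~0.58
```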
## 3 Methodology
Based on its success when addressing classification tasks from sheet music images [23, 25], our proposal considers the use of the so-called bootleg score representation coupled with a GPT-based recognition model to estimate the performance difficulty of a piece.
Introduced by [18], bootleg scores stand as a simple--yet effective--representation to encode the content of a sheet music image for certain recognition tasks. Formally, a bootleg score is a binary matrix of length \(w\) and \(h=62\) vertical positions--_i.e._, \(\mathcal{X}\in\{0,1\}^{w\times 62}\)--that respectively denote the temporal and pitch dimensions. Note that the \(w\) value represents the number of note heads detected by the bootleg extraction process. Our work resorts to this representation, being the use of alternative codifications posed as a future line to address.
The GPT recognition framework undergoes an unsupervised pretraining step on the IMSLP piano collection, which was originally used by [18]. Eventually, considering a set of labeled data \(\mathcal{T}\subset\mathcal{X}\times\mathcal{C}\) where \(\mathcal{C}=\left\{c_{1},\dots,c_{|\mathcal{C}|}\right\}\) denotes the possible difficulty levels, the model is fine-tuned to retrieve the recognition function \(\hat{f}:\mathcal{X}\rightarrow\mathcal{C}\) that relates a bootleg representation to a particular difficulty level. Based on previous work addressing this task [4], we consider an ordinal classification framework [12] as the difficulty grading scales naturally fit this formulation.
Despite being capable of addressing the task, the framework was noticeably affected by two factors: (i) the excessive length of the input sequences when pretraining the model; and (ii) the inconsistent definition of difficulty levels among corpora. Consequently, we introduce two mechanisms specifically devised to address these limitations.
### Sequence length in pretraining
One of the main drawbacks related to bootleg representations is their verbosity, as they depict \(h=62\) elements per frame. To address this issue, Tsai et al. [23] proposed subdividing each column into groups of 8 elements and encoding each according to a vocabulary of \(|\sigma|=2^{8}\) elements. In this regard, the initial bootleg score \(x\in\{0,1\}^{w\times 62}\) is mapped to a novel space defined as \(\Sigma^{w\times 8}\). This representation is then flattened to undergo a categorical embedding process that maps it to a feature-based space denoted as \(\mathbb{R}^{8w\times 768}\), which is eventually used for pretraining the GPT model with 768-dim hidden states. Note that this process reduces the vocabulary size and remarkably increases the sequence length.
To address this issue, we propose substituting this tokenization process with an embedding layer that directly maps the bootleg score into a suitable representation, avoiding the extension of the initial length of the sequence. In this sense, the initial bootleg representation \(x\in\{0,1\}^{w\times 62}\) is mapped to a space defined as \(\mathbb{R}^{w\times 768}\) that serves as input to the GPT model with a fraction of the length of the encoding used by Tsai et al. [23]. Besides reducing the length of the sequences to process, we hypothesize that such an embedding may benefit the recognition model as a suitable representation is inferred for the task. In this regard, our experiments will compare two types of embedding approaches--more precisely, a fully-connected layer and a convolutional one, respectively denoted as FC and CNN--to quantitatively assess this claim.
Figure 2 graphically describes the approach by Tsai et al. [23] and the presented proposal. In opposition to the reference work, the proposal considers a multi-hot encoding instead of a discrete categorical index as the output of the GPT recognition framework, by using a binary cross-entropy loss instead of a negative log-likelihood loss.
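The contrast between the two encodings can be sketched as follows: the byte-group tokenization of Tsai et al. [23] stretches a length-\(w\) bootleg score into \(8w\) tokens, whereas the proposed projection keeps the length at \(w\); the layer sizes follow the text, while the padding of 62 rows to 64 before grouping is an assumption on our side.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def byte_group_tokens(bootleg):
    """Tsai et al. style: pad 62 rows to 64, split each column into 8 bytes -> 8w tokens."""
    w = bootleg.shape[0]
    padded = F.pad(bootleg, (0, 2))                       # (w, 64)
    groups = padded.view(w, 8, 8)                         # 8 groups of 8 bits per frame
    weights = 2 ** torch.arange(8)
    return (groups * weights).sum(-1).long().view(-1)     # (8w,) byte-valued tokens

class DirectEmbedding(nn.Module):
    """Proposed FC variant: one 62-dim binary frame -> one 768-dim token."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(62, 768)

    def forward(self, bootleg):                            # (w, 62) float
        return self.proj(bootleg)                          # (w, 768)

x = (torch.rand(16, 62) > 0.95).float()                    # toy bootleg score, w = 16
print(byte_group_tokens(x).shape, DirectEmbedding()(x).shape)  # (128,) and (16, 768)
```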
### Multi-task learning of multiple difficulty classification systems
The pretrained GPT model can be simply finetuned for a performance difficulty classification task by adding a projection layer and a learnable classification token, as de
Figure 2: Comparison between the proposal by Tsai et al. [23]—denoted as (a)—and the presented proposal—highlighted as (b)—for a toy example with a duration of \(w=4\).
picted in Figure 3. However, the actual definition of the performance difficulty of a piece is a highly subjective problem that may bias--and, hence, remarkably hinder--the goodness of a recognition model. In this regard, we hypothesize that using a multi-task approach that attends to different definitions of difficulty--_i.e._, a labeled assortment of data from multiple annotators--may benefit the generalization capabilities of the approach.
In this regard, we modify the reference architecture for the downstream task to include an additional classification layer for each training collection. While simple, such a proposal is expected to improve the overall recognition performance given the wider variety of data provided during the training process. Figure 3 graphically describes this proposal.
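A minimal sketch of this multi-task head, with one classification layer per difficulty annotation system on top of the shared embedding of the classification token; the module layout and the way a dataset identifier selects its head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class MultiTaskDifficultyHead(nn.Module):
    """One classification head per training collection (CIPI, PS, FS)."""
    def __init__(self, hidden=768, n_classes=None):
        super().__init__()
        n_classes = n_classes or {"CIPI": 9, "PS": 9, "FS": 5}
        self.heads = nn.ModuleDict({name: nn.Linear(hidden, k)
                                    for name, k in n_classes.items()})

    def forward(self, cls_embedding, dataset):
        """cls_embedding: (batch, hidden) GPT output at the classification token."""
        return self.heads[dataset](cls_embedding)

head = MultiTaskDifficultyHead()
z = torch.randn(4, 768)
print(head(z, "PS").shape)    # torch.Size([4, 9])
```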
Finally, no pre-processing is done in relation to the label distribution of the corpora to avoid inducing any type of bias. In this regard, the sampling protocol of the model has been forced to maintain its original distributions.
## 4 Experimental Setup
### Data collections and assessment metrics
To validate the proposal, we have considered the five publicly-available data collections presented in Section 2, _i.e._, _Mikrokosmos difficulty_ (MK) [10], _Can I Play It?_ (CIPI) [4], _Pianostreet-difficulty_ (PS) [28], _Freescores-difficulty_ (FS) [29], and _Hidden Voices_ (HV) [30, 31]. While MK and CIPI exclusively comprise symbolic scores, we engraved them into music sheets and included them due to the commented scarcity of annotated data.
We considered a 5-fold cross-validation scheme with a data partitioning of 60% for the finetuning phase after the pretraining stage with IMSLP together with two equal-size splits of the remaining data for validation and testing. Note that, since MK and HV are exclusively used for benchmark purposes, no partitioning is applied to them.
In terms of performance evaluation, we resort to two assessment criteria typically used in ordinal classification [32]: _accuracy within_\(n\) (Acc\({}_{n}\)) and _mean squared error_ (MSE). To adequately described them, let \(\mathcal{S}\subset\mathcal{X}\times C\) denote a set of test data and let \(\mathcal{S}_{c}=\{(x_{i},y_{i})\in\mathcal{S}:y_{i}=c\}\) with \(1\leq i\leq|\mathcal{S}|\) be the subset of elements in \(\mathcal{S}\) with class \(c\).
Based on this, Acc\({}_{n}\) is defined as:
\[\text{Acc}_{n}=\frac{1}{|\mathcal{C}|}\sum_{\forall c\in\mathcal{C}}\frac{\left|\left\{(x,y)\in\mathcal{S}_{c}:\left|\hat{f}(x)-c\right|\leq n\right\}\right|}{|\mathcal{S}_{c}|} \tag{1}\]
where \(\hat{f}(\cdot)\) represents the trained recognition model and \(n\in\mathbb{N}_{0}\) denotes the tolerance or class-boundary relaxation that allows for errors in adjacent labels. In our experiments we consider the values of \(n=0\) (no tolerance) and \(n=1\) (smallest adjacency tolerance), respectively denoted as Acc\({}_{0}\) and Acc\({}_{1}\) in the rest of the work.
Regarding MSE, this figure of merit is defined as:
\[\text{MSE}=\frac{1}{|\mathcal{C}|}\sum_{\forall c\in\mathcal{C}}\frac{\sum_{ \forall x\in\mathcal{S}_{c}}\left(\hat{f}(x)-c\right)^{2}}{|\mathcal{S}_{c}|} \tag{2}\]
Finally, note that all these metrics are macro-averaged to account for the unbalanced nature of the data collections used in the work.
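A sketch of the two macro-averaged figures of merit, following Equations 1 and 2 with predictions and labels given as integer class indices:

```python
import numpy as np

def macro_acc_n(y_true, y_pred, n=0):
    """Accuracy within n, averaged over classes (Equation 1)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.mean(np.abs(y_pred[y_true == c] - c) <= n)
                 for c in np.unique(y_true)]
    return float(np.mean(per_class))

def macro_mse(y_true, y_pred):
    """Mean squared error, averaged over classes (Equation 2)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    per_class = [np.mean((y_pred[y_true == c] - c) ** 2)
                 for c in np.unique(y_true)]
    return float(np.mean(per_class))

y_true = [0, 0, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 4]
print(macro_acc_n(y_true, y_pred, 0), macro_acc_n(y_true, y_pred, 1), macro_mse(y_true, y_pred))
```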
### Training procedure
As commented, the recognition model undergoes an initial pretraining stage considering the IMSLP corpus. During this stage, the model processes sequences of 256 tokens and is trained with a binary cross-entropy loss over the multi-hot targets. To speed up this process, the Flash Attention framework by [27] is also considered. For comparative purposes, all other parameters remain unaltered from the reference works [23].
After that, the model is finetuned on the downstream difficulty estimation task, considering an Adam optimizer [33] with a learning rate of \(10^{-5}\) and early stopping based on the Acc\({}_{0}\) and MSE metrics on the validation set. Moreover, a balanced sampler is considered to tackle the issue of unbalanced data collections. Ordinal Loss [12] is applied to train the difficulty prediction as an ordinal classification problem, while no loss weighting is considered in the multi-task framework. For regularization and stable training, gradient clipping is set to \(10^{-4}\), with a batch size of 64 and L2 regularization. This optimization process is carried out exclusively on the last layer of the model, keeping the remaining parts fixed at the weights obtained during the pretraining phase.
Figure 3: Graphical description of the downstream architecture depicting the classification heads for the multi-task proposals as well as the single-head case of the reference work.
Note that, while these training choices could be further tuned to obtain the best-performing configuration, such a study is out of the scope of this work and is left for future research.
## 5 Experiments and Results
This section presents the results obtained with the introduced experimental scheme. To adequately provide insights about the task, the section provides a series of individual experiments, each devoted to analyzing one aspect of the proposal: Section 5.1 analyzes the influence of the encoding scheme; Section 5.2 evaluates the influence of the multitask architecture; Section 5.3 delves into the ranking generalization in a zero-shot scenario; finally, Section 5.4 compares the attainable results when addressing the task from the symbolic versus the sheet-image domains.
### Encoding schemes experiment
This first experiment compares the performance of the two encoding schemes presented in Section 3.1, _i.e._, GPT\({}_{\textsc{FC}}\) and GPT\({}_{\textsc{CNN}}\). Table 2 presents the results obtained for the CIPI, FS, and PS collections for the three figures of merit considered.
As it may be observed, the GPT\({}_{\textsc{CNN}}\) model outperformed the GPT\({}_{\textsc{FC}}\) model in most evaluation metrics across the three datasets. More precisely, the GPT\({}_{\textsc{CNN}}\) consistently achieved the best performance in the Acc\({}_{0}\) metric for all data collections, showing an average improvement of \(1\%\) with respect to the GPT\({}_{\textsc{FC}}\) case. This trend remains for the rest of the figures of merit except for the FS assortment, in which the results of the FC-based model outperform those of the CNN case.
Nevertheless, attending to the high standard deviations, the performance results of the two models show a remarkable overlap in performance, hence suggesting that both schemes are equally capable of performing the posed task of score difficulty analysis from sheet music images. In this regard, further work should explore other encoding alternatives to assess whether this performance stagnation is due to the representation capabilities of the considered embedding layers or due to the recognition framework.
### Multi-task learning experiment
In this second study, we assess the capabilities of the multitask framework proposed in Section 3.2 trained simultaneously on the CIPI, PS, and FS datasets for the two GPT\({}_{\textsc{FC}}^{\texttt{multi}}\) and GPT\({}_{\textsc{CNN}}^{\texttt{multi}}\) encoding schemes. Table 3 provides the results obtained.
Overall, the GPT\({}_{\textsc{FC}}^{\texttt{multi}}\) method achieved better results than the GPT\({}_{\textsc{CNN}}^{\texttt{multi}}\) method on the CIPI and PS datasets, especially on Acc\({}_{0}\) and Acc\({}_{1}\). For CIPI, GPT\({}_{\textsc{FC}}^{\texttt{multi}}\) surpassed GPT\({}_{\textsc{CNN}}^{\texttt{multi}}\) with gains of 5.4% in Acc\({}_{0}\) and 0.6% in Acc\({}_{1}\), together with a 0.1-point reduction in MSE. For PS, GPT\({}_{\textsc{FC}}^{\texttt{multi}}\) slightly exceeded GPT\({}_{\textsc{CNN}}^{\texttt{multi}}\) with a 3.7% improvement in Acc\({}_{1}\) and a 0.6-point reduction in MSE, while Acc\({}_{0}\) was nearly equal for both methods, although GPT\({}_{\textsc{CNN}}^{\texttt{multi}}\) had a smaller standard deviation. Both methods displayed similar performance on the FS dataset, with less than a 1% difference across all metrics. As a result, subsequent experiments will reference the GPT\({}_{\textsc{FC}}^{\texttt{multi}}\) model.
The comparison between Tables 2 and 3 shows a trend change, with better results now obtained with the FC version of the models. The other major difference is the relative improvement between the GPT\({}_{\textsc{FC}}^{\texttt{multi}}\) method and the best previous model GPT\({}_{\textsc{CNN}}\) on the CIPI dataset and, slightly, on the PS dataset. In contrast, the FS dataset results remain comparable. In CIPI, Acc\({}_{0}\) is 11.3% higher in GPT\({}_{\textsc{FC}}^{\texttt{multi}}\), and in PS, there is a relative improvement of 12.8%. For CIPI, Acc\({}_{1}\) sees a minor increase of 0.4%. MSE exhibits a small improvement of 3.6% for CIPI and 0.5% for PS. Possible reasons include label quality differences--CIPI annotated by a musicology team, PS labels provided by the platform, and FS crowdsourced by users--or the impact of dataset sizes--CIPI being the smallest and FS the largest.
### Ranking generalization experiment
In this experiment, we assess the ranking capabilities of the proposal in a zero-shot setting by utilizing the embeddings of the projection layer of the model (check Figure 3). We reduce the 768-dimensional embeddings to a single dimension using Principal Component Analysis (PCA) and employ the resulting values to rank the target pieces.
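A minimal sketch of this zero-shot ranking procedure (our reading of it, with illustrative names) is given below; it uses Kendall's \(\tau_{c}\) as reported in Table 4, which requires a recent SciPy version.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.stats import kendalltau

def zero_shot_ranking_tau(embeddings, difficulty_labels):
    """Project the 768-d projection-layer embeddings onto their first
    principal component and correlate the induced ranking with the labels."""
    scores = PCA(n_components=1).fit_transform(np.asarray(embeddings)).ravel()
    tau_c, _ = kendalltau(scores, difficulty_labels, variant="c")
    # The sign of a principal component is arbitrary, so report the magnitude.
    return abs(tau_c)
```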
| Encoding | Acc\({}_{0}\) (%) | Acc\({}_{1}\) (%) | MSE |
| --- | --- | --- | --- |
| _Can I Play It?_ | | | |
| GPT\({}_{\textsc{FC}}\) | 34.3 (6.1) | 78.1 (4.6) | 1.6 (0.3) |
| GPT\({}_{\textsc{CNN}}\) | **36.2 (8.2)** | **81.7 (1.5)** | **1.4 (0.1)** |
| _PianoStreet_ | | | |
| GPT\({}_{\textsc{FC}}\) | 30.9 (3.8) | 71.1 (9.6) | 2.1 (0.4) |
| GPT\({}_{\textsc{CNN}}\) | **31.8 (1.6)** | **78.8 (1.8)** | **1.9 (0.1)** |
| _FreeScores_ | | | |
| GPT\({}_{\textsc{FC}}\) | 46.6 (1.9) | **92.5 (1.0)** | **0.8 (0.1)** |
| GPT\({}_{\textsc{CNN}}\) | **47.3 (3.4)** | 92.4 (0.6) | 0.8 (0.1) |

Table 2: Results of comparing the encoding schemes GPT\({}_{\textsc{FC}}\) and GPT\({}_{\textsc{CNN}}\). Bold values highlight the best results per collection and metric.

| Encoding | Acc\({}_{0}\) (%) | Acc\({}_{1}\) (%) | MSE |
| --- | --- | --- | --- |
| GPT\({}_{\textsc{FC}}^{\texttt{multi}}\) | | | |
| CIPI | **40.3 (4.3)** | **82.0 (1.4)** | **1.3 (0.1)** |
| PS | 35.9 (3.1) | **78.2 (3.4)** | **1.9 (0.2)** |
| FS | 45.8 (2.5) | 92.0 (1.4) | **0.8 (0.1)** |
| GPT\({}_{\textsc{CNN}}^{\texttt{multi}}\) | | | |
| CIPI | 34.9 (5.0) | 81.4 (1.3) | 1.4 (0.1) |
| PS | **35.9 (2.8)** | 74.5 (3.4) | 2.7 (0.2) |
| FS | **45.9 (1.2)** | **92.4 (2.1)** | 0.8 (0.1) |

Table 3: Results of the multi-task learning experiment when evaluated on different test collections for the two encoding schemes. Bold values highlight the best results per collection and metric.
Table 4 shows the results obtained resorting to the Kendall rank correlation coefficient, \(\tau_{c}\), for all data collections discussed in the experiment, considering both the single-task and multi-task frameworks posed. Note that MK and HV are only used for benchmarking purposes.
In the three training datasets, the multi-task architecture GPT\({}_{\texttt{FC}}^{\texttt{multi}}\) achieves the best performance with CIPI (\(\tau_{c}=0.68\)), PS (\(\tau_{c}=0.59\)), and FS (\(\tau_{c}=0.56\)). Unexpectedly, the model trained only on FS outperforms the others on the MK (\(\tau_{c}=0.71\)) and HV (\(\tau_{c}=0.56\)) benchmarks. This outcome may suggest that simultaneous training on all three datasets could limit generalizability. Alternatively, the presence of license-free pieces composed after 1900 in the FS dataset, which users have uploaded, might explain the difference.
The HV dataset displays notably lower generalizability, possibly due to the smaller number of pieces, resulting in higher standard deviations. Potential bias similar to MK could also arise from the predominance of pre-20th-century data in CIPI and PS. These factors might affect the zero-shot experiment's performance. However, we must also acknowledge that most composers used for training are white males, and the HV results are significantly worse than the rest of the datasets. Therefore, future research should investigate and minimize the potential gender gap in difficulty prediction tasks.
### Comparison with previous approaches
This last experiment compares the performance of the proposed methodology on sheet music scores against other image-based approaches as well as against symbolic-oriented methods. Regarding sheet-image methods, we consider the reference method by Tsai et al. [23] based on the bootleg score mid-level representation, denoted as GPT\({}_{\texttt{ZGH}}\). Concerning the symbolic baseline, we reproduce the approach in [4] that proposes to describe the symbolic score in terms of piano fingering information, expressive annotations, and pitch descriptors to feed a recurrent model based on Gated Recurrent Units with attention layers (referred to as GRU+Att). Table 5 provides the results obtained. For comparative purposes, we only consider the CIPI dataset as the reference symbolic work accounted for that collection.
Examining the experiments, the GPT\({}_{\texttt{FC}}^{\texttt{multi}}\) model may be observed to outperform the other cases in the Acc\({}_{0}\) figure of merit. However, for the rest of the metrics, the reference symbolic case--denoted as GRU+Att--outperforms all image-oriented recognition models. Such a fact suggests that, while a bootleg score somehow suits this difficulty estimation task, a performance gap between this representation and pure symbolic notation needs to be addressed.
Finally, the GPT\({}_{\texttt{ZGH}}\) model achieves the lowest performance of all alternatives, with remarkably lower accuracy rates than our proposal. Note that such a fact emphasizes the relevance of our work as a more suitable approach for performing difficulty estimation in sheet music images.
## 6 Conclusions
Estimating the performance difficulty of a music piece is a crucial need in music education to structure the learning curriculum of the students adequately. This task has recently gathered attention in the Music Information Retrieval field, given the scarce existing research works devoted to symbolic machine-readable scores. However, due to the limited availability of this type of data, there is a need to devise methods capable of addressing this task with image-based sheet music.
Attending to its success in related classification tasks, this work considers the use of a mid-level representation--namely, bootleg score--that encodes the content of a sheet music image with a GPT-based recognition framework for predicting the difficulty of the piece. Instead of directly applying this methodology, we propose using specific embedding mechanisms and multi-task learning to reduce the task complexity and improve its recognition capabilities. The results obtained with five different data collections--three of them specifically compiled for this work--prove the validity of the proposal as it yields recognition rates comparable to those attained in symbolic machine-readable scores.
Further work comprises assessing and proposing alternative representations to the bootleg scores (_e.g._, solutions based on Optical Music Recognition). Also, we consider that using smaller training sequences using hierarchical attention models or weak labels for varying-length piece fragments may report benefits in the process. Finally, the practical deployment of this proposal in real-world scenarios involving real users may report some additional insights about the validity of the proposal.
| Train | CIPI | PS | FS | MK | HV |
| --- | --- | --- | --- | --- | --- |
| CIPI | .67 (.01) | .56 (.02) | .56 (.01) | .67 (.05) | .50 (.05) |
| PS | .67 (.01) | .58 (.02) | .56 (.01) | .68 (.01) | .43 (.04) |
| FS | .64 (.04) | .55 (.01) | .56 (.02) | **.71 (.02)** | **.56 (.07)** |
| MULTI | **.68 (.02)** | **.59 (.02)** | **.56 (.01)** | .63 (.02) | .51 (.07) |
Table 4: Zero-shot ranking results. Bold values denote the best-performing result on each evaluation dataset.
| Case | Acc\({}_{0}\) (%) | Acc\({}_{1}\) (%) | MSE |
| --- | --- | --- | --- |
| _Symbolic_ [4] | | | |
| GRU+Att | 39.5 (3.4) | **87.3 (2.2)** | **1.1 (0.2)** |
| _Tsai et al._ [23] | | | |
| GPT\({}_{\texttt{ZGH}}\) | 19.7 (4.0) | 58.1 (7.2) | 3.3 (0.8) |
| _Proposal_ | | | |
| GPT\({}_{\textsc{FC}}\) | 34.3 (6.1) | 78.1 (4.6) | 1.6 (0.3) |
| GPT\({}_{\textsc{CNN}}\) | 36.2 (8.2) | 81.7 (1.5) | 1.4 (0.1) |
| GPT\({}_{\textsc{FC}}^{\texttt{multi}}\) | **40.3 (4.3)** | 82.0 (1.4) | 1.3 (0.1) |
Table 5: Performance results for the symbolic [4] and Tsai et al. [23] methods as well as the proposed approach for the CIPI dataset. Bold values highlight the best result per figure of merit.
## 7 Acknowledgment
We want to thank T.J. Tsai and all his students, especially Daniel Yang, for having conducted the prior research on the bootleg score and, above all, for sharing all their work in the interest of Open Science. We are also grateful to Pedro D'Avila for bringing to our attention the work of Alejandro Cremaschi related to the Hidden Voices project. Lastly, we thank Alejandro Cremaschi and the University of Colorado Boulder Libraries team, David M. Hays and Jessica Quah, for providing us with the scores.
This work is funded by the Spanish Ministerio de Ciencia, Innovacion y Universidades (MCIU) and the Agencia Estatal de Investigacion (AEI) within the Musical AI Project - PID2019-111403GB-I00/AEI/10.13039/501100011033 and the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Korea Government (MSIT) (NRF-2022R1F1A1074566).
|
2310.20656 | Non-Compositionality in Sentiment: New Data and Analyses | When natural language phrases are combined, their meaning is often more than
the sum of their parts. In the context of NLP tasks such as sentiment analysis,
where the meaning of a phrase is its sentiment, that still applies. Many NLP
studies on sentiment analysis, however, focus on the fact that sentiment
computations are largely compositional. We, instead, set out to obtain
non-compositionality ratings for phrases with respect to their sentiment. Our
contributions are as follows: a) a methodology for obtaining those
non-compositionality ratings, b) a resource of ratings for 259 phrases --
NonCompSST -- along with an analysis of that resource, and c) an evaluation of
computational models for sentiment analysis using this new resource. | Verna Dankers, Christopher G. Lucas | 2023-10-31T17:25:07Z | http://arxiv.org/abs/2310.20656v1 | # Non-Compositionality in Sentiment: New Data and Analyses
###### Abstract
When natural language phrases are combined, their meaning is often more than the sum of their parts. In the context of NLP tasks such as sentiment analysis, where the meaning of a phrase is its sentiment, that still applies. Many NLP studies on sentiment analysis, however, focus on the fact that sentiment computations are largely compositional. We, instead, set out to obtain non-compositionality ratings for phrases with respect to their sentiment. Our contributions are as follows: a) a methodology for obtaining those non-compositionality ratings, b) a resource of ratings for 259 phrases - Non-CompSST - along with an analysis of that resource, and c) an evaluation of computational models for sentiment analysis using this new resource.
## 1 Introduction
In NLP, the topics of the compositionality of language and neural models' capabilities to compute meaning compositionally have gained substantial interest in recent years. Yet, the meaning of linguistic utterances often does not adhere to strict patterns and can be surprising when looking at the individual words involved. This affects how those utterances behave in downstream tasks, such as sentiment analysis. Given a phrase or sentence, that task involves predicting the polarity as positive, negative or neutral. Sentiment largely adheres to compositional principles (Moilanen and Pulman, 2007, p.1): "If the meaning of a sentence is a _function_ of the meanings of its parts then the _global polarity_ of a sentence is a _function_ of the _polarities_ of its parts." Modelling sentiment as a compositional process is, therefore, often mentioned as a design principle for computational sentiment models (e.g. by Socher et al., 2013; Sutherland et al., 2020; Yin et al., 2020).
Nonetheless, one can think of examples where the sentiment of a phrase is unexpected given the sentiment of the individual parts (e.g. see Zhu et al., 2015; Hwang and Hidey, 2019; Barnes et al., 2019; Tahayna et al., 2022, for work on non-compositional sentiment). These include the case of sarcasm ("life is good, you should get one"), opposing sentiments ("terribly fascinating"), idiomatic expressions ("break a leg") and neutral terms that, when composed, suddenly convey sentiment ("yeah right"). Adequately capturing sentiment computationally requires both learning compositional rules and understanding when such exceptions exist, where most contemporary sentiment models are expected to learn that via mere end-to-end training on examples.
How can we identify whether the sentiment of a phrase is non-compositional? We design a protocol to obtain such non-compositionality judgments based on human-annotated sentiment. Our methodology (elaborated on in §3) utilises phrases from the _Stanford Sentiment Treebank_ (SST) (Socher et al., 2013) and contrasts the sentiment of a phrase with control stimuli, in which one of the two sub-phrases has been replaced. Phrases whose annotated sentiment deviates from what is expected based on the controls are considered less compositional, as is illustrated in Figure 1. We analyse the resulting non-compositionality ranking of phrases (§4) and show how the constructed resource can be used to evaluate sentiment models (§5). Our new resource (NonCompSST) can further improve the understanding of what underlies non-compositionality in sentiment analysis, and can complement existing evaluation protocols for sentiment analysis models.

Figure 1: Illustration of how the observed sentiment deviates from the expected sentiment when viewing polarity as a function of the polarity of the subphrases. These examples were obtained from our newly annotated stimuli.
## 2 Related work
Over the course of years, sentiment analysis systems went from using rule-based models and sentiment lexicons (e.g. Moilanen and Pulman, 2007; Taboada et al., 2011) to using recursive neural networks (Socher et al., 2013; Tai et al., 2015; Zhu et al., 2015), to abandoning the use of structure altogether by finetuning pretrained large language models (e.g. Perez et al., 2021; Camacho-collados et al., 2022; Hartmann et al., 2023), and recently, to abandoning training, via zero-shot generalisation (Wang et al., 2023). Crucial to the development of these systems has been the introduction of benchmarks, such as SST (Socher et al., 2013) and SemEval's Twitter benchmarks (e.g. Rosenthal et al., 2015; Nakov et al., 2016; Rosenthal et al., 2017).
Although the vast majority of related work focused on simply improving on benchmarks, there have been studies more closely related to ours, asking questions such as: How do phrases with opposing sentiment affect each other (Kiritchenko and Mohammad, 2016, 2016)? What is the role of negations (Zhu et al., 2014), modals and adverbs (Kiritchenko and Mohammad, 2016)? Do idioms have non-compositional sentiment (Hwang and Hidey, 2019)? Which linguistic phenomena are still problematic for SOTA sentiment systems (Barnes et al., 2019)? Can we incorporate compositional and non-compositional processing in one system (Zhu et al., 2015)? And can we computationally rank sentences according to their sentiment compositionality (Dankers and Titov, 2022)? Gaining a better understanding of the contexts in which sentiment functions non-compositionally and is challenging to predict is crucial for the evaluation of sentiment models in an age where sentiment benchmarks may appear saturated (Barnes et al., 2019).1
Footnote 1: As a concrete example, consider the widely-used binary SST sentiment analysis task contained in the GLUE benchmark (Wang et al., 2018): at the time of writing, SOTA performance for this task matches humans’ performance.
We position our work in this latter group of articles, of which that of Hwang and Hidey is most closely related.
## 3 Collecting non-compositionality ratings
Compositional processing of sentiment involves applying a function to the polarity of subphrases to obtain the polarity of the phrase. Turning this notion into a quantifiable metric requires us to measure the polarities and determine the composition function. Non-compositional phrases are then simply phrases whose sentiment deviates from what is expected. How do we implement this? We first select data (SS3.1) and then obtain sentiment labels for phrases through data annotation studies (SS3.2). We consider the composition function to be the default mapping from two subphrases with a specific sentiment to their combined sentiment. We obtain the default mapping by replacing subphrases with control stimuli and annotating sentiment for those modified phrases. Using those results, we can compute the non-compositionality ratings (SS3.3). Figure 2 summarises the full procedure.
### Materials
We first select phrases for which to obtain the non-compositionality ratings, along with control stimuli.
Figure 2: The methodology summarised. Steps 1 and 4 consist of data pre- and postprocessing; steps 2 and 3 involve collecting data from participants via Prolific.

**Data selection.** We obtain our data from the SST dataset, containing 11,855 English sentences from movie reviews (Pang and Lee, 2005), and sentiment annotations from Socher et al. (2013). The dataset provides sentiment labels for all full sentences and phrases contained in these sentences (all phrases that represent a node in the constituency parse trees of these sentences). We select candidate phrases to include in our dataset by applying the following constraints to the phrases: they consist of two subphrases that contain 3-8 tokens each, do not contain named entities and had a relatively high agreement in the original dataset. In Appendix A, we elaborate on the implementation of our constraints.
**Selection of control subphrases.** We assume that if a subphrase behaves compositionally, replacing it with a control should not affect the overall sentiment of the phrase. How do we select control stimuli? By taking subphrases with the same sentiment (based on SST's sentiment labels) and phrase type (e.g. NP, PP, SBAR). For each phrase - consisting of subphrases \(A\) and \(B\) - we automatically select 32 candidate control subphrases and manually narrow them down to eight (\(A^{\prime}_{n}\), \(B^{\prime}_{n}\), where \(n\in\{1,2,3,4\}\)). During the manual annotation, we removed examples for which fewer than eight suitable control stimuli remained. Our final collection contains 500 phrases to be used in the human annotation study.
### Data annotation studies
We collect sentiment labels in two rounds using a 7-point scale. In Study 1, we obtain the sentiment for all subphrases involved to ensure that subphrases and their controls have the same sentiment. We then discard phrases for which \(A\) and/or \(B\) do not have more than three controls each, where we restrict the controls to those whose sentiment is at most 1 point removed from the sentiment of \(A\) (for \(A^{\prime}_{n}\)) or \(B\) (for \(B^{\prime}_{n}\)).
In Study 2, we collect sentiment labels for all subphrase combinations, namely the remaining 259 phrases and the 1554 phrases in which a control subphrase is inserted. For a phrase "\(A\)\(B\)", there are six controls: three that substitute \(A\), and three that substitute \(B\). Those substitutions could lead to ungrammatical constructions in spite of the data selection procedure, and participants can indicate that with a checkbox. Figure 4 in Appendix B displays example questions as shown to the participants. Participants were recruited via Prolific and annotated sentiment via a Qualtrics survey. In Study 1, 57 participants annotated 93 or 94 subphrases each. In Study 2, 90 participants annotated 60 or 61 subphrase combinations each. That way, every unique phrase and subphrase receives three annotations in total. The inter-annotator agreement rates obtained were 0.60 and 0.64 for Study 1 and 2, respectively, in terms of Krippendorff's \(\alpha\) for ordinal data. Appendix B further discusses these studies along with ethical considerations and annotation statistics.
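Such agreement figures can be reproduced with, for instance, the `krippendorff` Python package (the paper does not state which implementation was used); the data layout below is purely illustrative.

```python
import numpy as np
import krippendorff  # pip install krippendorff

# One row per annotator, one column per annotated item; np.nan marks
# items that a given annotator did not rate (toy data, not our annotations).
ratings = np.array([
    [4, 6, np.nan, 2],
    [5, 6, 1, np.nan],
    [4, 7, 1, 2],
], dtype=float)

alpha = krippendorff.alpha(reliability_data=ratings,
                           level_of_measurement="ordinal")
```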
### Computing non-compositionality ratings
We obtain one sentiment label per phrase by averaging the annotations from Study 2. Afterwards, the non-compositionality ratings for a phrase "\(A\)\(B\)" are computed separately for \(A\) and \(B\). The rating for \(A\) is the difference between the sentiment of "\(A\)\(B\)" and the mean sentiment of phrases "\(A^{\prime}_{n}\)\(B\)" (\(n\in\{1,2,3\}\)), and vice versa for \(B\). Together, the two ratings express the non-compositionality of "\(A\)\(B\)". We compute five variants of the ratings: All, AllAbs, Max, MaxAbs, AllClean. The first two include \(A\) and \(B\) separately, the second two use one rating per phrase (the largest of the two). AllClean includes \(A\) and \(B\) separately but excludes any phrases that are considered ungrammatical (109 out of 1813 phrases involved in Study 2 were flagged for that).
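A small sketch of this computation, following our reading of the description above (toy numbers, not actual annotations):

```python
import numpy as np

def noncomp_ratings(sent_ab, sents_a_replaced, sents_b_replaced):
    """Ratings for a phrase "A B" on the 7-point scale.

    sent_ab:          mean annotated sentiment of the phrase "A B"
    sents_a_replaced: sentiments of the phrases "A'_n B" (A swapped for controls)
    sents_b_replaced: sentiments of the phrases "A B'_n" (B swapped for controls)
    """
    rating_a = sent_ab - np.mean(sents_a_replaced)
    rating_b = sent_ab - np.mean(sents_b_replaced)
    max_rating = max(rating_a, rating_b, key=abs)   # the "Max" variant
    return rating_a, rating_b, max_rating

print(noncomp_ratings(5.7, [3.0, 3.3, 2.7], [5.3, 5.7, 5.0]))  # toy numbers
```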
## 4 Analysis of the ratings
What patterns can we identify in these ratings? We examine sentiment composition types, phrase lengths and syntactic categories of subphrases in Appendix C; only the sentiment composition type displays a clear pattern, illustrated by the ratings' distributions in Figure 3. The most compositional are the cases where the subphrases share their positive/negative sentiment, whereas combining opposites is the least compositional. Most phrases have an absolute rating within 1 point of our 7-point scale; only for 67 out of the 259 phrases, the MaxAbs non-compositionality rating exceeds 1.

Figure 3: MaxAbs non-compositionality ratings a) per composition type ('-', '\(\sim\)' and '+' refer to negative, neutral and positive), and b) for figurative examples.
What characterises the least compositional examples?2 The most prominent pattern is that figurative language is over-represented, which we can quantitatively illustrate by annotating all phrases as figurative and literal; the resulting MaxAbs distributions differ substantially (Figure 3). In Table 1, some examples of **figures of speech** are the "pressure cooker" (a container metaphor implying a stressful situation, Kovecses, 1990), the "nearly terminal case of the cutes" (suggesting one can die of cuteness for emphasis), the "serpent's smirk" (metaphorically used to invoke connotations about evil) and the hyperbole of "everyone [...] is a con artist and a liar".
Footnote 2: We include them in Appendix C, in Table 3.
Other atypical sentiment patterns that we observe require **common-sense reasoning** about terms that act as contextual valence shifters Polanyi and Zaenen (2006), e.g. to understand that "are long past" implies something about _current_ times, or that "eating oatmeal" relates to the blandness of that experience. Lastly, we also observe **discourse relations** between subphrases that modify the sentiment in a non-compositional manner. In "fans of the animated wildlife adventure show will be in warthog heaven", the parallel between 'wildlife' and 'warthog heaven' amplifies the positive sentiment in a way that would not have happened for fans of a fashion show. Similarly, "returns with a chase to end all chases" functions differently when it concerns the "master of the chase sequence" rather than anyone else. These examples illustrate sentiment compositions are often nuanced and that there is a long tail of atypical non-compositional phenomena.
## 5 Evaluating sentiment models
How can we employ the non-compositionality ratings to better understand the quality of sentiment systems? We illustrate this by recreating the ratings using SOTA pretrained neural models and comparing them to the humans' ratings.
**Experimental setup.** To obtain the ratings from models, we adapt SST\({}^{3}\) to use the 7-point scale and exclude the phrases of interest from the training data. Per model type, we fine-tune three model seeds that we evaluate on the SST test set using \(F_{1}\)-score, and on NonCompSST using a) the correlation of the models' and humans' non-compositionality ratings (Pearson's \(r\)), and b) the \(F_{1}\)-score of NonCompSST phrases, using the humans' sentiment scores as labels. To obtain models' non-compositionality ratings, we average sentiment predictions from the three model seeds and apply the same postprocessing as applied to the human-annotated data (see §3.3).
Footnote 3: Socher et al. (2013) published the original annotations by Amazon Mechanical Turk annotators; we process this data into SST-7.
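The evaluation itself reduces to standard routines; the sketch below reflects our reading of the protocol (not the released code) and assumes a macro-averaged \(F_{1}\), since the averaging is not specified.

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import f1_score

def noncomp_eval(seed_preds, human_sentiment, human_ratings, to_ratings):
    """seed_preds: (3, n_phrases) 7-point predictions from the three seeds;
    to_ratings: callable applying the same postprocessing as for humans."""
    avg_pred = np.mean(seed_preds, axis=0)                  # average over seeds
    r, _ = pearsonr(to_ratings(avg_pred), human_ratings)    # rating correlation
    f1 = f1_score(np.rint(human_sentiment).astype(int),
                  np.rint(avg_pred).astype(int), average="macro")
    return r, f1
```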
| Subphrase A | Subphrase B | Rating | Sentiment (Human) | Sentiment (Roberta) |
| --- | --- | --- | --- | --- |
| a nearly terminal case | of the cutes | 4.11 | 5.33 | 2.00 |
| the franchise's best years | are long past | -4.11 | 0.33 | 1.00 |
| all the excitement | of eating oatmeal | -3.00 | 2.00 | 2.33 |
| a pressure cooker | of horrified awe | -2.56 | 1.00 | 3.67 |
| fans of the animated wildlife adventure show | will be in warthog heaven | 1.56 | 5.67 | 5.00 |
| a real human soul | buried beneath a spellbinding serpent's smirk | 1.56 | 4.67 | 5.00 |
| everyone involved with moviemaking | is a con artist and a liar | -1.33 | 0.00 | 1.00 |
| the modern master of the chase sequence | returns with a chase to end all chases | 1.11 | 6.00 | 5.00 |

Table 1: Examples of non-compositional phrases, with their Max rating (§3.3 describes how these ratings are computed) and the sentiment assigned by the annotators and by Roberta-large (details on how the model was trained are contained in §5). Red indicates negative sentiment, green indicates positive sentiment.

We evaluate Roberta-base and -large Liu et al. (2019) along with variants of those models that are further trained on sentiment-laden data: TimeLM (the base model pretrained on tweets by Loureiro et al. (2022)); the model of Camacho-collados et al. (2022), which is TimeLM fine-tuned to predict tweets' sentiment; BertTweet (a base model pretrained on tweets by Nguyen et al. (2020)); the model of Perez et al. (2021) (BertTweet fine-tuned to predict tweets' sentiment); Roberta-large by Hartmann et al. (2021), fine-tuned on sentiment of comments from social media posts; and finally Roberta-base fine-tuned on sentiment from IMDB movie reviews (Maas et al., 2011).4
Footnote 4: See Appendix D for further details on the experimental setup used to fine-tune these models on SST. Visit our repository for the data and code.
**Results.** The results in Table 2 suggest that even though the systems have very similar performance on the SST test set, there are differences in terms of the NonCompSST ratings: Roberta-base has the lowest SST \(F_{1}\), but is the second-best base model in terms of \(r\), only outperformed by IMDB-Roberta (i.e. the variant fine-tuned on movie reviews, the domain of SST). The Roberta-large model outperforms both Hartmann et al.'s model and the base models in terms of the SST \(F_{1}\) and NonCompSST correlations. Together, these observations suggest that pretraining or fine-tuning using data from a different domain can harm models' ability to capture nuanced sentiment differences required to estimate NonCompSST ratings. The SST performance is less sensitive to this, suggesting that our resource can provide a complementary view of sentiment systems.
Finally, we also inspect the NonCompSST \(F_{1}\)-scores for all 259 phrases and the 67 phrases with the highest non-compositionality ratings according to the human annotators, for which results are included in the final two columns of Table 2. On that subset, IMDB-Roberta and the model of Camacho-collados et al. (2022) achieve the highest \(F_{1}\)-score. These scores emphasise that for the top 67, sentiment is substantially harder to predict: non-compositional examples indeed present a larger challenge to sentiment models than compositional examples do.
## 6 Conclusion
Sentiment and compositionality go hand-in-hand: success in sentiment analysis is often attributed to models' capability to 'compose' sentiment. Indeed, the sentiment of a phrase is reasonably predictable from its subphrases' sentiment, but there are exceptions due to the ambiguity, contextuality and creativity of language. We made this explicit through an experimental design that determines _non_-compositionality ratings using humans' sentiment annotations and obtained ratings for 259 phrases (§3). Even though most phrases are fairly compositional, we found intriguing exceptions (§4), and have shown how the resource can be used for model evaluation (§5). For future sentiment analysis approaches, we recommend a multi-faceted evaluation setup: to grasp the nuances of sentiment, one needs more than compositionality.
### Limitations
Our work makes several limiting assumptions about compositionality in the context of sentiment:
1. We maintain a simplistic interpretation of the composition 'function' but are aware that compositionality is considered **vacuous** by some (Zadrozny, 1994) since by using a generic notion of a 'function', any sentiment computation can be considered compositional. We, therefore, only consider some phrases non-compositional because of the strict interpretation of that 'function'. As a result, one might argue that whether a phrase such as "all the excitement of eating oatmeal" is non-compositional in terms of its sentiment is debatable. We agree with that; if you represent the sentiment with a very expressive representation, every sentiment computation is compositional. Our results only apply _given_ a very narrow interpretation of compositionality.

| Model name | SST \(F_{1}\) | Max \(r\) | MaxAbs \(r\) | All \(r\) | AllAbs \(r\) | AllClean \(r\) | All \(F_{1}\) | Top 67 \(F_{1}\) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| _Pretrained_ | | | | | | | | |
| Roberta-base, Liu et al. | .43 | .36 | .31 | .41 | .22 | .43 | .40 | .32 |
| Roberta-large, Liu et al. | .47 | .42 | .38 | .44 | .30 | .46 | .47 | .37 |
| _Pretrained using sentiment-laden data_ | | | | | | | | |
| TimeLM, Loureiro et al.\({}^{B}\) | .43 | .30 | .32 | .34 | .25 | .36 | .43 | .38 |
| BertTweet, Nguyen et al.\({}^{B}\) | .46 | .22 | .20 | .25 | .15 | .27 | .43 | .30 |
| _Finetuned using sentiment-laden data_ | | | | | | | | |
| Camacho-collados et al.\({}^{B}\) | .45 | .33 | .36 | .36 | .22 | .38 | .49 | .43 |
| Pérez et al.\({}^{B}\) | .44 | .13 | .21 | .20 | .16 | .21 | .42 | .26 |
| Hartmann et al.\({}^{L}\) | .46 | .38 | .34 | .41 | .28 | .44 | .45 | .33 |
| IMDB Roberta\({}^{B}\) | .44 | .37 | .31 | .41 | .24 | .44 | .47 | .43 |

Table 2: Model evaluation using SST (\(F_{1}\)) and NonCompSST, according to correlation (Pearson's \(r\)) between the humans' and the models' non-compositionality ratings, and the \(F_{1}\) of the 259 phrases and the 67 most non-compositional ones, measured using the humans' annotations as labels. We indicate whether models are base (\(B\)) or large (\(L\)) and underline the highest performance per column.
2. In this work, we restrict the notion of **meaning compositions** to the notion of **sentiment compositions**. Hence, there might be phrases that behave compositionally in terms of sentiment but are considered non-compositional otherwise. For instance, "rotten apple" carries negative sentiment, both literally and figuratively, and might thus be considered compositional in terms of sentiment.
In addition to that, the resource we developed has technical limitations:
1. Human annotators can provide **unreliable sentiment annotations**: they do not necessarily agree with one another, may lose focus while performing the task or may misunderstand the linguistic utterances they annotate. As a result, the resource inevitably contains some sentiment ratings that are inaccurate.
2. The resource we develop is small in **size**, which limits the robustness of the results when using the resource for experimentation. We would like to point out, however, that \(\gg\)259 annotations were involved in obtaining the ratings for these 259 phrases. The results illustrate that, in spite of these limitations, the resource can still lead to valuable conclusions.
Finally, the evaluation of the models in SS5 is somewhat limited, considering that the various models have been fine-tuned or pretrained by the mentioned authors using different experimental setups. Even though we then apply the same setup to fine-tune on SST, these differences need to be kept in mind when interpreting the results.
## Acknowledgements
We thank Kenny Smith and Ivan Titov for their suggestions throughout this project and Matthias Lindemann for his comments on a draft of this article. VD is supported by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh, School of Informatics and School of Philosophy, Psychology & Language Sciences.
|
2309.14490 | "Can You Move It?": The Design and Evaluation of Moving VR Shots in
Sport Broadcast | Virtual Reality (VR) broadcasting has seen widespread adoption in major
sports events, attributed to its ability to generate a sense of presence,
curiosity, and excitement among viewers. However, we have noticed that still
shots reveal a limitation in the movement of VR cameras and hinder the VR
viewing experience in current VR sports broadcasts. This paper aims to bridge
this gap by engaging in a quantitative user analysis to explore the design and
impact of dynamic VR shots on viewing experiences. We conducted two user
studies in a digital hockey game twin environment and asked participants to
evaluate their viewing experience through two questionnaires. Our findings
suggested that the viewing experiences demonstrated no notable disparity
between still and moving shots for single clips. However, when considering
entire events, moving shots improved the viewer's immersive experience, with no
notable increase in sickness compared to still shots. We further discuss the
benefits of integrating moving shots into VR sports broadcasts and present a
set of design considerations and potential improvements for future VR sports
broadcasting. | Xiuqi Zhu, Chenyi Wang, Zichun Guo, Yifan Zhao, Yang Jiao | 2023-09-25T19:33:27Z | http://arxiv.org/abs/2309.14490v2 | # "Can You Move It?": The Design and Evaluation of Moving VR Shots in Sport Broadcast
###### Abstract
Virtual Reality (VR) broadcasting has seen widespread adoption in major sports events, attributed to its ability to generate a sense of presence, curiosity, and excitement among viewers. However, we have noticed that still shots reveal a limitation in the movement of VR cameras and hinder the VR viewing experience in current VR sports broadcasts. This paper aims to bridge this gap by engaging in a quantitative user analysis to explore the design and impact of dynamic VR shots on viewing experiences. We conducted two user studies in a digital hockey game twin environment and asked participants to evaluate their viewing experience through two questionnaires. Our findings suggested that the viewing experiences demonstrated no notable disparity between still and moving shots for single clips. However, when considering entire events, moving shots improved the viewer's immersive experience, with no notable increase in sickness compared to still shots. We further discuss the benefits of integrating moving shots into VR sports broadcasts and present a set of design considerations and potential improvements for future VR sports broadcasting.
Human-centered computingHuman-Computer InteractionEmpirical studies in HCI. +
Footnote †: corresponding author:[email protected]
+
Footnote †: corresponding author:[email protected]
## 1 Introduction
Virtual Reality (VR) broadcasting has emerged as a novel form of media distribution in recent years, with numerous practical applications in large concerts, parties, and major sporting events. The utilization of panoramic 360-degree broadcasting during the 2021 Tokyo Olympics and the 2022 Beijing Winter Olympics has facilitated an immersive viewing experience, thereby enhancing the satisfaction of remote audiences who could not attend the events due to various constraints. The increasing popularity and maturity of VR equipment and content have contributed to the greater understanding and acceptance of VR viewing by audiences [31, 10], consequently expanding the scope of VR's market opportunities [28].
While VR has the potential to revolutionize the way we experience sports events, challenges remain to be solved in the production of VR sports content. The most significant issue is that traditional moving shots, such as pan shots and close-ups commonly used in broadcast, film, and television, cannot be presented in VR broadcasting because the 360-degree camera cannot move, thereby hindering the immersive and dynamic viewing experience inherent to traditional audio-visual language. Although current VR photography technology cannot achieve moving or mixed camera shots in sports games, locomotion in VR games [56], films [33], and simulators [22, 37] has been widely explored. Previous research suggested that the incorporation of motion camera experience enhances the content's narrative and fosters a more three-dimensional viewing experience, including the sense of co-presence [33] and immersion [3], but also reveals that users' movement in VR environments may result in VR sickness [12, 14]. In this work, we suggest the hypothesis that incorporating motion shots into VR sports broadcasting, coupled with appropriate content design to manage users' VR sickness, could comprehensively enhance the viewing experience by compensating for the lack of aesthetic elements in the current VR experience, thus delivering a rich and immersive viewing experience for future broadcasts.
The primary aim of this paper is to investigate the viewer's experience of watching moving shots in VR sports event broadcasting, with a specific focus on ice hockey. To begin, we conducted a field study involving observations of a VR hockey broadcast test game and interviews with broadcast production experts to comprehend the current challenges associated with VR broadcasting. Our formative findings revealed a question among production staff and researchers regarding the strategy, advantages, and approach behind designing moving VR shots instead of still shots.
In response to these challenges, we first proposed an event segmentation theory to divide the entire ice hockey game into distinct clips. Furthermore, we developed a digital twin space, the Virtual Ice Hockey (VIH) system, which incorporated four motion shots based on the fundamental audio-visual language of television and film and a still shot, serving to display each Cinematic VR (CVR) clip [36]. Subsequently, we conducted two rounds of user experiments and collected data by capturing participants' perspectives, questionnaires, and user interviews, wherein participants viewed single CVR clips and entire event CVR clips, respectively.

Figure 1: These three figures show three frames of the dribbling clip we designed in the VIH system, captured by a moving shot. Figures 1a, 1b and 1c show the content seen by participants as they watch the pan shot.
Our findings indicate that while there is no significant difference between moving shots and still shots in single clips, the incorporation of moving shots significantly enhances the overall immersive experience of viewers in VR sports broadcasts compared to still shots, with no notable increase in VR sickness. We further discuss the benefits of integrating moving shots in VR sports broadcasts and offer valuable insights for evaluating and designing moving shots in future research and production of VR sports broadcasts.
This paper contributes to the HCI community in two ways. Firstly, we report on the issues with current VR hockey broadcasts and examine how they can be studied using digital twins. Secondly, we designed various moving shots for VR sports broadcasts, from single clips to entire events, and evaluated the audience's viewing experience to provide suggestions for the movement of VR shots in sports broadcasts.
## 2 Related Work
### VR Sports Broadcast
Sports event broadcasting plays a crucial role in expanding the influence of sports events, disseminating event information, and meeting the demands of sports consumption [57, 6]. In recent years, with the rapid development and commercialization of VR technology, many broadcasters have embraced this medium for sports event broadcasts [20, 28, 31, 58]. Research indicates that audiences are increasingly receptive to this novel form of viewing experience [31, 10]. As sports broadcasting continues to gain popularity, audience expectations for immersive viewing experiences and enhanced services are on the rise. The emergence of VR broadcasts reflects spectators' preference for experiencing the game as if they were physically present, surpassing the traditional television viewing experience [35, 29]. For example, Daehwan et al. proposed that VR's primary function is to provide an immersive experience to sport media consumers by enhancing tele-presence [24].
Nevertheless, current VR sports broadcasts face challenges related to user adoption, business models, and content production [28]. These issues are typical for an industry that is still in its early stages, yet the advantages of VR sports broadcasting are becoming increasingly evident to viewers and related industries. With VR technology, spectators can enjoy a realistic experience of mega sports games while saving time and reducing costs [58, 47, 24]. Furthermore, the advent of VR has opened up new revenue streams for professionals in the field, including coaches, athletes, and broadcasters, who have already started to benefit from VR's development.
### The Experience of Watching Moving VR Shots
Locomotion, defined as self-propelled movement in virtual worlds [39], is an important component of VR since it can strongly influence user experience [56, 5, 7]. In previous studies, two common locomotion techniques, teleportation and continuous locomotion, were used as interaction methods in VR games [56], movies [33] and simulators [37, 22].
Virtual Reality Sickness (VRS) [12] is also known as Visually-Induced Motion Sickness (VIMS) [17], Cyber Sickness (CS) [38], or Virtual Simulator Sickness [18]. In this paper, we collectively refer to these negative effects users experience during or after immersion in virtual reality as VR sickness [30, 12, 26]. For VR locomotion, sensory conflict induced by the disparity in motion between two sensory systems (visual and vestibular) is inevitable [51, 30]. Previous literature illustrated that, compared to teleportation, continuous locomotion allows the viewer to move continuously, which usually causes significant VR sickness [11, 12, 14, 59]. By contrast, teleportation locomotion teleports the user from the current location to the destination [56], which also brings low VR sickness [59]. However, the cause of VR sickness is related to hardware, content, and human factors (i.e., age, gender, and VR-related experience) [12]. Therefore, conclusions about which locomotion technique is better are generally subject to limitations.
Nevertheless, some studies suggested that teleportation may significantly reduce immersion [3] and increase spatial disorientation for VR viewers [2, 4]. Immersion is a very important and distinctive feeling that VR brings to the experience. In VR, a person immersed in a virtual environment (VE) may identify with his or her virtual body (VB) and experience a sense of presence if their senses confirm that the VB is functioning effectively within the VE [55]. When the user feels a stronger sense of presence, a stronger sense of immersion is created, which is defined as a state in which the user interacts with the VE in some way and temporarily feels that this state is real [54]. In addition, regarding spatial disorientation, VR producers focus on how to attract the viewer's attention through various methods. Cagri et al. first designed and implemented a test environment for VR attention models inspired by various Visual Attention Models (VAMs) applied in film and television to collect the viewport trajectory when participants watched omnidirectional video [43]. Their results indicated that viewers do not pay different visual attention to the same content repeatedly, and this amount depends on the complexity of the camera movement of the omnidirectional video. Overall, viewing content in VR with locomotion is a holistic experience. Although continuous locomotion in VR may mostly give viewers a stronger sense of VR sickness, continuous movement of refined quality will also bring users a more intense sense of immersion and accurate spatial awareness.
In this paper, we mainly explore the experience of watching different moving shots in a VR environment. Thus, based on our experience and literature reviews of moving shots in sports broadcasts, we choose continuous locomotion, a process-oriented technique, as the design strategy for VR moving shots [7]. This kind of moving shot makes it easier for spectators to focus on the content of the process in VR sports broadcasts. We thereby further explore the immersion, VR sickness and content expression of different moving shots.
### Audio-Visual languages in VR Sport Broadcast
Audio-visual language is the means of expression of all video works. The audio-visual language of VR video was developed through the application of elements such as images, shots, and film editing. The image is the basic vocabulary of the VR artistic language, film editing connects VR video like grammar, and the shots offer the context [45]. The shots of virtual videos can be categorized as still shots, moving shots, and autonomous shots. In current VR sports broadcasts, the most commonly used footage consists of still camera shots, and there are relatively few applications of moving shots. For example, in the 2018 Pyeongchang Winter Olympics, multiple panoramic cameras were installed on the cross-country skiing course. Although the practice of VR sports broadcasting exists, the theoretical foundation is not yet sufficient. However, VR sports broadcasting still follows the audio-visual language of VR video. Therefore, by analyzing the characteristics of VR video, we can provide a reference for VR sports broadcasting.
VR videos are emerging as a medium for both self-exploration and the establishment of social identity via bullet comments [60, 34]. The experience of watching VR video is neither entirely passive nor fully active [60]. Thus, much current research focuses on how to design a good VR video or cinematic VR (CVR). This medium lies in between traditional cinema and VR [46]; thus the audio-visual language of VR video could be based on traditional cinematic language, such as cognitive event segmentation, while also offering new iterations in expansive visual technologies [32].
height [44, 53]. (2) The distance from the object to the camera: People would have a more intense feeling when the object in the video stays around them than in the distances [48]. However, it would cause a negative effect on the experience when people are very close to the camera [23], so the ideal camera placement distance is between 2m and 3m [9]. (3) The editing: Editing techniques include the techniques based on film/television and VR features [19]. The most common technique is 'Fade' because it meets the audience's psychological expectations [15]. Regarding the timing of editing, the Probabilistic Experential Editing (PPE) proposed by Jessica Brillhart is currently the most accepted articulation point approach [8]. Jiang et al. recently introduced a new deep-learning framework for camera keyframing, enabling customized and automated video generation in virtual environments. They also provided a camera trajectory editing interface to support editors in managing timelines, characters, keyframes, and previews [21].
These characteristics of VR video affect the viewer's experience to a certain extent. Based on these reviews and our knowledge, we believe they can provide experience and reference for VR sports broadcasts. Thus, in our experiments we followed these suggestions and guidelines to design the moving shots in VR sports broadcasts based on traditional audio-visual language.
## 3 System Design
### Field Study
Sports event broadcasting is a crucial way to expand the influence of sports events, disseminate event information and meet the associated demand for sports consumption. Prior to our investigation, we conducted a comprehensive field study to explore several key objectives: (1) identify current challenges in VR ice hockey broadcasting, (2) assess how and to what extent VR ice hockey broadcasts deliver a positive experience for the audience, and (3) explore the potential for incorporating moving shots in VR ice hockey broadcasts.
To address these questions, we conducted observations at an ice hockey stadium, where both traditional cameras and VR cameras were strategically positioned around the stands. Additionally, we engaged in discussions with experts in communication signals, VR production, and ice hockey coaching. Through these interactions, we found that current VR broadcast shots in ice hockey are still shots, as VR cameras lack mobility. This limitation hampers the dynamic nature of the broadcasts. The non-portability and high costs associated with the production process of VR sports broadcasts also pose significant challenges. Moreover, the fast-paced nature of ice hockey games presents difficulties in designing and evaluating comprehensive audio-visual language broadcasts, even if capturing moving VR shots becomes feasible.
### Event Segmentation Theory
Based on our field study, we believe that simplifying the content of VR broadcasts for ice hockey can enhance our study and analysis. To achieve this, we introduce the event segmentation theory. Inspired by Vladimir Propp's work, we suggest that narrative events can be meticulously structured around the concepts of 'clip' and 'round' of athletes' actions [49]. The clip represents the fundamental narrative unit, while the round refers to an entire unit like a hockey game. By identifying key clips such as collision, defense, passing, hitting, dribbling, and tactical formation within a round, we can focus on important moments amidst the numerous actions in a hockey game.
### Implementation
To address the identified issues and difficulties from our field study, we believe there is much theoretical and exploratory work to be done before deploying VR moving shots in the field. Thus, we developed a Virtual Ice-Hockey (VIH) system within a digital twin environment using _Unreal Engine 4_ (see Figure 2). The VIH system encompasses three key capabilities: (1) enabling virtual athletes to perform predetermined movements, (2) allowing multiple 360-degree cameras to traverse along custom tracks at variable speeds, and (3) facilitating the selection of specific CVR clips for viewing.
To implement these capabilities, we utilized Blueprints to configure multiple virtual cameras and govern the playback of CVR clips. Additionally, we collaborated with ice hockey players to capture video data of their actions, employing video motion capture techniques to extract skeletal movements. These movements were then applied to athlete models in _Blender_ to generate a comprehensive set of ice hockey actions. After importing the action set into _Unreal Engine 4_, we entrusted hockey experts with designing the virtual ice hockey event.
## 4 Study 1
Our primary research question aims to comprehend the impact of moving shots in VR ice hockey broadcasting. Therefore, in the initial study, our focus was to investigate whether moving shots could enhance the viewing experience for the audience, while also determining the optimal approach to designing these shots within a single clip. Subsequently, we delve into the performance of moving shots throughout the entire event in the second experiment.
### Participants
We enlisted 12 participants (10 female, 2 male) recruited through social media and word of mouth, with ages ranging from 21 to 29 (M=24.16, SD=2.64). Participants were requested to disclose their frequency of VR technology usage and any experiences of 3D vertigo or motion sickness before the experiments. Among the participants, ten had no prior experience with VR, while only two had limited exposure to VR on a few occasions. None of the participants reported experiencing 3D vertigo in their self-reports. All participants possessed normal or corrected-to-normal vision. Monetary compensation was provided to participants, equivalent to the time dedicated to the experiment.
### Study Design
In the present study, we constructed an event following the principles of event segmentation theory. This event, dubbed a 'defense-offense transition', was simulated as a round in the VIH system. The simulation incorporated three distinct clips: dribbling, stealing, and shooting. For each of these clips, we designed five diverse CVR shots, consisting of four moving shots and one still shot, to examine viewers' engagement and experience. The design strategies behind these shots are elucidated in Figure 3.
**1) Capturing CVR Aesthetics:** The aesthetic design of our CVR shots was inspired by traditional ice hockey television broadcasts to accurately represent the event while ensuring visual appeal. **2) Mitigating VR Sickness:** VR sickness, often a result of VR movement and other hardware-related factors [12], is an unavoidable concern. To address content-related causes of this issue, we established the camera zone at a personal distance and incorporated full body shots [23]. Furthermore, we introduced slow start/stop indexing and fade-in/fade-out effects for each moving shot. **3) Adhering to Audio-Visual Language Principles in Film and Television:** In CVR, viewpoint/point-of-view surpasses the limitations of traditional shots, offering viewers the freedom to explore the scene [13]. Consequently, our shot design incorporates four foundational audio-visual movements (track-in, track-out, pan, and dolly) following the guiding principles of film and television.

Figure 2: The interface and overview of the VIH system.
To augment the realism of our CVR shots, we integrated three audio clips, each featuring different content, such as crowd cheers, player movements, and puck strikes. We crafted three distinct moving shots for each clip, their trajectories aligned with four different methods of virtual camera motion. Subsequently, film and hockey experts were invited to review and select the shots that best encapsulated the ice hockey viewing experience. Overall, we curated 15 shots across the three clips, each lasting approximately 7 seconds.
1) The _'Track-in shot'_ exhibits a linear trajectory with no camera rotation, moving forward as the event unfolds. 2) The _'Track-out shot'_ shares a similar motion track with the 'Track-in shot', except it moves backward, constantly facing the athlete. 3) The _'Pan shot'_ follows a curved path, moving laterally relative to the athlete. 4) The _'Dolly shot'_ advances along a parallel path beside the athlete. 5) For the solitary _'Still shot'_, we positioned it in the front row of the stadium stand, mimicking the perspective offered by prevalent VR live broadcast shots.
### Materials
The experiment was conducted within a 4x4 square meter area. Participants were equipped with an HTC Vive Pro2 Head-Mounted Display (HMD) and seated in a mobile chair. Two base stations were positioned diagonally to ensure stable signal transmission. A laptop, placed on a nearby circular table, was used for participants to complete the questionnaire (refer to Figure 5). The objective of this study was to undertake a comprehensive evaluation to analyze the viewing experience facilitated by different VR shots. Accordingly, we collected data on the following dimensions:
_Virtual Reality Sickness:_ Participants were asked to complete the Virtual Reality Sickness Questionnaire (VRSQ) [25] after viewing each shot. The VRSQ consists of nine symptoms, including general discomfort, fatigue, eyestrain, difficulty focusing, headache, fullness of the head, blurred vision, dizziness (with eyes closed), and vertigo. To quantify the participants' discomfort, we utilized a Likert scale ranging from 1 ('I do not experience this symptom at all') to 5 ('I experience this symptom intensely'). As per [25]'s instructions, VRSQ scores ranged from 9 ('I do not experience VR sickness at all') to 45 ('I experience intense VR sickness').
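To make the scoring concrete, the following is a minimal sketch of how one VRSQ total is computed from the nine symptom ratings as described above; the helper function and the example ratings are illustrative and not part of the study materials.

```python
# Sketch: one VRSQ total is the sum of the nine symptom ratings, each on a
# 1-5 Likert scale, giving a total between 9 and 45.
SYMPTOMS = [
    "general discomfort", "fatigue", "eyestrain", "difficulty focusing",
    "headache", "fullness of the head", "blurred vision",
    "dizziness (eyes closed)", "vertigo",
]

def vrsq_total(ratings):
    """ratings maps each symptom name to an integer rating in 1..5."""
    assert set(ratings) == set(SYMPTOMS)
    assert all(1 <= r <= 5 for r in ratings.values())
    return sum(ratings.values())

# Illustrative ratings for one participant after one shot (not study data).
example = {s: 1 for s in SYMPTOMS}
example["eyestrain"] = 2
print(vrsq_total(example))  # 10
```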
_Immersive Experience:_ After each shot, participants were requested to fill out the Immersive Experience Questionnaire for Film and TV (FilmIEQ) [52] in order to evaluate the aesthetics and viewing experience of this shot. To gauge the immersive experience quantitatively, we employed a Likert scale that ranged from 1 ('I do not experience this feeling at all') to 7 ('I experience this feeling intensely'). The FilmIEQ includes 24 questions across four factors: captivation, real-world dissociation, comprehension, and transportation. As per [52]'s instructions, the scope of FilmIEQ scores ranged from 24 ('I am not at all immersed in this CVR clip') to 168 ('I am deeply immersed in this CVR clip').

Figure 3: These three figures demonstrate the four shots’ movement tracks based on 2D audio-visual language and the athletes’ movement tracks in three clips, including (a) dribbling, (b) stealing, and (c) shooting. The yellow and red circles represent the athletes of the two teams, with six on each side. The four colors of the camera tracks represent different audio-visual languages, green for ‘Track-in’, blue for ‘Track-out’, pink for ‘Pan’, and brown for ‘Dolly’.

Figure 4: These four pictures depict four types of traditional audio-visual language based on the movement of the VR camera. From top to bottom and left to right, they are ‘Track-in’, ‘Track-out’, ‘Dolly’, and ‘Pan’.

Figure 5: The experimental setup in a 4x4 square meter area.
_Participants' Perspective:_ For each shot, we captured the participants' perspectives at regular intervals. Every 0.5 seconds, a frame from the participant's viewpoint was recorded. These frames were subsequently analyzed to determine whether the participants' viewing aligned with our predefined instructions.
### Procedure
The study commenced with a detailed introduction of the experimental procedure to the participants, followed by obtaining their informed consent and gathering demographic data. Subsequently, participants were outfitted with the Head-Mounted Display (HMD) and viewed the five shots in randomized order without interruption. After this initial viewing, participants reviewed the sequence of five shots one by one; after each shot, they removed the HMD and completed a questionnaire on a provided laptop. This approach was designed to minimize cognitive biases associated with the novelty and potential discomfort of the initial viewing experience [40].
The same viewing process, including the randomization of shots during the second viewing, was carried out for the following two clips. A 5-minute break separated each clip viewing to decrease fatigue [50]. Participants were further requested to share brief impressions of the CVR clips after each viewing session.
After the viewing sessions, a brief 15-minute interview was conducted with each participant to gain deeper insights into the shots' design and their viewing experience and perceptions. Consequently, the complete experimental procedure, including the interview, lasted approximately one hundred and five minutes per participant, with participants exposed to the VR environment for at least 10 minutes.
### Results
In this study, 12 participants completed the VRSQ and the FilmIEQ. We captured and analyzed their perspectives throughout the experiment, and we recorded some key user voices and insights, which are reported in the subsequent discussion sections. In addition, we performed descriptive data analyses on the collected variables and illustrated the results in graphical form. We detail the findings for each variable in the subsequent paragraphs.
_Virtual Reality Sickness:_ Our analysis revealed that 'Track-in' and 'Track-out' shots consistently generated higher VRSQ scores than the remaining shots 'Pan', 'Dolly', and 'Still' across various clips. The sole exception was clip3, in which shot3-4 scored higher than shot3-1 and shot3-2, as depicted in Figure 6-left. Despite these variations, the one-way ANOVA test results indicated no statistically significant differences among the shots within each clip--clip1 (F(4, 55)=0.95, p\(>\)0.05), clip2 (F(4, 55)=0.63, p\(>\)0.05), and clip3 (F(4, 55)=0.13, p\(>\)0.05). The average VRSQ scores for the 15 shots ranged from 10.67 to 14.83 (M=12.00, SD=4.16), suggesting that none induced significant VR sickness among the participants. This outcome supports that our shot movement designs are suitable and easily tolerated.
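For readers who wish to reproduce this style of analysis, below is a minimal sketch of the one-way ANOVA across the five shots of a clip using `scipy.stats.f_oneway`; the score lists are placeholders rather than the collected data.

```python
# Sketch: one-way ANOVA comparing VRSQ totals across the five shots of a clip.
# With 5 groups of 12 participants, the test has 4 and 55 degrees of freedom.
from scipy.stats import f_oneway

# One VRSQ total (range 9-45) per participant per shot -- placeholder values.
track_in  = [14, 11, 16, 10, 13, 12, 15, 11, 12, 14, 10, 13]
track_out = [13, 12, 15, 11, 14, 10, 13, 12, 11, 15, 12, 13]
pan       = [11, 10, 12, 13, 10, 11, 12, 10, 13, 11, 10, 12]
dolly     = [12, 11, 10, 13, 12, 10, 11, 12, 10, 13, 11, 12]
still     = [10, 11, 10, 12, 11, 10, 13, 10, 11, 12, 10, 11]

f_stat, p_value = f_oneway(track_in, track_out, pan, dolly, still)
print(f"F(4, 55) = {f_stat:.2f}, p = {p_value:.3f}")
```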
_Immersive Experience:_ Our analysis revealed that the 'Pan' and 'Dolly' shots consistently achieved higher FilmIEQ scores than the 'Track-in' and 'Track-out' shots across all clips, as depicted in Figure 6-right. Furthermore, all four types of moving shots exhibited superior FilmIEQ scores compared to the single still shot in each clip. The average FilmIEQ scores for the 15 shots ranged from 91.17 to 115.42, with a mean (M) of 107.01 and a standard deviation (SD) of 22.19. However, the one-way ANOVA test indicated no statistically significant variations among the different shots in clip1 (F(4,55)=2.31, p\(>\)0.05), clip2 (F(4,55)=0.66, p\(>\)0.05), and clip3 (F(4,55)=0.13, p\(>\)0.05).
_Participants' Perspective:_ We captured approximately 15 frames of participants' perspectives for each shot. Our analysis of these frames suggested that they could be amalgamated into a complete depiction of the clips. Each frame contained relevant information (i.e., players and pucks), with only a few frames displaying less pertinent content (i.e., ceilings and bleachers). These findings imply that our shot movement designs are easy to follow and understand. Thus, we did not conduct further quantitative analysis of these frames.
## 5 Study 2
Going a step further to understand our primary research question, we subsequently conducted a second study to explore the impact of moving shots throughout the entire event.
### Participants
We recruited another group of 12 participants (5 female, 7 male) via social media channels and personal referrals, aged between 20 and 29 (M=23, SD=3.1). Each participant was asked to provide information on their frequency of VR technology usage, as well as any previous experiences of 3D vertigo or motion sickness, prior to the commencement of the experiments. Among the participants, two had no previous VR experience, six had sporadic exposure to VR, and four reported extensive experience with VR usage or development. Notably, none of the participants disclosed any instances of 3D vertigo in their self-reports. All participants possessed normal or corrected-to-normal vision. Commensurate with the time dedicated to the experiment, compensation was provided to all participants.
### Study Design
In this study, we further designed five different shots (track-in, track-out, pan, dolly, and still) to explore and analyze the viewing experience across the entire CVR event. Since we aim to extend the CVR camera tracks from Study 1, we combined the three clips from Study 1 into an entire event clip with hard cuts in the editing. The complete camera tracks are illustrated in Figure 7. Specifically, the 'Still' shots are strategically maintained in their original position from Study 1, in the front row of the stadium stand. This position was chosen to mimic the perspective of prevalent VR live broadcast shots, without editing throughout the clip. To minimize the VR sickness induced by our shots, we used hard cuts as the transition method between clips and allowed participants to freely control their point of view, specifying only their starting angle [15, 40]. Subsequently, we solicited feedback from experts in hockey and a cinema director to review the content and shot movement, ensuring that our combined clips offered clear storytelling and were easy to follow. They emphatically affirmed our shot design and editing methods. Additionally, they recommended aligning our cuts with key moments in ice hockey, such as hits and passes, and suggested adding sound effects when the hockey puck was hit to amplify the impact of the cuts. We modified the shots according to their suggestions and used them in the following experiments. Overall, we curated 5 combined shots, each lasting approximately 30 seconds.

Figure 6: These two box-and-whisker plots illustrate the distribution of total VRSQ _(left)_ and FilmIEQ _(right)_ scores across five shots in three clips, including clip1: dribbling, clip2: stealing, and clip3: shooting.
### Materials and Procedure
In the present study, we employed the same experimental apparatus and questionnaires as in Study 1. As observed in Study 1, the majority of participants accurately followed our shot directions, and their viewing angles predominantly conformed to our design. Consequently, additional perspective captures were deemed unnecessary in this study.
During this study, participants were first requested to provide demographic information such as their age, field of study, and any pre-existing conditions like VR-induced vertigo or motion sickness. After an introduction and explanation of the process, participants initially viewed the complete set of five shots, followed by a second round in which they experienced a randomized sequence of the same shots. This sequencing rationale is aligned with the procedure outlined in Study 1. During the second round of viewing, participants were instructed to remove the HMD and complete the pertinent questionnaires on a laptop after appraising each shot. The total duration of the experiment was around 40 minutes, with participants exposed to the VR environment for at least 5 minutes.
### Results
_Virtual Reality Sickness:_ In the present study, 12 participants completed both the initial VRSQ and the subsequent FilmIEQ questionnaires. The VRSQ results revealed marginal variance in the mean scores of the five shots. Specifically, the 'Track-in' shot (M=14.67, SD=5.44) and the 'Dolly' shot (M=14.67, SD=5.22) shared an identical mean score, while the 'Track-out' shot (M=16.25, SD=6.22) and the 'Pan' shot (M=16.67, SD=7.67) had slightly higher scores (Figure 8-left). The 'Still' shots exhibited the lowest VRSQ score (M=13.25, SD=3.84). However, according to the one-way ANOVA test, these variations in scores between shots were not statistically significant (F(4,55)=0.67, p\(>\)0.05). The overall range of VRSQ scores across the five shots, from 13.25 to 16.6 (M=15.06, SD=1.64), suggested that the movement designs of the shots were adequately tolerable and easy to follow.
_Immersive Experience:_ Conversely, the FilmIEQ results demonstrated statistical significance. The one-way ANOVA test (F(4,55)=7.18, p\(<\)0.05) with multiple comparisons revealed significant disparities between shots (Figure 8-right). The 'Track-in' shot (M=126.33, SD=12.22) achieved the highest mean score, with the 'Dolly' (M=122.5, SD=21.29) and 'Track-out' (M=122.0, SD=19.44) shots ranked second and third, respectively. The 'Still' shot (M=94.17, SD=14.19) obtained the lowest mean score, while the 'Pan' shots displayed a relatively lower performance (M=103.5, SD=21.72). Multiple comparison results showed that the 'Track-in', 'Track-out' and 'Dolly' shots scored significantly higher than the 'Pan' and 'Still' shots. However, there were no significant differences within the group consisting of 'Track-in', 'Track-out', and 'Dolly' shots, and similarly, between the 'Pan' and 'Still' shots.
## 6 Discussion
### Analysis of Integrating Moving Shots in Virtual Reality Sports Broadcasting
The audio-visual language is a fundamental component of moving shots in the traditional video and allows directors to dictate their creative vision to the production team. This critical aspect translates seamlessly into broadcasting, where the director communicates with the camera crew to capture, transition, and edit various shots. Consequently, we proposed integrating audio-visual language into VR moving shots and explored its benefits relative to the existing still shots, explicitly concerning the audience's viewing experience.
According to our study, several notable findings emerged. Primarily, the duration of exposure to the moving VR camera significantly influences the user's immersion. In Study 1, a single-clip scenario, no discernible difference between shots was found in the FilmIEQ results. However, Study 2 revealed that immersion levels for three of the moving shots were significantly higher than those of the 'Still' shot and the remaining moving shot. We posit that this difference primarily arises from the duration of user exposure to the VR environment. Participants in the interviews often expressed that a single clip was too brief to elicit a clear response, even after repeated viewing. Conversely, participants in Study 2 reported that experiencing the narrative of the entire round led to an enhanced sense of immersion. Furthermore, even though participants were exposed to the VR environment for longer durations, the specifically designed VR shots did not significantly contribute to an increased sense of VR sickness.

Figure 7: These two figures demonstrate four combined shots’ movement tracks based on 2D audio-visual language and the Study 1 design, with the athletes’ movement tracks for a defense-offense transition event. The four colors of the camera tracks represent different audio-visual languages, green for ‘Track-in’, blue for ‘Track-out’, pink for ‘Pan’, and brown for ‘Dolly’.
Secondly, our findings suggest that, within the VR environment, moving shots are generally better received and provide a superior experience compared to still shots. As evidenced in Study 2, significant differences in immersion were noted; the moving camera significantly enhanced the user's viewing experience without inducing VR sickness. This implies that incorporating more moving footage in future VR sports broadcasts could be advantageous.
Lastly, we found that user preferences and attitudes toward VR broadcasts from different shots are highly individualized. In our brief interviews, participants were asked about their preferred footage type (i.e., the one that provided them with the best experience), but their responses showed no discernible pattern. For instance, one participant preferred the Track-in shot due to their preference for third-person games, while another favored the Pan shot to gain a broader perspective. Another interesting finding in the results of Study 2 is that the Pan shot significantly differs from the other three moving shots in terms of immersion. Therefore, it is imperative to develop generalized guidance for applying audio-visual language in VR sports broadcasting.
In conclusion, our findings lay the groundwork for an initial theory on VR moving shots in sports broadcasting, thereby opening up avenues for future research. However, we acknowledge that the issue of sample size might pose a limitation to our analysis. Thus, subsequent studies could delve deeper into related VR broadcast applications, the audio-visual language system, and large sample viewer experiences.
### Design Implications for VR Moving Shots in Broadcasting
Through the insights gathered from our study, along with our production experiences, we also found some of the design implications for VR moving shots. The production of VR moving shots involves three core elements: orientation, the primary content subject in the first frame, and the VR camera's trajectory. These elements align with those found in conventional film and television production. However, while viewing moving VR shots, viewers' orientation and subject content cannot be confined beyond the initial frame. Unlike 2D media, users in the VR environment possess greater autonomy, which introduces increased uncertainty into the VR narrative. The viewers' potential fear of missing out (FOMO) may result in attention distraction and a reduced sense of presence [1]. Consequently, the first frame of every shot in VR broadcasting becomes critical as it is the only initial control point for producers to guide the viewers' experience. Our Study 1 findings illustrate that viewers continue following the main subject presented in the initial frame.
In addition, the moving trajectory of VR shots holds significant narrative value in conveying the story of the entire sports event. Narrative methods are indispensable in time-based media like movies, broadcasts, novels, and games, guiding the viewer through the scene [16]. The narration of the entire sports event unfolds through sequential shots, each corresponding to a different narrative element. For moving shots in VR, proper guidance and editing methods are required to help the director effectively present the narrative content. This guidance can be incredibly potent during event climaxes, where well-designed moving shots can provide viewers with a significantly immersive VR sports event experience. Nonetheless, most existing methods and strategies are designed for still VR shots [8, 27, 41]. Drawing upon our experiences and knowledge, we propose that future research could amalgamate the characteristics and theories of existing 2D video and VR still shots. For longer-duration experiences like VR sports broadcasts, the narrative is essential in retaining viewers' attention and interest. Designing moving shots based on different event content could introduce new narrative possibilities in VR media. For instance, directors can employ pan shots to guide the viewer through calmer events or track-in shots to intensify tension during conflicting events.
### Enhancements and Prospective Developments for VR Sports Broadcasting
The field of VR sports broadcasting has been evolving over several years. Many professional games (i.e., NHL and NBA) have already adopted VR broadcasting as an optional viewing method. However, our field study and consultations with domain experts have identified several issues that require resolution.
Firstly, current VR broadcasts predominantly rely on still 360-degree shots, lacking the utilization of dynamic 360-degree shots. Our research findings highlight the significant advantages of incorporating moving VR shots, as they enhance viewers' immersion and facilitate the capture of crucial information. Nonetheless, several challenges hinder the production of moving VR shots, including the potential induction of VR sickness during viewing and technical limitations of the equipment, such as the absence of zoom and rotation functionalities. Although conventional equipment like sliders or drones are commonly used in broadcasting, adapting them to different sports stadiums poses considerable implementation challenges and limitations. Volumetric imaging technology, which enables real-time footage capture and 3D space reconstruction, holds promise as a viable solution for VR broadcast technology. However, existing VR moving footage remains insufficient for prolonged, dizziness-free viewing, currently serving primarily as a highlight reel to enrich the viewer experience [42].
Additionally, the interactive opportunities provided to audiences within VR sports broadcasts still need to be improved. Our survey reveals that in current VR broadcasts, broadcasters deploy panoramic cameras at specific locations around the stadium, such as the stands, bench, and VIP box. The only interactive option for viewers is switching between different camera perspectives. Thus, we propose that innovative interactive designs could enrich viewers' experiences by providing greater autonomy of choice. For instance, designing an interactive interface within VR for selecting different shots via natural interaction methods, introducing multi-sensory (tactile and olfactory) viewing experiences, or adding bullet comments as a communication method when watching with friends could substantially enhance audience engagement. Nevertheless, we advocate preserving a general viewing method that remains accessible and enjoyable for viewers who may not be well-versed in sports.

Figure 8: These two box-and-whisker plots illustrate the distribution of total VRSQ _(left)_ and FilmIEQ _(right)_ scores across five shots in the combined clip. The * represents a statistically significant difference between two shots.
Lastly, the current state of VR sports broadcasting lacks a comprehensive, end-to-end production solution. Unlike movies and videos, sports broadcasting combines competitive, entertaining, and unpredictable features [6, 57]. Traditional sports broadcasting is divided into pre-game, in-game, and post-game phases, encompassing pre-game previews and commentary, in-game commercials and live feeds, highlight replays, and post-game interviews and commentary. Due to the unique attributes of VR, the original broadcast system and content cannot be directly transferred to VR live broadcasts. Current challenges include devising a specialized system for VR sports broadcasts, tailoring broadcast shots to suit VR characteristics, and enhancing the diversity, excitement, and uniqueness of VR event broadcasts on existing platforms. Nonetheless, Wang et al. introduced a video creation tool, 'Write-A-Video', that facilitates straightforward video generation by leveraging existing video libraries and simple text input [61]. Consequently, we believe that such technology could soon find application in producing VR sports broadcasts and recaps.
## 7 Conclusion
This paper presented two user studies investigating the impact of moving shots in VR broadcasting, utilizing ice hockey as a representative example. Through our field research, we pinpointed several challenges in the VR broadcasting of ice hockey. To make it feasible, we introduced the concept of event segmentation and developed an ice hockey digital twin environment for the following research. In Study 1, we asked participants to view single CVR clips featuring five distinct shots (four moving and one still) based on the principles of audio-visual language. Participants then completed questionnaires concerning their sense of immersive experience and any VR-induced discomfort. Our results indicated no substantial difference in viewing experiences between still and moving shots for individual clips. Proceeding to Study 2, we curated four extended moving shots and one still shot for a given event. We replicated the same evaluation process as in Study 1. Intriguingly, we observed that several moving shots significantly outperformed the still shots regarding viewer experience. To sum up, we propose that moving shots offer distinct advantages over still shots in VR broadcasting, providing viewers with a more immersive and comprehensive experience. In light of our investigation and analysis of moving shots in VR ice hockey broadcasts, we have also discussed potential design considerations for VR production and provided suggestions for improving future VR broadcast applications.
###### Acknowledgements.
We extend our heartfelt appreciation to Xinyi Wang, Yuqi Wu, and Kaige Zhang for their diligent help to this project, as well as to all participants for their time and efforts. This research was supported by Tsinghua University Initiative Scientific Research Program (20213080010), the Foundation of the Ministry of Education of China (22YJCZH041) and the Sichuan Animation Research Center Program of China (DM202213)
|
2309.04683 | Tensor Ranks and the Fine-Grained Complexity of Dynamic Programming | Generalizing work of K\"unnemann, Paturi, and Schneider [ICALP 2017], we
study a wide class of high-dimensional dynamic programming (DP) problems in
which one must find the shortest path between two points in a high-dimensional
grid given a tensor of transition costs between nodes in the grid. This
captures many classical problems which are solved using DP such as the knapsack
problem, the airplane refueling problem, and the minimal-weight polygon
triangulation problem. We observe that for many of these problems, the tensor
naturally has low tensor rank or low slice rank.
We then give new algorithms and a web of fine-grained reductions to tightly
determine the complexity of these problems. For instance, we show that a
polynomial speedup over the DP algorithm is possible when the tensor rank is a
constant or the slice rank is 1, but that such a speedup is impossible if the
tensor rank is slightly super-constant (assuming SETH) or the slice rank is at
least 3 (assuming the APSP conjecture). We find that this characterizes the
known complexities for many of these problems, and in some cases leads to new
faster algorithms. | Josh Alman, Ethan Turok, Hantao Yu, Hengzhi Zhang | 2023-09-09T04:40:32Z | http://arxiv.org/abs/2309.04683v2 | # Tensor Ranks and the Fine-Grained Complexity of Dynamic Programming
###### Abstract
Generalizing work of Kunnemann, Paturi, and Schneider [ICALP 2017], we study a wide class of high-dimensional dynamic programming (DP) problems in which one must find the shortest path between two points in a high-dimensional grid given a tensor of transition costs between nodes in the grid. This captures many classical problems which are solved using DP such as the knapsack problem, the airplane refueling problem, and the minimal-weight polygon triangulation problem. We observe that for many of these problems, the tensor naturally has low tensor rank or low slice rank.
We then give new algorithms and a web of fine-grained reductions to tightly determine the complexity of these problems. For instance, we show that a polynomial speedup over the DP algorithm is possible when the tensor rank is a constant or the slice rank is 1, but that such a speedup is impossible if the tensor rank is slightly super-constant (assuming SETH) or the slice rank is at least 3 (assuming the APSP conjecture). We find that this characterizes the known complexities for many of these problems, and in some cases leads to new faster algorithms.
## 1 Introduction
Dynamic programming (DP) is one of the most common algorithmic paradigms, used throughout the theory and practice of diverse computational domains. See [CLRS01] chapter 14 for a detailed introduction.
When one solves a problem using DP, a natural question arises: is this the fastest algorithm to solve the problem? Recently, fine-grained complexity has been used to show that for many important problems, the answer is yes. For instance, researchers have established conditional lower bounds for the longest common subsequence [ABW15, BK15], edit distance [BI15], Frechet distance [Bri14], regular expression matching [BI16], context free grammar parsing [ABBK17], and RNA folding [ABBK17] problems, showing that there is no algorithm (whether or not it uses DP) that is faster than the standard DP algorithm by a polynomial factor.
On the other hand, there are some notable examples where a natural DP formulation is _not_ the fastest known way to solve a problem. Consider, for instance, the polygon triangulation problem from computational geometry. In this problem, we are given as input a convex polygon with \(n\) nodes, where each node \(i\) has a weight \(w_{i}\). For each triple \(i,j,k\) of nodes, a triangle with those nodes as vertices has weight \(w_{i}\cdot w_{j}\cdot w_{k}\). The weight of a triangulation of the polygon is the sum of the weights of its constituent triangles. The goal in the problem is to find the triangulation of the polygon with minimum weight. This problem has applications in point visibility [Her89], mesh generation [BE95], computer graphics [NM95], and even in visual cryptography [SSMB12].
Polygon triangulation has a natural DP formulation as follows. Let \(T[i,j]\) denote the minimum weight of a triangulation of the polygon consisting of just nodes \(i,i+1,i+2,\ldots,j\) with an edge drawn between nodes
\(i\) and \(j\). Thus our goal is to compute \(T[1,n]\), and these values satisfy the recurrence
\[T[i,j]=\min_{i<k<j}\Bigl{\{}T[i,k]+T[k,j]+w_{i}\cdot w_{j}\cdot w_{k}\Bigr{\}}.\]
(Since there is an edge from \(i\) to \(j\) in the polygon, there must be a triangle involving those two nodes and a third node \(k\); the recurrence corresponds to iterating over the choices of that third node.)
This recurrence leads to a DP algorithm which solves the problem in time \(O(n^{3})\). However, Hu and Shing showed in [13, 13] that the problem can actually be solved much faster, in time \(O(n\log n)\). This is a surprising example where geometric techniques lead to a faster algorithm than the natural DP formulation.
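For concreteness, here is a minimal sketch of the \(O(n^{3})\) dynamic program just described; the node weights are passed as a Python list, and indices start at 0 rather than 1.

```python
# O(n^3) DP for minimum-weight polygon triangulation, transcribing
# T[i,j] = min over i < k < j of T[i,k] + T[k,j] + w_i * w_j * w_k,
# with T[i,j] = 0 whenever j - i <= 1.
def min_triangulation(w):
    n = len(w)                          # nodes 0, 1, ..., n-1
    T = [[0] * n for _ in range(n)]
    for length in range(2, n):          # length = j - i
        for i in range(n - length):
            j = i + length
            T[i][j] = min(T[i][k] + T[k][j] + w[i] * w[j] * w[k]
                          for k in range(i + 1, j))
    return T[0][n - 1]

print(min_triangulation([3, 1, 4, 1, 5]))  # weight of an optimal triangulation
```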
### Least Weight Subsequence Problems
Kunnemann, Paturi, and Schneider [11] initiated a general study of these phenomena. They studied a general class of problems intended to capture many one-dimensional DP problems called Least Weight Subsequence (LWS) problems: Given as input a positive integer \(n\) and an \(n\times n\) matrix \(w\), compute the value \(T[n]\) defined by the recurrence:
\[T[j]=\begin{cases}0&\text{if }j=0\\ \min_{0\leq i<j}\Bigl{\{}T[i]+w[i,j]\Bigr{\}}&\text{otherwise}.\end{cases} \tag{1}\]
LWS was first introduced by Hirschberg and Larmore [10] to capture many known DP problems, including longest increasing subsequence [14], airplane refueling [10], coin change [11], nested boxes [11], pretty printing [12], and various knapsack problems.
For illustration purposes, consider the longest increasing subsequence (LIS) problem: given an array of \(n\) integers \(X=[x_{1},\dots,x_{n}]\), return the length of the longest strictly increasing subsequence in \(X\)[14]. LIS can be formulated as an LWS problem by setting
\[w[i,j]=\begin{cases}-1&\text{if }x_{i}<x_{j}\\ \infty&\text{otherwise}.\end{cases}\]
Notice that \(w[i,j]\) equals negative one when \(x_{i}\) can be added to a subsequence which ends in \(x_{j}\), thus increasing the length of a strictly increasing subsequence by \(1\). Since LIS is a maximization problem and LWS is a minimization problem, the weights are \(-1\), not \(1\), and the solution is given by \(-T[n]\). Many algorithmic problems can be formulated as an LWS instance by appropriately setting the weight matrix \(w\).
Figure 1: An example polygon triangulation problem. The polygon \(P(i,j)\) is partitioned into \(3\) parts by choosing \(k\) and forming a triangle \((i,j,k)\) whose weight is \(w_{i}\cdot w_{j}\cdot w_{k}\).
Straightforward DP solves the LWS problem in \(O(n^{2})\) time. Since the input matrix \(w\) has \(\Omega(n^{2})\) entries, it requires quadratic time to read the input, so a faster algorithm isn't possible in general. However, if the \(w\) matrix is given in a smaller, compressed form, then one may hope for subquadratic-time algorithms.1
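As a reference point, here is a minimal sketch of that quadratic-time DP for recurrence (1); the weight matrix below is a small illustrative instance, and entries \(w[i,j]\) with \(i\geq j\) are never read.

```python
# O(n^2) DP for recurrence (1): T[0] = 0 and T[j] = min_{0 <= i < j} T[i] + w[i][j].
import math

def lws(w):
    n = len(w) - 1
    T = [0] * (n + 1)
    for j in range(1, n + 1):
        T[j] = min(T[i] + w[i][j] for i in range(j))
    return T[n]

INF = math.inf
w = [  # illustrative instance with n = 3; only entries above the diagonal matter
    [INF, 2, 5, INF],
    [INF, INF, 1, 4],
    [INF, INF, INF, 2],
    [INF, INF, INF, INF],
]
print(lws(w))  # 5, via 0 -> 1 -> 2 -> 3 with weights 2 + 1 + 2
```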
Footnote 1: There has also been prior work on algorithms which assume \(w\) has some additional structure which does not mean \(w\) is compressible, but which lets one solve the problem without looking at most entries of \(w\). For instance, [11] and [12] give \(O(n\log n)\) and \(O(n)\) time algorithms, respectively, for solving LWS with concave weights, i.e., when the entries of the matrix \(w\) are promised to satisfy a quadrangle inequality. See also [10, 11, 12].
One example that [13] focuses on is the case when \(w\) is a low-rank matrix. If \(w\) has rank \(r<n^{o(1)}\), then one can be given as input matrices \(A,B\in\mathbb{R}^{n\times r}\) such that \(w=A\times B^{T}\), so the input size to the problem is only \(n^{1+o(1)}\).
Interestingly, Kunnemann showed via fine-grained reductions that this problem is subquadratic-time _equivalent_ to the well-studied Min-IP problem for vectors of dimension \(r\): Given as input \(x_{1},\ldots,x_{n},y_{1},\ldots,y_{n}\in\mathbb{R}^{r}\), find the \(x_{i},y_{j}\) minimizing the inner product \(\langle x_{i},y_{j}\rangle\). This problem can be solved in time \(O(n^{2-1/r})\) using geometric techniques [14, 1, 1], and thus has a truly-subquadratic time algorithm whenever \(r\) is a constant. On the other hand, it is known that assuming the Strong Exponential Time Hypothesis (SETH), the Min-IP problem requires time \(n^{2-o(1)}\) even when \(r\) is slightly super-constant \(r=2^{\log^{*}n}\)[10], and thus the DP algorithm is essentially optimal. (Here \(\log^{*}\) denotes the very slowly-growing iterated logarithm function.)
In this paper, we investigate the optimality of higher-dimensional DP formulations. We focus especially on generalizing LWS with low-rank matrices. As we will see, the known one-dimensional reductions do not generalize in a straightforward way, leading to a variety of generalizations and an intricate landscape of results. We will find that many classical higher-dimensional DP problems like the polygon triangulation problem are captured by our generalizations.
There are two choices to be made when generalizing LWS with low-rank matrices to higher dimensions: what is the higher-dimensional generalization of matrix rank (section 1.2) and what is the higher-dimensional generalization of the LWS recurrence (section 1.3).
### Generalizations of matrix rank
The rank of a matrix has many equivalent definitions. However, when these definitions are generalized to higher-dimensional tensors, they lead to different notions. Prominent examples with applications in algebra, combinatorics, algorithm design, and complexity theory include the rank, subrank, border rank, slice rank, flattening rank, analytic rank, and geometric rank (see, e.g., [15, 16]). It is thus not clear, in general, which notion to use when generalizing results involving low-rank matrices.
We focus here on the two notions which arise naturally in known DP problems: tensor rank and slice rank.
**Tensor Rank.** A \(d\)-dimensional (order-\(d\)) tensor \(w\in\mathbb{R}^{n_{1}\times n_{2}\times\cdots\times n_{d}}\) has rank \(1\) if there are vectors \(x_{1}\in\mathbb{R}^{n_{1}},\ldots,x_{d}\in\mathbb{R}^{n_{d}}\) such that, for all \(i_{1}\in[n_{1}],\ldots,i_{d}\in[n_{d}]\) we have \(w[i_{1},\ldots,i_{d}]=x_{1}[i_{1}]\cdot x_{2}[i_{2}]\cdots x_{d}[i_{d}]\). More generally, the rank of tensor \(w\) is the minimum non-negative integer \(k\) such that there are rank \(1\) tensors \(w_{1},\ldots,w_{k}\) for which \(w=w_{1}+\cdots+w_{k}\). This notion is sometimes also called canonical polyadic decomposition (CPD) rank.
For instance, in the polygon triangulation discussed earlier, the tensor \(w\) whose entry \(w[i,j,k]\) gives the weight of triangle \((i,j,k)\) has rank \(1\) because the weight of the triangle \((i,j,k)\) is \(w[i,j,k]=x_{i}\cdot x_{j}\cdot x_{k}\).
For another example, consider the airplane refueling problem: an airplane is traveling on a grid with dimension \(k\) such that each point in the grid is a refueling airport. The airplane starts at location \((1,\ldots,1)\) and wants to arrive at location \((n,\ldots,n)\). The cost of flying from \((i_{1},\ldots,i_{\ell-1},j_{\ell},i_{\ell+1},\ldots,i_{k})\) to \((i_{1},\ldots,i_{k})\) is \(w[i_{1},\ldots,i_{k},j_{\ell}]\) (the airplane can only fly on the grid). The problem asks to minimize the cost of traveling.
One commonly studied cost of traveling from \((i_{1},\ldots,i_{\ell-1},j_{\ell},i_{\ell+1},\ldots,i_{k})\) to \((i_{1},\ldots,i_{k})\) is \(w[i_{1},\ldots,i_{k},j_{\ell}]=(k-(i_{\ell}-j_{\ell}))^{2}\) for a fixed constant \(k\)[11], which has rank 4 since
\[(k-(i_{\ell}-j_{\ell}))^{2}=i_{\ell}^{2}\cdot 1+1\cdot j_{\ell}^{2}+(i_{\ell}-k) \cdot(-2j_{\ell})+(i_{\ell}-\frac{k}{2})\cdot(-2k).\]
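Summing the four rank-one terms and regrouping confirms this identity (here \(k\) denotes the fixed constant from the cost function):

\[i_{\ell}^{2}+j_{\ell}^{2}-2i_{\ell}j_{\ell}+2kj_{\ell}-2ki_{\ell}+k^{2}=(i_{\ell}-j_{\ell})^{2}-2k(i_{\ell}-j_{\ell})+k^{2}=(k-(i_{\ell}-j_{\ell}))^{2}.\]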
**Slice Rank.** A \(d\)-dimensional (order-\(d\)) tensor \(w\in\mathbb{R}^{n_{1}\times n_{2}\times\cdots\times n_{d}}\) has slice rank 1 if there is a \(j\in[d]\), a vector \(a\in\mathbb{R}^{n_{j}}\), and a \((d-1)\)-dimensional tensor \(b\in\mathbb{R}^{n_{1}\times\cdots\times n_{j-1}\times n_{j+1}\times\cdots \times n_{d}}\) such that, for all \(i_{1}\in[n_{1}],\ldots,i_{d}\in[n_{d}]\) we have \(w[i_{1},\ldots,i_{d}]=a[i_{j}]\cdot b[i_{1},\ldots,i_{j-1},i_{j+1},\ldots,i_{ d}]\). More generally, the slice rank of tensor \(w\) is the minimum non-negative integer \(k\) such that there are slice rank 1 tensors \(w_{1},\ldots,w_{k}\) for which \(w=w_{1}+\cdots+w_{k}\). Slice rank was recently introduced in the context of solving the cap set problem from extremal combinatorics [13, 14, 15, 16, BCC\({}^{+}\)17], but it has since found applications in algorithm design and complexity theory [1, 1, 2, 1, 1] and other areas of combinatorics [1, 16, 17, 18, 19].
It is immediate that if a tensor \(w\) has rank \(d\), then it has slice rank at most \(d\). However, there are many natural situations where the slice rank of a tensor may be much lower than its rank, and we could hope to take advantage of this to design faster algorithms.
For example, another reasonable cost function for the airplane refueling problem is the one that depends only on the destinations, e.g., each airport charges a fee for landing at that airport. In this scenario, the cost function would have slice rank 1 but very large rank. We discuss the details in Section 5.1.
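As a small illustration of this landing-fee scenario in the two-dimensional case, the following sketch builds a cost tensor whose entries depend only on the destination cell, so it is a single slice-rank-1 term \(w[i_{1},i_{2},j]=\mathrm{fee}[i_{1},i_{2}]\cdot 1\); the fee table is a placeholder.

```python
# Sketch: a cost tensor that depends only on the destination has slice rank 1,
# since it factors as fee[i1, i2] times the all-ones vector in the last mode.
import numpy as np

n = 4
rng = np.random.default_rng(0)
fee = rng.integers(1, 10, size=(n, n))                  # placeholder landing fees
w = np.broadcast_to(fee[:, :, None], (n, n, n)).copy()  # w[i1, i2, j] = fee[i1, i2]

assert all(np.array_equal(w[:, :, j], fee) for j in range(n))
```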
### Higher-dimensional LWS recurrences
Many problems solvable by LWS can be naturally generalized to higher dimensions, which motivates us to study high-dimensional versions of the LWS recurrence. We focus on two new recurrences which capture most examples. The first, which we call kD LWS, is perhaps the most general.
**Definition 1.1** (kD Lws).: _For a positive integer \(k\), consider \((k+1)\)-dimensional tensors \(w_{1},\ldots,w_{k}\) of size \(n\times n\times\cdots\times n\), where \(w_{\ell}[i_{1},\ldots,i_{k},j]\in\{-W,\ldots,W,\infty\}\) for all \(1\leq\ell\leq k\). The kD LWS problem asks, given as input \(w_{1},\ldots,w_{k}\), to determine \(T[n,\ldots,n]\) given the dynamic programming recurrence relation:_
\[T\Big{[}j_{1},j_{2},\ldots,j_{k}\Big{]}=\begin{cases}0&\text{if $j_{1}=j_{2}= \ldots=j_{k}=1$}\\ \min_{1\leq\ell\leq k}\Bigl{\{}\min_{1\leq i_{\ell}<j_{\ell}}\Bigl{\{}T\Big{[} j_{1},\ldots,j_{\ell-1},i_{\ell},j_{\ell+1},\ldots,j_{k}\Big{]}+w_{\ell}\Big{[}j_{1},j _{2},\ldots,j_{k},i_{\ell}\Big{]}\Bigr{\}}\Bigr{\}}\text{ otherwise}.\end{cases}\]
Intuitively, to compute \(T[j_{1},j_{2},\ldots,j_{k}]\), we look at all _previous_ terms in the table \(T\) that differ from \((j_{1},j_{2},\ldots,j_{k})\) by _one_ coordinate. For example, when \(k=2\), 2D LWS can be expressed as
\[T[i,j]=\begin{cases}0&\text{if $i=j=1$}\\ \min\begin{cases}\min_{1\leq k<i}\{T[k,j]+w_{1}[i,j,k]\}\\ \min_{1\leq k<j}\{T[i,k]+w_{2}[i,j,k]\}\end{cases}&\text{otherwise}.\end{cases}\]
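As a baseline for comparison, here is a minimal sketch of the straightforward \(O(n^{3})\) DP for this \(k=2\) recurrence; the weight tensors are passed as dense arrays indexed from 1, with index 0 unused.

```python
# Straightforward O(n^3) DP for 2D LWS: T[1][1] = 0, and T[i][j] minimizes over
# predecessors that differ from (i, j) in exactly one coordinate.
import math

def two_d_lws(w1, w2, n):
    T = [[math.inf] * (n + 1) for _ in range(n + 1)]
    T[1][1] = 0
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i == 1 and j == 1:
                continue
            best = math.inf
            for k in range(1, i):      # predecessor (k, j)
                best = min(best, T[k][j] + w1[i][j][k])
            for k in range(1, j):      # predecessor (i, k)
                best = min(best, T[i][k] + w2[i][j][k])
            T[i][j] = best
    return T[n][n]

# Tiny illustrative instance with every transition cost equal to 1.
n = 2
ones = [[[1] * (n + 1) for _ in range(n + 1)] for _ in range(n + 1)]
print(two_d_lws(ones, ones, n))  # 2: two unit-cost steps from (1,1) to (2,2)
```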
kD LWS captures high-dimensional analogs of many previous problems solved by LWS and also some new problems which we discuss below. This includes higher dimensional airplane refueling (see Section 5.1 below), kMin-IP (Section 3.1), all-pairs shortest paths (Section 3.2), multiple nested box chains (Section 5.2), etc. An illustration of 2D LWS is shown in Figure 2.
**Static kD LWS.** Similar to [15], we also define a notion of "static" kD LWS in which we are given some entries in the DP table, and we would like to compute new entries which depend only on the given entries. The main idea of [Static]kD LWS is that we have the information \((T[i_{1},\ldots,i_{k}])\) for all \((i_{1},\ldots,i_{k})\) on a "band" \(D_{a,a+N}\) and we want to compute \(T[i_{1},\ldots,i_{k}]\) for all \((i_{1},\ldots,i_{k})\) on the next "band" \(D_{a+N,a+2N}\). A band \(D_{\alpha,\beta}\) is defined to be all \((i_{1},\ldots,i_{k})\) such that their sum \(i_{1}+\cdots+i_{k}\) is in the interval \([\alpha,\beta)\).
**Definition 1.2**.: ([Static]kD Lws) _Given intervals \(D_{a,a+N},D_{a+N,a+2N}\) together with correctly computed values \(T[i_{1},\ldots,i_{k}]\) for all \(1\leq\ell\leq k\) and \((i_{1},\ldots,i_{k})\in D_{a,a+N}\), [Static]kD LWS asks to determine_
\[T^{\prime}\Big{[}j_{1},\ldots,j_{k}\Big{]}=\min_{1\leq\ell\leq k}\Bigg{\{}\min_ {a-I_{\ell}\leq i_{\ell}<a+N-I_{\ell}}\Big{\{}T\Big{[}j_{1},\ldots,j_{\ell-1}, i_{\ell},j_{\ell+1},\ldots,j_{k}\Big{]}+w_{\ell}\Big{[}j_{1},j_{2},\ldots,j_{k},i_{ \ell}\Big{]}\Big{\}}\Bigg{\}}\]
_for all \((j_{1},j_{2},\ldots,j_{k})\in D_{a+N,a+2N}\)._
For illustration purposes, consider the [Static]2D LWS problem: given correctly computed values \(T[i,j]\) for all \((i,j)\in D_{a,a+N}\), determine
\[T^{\prime}[i,j]=\min\begin{cases}\min_{a-i\leq k<a+N-i}\{T[k,j]+w_{1}[i,j,k]\} \\ \min_{a-j\leq k<a+N-j}\{T[i,k]+w_{2}[i,j,k]\}\end{cases}\]
for all \((i,j)\in D_{a+N,a+2N}\).
Figure 3 depicts the idea.
[KPS17] showed that in the \(k=1\) case, [Static]LWS is subquadratic-_equivalent_ to the original LWS problem. We will find that the relationships among the higher-dimensional versions are more complicated.
### Polygon Triangulation
kD LWS as we defined above captures many different high-dimensional DP problems, but it is not the only conceivable way to generalize LWS. We consider here another example we call 2D LWS\({}^{\textsf{PT}}\), which captures optimization over sets that are counted by the Catalan numbers.
Figure 2: 2D LWS. To compute \(T[i,j]\), we take the minimum of all possible white circles (plus their respective tensor values \(w\)) such that their coordinates differ from the target by one coordinate.
Figure 3: [Static]2D LWS. To calculate \(T^{\prime}[i,j]\) (black circle), we take the minimum over all possible white circles (plus their respective weight values \(w\)) such that they share all but one coordinate with \(T[i,j]\).
Recall that the Catalan numbers \(C_{0},C_{1},C_{2},\ldots\) can be defined recursively by \(C_{0}=1\) and
\[C_{n}=\sum_{k=1}^{n}C_{k-1}\cdot C_{n-k}. \tag{2}\]
\(C_{n}\) counts many different combinatorial objects, such as the number of triangulations of a convex polygon with \(n+2\) vertices, and the number of binary trees with \(n+1\) leaves. (See, e.g., [14].) The variable \(k\) being summed over in Equation (2) typically corresponds to ways to partition the combinatorial object into two smaller parts. This leads to our new definition, in which we want to instead _minimize_ over all ways to partition the object:
**Definition 1.3** (2d \(\mathsf{LWS}^{\mathsf{PT}}\)).: _Given as input an \(n\times n\times n\) tensor \(w\), the 2D \(\mathsf{LWS}^{\mathsf{PT}}\) problem asks to compute the value of \(T[n,n]\) given the dynamic programming recurrence relation:_
\[T[i,j]=\begin{cases}0&\text{if }j-i\leq 1\\ \min_{i<k<j}\left\{T[i,k]+T[k,j]+w[i,j,k]\right\}&\text{otherwise.}\end{cases}\]
For instance, this captures the polygon triangulation problem defined above when \(w[i,j,k]=w_{i}\cdot w_{j}\cdot w_{k}\), which is unsurprising as polygon triangulations are counted by the Catalan numbers; this inspires the name 2D \(\mathsf{LWS}^{\mathsf{PT}}\). We show in Section 6 below that this recurrence also captures other natural problems such as optimal binary search tree construction (optimizing over binary trees, which are counted by Catalan numbers) and matrix chain multiplication (optimizing over sequences of properly matched parentheses, again counted by Catalan numbers). Furthermore, in each of these examples, the rank (for polygon triangulation) or slice rank (for the other two examples) of \(w\) is \(1\).
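A minimal sketch of the cubic-time DP for this recurrence is below, with the weight tensor supplied as a callback so that the same loop covers polygon triangulation (where \(w(i,j,k)=w_{i}\cdot w_{j}\cdot w_{k}\)) and the other instantiations mentioned above; indices start at 0 here.

```python
# O(n^3) DP for 2D LWS^PT: T[i][j] = 0 when j - i <= 1, otherwise
# T[i][j] = min over i < k < j of T[i][k] + T[k][j] + w(i, j, k).
def two_d_lws_pt(n, w):
    T = [[0] * n for _ in range(n)]
    for length in range(2, n):
        for i in range(n - length):
            j = i + length
            T[i][j] = min(T[i][k] + T[k][j] + w(i, j, k)
                          for k in range(i + 1, j))
    return T[0][n - 1]

# Polygon triangulation as an instance: rank-1 weights w(i, j, k) = x_i * x_j * x_k.
x = [3, 1, 4, 1, 5]
print(two_d_lws_pt(len(x), lambda i, j, k: x[i] * x[j] * x[k]))
```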
### Main Results and Proof Structure Overview
**Reduction notation.** Before stating our results, we introduce one piece of notation for denoting the results of fine-grained reductions between problems with different running times. For computational problems \(P,Q\), we say that \(P\) "reduces" to \(Q\), denoted by \(P\to Q\), if a polynomial speedup for \(Q\) yields a polynomial speedup for \(P\). More precisely, suppose \(P,Q\) are solved in time \(T_{p},T_{q}\), respectively, via the straightforward algorithms. We say that \(P\) "reduces" to \(Q\), denoted by \(P\to Q\), if for every \(\varepsilon>0\) there exists a \(\delta>0\) such that, given a \(O(T_{q}^{1-\varepsilon})\) time algorithm for \(Q\), one gets a \(O(T_{p}^{1-\delta})\) time algorithm for \(P\).
For example, \(\mathsf{SAT}\to\mathsf{Min}\text{-}\mathsf{IP}_{n,c\log n}\) means that if there is an algorithm for \(\mathsf{Min}\text{-}\mathsf{IP}_{n,c\log n}\) with running time \(O(n^{2-\varepsilon})\) for some \(\varepsilon>0\), then there is an algorithm for \(\mathsf{SAT}\) with running time \(O(2^{(1-\delta)n})\) for some \(\delta>0\). Similarly, \(\mathsf{2D}\ \mathsf{LWS}\to\mathsf{Min}\text{-}\mathsf{IP}\) means that if there is an algorithm for \(\mathsf{Min}\text{-}\mathsf{IP}\) with running time \(O(n^{2-\varepsilon})\) for some \(\varepsilon>0\), then there is an algorithm for \(\mathsf{2D}\ \mathsf{LWS}\) with running time \(O(n^{3-\delta})\) for some \(\delta>0\). When it may be unclear, the formal statements of our results are stated in the theorems below.
#### 1.5.1 kD \(\mathsf{LWS}\) Hierarchy and Hardness.
We establish a hierarchy of kD \(\mathsf{LWS}\) problems and describe their connections to \(\mathsf{kMin}\text{-}\mathsf{IP}\), summarized by the web of reductions below.
These results, more precisely stated, are described by the following four theorems.
Building on Chen's reduction [1] from SAT to \(\mathsf{Min}\text{-}\mathsf{IP}_{n,2^{O(\log^{*}n)}}\), we show that more generally, SAT also reduces to \(\mathsf{kMin}\text{-}\mathsf{IP}\). (Note that \(\mathsf{kMin}\text{-}\mathsf{IP}\) reduces to \((\mathsf{k}-1)\mathsf{Min}\text{-}\mathsf{IP}\) in a straightforward way, but a reduction in the other direction is not known.)
**Theorem** (Theorem A.6).: _Assuming \(\mathsf{SETH}\), there is no algorithm for \(\mathsf{kMin}\text{-}\mathsf{IP}_{n,2^{O(\log^{*}n)}}\) running in time \(O(n^{k-\varepsilon})\) for any \(\varepsilon>0\)._
Just as \(\mathsf{Min}\text{-}\mathsf{IP}\) reduces to \(\mathsf{LWS}\), we show that \(\mathsf{kMin}\text{-}\mathsf{IP}\) reduces to \((\mathsf{k}-1)\mathsf{D}\)\(\mathsf{LWS}\).
**Theorem** (Theorem 3.1).: _Suppose there exists an algorithm for \((\mathsf{k}-1)\mathsf{D}\)\(\mathsf{LWS}\) with rank \(d\) with running time \(O(n^{k-\varepsilon})\) for some \(\varepsilon>0\), then there exists an algorithm for \(\mathsf{kMin}\text{-}\mathsf{IP}\) with rank \(d\) with running time \(O(n^{k-\delta})\) for some \(\delta>0\)._
By a divide and conquer method similar to [12, Lemma 3.5], we show that \(\mathsf{kD}\)\(\mathsf{LWS}\) can be reduced to \([\mathsf{Static}]\mathsf{kD}\)\(\mathsf{LWS}\).
**Theorem** (Theorem 3.2).: _Suppose there exists an algorithm for \([\mathsf{Static}]\mathsf{kD}\)\(\mathsf{LWS}_{n,N,d}\) with running time \(O(N^{2-\varepsilon}\cdot n^{k-1})\) for some \(\varepsilon>0\), then there exists an algorithm for \(\mathsf{kD}\)\(\mathsf{LWS}_{n,d}\) with running time \(O(n^{k+1-\delta})\) for some \(\delta>0\)._
In addition, we show that \([\mathsf{Static}]\mathsf{kD}\)\(\mathsf{LWS}\) also exhibits a hierarchy similar to \(\mathsf{kMin}\text{-}\mathsf{IP}\).
**Theorem** (Theorem 3.4).: _Suppose there exists an algorithm for \([\mathsf{Static}](\mathsf{k}-1)\mathsf{D}\)\(\mathsf{LWS}_{n,N,d}\) with running time \(O(N^{2-\varepsilon}\cdot n^{k-2})\) for some \(\varepsilon>0\), then there exists an algorithm for \([\mathsf{Static}]\mathsf{kD}\)\(\mathsf{LWS}_{n,N,d}\) with running time \(O(N^{2-\delta}\cdot n^{k-1})\) for some \(\delta>0\)._
As one consequence of these reductions, \([\mathsf{Static}]\mathsf{kD}\)\(\mathsf{LWS},\mathsf{kD}\)\(\mathsf{LWS}\) all reduce to \(\mathsf{Min}\text{-}\mathsf{IP}\), which has a truly sub-quadratic algorithm when the vector length is constant, leading to a polynomial speedup:
**Corollary**.: _For every constant \(c>0\) there is an \(\varepsilon>0\) such that \(\mathsf{kD}\)\(\mathsf{LWS},[\mathsf{Static}]\mathsf{kD}\)\(\mathsf{LWS}\) can be solved in time \(O(n^{k+1-\varepsilon})\), \(O(N^{2-\varepsilon}\cdot n^{k-1})\) respectively if the rank of the tensor \(w\) is at most \(c\)._
For one application of this corollary, we show in Section 5.1 that the generalized airplane refueling problem [14] in higher dimensions can be solved polynomially faster than the straightforward DP formulation.
Our reductions from SAT also give hardness of \(\mathsf{kD}\)\(\mathsf{LWS}\) assuming \(\mathsf{SETH}\), showing that the fast algorithm cannot extend much beyond constant rank:
**Corollary**.: _Under \(\mathsf{SETH}\), for any \(k>1\) and \(\varepsilon>0\), there is no algorithm running in time \(O(n^{k+1-\varepsilon})\) for \(\mathsf{kD}\)\(\mathsf{LWS}\) when the weight tensor has rank \(2^{O(\log^{*}n)}\)._
Since for rank \(r\), the input size is \(nr\), one could have imagined a better running time than \(O(n^{k+1})\) for any \(r\ll n^{k}\); our lower bound shows that it is (conditionally) impossible even for the slightly super-constant \(r=2^{\Omega(\log^{*}n)}\).
#### 1.5.2 Slice Rank in \(\mathsf{kD}\) \(\mathsf{LWS}\).
Slice rank is another way to define a notion of rank for tensors. If a tensor has rank \(d\), then it trivially has slice rank at most \(d\), which makes \(\mathsf{2D}\) LWS with slice rank more powerful. Indeed, we show that \(\mathsf{2D}\) LWS with slice rank becomes hard very quickly. Interestingly, this new hardness result builds on the \(\mathsf{APSP}\) conjecture, rather than \(\mathsf{SETH}\); the \(\mathsf{APSP}\) conjecture has not previously been connected to LWS problems to our knowledge.
**Theorem** (Theorem 3.5).: _Assuming the \(\mathsf{APSP}\) conjecture, there is no truly sub-cubic algorithm for \(\mathsf{2D}\) LWS or \([\mathsf{Static}]\)\(\mathsf{2D}\) LWS with slice rank \(3\)._
However, we can design a truly sub-cubic algorithm for \(\mathsf{2D}\) LWS with slice rank \(1\).
**Theorem** (Theorem 3.7, Theorem 3.6).: _There are truly sub-cubic time algorithms for \(\mathsf{2D}\) LWS and \([\mathsf{Static}]\)\(\mathsf{2D}\) LWS with slice rank 1._
We then use this to design faster algorithms for natural dynamic programming problems. For instance, we generalize the nested boxes problem defined in [10] to a multiple nested boxes problem and show that it can be formulated as \(\mathsf{kD}\) LWS with slice rank \(1\), and thus it can be solved in time \(O(n^{k+1-\varepsilon})\) for some \(\varepsilon>0\).
**Theorem** (Theorem 5.6).: _Multiple nested boxes with dimension \(k\) can be solved in time \(O(n^{k+1-\varepsilon})\) for some \(\varepsilon>0\)._
We also show how to give a polynomial speedup for the airplane refueling problem in dimension \(k\) if the cost only depends on where the airplane lands, since that would mean the tensor has slice rank \(1\). We discuss the details in Section 5.1.
#### 1.5.3 Hardness of Polygon Triangulation.
We show similar algorithms and hardness for \(\mathsf{2D}\) LWS\({}^{\mathsf{PT}}\).
**Corollary** (Corollary 4.2).: _Under \(\mathsf{SETH}\), there is no truly sub-cubic algorithm for \(\mathsf{2D}\) LWS\({}^{\mathsf{PT}}\) with weight function whose rank is \(2^{O(\log^{*}n)}\) or above._
**Corollary** (Corollary 4.4).: _Under \(\mathsf{APSP}\) conjecture, there is no truly sub-cubic algorithm for \(\mathsf{2D}\) LWS\({}^{\mathsf{PT}}\) with weight function whose slice rank is \(3\) or above._
These results are proved by making use of a reduction to \(\mathsf{2D}\) LWS\({}^{\mathsf{PT}}\) from a _special case_ of \(\mathsf{2D}\) LWS where the two tensors \(w_{1},w_{2}\) must be equal. We then show that our previous hardness results hold even in this special case, yielding the above corollaries.
In fact, previous work shows that in some special cases when \(w\) has rank \(1\), \(\mathsf{2D}\) LWS\({}^{\mathsf{PT}}\) can be solved in truly sub-cubic time.
**Theorem** ([10, 11, 12, 13]).: _Suppose there exists \(x_{i}\in\mathbb{N},1\leq i\leq n\) such that \(w[i,j,k]=x_{i}\cdot x_{j}\cdot x_{k}\) for all \(1\leq i,j,k\leq n\), then \(\mathsf{2D}\) LWS\({}^{\mathsf{PT}}\) with tensor \(w\) can be solved in \(O(n\log n)\) time._
Our results help explain why the examples of \(\mathsf{2D}\) LWS\({}^{\mathsf{PT}}\) problems where faster algorithms are known correspond to tensors \(w\) of rank or slice rank \(1\); see Section 6 for more details.
### Organization
Section 2 contains the preliminaries: our notation, background on fine-grained complexity, and definitions of relevant problems. Section 3 discusses the \(\mathsf{kD}\) LWS hierarchy and hardness, proving that a polynomial speedup over the standard DP algorithm is possible when the tensor rank is \(O(1)\) but impossible when the tensor rank is \(2^{O(\log^{*}n)}\) (assuming \(\mathsf{SETH}\)). Section 4 discusses the polygon triangulation problem \(\mathsf{2D}\) LWS\({}^{\mathsf{PT}}\) and its connections with \(\mathsf{2D}\) LWS. Sections 5 and 6 respectively discuss applications of \(\mathsf{kD}\) LWS and \(\mathsf{2D}\) LWS\({}^{\mathsf{PT}}\) to various real-world problems.
## 2 Preliminaries
In this section, we state our core problems and relevant problems in fine-grained complexity. We also state our notations for convenience.
For a problem \(P\), we say that it is truly sub-quadratic (or sub-cubic) if there exists an algorithm for it with running time \(O(n^{2-\varepsilon})\) (or \(O(n^{3-\varepsilon})\)) for some \(\varepsilon>0\). We say that \(P\) and \(Q\) are sub-quadratic (sub-cubic) equivalent if \(P\) is truly sub-quadratic (sub-cubic) if and only if \(Q\) is truly sub-quadratic (sub-cubic).
For a positive integer \(n\), we let \([n]=\{1,\ldots,n\}\). We assume the word-RAM model with word size \(\Theta(\log n)\) throughout this paper, and assume that all integer inputs are chosen from \(\{-W,\ldots,W,\infty\}\) where \(W\) fits in a constant number of words. We usually use \(d\) to denote the length of vectors (rank of a tensor) in our problems, \(k\) to denote the dimension of our problems, and \(n\) to denote the input size. We put these parameters as subscripts of problems. Since we will discuss both rank and slice rank, we make it clear which we are talking about in the discussions. In this paper, "\(\mathsf{kD}\mathsf{LWS}\) has (slice) rank \(d\)" means the array \(w\) has (slice) rank \(d\).
For a \(k\)-dimensional dynamic programming array \(T\) with entries \(T[i_{1},i_{2},\ldots,i_{k}]\), we let \(I_{\ell}\) denote the sum of the indices \(i_{1},\ldots,i_{k}\) with \(i_{\ell}\) omitted, i.e. \(I_{\ell}=(i_{1}+\ldots+i_{k})-i_{\ell}\). Let \(D_{a,b}\) denote the set of all \((i_{1},\ldots,i_{k})\) such that \(a\leq i_{1}+\ldots+i_{k}<b\).
### Strong Exponential Time Hypothesis and \(\mathsf{Min}\mathsf{-IP}\)
We state some important problems and definitions in fine-grained complexity that we will refer to later.
**Conjecture 2.1** (Strong Exponential Time Hypothesis (\(\mathsf{SETH}\))).: _For every \(\varepsilon>0\), there exists a positive integer \(k\) such that \(\mathsf{kSAT}\) requires time \(\Omega(2^{(1-\varepsilon)n})\)._
\(\mathsf{SETH}\) is well-known for being a stronger version of \(\mathsf{P}\neq\mathsf{NP}\), i.e. it implies \(\mathsf{P}\neq\mathsf{NP}\).
**Definition 2.2** (\(\mathsf{OV}_{n,d}\)).: _Given two sets of vectors \(A=\{a_{1},\ldots,a_{n}\},B=\{b_{1},\ldots,b_{n}\}\) such that \(a_{i},b_{j}\in\{0,1\}^{d}\) for all \(i,j\), the Orthogonal Vectors problem (\(\mathsf{OV}_{n,d}\)) asks to determine whether there exists \(1\leq i,j\leq n\) such that \(\langle a_{i},b_{j}\rangle=0\)._
In [20], it is shown that assuming \(\mathsf{SETH}\), for every \(\varepsilon>0\) there exists a constant \(c>0\) such that \(\mathsf{OV}_{n,c\log n}\) cannot be solved in time \(O(n^{2-\varepsilon})\).
**Definition 2.3** (\(\mathsf{Min}\mathsf{-IP}_{n,d}\)).: _Given two sets of vectors \(A=\{a_{1},\ldots,a_{n}\},B=\{b_{1},\ldots,b_{n}\}\) such that \(a_{i},b_{j}\in\{-W,\ldots,W,\infty\}^{d}\) for all \(i,j\) and a threshold \(r\in\mathbb{Z}\), the Minimal Inner Product (\(\mathsf{Min}\mathsf{-IP}_{n,d}\)) asks to determine whether there exists \(1\leq i,j\leq n\) such that \(\langle a_{i},b_{j}\rangle\leq r\)._
\(\mathsf{OV}_{n,d}\) trivially reduces to \(\mathsf{Min}\mathsf{-IP}_{n,d}\), and in fact these two problems are sub-quadratic equivalent [10]. It is known that this decision version of \(\mathsf{Min}\mathsf{-IP}_{n,d}\) is sub-quadratic equivalent to its optimization version, which asks to output \(\min\limits_{1\leq i,j\leq n}\langle a_{i},b_{j}\rangle\). We will use the optimization version throughout the rest of the paper.
### Higher Dimension \(\mathsf{OV},\mathsf{Min}\mathsf{-IP}\)
\(\mathsf{OV},\mathsf{Min}\mathsf{-IP}\) can be naturally generalized to higher dimensions as follows.
**Definition 2.4** (\(\mathsf{kOV}_{n,d}\)).: _Given \(k\) sets of vectors_
\[X_{1}=\{x_{11},\ldots,x_{1n}\},\ldots,X_{k}=\{x_{k1},\ldots,x_{kn}\}\]
_such that \(x_{ij}\in\{0,1\}^{d}\) for all \(i,j\), \(\mathsf{kOV}_{n,d}\) asks to determine whether there exists \(1\leq i_{1},\ldots,i_{k}\leq n\) such that_
\[\langle x_{1,i_{1}},x_{2,i_{2}},\ldots,x_{k,i_{k}}\rangle=0.\]
**Definition 2.5** (\(\mathsf{kMin}\mathsf{-IP}_{n,d}\)).: _Given \(k\) sets of vectors_
\[X_{1}=\{x_{11},\ldots,x_{1n}\},\ldots,X_{k}=\{x_{k1},\ldots,x_{kn}\}\]
_such that \(x_{ij}\in\{-W,\ldots,W,\infty\}^{d}\) for all \(i,j\) and a threshold \(r\in\mathbb{Z}\), \(\mathsf{kMin}\mbox{-}\mathsf{IP}_{n,d}\) asks to determine whether there exists \(1\leq i_{1},\ldots,i_{k}\leq n\) such that_
\[\langle x_{1,i_{1}},x_{2,i_{2}},\ldots,x_{k,i_{k}}\rangle\leq r.\]
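Here the generalized inner product of \(k\) vectors is the coordinate-wise one, \(\langle x_{1},\ldots,x_{k}\rangle=\sum_{t=1}^{d}\prod_{\ell=1}^{k}x_{\ell}[t]\), the usual convention for \(\mathsf{kOV}\)-type problems. The following brute-force baseline is a minimal Python sketch added for illustration (the function names are ours); it is the trivial \(O(n^{k}\cdot d)\) algorithm.

```
from itertools import product
from math import prod

def gen_inner(vecs):
    """Generalized inner product: sum over coordinates of the entrywise product."""
    return sum(prod(v[t] for v in vecs) for t in range(len(vecs[0])))

def brute_force_k_min_ip(sets):
    """Brute-force kMin-IP: minimum generalized inner product over all ways of
    picking one vector from each of the k sets, in O(n^k * d) time."""
    return min(gen_inner(choice) for choice in product(*sets))

# toy instance with k = 3, n = 2, d = 3
X1 = [[1, 0, 2], [0, 1, 1]]
X2 = [[1, 1, 0], [2, 0, 1]]
X3 = [[0, 1, 1], [1, 1, 1]]
print(brute_force_k_min_ip([X1, X2, X3]))
```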
Just from the definitions, \(\mathsf{kOV}\) trivially reduces to \(\mathsf{kMin}\mbox{-}\mathsf{IP}\). In addition, it is not hard to show \(\mathsf{kOV}_{n,d}\to(\mathsf{k}-1)\mathsf{OV}_{n,d}\) and \(\mathsf{kMin}\mbox{-}\mathsf{IP}_{n,d}\to(\mathsf{k}-1)\mathsf{Min}\mbox{-} \mathsf{IP}_{n,d}\) for all \(k\geq 3\).
**Lemma 2.6**.: _Suppose there exists an algorithm for \((\mathsf{k}-1)\mathsf{OV}_{n,d}\) that runs in time \(O(n^{k-1-\varepsilon})\) for some \(\varepsilon>0\), then there exists an algorithm for \(\mathsf{kOV}_{n,d}\) that runs in time \(O(n^{k-\delta})\) for some \(\delta>0\)._
Proof.: Given a \(\mathsf{kOV}_{n,d}\) instance with sets \(X_{1},\ldots,X_{k}\), we first compute the coordinate-wise products \(x_{1,i_{1}}\cdot x_{2,i_{2}}\) for all \(1\leq i_{1},i_{2}\leq n\) using time \(O(n^{2}d)\). Now for each \(1\leq i_{1}\leq n\), run the algorithm for \((\mathsf{k}-1)\mathsf{OV}_{n,d}\) on the sets \(x_{1,i_{1}}\cdot X_{2},X_{3},\ldots,X_{k}\), where \(x_{1,i_{1}}\cdot X_{2}\) denotes the set of coordinate-wise products of \(x_{1,i_{1}}\) with the vectors of \(X_{2}\). If none of these calls finds an orthogonal tuple then output no; otherwise output yes.
This algorithm is correct because we have covered all possible \(\langle x_{1,i_{1}},x_{2,i_{2}},\ldots,x_{k,i_{k}}\rangle\). The running time is \(O(n^{2}d)+O(n^{k-1-\varepsilon})\cdot n=O(n^{k-\varepsilon})\).
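The peeling step of this proof is easy to express in code; the following recursive Python sketch (added for illustration, with brute force at the bottom level) follows the reduction literally for \(0/1\) vectors:

```
def has_orthogonal(sets):
    """kOV via the peeling argument of Lemma 2.6.
    A tuple is orthogonal iff the coordinate-wise product vector is all zeros,
    i.e. its generalized inner product is 0 (entries are 0/1)."""
    if len(sets) == 1:
        return any(sum(v) == 0 for v in sets[0])
    first, second, rest = sets[0], sets[1], sets[2:]
    for x in first:
        # fold x into the second set by coordinate-wise products
        folded = [[a * b for a, b in zip(x, y)] for y in second]
        if has_orthogonal([folded] + list(rest)):  # a (k-1)OV instance
            return True
    return False

print(has_orthogonal([[[1, 0], [0, 1]], [[1, 1], [1, 0]], [[0, 1], [1, 1]]]))
```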
**Lemma 2.7**.: _Suppose there exists an algorithm for \((\mathsf{k}-1)\mathsf{Min}\mbox{-}\mathsf{IP}_{n,d}\) that runs in time \(O(n^{k-1-\varepsilon})\) for some \(\varepsilon>0\), then there exists an algorithm for \(\mathsf{kMin}\mbox{-}\mathsf{IP}_{n,d}\) that runs in time \(O(n^{k-\delta})\) for some \(\delta>0\)._
Proof.: The proof is exactly the same as the proof of Lemma 2.6.
### All Pair Shortest Path
All Pair Shortest Path (\(\mathsf{APSP}\)) is a well-known problem in fine-grained complexity and graph theory.
**Definition 2.8** (\(\mathsf{APSP}\)).: _Given an undirected weighted graph \(G\) with nodes \(V=\{v_{1},\ldots,v_{n}\}\), the All Pair Shortest Path (\(\mathsf{APSP}\)) problem asks to determine the distance between \(v_{i}\) and \(v_{j}\) for all \(1\leq i<j\leq n\)._
Currently there is no algorithm for \(\mathsf{APSP}\) that runs in \(O(n^{3-\varepsilon})\) time for any \(\varepsilon>0\), and it is conjectured that no such algorithm exists.
**Conjecture 2.9** (\(\mathsf{APSP}\) Conjecture).: _There is no truly sub-cubic algorithm for \(\mathsf{APSP}\)._
It is known that \(\mathsf{APSP}\) is sub-cubic equivalent to many problems such as min-plus matrix multiplication, negative triangle, etc. [11]. Therefore, assuming the \(\mathsf{APSP}\) conjecture, none of these problems are truly sub-cubic. We will use the hardness of these problems to obtain our hardness results.
**Definition 2.10** (\((\mathsf{min},+)\mathsf{MM}\)).: _Given two \(n\times n\) matrices \(A,B\), compute its min-plus product \(C\) where_
\[C[i,j]=\min_{1\leq k\leq n}\{A[i,k]+B[k,j]\}\]
_for all \(1\leq i,j\leq n\)._
**Definition 2.11** (NegativeTriangle).: _Given an undirected, weighted graph, determines whether there exists a triangle such that its weight (the sum of weights of three sides) is negative._
### Least Weight Subsequence
We formally define \(\mathsf{LWS}\) using the definition provided in [10].
**Definition 2.12** (\(\mathsf{LWS}\)).: _Consider a sequence of \(n\) data items \(x_{1},\ldots,x_{n}\) and a weight matrix \(w\) of size \(n\times n\) where \(w[i,j]\in\{-W,\ldots,W,\infty\}\) for all \(1\leq i<j\leq n\). The \(\mathsf{LWS}\) problem asks to determine \(T[n]\), which is defined by the following DP formulation:_
\[T[j]=\begin{cases}0&\text{if }j=0\\ \min_{0\leq i<j}\Bigl{\{}T[i]+w[i,j]\Bigr{\}}&\text{otherwise.}\end{cases}\]
Given a sequence of \(n\) items, \(\mathsf{LWS}\) computes a subsequence of those items which minimizes the total weight from the items chosen. We assume all the entries of \(w\) can be accessed in \(O(1)\) time. Figure 4 captures the idea of \(\mathsf{LWS}\).
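For concreteness, the recurrence above translates directly into the following quadratic-time procedure (a minimal Python sketch added for illustration; the weight matrix is passed as a callable \(w(i,j)\)):

```
def lws(n, w):
    """O(n^2) DP for LWS: T[j] = min over 0 <= i < j of T[i] + w(i, j), T[0] = 0."""
    INF = float("inf")
    T = [0] + [INF] * n
    for j in range(1, n + 1):
        T[j] = min(T[i] + w(i, j) for i in range(j))
    return T[n]

# toy example: prefer jumps of length close to 2
print(lws(6, lambda i, j: (j - i - 2) ** 2))
```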
[KPS17] also defines a "static" version of \(\mathsf{LWS}\) which is central to their reductions.
**Definition 2.13**.: ([Static]\(\mathsf{LWS}\)) _Fix an instance of \(\mathsf{LWS}\) with matrix \(w\). Given intervals \(I=\{a,a+1,\ldots,a+N-1\},J=\{a+N,a+N+1,\ldots,a+2N-1\}\), together with correctly computed values \(T[i]\) for all \(i\in I\), the \([\mathsf{Static}]\mathsf{LWS}\) problem asks to determine_
\[T^{\prime}[j]=\min_{i\in I}\Bigl{\{}T[i]+w[i,j]\Bigr{\}}\qquad\text{ for all }j\in J.\]
[Static]\(\mathsf{LWS}\) is a parallel, batch version of \(\mathsf{LWS}\) that applies the \(\mathsf{LWS}\) recurrence relation to many values of j at once, rather than to just a single \(j\) value at a time. Indeed, we can compute all \(T^{\prime}[j]\) values where \(j\in J\) in parallel because each \(T^{\prime}[j]\) only depends on the \(T[i]\) values where \(i\in I\), not on any other \(T^{\prime}[j]\) values.
We use the notation \(T^{\prime}\) to highlight that \(T^{\prime}[j]\) may not equal \(T[j]\) since \(T^{\prime}[j]\) is computed with partial information (in \(I\)). Figure 5 captures the idea of [Static]\(\mathsf{LWS}\).
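In code, the batch step is a single independent minimum per \(j\in J\); the following small Python sketch (added for illustration) makes the parallel nature explicit:

```
def static_lws(T, I, J, w):
    """[Static]LWS: given correct values T[i] for i in I, return T'[j] for j in J.
    Each T'[j] reads only values indexed by I, so the loop over J is
    embarrassingly parallel -- this is what the divide-and-conquer reductions exploit."""
    return {j: min(T[i] + w(i, j) for i in I) for j in J}
```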
### Higher Dimension \(\mathsf{LWS}\)
We now define the core problems that this paper discusses again, which is a generalization of \(\mathsf{LWS}\) to higher dimensions.
**Definition 2.14** (\(\mathsf{kD}\) \(\mathsf{LWS}\)).: _Fix a positive integer \(k\). Consider \((k+1)\)-dimensional tensors \(w_{1},\ldots,w_{k}\) such that \(w_{\ell}[i_{1},\ldots,i_{k},j]\in\{-W,\ldots,W,\infty\}\) for all \(1\leq i_{1},\ldots,i_{k},j\leq n,1\leq\ell\leq k\). The \(\mathsf{kD}\)\(\mathsf{LWS}\) problem asks to determine \(T[n,\ldots,n]\) given the dynamic programming recurrence relation:_
\[T\Bigl{[}j_{1},j_{2},\ldots,j_{k}\Bigr{]}=\begin{cases}0&\text{if }j_{1}=j_{2}= \ldots=j_{k}=1\\ \min_{1\leq\ell\leq k}\Bigl{\{}\min_{1\leq i_{\ell}<j_{\ell}}\Bigl{\{}T\Bigl{[} j_{1},\ldots,j_{\ell-1},i_{\ell},j_{\ell+1},\ldots,j_{k}\Bigr{]}+w_{\ell} \Bigl{[}j_{1},j_{2},\ldots,j_{k},i_{\ell}\Bigr{]}\Bigr{\}}\Bigr{\}}\text{ otherwise.}\end{cases}\]
An illustration of 2D \(\mathsf{LWS}\) can be found in Figure 2.
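For the case \(k=2\), the recurrence gives a straightforward \(O(n^{3})\)-time algorithm; the following Python sketch (added for illustration, with the two weight tensors passed as callables) spells it out:

```
def two_d_lws(n, w1, w2):
    """Naive O(n^3) DP for 2D LWS (Definition 2.14 with k = 2), 1-indexed."""
    INF = float("inf")
    T = [[INF] * (n + 1) for _ in range(n + 1)]
    T[1][1] = 0
    for j1 in range(1, n + 1):
        for j2 in range(1, n + 1):
            if j1 == j2 == 1:
                continue
            best = INF
            for i1 in range(1, j1):   # step in the first coordinate
                best = min(best, T[i1][j2] + w1(j1, j2, i1))
            for i2 in range(1, j2):   # step in the second coordinate
                best = min(best, T[j1][i2] + w2(j1, j2, i2))
            T[j1][j2] = best
    return T[n][n]
```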
Like [KPS17], we also define [Static]\(\mathsf{kD}\)\(\mathsf{LWS}\), a generalization of [Static]\(\mathsf{LWS}\) to higher dimensions which is central to our reductions.
**Definition 2.15**.: ([Static]\(\mathsf{kD}\) \(\mathsf{LWS}\)) _Given intervals \(D_{a,a+N},D_{a+N,a+2N}\) together with correctly computed values \(T_{\ell}[i_{1},\ldots,i_{k}]\) for all \(1\leq\ell\leq k\) and \((i_{1},\ldots,i_{k})\in D_{a,a+N}\), [Static]\(\mathsf{kD}\)\(\mathsf{LWS}\) asks to determine_
\[T^{\prime}\Bigl{[}j_{1},\ldots,j_{k}\Bigr{]}=\min_{1\leq\ell\leq k}\left\{\min_ {a-I_{\ell}\leq i_{\ell}<a+N-I_{\ell}}\Bigl{\{}T_{\ell}\Bigl{[}j_{1},\ldots,j_ {\ell-1},i_{\ell},j_{\ell+1},\ldots,j_{k}\Bigr{]}+w_{\ell}\Bigl{[}j_{1},j_{2}, \ldots,j_{k},i_{\ell}\Bigr{]}\Bigr{\}}\right\}\]
Figure 4: \(\mathsf{LWS}\). To compute the value of \(T^{\prime}[j]\) (black circle), we start from \(T[0]\) and go through all possible \(T[i]\) such that \(1\leq i<j\leq n\) and takes the minimum of all possible values (plus their respective weight values \(w\)).
Figure 5: [Static]\(\mathsf{LWS}\). To compute \(T^{\prime}[j]\) (black circle), we take the minimum of all possible white circles from \(T[a]\) to \(T[a+N-1]\) (plus their respective weight values \(w\)).
_for all \((j_{1},j_{2},\ldots,j_{k})\in D_{a+N,a+2N}\)._
An illustration of [Static]2D LWS can be found in Figure 3.
### Tensor Ranks
We give definitions of rank and slice rank for tensors. For notational convenience, we say that a problem has rank \(d\) if its associated array/tensor has rank \(d\).
**Definition 2.16** (Rank).: _We say that a \(k\)-dimensional array \(w\) has rank \(d\) if there exist \(k\) sets_
\[X_{1}=\{x_{11},\ldots,x_{1n}\},\ldots,X_{k}=\{x_{k1},\ldots,x_{kn}\}\]
_of vectors with length \(d\) such that \(w[i_{1},\ldots,i_{k}]=\langle x_{1,i_{1}},x_{2,i_{2}},\ldots,x_{k,i_{k}}\rangle\) for all \(1\leq i_{1},\ldots,i_{k}\leq n\)._
**Definition 2.17** (Slice Rank).: _A \(k\)-dimensional (order-\(k\)) tensor \(w\in\mathbb{R}^{n_{1}\times n_{2}\times\cdots\times n_{k}}\) has slice rank \(1\) if there is a \(j\in[k]\), a vector \(a\in\mathbb{R}^{n_{j}}\), and a \((k-1)\)-dimensional tensor \(b\in\mathbb{R}^{n_{1}\times\cdots\times n_{j-1}\times n_{j+1}\times\cdots\times n_{k}}\) such that, for all \(i_{1}\in[n_{1}],\ldots,i_{k}\in[n_{k}]\) we have_
\[w[i_{1},\ldots,i_{k}]=a[i_{j}]\cdot b[i_{1},\ldots,i_{j-1},i_{j+1},\ldots,i_{k}].\]
_More generally, the slice rank of tensor \(w\) is the minimum non-negative integer \(r\) such that there are slice rank \(1\) tensors \(w_{1},\ldots,w_{r}\) for which \(w=w_{1}+\cdots+w_{r}\)._
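As a quick illustration of how the two notions compare: a tensor of rank \(1\), say \(w[i,j,k]=x_{i}\cdot y_{j}\cdot z_{k}\), has slice rank at most \(1\), since two of the factors can be grouped together,

\[w[i,j,k]=\underbrace{x_{i}}_{a[i]}\cdot\underbrace{(y_{j}\cdot z_{k})}_{b[j,k]}.\]

The converse fails in general: in a slice rank \(1\) tensor \(a[i]\cdot b[j,k]\), the factor \(b[j,k]\) need not factor any further, so a slice rank \(1\) tensor can have large rank. This is one reason the rank and slice rank settings are treated separately below.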
### Polygon Triangulation
In computational geometry, a polygon triangulation is a partition of a polygon \(P\) into triangles. It is known that the number of partitions of a convex polygon is counted by the Catalan numbers [14]. We discuss the problem of finding the triangulation that minimizes the sum of weights of all triangles.
**Definition 2.18** (Polygon Triangulation).: _Let \(P(n)\) denote an \(n\)-sided convex polygon and fix an ordering of the polygon vertices. A triangulation of \(P(n)\) is a partition of \(P(n)\) into disjoint triangles using straight, internal edges between pair of nodes of \(P(n)\). For each triangle \((v_{i},v_{j},v_{k})\), let \(w(i,j,k)\) be its weight, and the weight of a partition is the sum of all weights of its triangles. The polygon triangulation problem asks to determine the minimal weight of all partitions._
For an example, consider the polygon triangulation of a polygon \(P\) with \(10\) sides (see Figure 6). Starting from the side \((1,10)\), we choose a node \(6\) and partition \(P\) into \(3\) parts \(P(1,6)\), triangle \((1,6,10)\) and \(P(6,10)\), denoted by the black dashed lines. We further partition \(P(i,j)\) by choosing a node \(i<k<j\) and partitioning it into \(P(i,k)\), triangle \((i,j,k)\) and \(P(k,j)\).
The polygon triangulation problem can be solved via dynamic programming, motivating our 2D LWS\({}^{\sf PT}\) problem.
Figure 6: An example polygon triangulation problem.
**Definition 2.19** (2D \(\mathsf{LWS}^{\mathsf{PT}}\)).: _Fix a tensor \(w\). The 2D \(\mathsf{LWS}^{\mathsf{PT}}\) problem asks to compute the value of \(T[n,n]\) given the dynamic programming recurrence relation:_
\[T[i,j]=\begin{cases}0&\text{if }i+j\leq n+2\\ \min_{0<k<i+j-n}\left\{T[n-j+k,j]+T[i,j-k]+w[i,j,k]\right\}&\text{ otherwise.}\end{cases}\]
_Under a change of variables/coordinates, this problem is equivalent to computing the value of \(T[1,n]\) given the dynamic programming recurrence relation:_
\[T[i,j]=\begin{cases}0&\text{if }j-i\leq 1\\ \min_{i<k<j}\left\{T[i,k]+T[k,j]+w[i,j,k]\right\}&\text{otherwise.}\end{cases}\]
It is not hard to see that polygon triangulation and 2D \(\mathsf{LWS}^{\mathsf{PT}}\) are the same problem: let \(T[i,j]\) denote the weight of the sub-polygon containing nodes \(i\) to \(j\) and \(w[i,j,k]\) be the weight of triangle \((v_{i},v_{j},v_{k})\). More generally, any problem which splits an interval \([i,j]\) at some point \(k\) where \(k\) is between \(i\) and \(j\) can be understood as a 2D \(\mathsf{LWS}^{\mathsf{PT}}\) instance. Figure 7 captures the idea of 2D \(\mathsf{LWS}^{\mathsf{PT}}\).
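In the interval form, this is the familiar cubic-time interval DP; the following Python sketch (added for illustration, with the weight passed as a callable \(w(i,j,k)\)) spells it out:

```
def two_d_lws_pt(n, w):
    """O(n^3) DP for 2D LWS^PT in interval form: T[i][j] = 0 when j - i <= 1,
    otherwise min over i < k < j of T[i][k] + T[k][j] + w(i, j, k).
    For polygon triangulation, w(i, j, k) is the weight of triangle (v_i, v_k, v_j)."""
    T = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n):                 # length = j - i
        for i in range(1, n - length + 1):
            j = i + length
            T[i][j] = min(T[i][k] + T[k][j] + w(i, j, k) for k in range(i + 1, j))
    return T[1][n]
```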
## 3 \(k\)-dimensional Least Weight Subsequence (\(\mathsf{kD}\) \(\mathsf{LWS}\))
In this section, we discuss \(\mathsf{kD}\) \(\mathsf{LWS}\) with rank and slice rank respectively. For \(\mathsf{kD}\) \(\mathsf{LWS}\) with rank \(d\), we prove the reductions in the following diagram.
Figure 7: 2D \(\mathsf{LWS}^{\mathsf{PT}}\). To compute \(T[i,j]\) (black circle), we are taking the minimum over the sum of all possible **pairs** of white circles (plus their respective weight values \(w\)). The solution to 2D \(\mathsf{LWS}^{\mathsf{PT}}\) is found at \(T[1,n]\) in the left figure and at \(T[n,n]\) in the right figure due to a change of variables.
All our reductions preserve the rank of the problem, so this diagram shows that there exist truly sub-cubic algorithms for \(\mathsf{kD}\mathsf{LWS}\) and \([\mathsf{Static}]\mathsf{kD}\mathsf{LWS}\) when the rank is constant (because \(\mathsf{Min}\)-\(\mathsf{IP}\) does [22]). In addition, we show that \(\mathsf{kMin}\)-\(\mathsf{IP}\) reduces to \(\mathsf{kD}\mathsf{LWS}\) and \([\mathsf{Static}]\mathsf{kD}\mathsf{LWS}\), so their hardness is established. We delay our proof of SAT reducing to \(\mathsf{kMin}\)-\(\mathsf{IP}\) to Appendix A because it mimics the proof of SAT reducing to \(\mathsf{Min}\)-\(\mathsf{IP}\) in [10].
In addition, we show that \(\mathsf{2D}\mathsf{LWS},[\mathsf{Static}]\mathsf{2D}\mathsf{LWS}\) with slice rank \(3\) or above is \(\mathsf{APSP}\)-hard, and give truly sub-cubic algorithms for \(\mathsf{2D}\mathsf{LWS},[\mathsf{Static}]\mathsf{2D}\mathsf{LWS}\) with slice rank \(1\).
### Rank \(d\) [Static]\(\mathsf{kD}\mathsf{LWS}\) Hierarchy
In this section we establish a hierarchy for \(\mathsf{kD}\mathsf{LWS}\) and \([\mathsf{Static}]\mathsf{kD}\mathsf{LWS}\) with rank \(d\).
**Notations**:
* \(I_{j}=(\sum i_{\ell})-i_{j}\), \(I_{j,t}=(\sum i_{\ell})-i_{j}-i_{t}\).
* \(I^{\prime}_{j}=(\sum i^{\prime}_{\ell})-i^{\prime}_{j}\), \(I^{\prime}_{j,t}=(\sum i^{\prime}_{\ell})-i^{\prime}_{j}-i^{\prime}_{t}\).
* \(D_{a,b}\) is the set of all \((i_{1},\ldots,i_{k})\) such that \(a\leq i_{1}+\ldots+i_{k}<b\).
* \(D_{a}=D_{a,a+1}\) is the set of all \((i_{1},\ldots,i_{k})\) such that \(a=i_{1}+\ldots+i_{k}\).
**Theorem 3.1**.: \((\mathsf{kMin}\)-\(\mathsf{IP}\to(\mathsf{k}-1)\mathsf{D}\mathsf{LWS})\) _Suppose there exists an algorithm for \((\mathsf{k}-1)\mathsf{D}\mathsf{LWS}\) with rank \(d\) with running time \(O(n^{k-\varepsilon})\) for some \(\varepsilon>0\), then there exists an algorithm for \(\mathsf{kMin}\)-\(\mathsf{IP}\) with rank \(d\) with running time \(O(n^{k-\delta})\) for some \(\delta>0\)._
Proof.: Given an \(\mathsf{kMin}\)-\(\mathsf{IP}\) instance with
\[X_{1}=\{x_{11},\ldots,x_{1n}\},\ldots,X_{k}=\{x_{k1},\ldots,x_{kn}\},\]
such that \(x_{ij}\in\{-W,\ldots,W,\infty\}^{d}\) for all \(1\leq i\leq k,1\leq j\leq n\), we define \(k\) sets of vectors
\[Y_{1}=\{y_{11},\ldots,y_{1n}\},\ldots,Y_{k-1}=\{y_{k-1,1},\ldots,y_{k-1,n}\}, Y_{k}=\{y_{k1},\ldots,y_{kn}\}\]
as follows: for all \(1\leq\ell\leq k-1\),
\[y_{\ell,j}=\begin{cases}0^{d}&\text{if $1\leq j\leq(k-1)n$}\\ x_{\ell,j\bmod n}&\text{if $(k-1)n+1\leq j\leq kn$}.\end{cases}\]
In addition, let
\[y_{kj}=\begin{cases}x_{k,j\bmod n}&\text{if $1\leq j\leq(k-1)n$}\\ 0^{d}&\text{if $(k-1)n+1\leq j\leq kn$}.\end{cases}\]
We claim that running \((k-1)\mathsf{D}\mathsf{LWS}_{n,d}\) algorithm with
\[w_{\ell}[i_{1},\ldots,i_{k}]=\langle y_{1,i_{1}},\ldots,y_{k,i_{k}}\rangle\]
for all \(\ell\) will give us \(T[kn,\ldots,kn]=\min\langle x_{1,i_{1}},\ldots,x_{k,i_{k}}\rangle\). First notice that by our construction, when \((i_{1},\ldots,i_{k-1})\notin[(k-1)n+1,kn]^{k-1}\), we have \(w_{\ell}[i_{1},\ldots,i_{k}]=0\). Therefore, \(T[i_{1},\ldots,i_{k}]=0\) for all \((i_{1},\ldots,i_{k})\) such that \((i_{1},\ldots,i_{k-1})\notin[(k-1)n+1,kn]^{k-1}\).
Now we use induction to show that
\[T\Big{[}(k-1)n+j_{1},\ldots,(k-1)n+j_{k-1}\Big{]}=\min\Big{\langle}x_{1,i_{1}},\ldots,x_{k,i_{k}}\Big{\rangle}\]
where the minimum is taken over all \(1\leq i_{1}\leq j_{1},\ldots,1\leq i_{k-1}\leq j_{k-1}\) and \(1\leq i_{k}\leq n\). This would suffice because if \(j_{\ell}=n\) for all \(1\leq\ell\leq k-1\), we would get the minimal inner product. The base case is when \(j_{\ell}=1\) for all \(1\leq\ell\leq k-1\). Then we have
\[T\Big{[}(k-1)n+1,\ldots,(k-1)n+1\Big{]}\] \[=\min_{1\leq\ell\leq k-1}\Bigg{\{}\min_{1\leq i_{\ell}<(k-1)n+1} \Big{\{}T\Big{[}(k-1)n+1,\ldots,(k-1)n+1,i_{\ell},(k-1)n+1,\ldots,(k-1)n+1\Big{]} +\] \[\qquad w_{\ell}\Big{[}(k-1)n+1,\ldots,(k-1)n+1,i_{\ell}\Big{]} \Big{\}}\Bigg{\}}\] \[=\min_{1\leq\ell\leq k-1}\Bigg{\{}\min_{1\leq i_{\ell}<(k-1)n+1} \Big{\{}w_{\ell}\Big{[}(k-1)n+1,\ldots,(k-1)n+1,i_{\ell}\Big{]}\Big{\}} \Bigg{\}}\] \[=\min_{1\leq\ell\leq k}\Big{\langle}x_{11},x_{21},\ldots,x_{k-1, 1},x_{k,\ell}\Big{\rangle}.\]
For the induction step, we have
\[T\Big{[}(k-1)n+j_{1},\ldots,(k-1)n+j_{k-1}\Big{]}\] \[=\min_{1\leq\ell\leq k-1}\Bigg{\{}\min_{1\leq i_{\ell}<(k-1)n+j_{ \ell}}\Big{\{}T\Big{[}(k-1)n+j_{1},\ldots,(k-1)n+j_{\ell-1},(k-1)n+i_{\ell},(k -1)n+j_{\ell+1},\ldots,(k-1)n+j_{k-1}\Big{]}+\] \[\qquad w_{\ell}\Big{[}(k-1)n+j_{1},\ldots,(k-1)n+j_{k-1},i_{\ell} \Big{]}\Big{\}}\Bigg{\}}\] \[=\min_{1\leq\ell\leq k-1}\Bigg{\{}\min_{1\leq i_{\ell}<(k-1)n+1} \Big{\{}w_{\ell}\Big{[}(k-1)n+j_{1},\ldots,(k-1)n+j_{k-1},i_{\ell}\Big{]} \Big{\}},\] \[\qquad\min_{(k-1)n+1\leq i_{\ell}<(k-1)n+j_{\ell}}\Big{\{}T\Big{[} (k-1)n+j_{1},\ldots,(k-1)n+j_{\ell-1},(k-1)n+i_{\ell},(k-1)n+j_{\ell+1},\ldots,(k-1)n+j_{k-1}\Big{]}\Big{\}}\Bigg{\}}\Bigg{\}}\Bigg{\}}\] \[=\min_{1\leq\ell\leq k-1}\Bigg{\{}\min_{1\leq\ell\leq n}\Big{\langle} x_{1,j_{1}},\ldots,x_{k-1,j_{k-1}},x_{k,\ell}\Big{\rangle},\] \[\qquad\min_{(k-1)n+1\leq i_{\ell}<(k-1)n+j_{\ell}}\Big{\{}T\Big{[} (k-1)n+j_{1},\ldots,(k-1)n+j_{\ell-1},(k-1)n+i_{\ell},(k-1)n+j_{\ell+1},\ldots,(k-1)n+j_{k-1}\Big{]}\Big{\}}\Bigg{\}}.\]
By induction hypothesis,
\[\min_{(k-1)n+1\leq i_{\ell}<(k-1)n+j_{\ell}}\Big{\{}T\Big{[}(k-1)n+j_{1}, \ldots,(k-1)n+j_{\ell-1},(k-1)n+i_{\ell},(k-1)n+j_{\ell+1},\ldots,(k-1)n+j_{k-1 }\Big{]}\Big{\}}\]
is the minimum of \(\langle x_{1,i_{1}},\ldots,x_{k,i_{k}}\rangle\) over
\[(i_{1},\ldots,i_{k})\in[1,j_{1}]\times\ldots[1,j_{\ell-1}]\times[1,j_{\ell}-1] \times[1,j_{\ell+1}]\times\ldots\times[1,j_{k-1}]\times[1,n].\]
Thus we are taking the minimum over all \((i_{1},\ldots,i_{k})\) in
\[\{j_{1}\}\times\ldots\times\{j_{k}\}\times[1,n]\bigcup_{1\leq\ell\leq k-1} \Big{(}[1,j_{1}]\times\ldots[1,j_{\ell-1}]\times[1,j_{\ell}-1]\times[1,j_{\ell+ 1}]\times\ldots\times[1,j_{k-1}]\times[1,n]\Big{)}=[1,j_{1}]\times\ldots\times[1, j_{k-1}]\times[1,n],\]
which concludes the induction.
**Theorem 3.2**.: (\(\mathsf{kD}\mathsf{LWS}\to[\mathsf{Static}]\mathsf{kD}\mathsf{LWS}\)) _Suppose there exists an algorithm for \([\mathsf{Static}]\mathsf{kD}\mathsf{LWS}_{n,N,d}\) with running time \(O(N^{2-\varepsilon}\cdot n^{k-1})\) for some \(\varepsilon>0\), then there exists an algorithm for \(\mathsf{kD}\mathsf{LWS}_{n,d}\) with running time \(O(n^{k+1-\delta})\) for some \(\delta>0\)._
Proof.: Given an \(\mathsf{kD}\mathsf{LWS}\) instance, we define a subproblem
\[\mathcal{S}\Big{(}D_{\alpha,\beta},\Big{\{}t\Big{[}j_{1},\ldots,j_{k}\Big{]}:( j_{1},\ldots,j_{k})\in D_{\alpha,\beta}\Big{\}}\Big{)}\]
as follows: Given \(D_{\alpha,\beta}\) and \(t[j_{1},\ldots,j_{k}]\) for all \((j_{1},\ldots,j_{k})\in D_{\alpha,\beta}\) where
\[t\Big{[}j_{1},\ldots,j_{k}\Big{]}=\min_{1\leq\ell\leq k}\Big{\{}\min_{1\leq i_ {\ell}<\alpha-J_{\ell}}\Big{\{}T\Big{[}j_{1},\ldots,j_{\ell-1},i_{\ell},j_{ \ell+1},\ldots,j_{k}\Big{]}+w_{\ell}\Big{[}j_{1},\ldots,j_{k},i_{\ell}\Big{]} \Big{\}},\infty\Big{\}},\]
computes
\[T\Big{[}j_{1},\ldots,j_{k}\Big{]}=\min_{1\leq\ell\leq k}\Big{\{}\min_{1\leq i_ {\ell}<j_{\ell}}\Big{\{}T\Big{[}j_{1},\ldots,j_{\ell-1},i_{\ell},j_{\ell+1}, \ldots,j_{k}\Big{]}+w_{\ell}\Big{[}j_{1},\ldots,j_{k},i_{\ell}\Big{]}\Big{\}} \Big{\}}\Big{\}}\]
for all \((j_{1},\ldots,j_{k})\in D_{\alpha,\beta}\). Notice that a call
\[\mathcal{S}\Big{(}D_{k,kn},\Big{\{}t\Big{[}j_{1},\ldots,j_{k}\Big{]}:(j_{1}, \ldots,j_{k})\in D_{k,kn}\Big{\}}\Big{)}\]
with
\[t\Big{[}j_{1},\ldots,j_{k}\Big{]}=\begin{cases}w_{\ell}\Big{[}j_{1},\ldots,j_{ k},1\Big{]}&\text{ if $j_{\ell}=1$ for some $\ell$}\\ \infty&\text{otherwise}\end{cases}\]
will solve the instance because only the entries that have a coordinate equal to \(1\) are assigned a finite initial value.
Now we solve \(\mathcal{S}\) using Algorithm 1 below.
To see that the algorithm is correct, we use induction on \(\beta-\alpha\). When \(\alpha=\beta\) we want to compute \(T[j_{1},\ldots,j_{k}]\) for all \((j_{1},\ldots,j_{k})\in D_{\alpha}\), but by definition \(t[j_{1},\ldots,j_{k}]=T[j_{1},\ldots,j_{k}]\) so we are done.
Now line 4 of the algorithm correctly outputs \(\Big{\{}T\Big{[}j_{1},\ldots,j_{k}\Big{]}:(j_{1},\ldots,j_{k})\in D_{\alpha, \alpha+m-1}\Big{\}}\) because we input the correct \(t\Big{[}j_{1},\ldots,j_{k}\Big{]}\) and by induction hypothesis. In line 6 we compute for all \((j_{1},\ldots,j_{k})\in D_{\alpha+m,\beta-1}\):
\[t^{\prime}\Big{[}j_{1},\ldots,j_{k}\Big{]} =\min\Big{\{}t\Big{[}j_{1},\ldots,j_{k}\Big{]},T^{\prime}\Big{[}j _{1},\ldots,j_{k}\Big{]}\Big{\}}\] \[=\min\Bigg{\{}\min_{1\leq\ell\leq k}\Big{\{}\min_{1\leq i_{\ell}< \alpha-J_{\ell}}\Big{\{}T\Big{[}j_{1},\ldots,j_{\ell-1},i_{\ell},j_{\ell+1}, \ldots,j_{k}\Big{]}+w\Big{[}j_{1},\ldots,j_{k},i_{\ell}\Big{]}\Big{\}}\Big{\}},\] \[\qquad\qquad\min_{1\leq\ell\leq k}\Big{\{}\min_{\alpha-J_{\ell}\leq i _{\ell}<\alpha+m-J_{\ell}}\Big{\{}T\Big{[}j_{1},\ldots,j_{\ell-1},i_{\ell},j_{ \ell+1},\ldots,j_{k}\Big{]}+w\Big{[}j_{1},\ldots,j_{k},i_{\ell}\Big{]}\Big{\}} \Big{\}}\Bigg{\}}\] \[=\min_{1\leq\ell\leq k}\Big{\{}\min_{1\leq i_{\ell}<\alpha+m-J_{ \ell}}\Big{\{}T\Big{[}j_{1},\ldots,j_{\ell-1},i_{\ell},j_{\ell+1},\ldots,j_{k} \Big{]}+w\Big{[}j_{1},\ldots,j_{k},i_{\ell}\Big{]}\Big{\}}\Big{\}}.\]
Therefore, these are the correct values for all \((j_{1},\ldots,j_{k})\in D_{\alpha+m,\beta-1}\) for applying \(\mathcal{S}\) again in line 7, where we correctly output \(\Big{\{}T\Big{[}j_{1},\ldots,j_{k}\Big{]}:(j_{1},\ldots,j_{k})\in D_{\alpha+m, \beta-1}\Big{\}}\). Finally, if \(\beta-\alpha=2m\), our algorithm computes
```
1: if \(\alpha=\beta\) then
2:     Return \(T[j_{1},j_{2},\ldots,j_{k}]=t[j_{1},j_{2},\ldots,j_{k}]\) for all \((j_{1},j_{2},\ldots,j_{k})\in D_{\alpha}\)
3: \(m\leftarrow\lceil\frac{\beta-\alpha}{2}\rceil\)
4: \(\{T[j_{1},\ldots,j_{k}]:(j_{1},\ldots,j_{k})\in D_{\alpha,\alpha+m}\}\leftarrow\mathcal{S}(D_{\alpha,\alpha+m},\{t[j_{1},\ldots,j_{k}]:(j_{1},\ldots,j_{k})\in D_{\alpha,\alpha+m}\})\)
5: Solve a [Static]kD LWS instance on \(D_{\alpha,\alpha+m},D_{\alpha+m,\beta}\) with the correctly computed values \(\{T[j_{1},\ldots,j_{k}]:(j_{1},\ldots,j_{k})\in D_{\alpha,\alpha+m}\}\) and output \(\{T^{\prime}[j_{1},\ldots,j_{k}]:(j_{1},\ldots,j_{k})\in D_{\alpha+m,\beta}\}\)
6: Let \(t^{\prime}[j_{1},\ldots,j_{k}]=\min\{t[j_{1},\ldots,j_{k}],T^{\prime}[j_{1},\ldots,j_{k}]\}\) for all \((j_{1},\ldots,j_{k})\in D_{\alpha+m,\beta}\)
7: \(\{T[j_{1},\ldots,j_{k}]:(j_{1},\ldots,j_{k})\in D_{\alpha+m,\beta}\}\leftarrow\mathcal{S}(D_{\alpha+m,\beta},\{t^{\prime}[j_{1},\ldots,j_{k}]:(j_{1},\ldots,j_{k})\in D_{\alpha+m,\beta}\})\)
8: if \(\beta-\alpha=2m\) then
9:     \(T[j_{1},\ldots,j_{k}]=\min\{t[j_{1},\ldots,j_{k}],\min_{1\leq\ell\leq k}\min_{\alpha-J_{\ell}\leq i_{\ell}<\beta-J_{\ell}}\{T[j_{1},\ldots,j_{\ell-1},i_{\ell},j_{\ell+1},\ldots,j_{k}]+w[j_{1},\ldots,j_{k},i_{\ell}]\}\}\) for all \((j_{1},\ldots,j_{k})\in D_{\beta}\)
10: Return \(\{T[j_{1},\ldots,j_{k}]:(j_{1},\ldots,j_{k})\in D_{\alpha,\beta}\}\)
```
**Algorithm 1** Solving \(\mathcal{S}\) using [Static]kD LWS
the minimum over all \(i_{\ell}\) such that \((j_{1},\ldots,j_{\ell-1},i_{\ell},j_{\ell+1},\ldots,j_{k})\in D_{1,\beta-1}\), so we are outputting the correct values.
The runtime of our algorithm for \(\mathcal{S}\), when \(\beta-\alpha=N\), can be expressed as
\[T_{\mathcal{S}}(n,N)\leq 2T_{\mathcal{S}}\Big{(}n,\frac{N}{2}\Big{)}+T_{\sf Static }\Big{(}n,\frac{N}{2}\Big{)}+O(Nn^{k-1})\]
because we have \(2\) recursive calls with size \(\frac{N}{2}\), run a \([\sf Static]\) algorithm with length \(\frac{N}{2}\), and compute \(t^{\prime}[i_{1},\ldots,i_{k}]\) for \(O(Nn^{k-1})\) values. By assumption we have \(T_{\sf Static}\Big{(}n,\frac{N}{2}\Big{)}\leq O(N^{2-\varepsilon}n^{k-1})\), so this recursive formula becomes
\[T_{\mathcal{S}}(n,N)\leq 2T_{\mathcal{S}}\Big{(}n,\frac{N}{2}\Big{)}+O(N^{2- \varepsilon}n^{k-1}).\]
Solving it gives \(T_{\mathcal{S}}(n,N)\leq O(N^{2-\varepsilon}\cdot n^{k-1}\cdot\log N)\leq O(N ^{2-\delta}n^{k-1})\) for some \(\delta>0\). Therefore,
\[T_{\mathcal{S}}(n,n)\leq O(n^{k+1-\delta}).\]
Since \(\sf SAT\) reduces to \(\sf kMin\sf lp_{n,2^{O(\log^{*}n)}}\) (Appendix A), we immediately get hardness results of \(\sf kD\)LWS and \([\sf Static]\)kD LWS from \(\sf SETH\):
**Corollary 3.3**.: _Assuming \(\sf SETH\), there is no \(O(n^{k+1-\varepsilon})\) time algorithm for \(\sf kD\)LWS or \([\sf Static]\)kD LWS with rank at least \(2^{O(\log^{*}n)}\) for any \(\varepsilon>0\)._
Finally, we show that just like \(\sf kMin\sf lp\), \([\sf Static]\)kD LWS also exhibits a hierarchy.
**Theorem 3.4**.: \(([\sf Static]\)kD LWS\(\to[\sf Static](k-1)\)D LWS) _Suppose there exists an algorithm for \([\sf Static](k-1)\)D LWS\({}_{n,N,d}\) with running time \(O(N^{2-\varepsilon}\cdot n^{k-2})\) for some \(\varepsilon>0\), then there exists an algorithm for \([\sf Static]\)kD LWS\({}_{n,N,d}\) with running time \(O(N^{2-\delta}\cdot n^{k-1})\) for some \(\delta>0\)._
Proof.: Given a \([\sf Static]\)kD LWS\({}_{n,d}\) instance with \(D_{a,a+N},D_{a+N,a+2N}\) together with correctly computed values \(T[i_{1},\ldots,i_{k}]\) for all \((i_{1},\ldots,i_{k})\in D_{a,a+N}\), we want to compute
\[T\Big{[}j_{1},\ldots,j_{k}\Big{]}=\min_{1\leq\ell\leq k}\bigg{\{}\min_{a-J_{ \ell}\leq i_{\ell}<a+N-J_{\ell}}\Big{\{}T\Big{[}j_{1},\ldots,j_{\ell-1},i_{ \ell},j_{\ell+1},\ldots,j_{k}\Big{]}+w_{\ell}\Big{[}j_{1},\ldots,j_{k},i_{ \ell}\Big{]}\Big{\}}\bigg{\}}\]
for all \((j_{1},j_{2},\ldots,j_{k})\in D_{a+N,a+2N}\). Fix some \(n-a-N\leq j\leq n\). For any \((j,j_{2},\ldots,j_{k})\in D_{a+N,a+2N}\) we have
\[T\Big{[}j,j_{2},\ldots,j_{k}\Big{]} =\min_{1\leq\ell\leq k}\bigg{\{}\min_{a-J_{\ell}\leq i_{\ell}<a+N -J_{\ell}}\Big{\{}T\Big{[}j,j_{2},\ldots,j_{\ell-1},i_{\ell},j_{\ell+1}, \ldots,j_{k}\Big{]}+w_{\ell}\Big{[}j,j_{2},\ldots,j_{k},i_{\ell}\Big{]}\Big{\}} \bigg{\}}\] \[=\min\Bigg{\{}\min_{2\leq\ell\leq k}\Big{\{}\min_{1\leq i_{\ell}<j _{\ell}}T\Big{[}j,j_{2},\ldots,j_{\ell-1},i_{\ell-1},j_{\ell+1},\ldots,j_{k} \Big{]}+w_{\ell}\Big{[}j,j_{2},\ldots,j_{\ell-1},i_{\ell},j_{\ell+1},\ldots,j_{ k}\Big{]}\Big{\}},\] \[\min_{1\leq i_{1}<j_{1}}\Big{\{}T\Big{[}i_{1},j_{2},\ldots,j_{k} \Big{]}+w_{1}\Big{[}i_{1},j_{2},\ldots,j_{k}\Big{]}\Big{\}}\bigg{\}}\]
We can compute the first term in the minimum using a \([\sf Static](k-1)\)D LWS algorithm with time \(O(N^{2-\varepsilon}\cdot n^{k-2})\) and the second term using a LWS algorithm with time at most \(O(N^{2}\cdot n)\). This is because after we fix \(j\), \(w[j,\ldots]\) still has rank \(d\) but one less dimension. Repeat this process for all the \(j\) on all \(k\) coordinates to solve \([\sf Static]\)kD LWS, and the total running time is at most
\[kn\cdot\Big{(}O(N^{2}\cdot n)+O(N^{2-\varepsilon}\cdot n^{k-2})\Big{)}=O(N^{2- \delta}\cdot n^{k-1})\]
for some \(\delta>0\).
### Slice Rank 2D LWS
In this section, we show that 2D LWS and [Static]2D LWS are \(\mathsf{APSP}\)-hard even with slice rank 3, but 2D LWS and [Static]2D LWS with slice rank 1 are truly sub-cubic.
**Theorem 3.5**.: _Assuming the \(\mathsf{APSP}\) conjecture, there is no truly sub-cubic algorithm for 2D LWS with slice rank \(3\)._
Proof.: We reduce NegativeTriangle to 2D LWS with slice rank 3. Given an undirected graph \(G=(V,E)\) where \(V=\{v_{1},\ldots,v_{n}\}\), we use \(w\) to denote the weight function of an edge or a triangle. For convenience we let \(w(v_{a},v_{a})=\infty\). We define both our tensors to be
\[\alpha[i,j,k]=f_{1}(i,k)\cdot g_{1}(j)+f_{2}(i,j)\cdot g_{2}(k)+f_{3}(k,j)\cdot g _{3}(i),\]
where
\[f_{1}(i,k)=\begin{cases}w(v_{i-n},v_{k})\text{ if }i\in[n+1,2n],k\in[1,n] \\ 0\text{ otherwise}\end{cases},g_{1}(j)=\begin{cases}1\text{ if }j\in[n+1,2n]\\ 0\text{ otherwise}\end{cases}\]
\[f_{2}(i,j)=\begin{cases}w(v_{i-n},v_{j-n})\text{ if }i,j\in[n+1,2n]\\ 0\text{ otherwise}\end{cases},g_{2}(k)=\begin{cases}1\text{ if }k\in[1,n]\\ 0\text{ otherwise}\end{cases}\]
\[f_{3}(k,j)=\begin{cases}w(v_{k},v_{j-n})\text{ if }k\in[1,n],j\in[n+1,2n]\\ 0\text{ otherwise}\end{cases},g_{3}(i)=\begin{cases}1\text{ if }i\in[n+1,2n]\\ 0\text{ otherwise.}\end{cases}\]
We claim that running 2D LWS with \(\alpha\) will solve the NegativeTriangle instance by \(T[2n,2n]\) being the minimum weight of all triangles. In fact, we prove that for all \(n+1\leq i,j\leq 2n\), \(T[i,j]\) is the minimum weight of all triangles \(v_{a},v_{b},v_{c}\) such that \(1\leq a\leq i-n,1\leq b\leq j-n,1\leq c\leq n\).
Observe that:
* When \(i,j,k\in[1,n]\), we have \(\alpha[i,j,k]=0\). Therefore, \(T[i,j]=0\) for all \(1\leq i,j\leq n\).
* When \(i\in[n]\) and \(j\in[n+1,2n]\), \(\alpha[i,j,k]=0\). Therefore, \(T[i,j]=0\) when \(i\in[n],j\in[n+1,2n]\).
* When \(j\in[n]\) and \(i\in[n+1,2n]\), \(\alpha[i,j,k]=0\). Therefore, \(T[i,j]=0\) when \(j\in[n],i\in[n+1,2n]\).
* When \(i,j\in[n+1,2n]\), \(\alpha[i,j,k]=w(v_{i-n},v_{j-n},v_{k})\) if \(k\in[n]\) and \(\alpha[i,j,k]=0\) if \(k\in[n+1,2n]\).
Finally, we use induction on \((i,j)\) to prove the claim. When \(i=j=n+1\), we have
\[T[n+1,n+1]=\min_{1\leq k\leq n}\Big{\{}\alpha[n+1,n+1,k]\Big{\}}=\min_{1\leq k\leq n}\Big{\{}w(v_{1},v_{1},v_{k})\Big{\}}=\infty.\]
When \(i=n+2,j=n+1\),
\[T[n+2,n+1]=\min_{1\leq k\leq n}\Big{\{}\alpha[n+2,n+1,k]\Big{\}}=\min_{1\leq k \leq n}\Big{\{}w(v_{2},v_{1},v_{k})\Big{\}}.\]
Similarly, when \(i=n+1,j=n+2\),
\[T[n+1,n+2]=\min_{1\leq k\leq n}\{w(v_{1},v_{2},v_{k})\}.\]
Now for general \((i,j)\in[n+1,2n]^{2}\), we have
\[T[i,j] =\min\Big{\{}\min_{1\leq k<i}\Big{\{}T[k,j]+\alpha[i,j,k]\Big{\}},\min_{1\leq k<j}\Big{\{}T[i,k]+\alpha[i,j,k]\Big{\}}\Big{\}}\] \[=\min\Big{\{}\min_{1\leq k\leq n}\Big{\{}T[k,j]+\alpha[i,j,k]\Big{\}},\min_{n+1\leq k<i}\Big{\{}T[k,j]+\alpha[i,j,k]\Big{\}},\min_{1\leq k\leq n} \Big{\{}T[i,k]+\alpha[i,j,k]\Big{\}},\] \[\min_{n+1\leq k<j}\Big{\{}T[i,k]+\alpha[i,j,k]\Big{\}}\Big{\}}\] \[=\min\Big{\{}\min_{1\leq k\leq n}\Big{\{}w(v_{i-n},v_{j-n},v_{k}) \Big{\}},\min_{n+1\leq k<i}\Big{\{}T[k,j]\Big{\}},\min_{n+1\leq k<j}\Big{\{}T[ i,k]\Big{\}}\Big{\}}.\]
By induction hypothesis we know we are taking the minimum over all triangles \((v_{a},v_{b},v_{c})\) such that
\[(a,b,c) \in[1,i-n-1]\times[1,j-n]\times[n]\bigcup[1,i-n]\times[1,j-n-1] \times[n]\bigcup\{i-n\}\times\{j-n\}\times[n]\] \[=[1,i-n]\times[1,j-n]\times[n].\]
Therefore our claim is proved.
Now we show that [Static]2D \(\mathsf{LWS}\) with slice rank 1 can be solved in truly sub-cubic time (which implies 2D \(\mathsf{LWS}\) with slice rank 1 is also truly sub-cubic), whereas, as shown above, there is no truly sub-cubic algorithm for 2D \(\mathsf{LWS}\) with slice rank 3 assuming the APSP conjecture.
**Theorem 3.6**.: [Static]2D \(\mathsf{LWS}_{n,N}\) _with slice rank \(1\) can be solved in \(O(n^{2}\cdot N^{1-\varepsilon})\) for some \(\varepsilon>0\)._
Proof.: The idea is similar to the proof of Theorem 3.4. We reduce [Static]2D \(\mathsf{LWS}\) with slice rank 1 to [Static]\(\mathsf{LWS}\) with rank 1, for which we know there exists an \(O(N^{1-\varepsilon}\cdot n)\) time algorithm.
Given an instance of [Static]2D \(\mathsf{LWS}\) with slice rank 1, tensors \(w_{1},w_{2}\), there are 3 possibilities for \(w_{1}\):
* \(w_{1}[i,j,k]=f(i,k)\cdot g(j)\) for some \(f,g\). In this case, we fix the first coordinate \(i\), so that \(w_{1}[i,j,k]\) becomes a matrix (in \(j\) and \(k\)) of rank 1. Now we can run a [Static]\(\mathsf{LWS}\) algorithm with rank 1 (at most \(n\) times) to compute \[\min_{1\leq k^{\prime}<j^{\prime}}\left\{T[i,k^{\prime}]+w_{1}[i,j^{\prime},k^{\prime}]\right\}\] for all \((i,j^{\prime})\in D_{a+N,a+2N}\) in time \(O(N^{1-\varepsilon}n)\) for some \(\varepsilon>0\). Doing this for all \(i\), we can compute \[\min_{1\leq k<j}\left\{T[i,k]+w_{1}[i,j,k]\right\}\] for all \((i,j)\in D_{a+N,a+2N}\) in time at most \(O(N^{1-\varepsilon}\cdot n^{2})\).
* \(w_{1}[i,j,k]=f(i,j)\cdot g(k)\) for some \(f,g\). This is similar to the previous case but instead of fixing \(i\), we fix \(j\). Similarly, we can compute \[\min_{1\leq k<j}\left\{T[i,k]+w_{1}[i,j,k]\right\}\] for all \((i,j)\in D_{a+N,a+2N}\) in time \(O(N^{1-\varepsilon}\cdot n^{2})\).
* \(w_{1}[i,j,k]=f(j,k)\cdot g(i)\) for some \(f,g\). Again we can run [Static]\(\mathsf{LWS}\) for each \(j\) and compute \[\min_{1\leq k<j}\left\{T[i,k]+w_{1}[i,j,k]\right\}\] for all \((i,j)\in D_{a+N,a+2N}\) in time \(O(N^{1-\varepsilon}\cdot n^{2})\).
The analysis for \(w_{2}[i,j,k]\) is the same, and thus we can compute
\[\min_{1\leq k<i}\left\{T[k,j]+w_{2}[i,j,k]\right\}\]
for all \((i,j)\in D_{a+N,a+2N}\) in time \(O(N^{1-\varepsilon}\cdot n^{2})\). Finally take the pairwise minimum to give
\[\min\Big{\{}\min_{1\leq k<j}\left\{T[i,k]+w_{1}[i,j,k]\right\}\!,\min_{1\leq k <i}\left\{T[k,j]+w_{2}[i,j,k]\right\}\!\Big{\}}\]
for all \((i,j)\in D_{a+N,a+2N}\).
**Theorem 3.7**.: 2D \(\mathsf{LWS}\) _with slice rank 1 is truly sub-cubic._
Proof.: Immediately follows from Theorem 3.6 and the fact that our reduction from 2D \(\mathsf{LWS}\) to [Static]2D \(\mathsf{LWS}\) in Theorem 3.2 preserves slice rank.
## 4 Polygon Triangulation (2D \(\mathsf{LWS}^{\mathsf{PT}}\))
In this section, we discuss the polygon triangulation problem 2D \(\mathsf{LWS}^{\mathsf{PT}}\) and its connections with 2D \(\mathsf{LWS}\). It was shown in [13, 14, 15, 16] that if \(w[i,j,k]=x_{i}\cdot x_{j}\cdot x_{k}\) for all \(i,j,k\) with \(x_{i}>0\) for all \(i\), then 2D \(\mathsf{LWS}^{\mathsf{PT}}\) can be solved in \(O(n^{2})\) time. We establish several conditional hardness results for 2D \(\mathsf{LWS}^{\mathsf{PT}}\) based on \(\mathsf{SETH}\) and the \(\mathsf{APSP}\) conjecture. Namely, 2D \(\mathsf{LWS}\) where \(w_{1}=w_{2}\) can be reduced to 2D \(\mathsf{LWS}^{\mathsf{PT}}\) with rank/slice rank unchanged, and thus finding the optimal triangulation is hard under \(\mathsf{SETH}\) for weight functions of rank \(2^{O(\log^{*}n)}\), and hard under the \(\mathsf{APSP}\) conjecture for weight functions of slice rank \(3\).
### Low Rank Polygon Triangulation is \(\mathsf{SETH}\)-hard
**Theorem 4.1**.: (2D \(\mathsf{LWS}\to\) 2D \(\mathsf{LWS}^{\mathsf{PT}}\)). _There exists an \(O(nd)\) time reduction from 2D \(\mathsf{LWS}_{n}\) with rank \(d\), \(w_{1}=w_{2}=w\), to 2D \(\mathsf{LWS}^{\mathsf{PT}}_{2n}\) with rank \(d\)._
Proof.: Given an 2D \(\mathsf{LWS}_{n}\) instance \(T\) with rank \(d\) tensor \(w[i,j,k]=\langle\mu_{i},\sigma_{j},\tau_{k}\rangle\) and recurrence relation
\[T[i,j]=\min\Big{\{}\min_{1\leq k<j}\Big{\{}T[i,k]+w[i,j,k]\Big{\}},\min_{i<k \leq n}\Big{\{}T[k,j]+w[i,j,k]\Big{\}}\Big{\}},\]
we want to compute \(T[1,n]\). We construct an 2D \(\mathsf{LWS}^{\mathsf{PT}}_{2n}\) instance \(T^{\prime}\) as follows.
\[\mu^{\prime}_{i}=\begin{cases}\mu_{i}\text{ if }i\in[1,n]\\ 0^{d}\text{ if }i\in[n+1,2n],\end{cases}\sigma^{\prime}_{j}=\begin{cases}0^{d} \text{ if }j\in[1,n]\\ \sigma_{j-n}\text{ if }j\in[n+1,2n],\end{cases}\tau^{\prime}_{k}=\begin{cases} \tau_{k}\text{ if }k\in[1,n]\\ \tau_{k-n}\text{ if }k\in[n+1,2n],\end{cases}\]
and define a \(2n\times 2n\times 2n\) tensor as \(w^{\prime}[i,j,k]=\langle\mu^{\prime}_{i},\sigma^{\prime}_{j},\tau^{\prime}_{ k}\rangle\). We make a few observations:
* \(w^{\prime}[i,j,k]=0\) when \((i,j)\notin[1,n]\times[n+1,2n]\), and thus \(T^{\prime}[i,j]=0\) for all \((i,j)\notin[1,n]\times[n+1,2n]\).
* When \((i,j)\in[n]\times[n+1,2n]\), \(w^{\prime}[i,j,k]=w[i,j-n,k]\) for all \(k\in[n]\) and \(w^{\prime}[i,j,k]=w[i,j-n,k-n]\) for all \(k\in[n+1,2n]\).
We now prove that for all \((i,j)\in[n]\times[n+1,2n]\), we have \(T^{\prime}[i,j]=T[i,j-n]\), which will suffice since when \(i=1,j=2n\), we have \(T^{\prime}[1,2n]=T[1,n]\).
We proceed by induction. The base case is \(T^{\prime}[n,n+1]=0\). Now for each \((i,j)\in[1,n]\times[n+1,2n]\), by induction hypothesis we have
\[T^{\prime}[i,j] =\min_{i<k<j}\Big{\{}T^{\prime}[i,k]+T^{\prime}[k,j]+w^{\prime}[i, j,k]\Big{\}}\] \[=\min\Big{\{}\min_{i<k\leq n}\Big{\{}T^{\prime}[i,k]+T^{\prime}[k,j]+w^{\prime}[i,j,k]\Big{\}},\min_{n+1\leq k<j}\Big{\{}T^{\prime}[i,k]+T^{ \prime}[k,j]+w^{\prime}[i,j,k]\Big{\}}\Big{\}}\] \[=\min\Big{\{}\min_{i<k\leq n}\Big{\{}T^{\prime}[k,j]+\langle\mu^{ \prime}_{i},\sigma^{\prime}_{j},\tau^{\prime}_{k}\rangle\Big{\}},\min_{1\leq k <j-n}\Big{\{}T^{\prime}[i,k-n]+\langle\mu^{\prime}_{i},\sigma^{\prime}_{j}, \tau^{\prime}_{k}\rangle\Big{\}}\Big{\}}\] \[=T[i,j-n].\]
The time for our reduction is \(O(nd)\), so the proof is complete.
**Corollary 4.2**.: _Under \(\mathsf{SETH}\), there is no truly sub-cubic algorithm for 2D \(\mathsf{LWS}^{\mathsf{PT}}\) with weight function whose rank is \(2^{O(\log^{*}n)}\) or above._
Proof.: When we reduce 3Min-IP to 2D \(\mathsf{LWS}\) in Theorem 3.1, the tensors \(w_{\ell}\) that we use are all the same, so Theorem 3.1 immediately gives a reduction from 3Min-IP to 2D \(\mathsf{LWS}\) with \(w_{1}=w_{2}\) (preserving rank), which further reduces to 2D \(\mathsf{LWS}^{\mathsf{PT}}\) (preserving rank) by Theorem 4.1.
### Constant Slice Rank Polygon Triangulation is \(\mathsf{APSP}\)-hard
In fact, we can modify the reduction in Theorem 4.1 so that it preserves slice rank as well.
**Theorem 4.3**.: (2D \(\mathsf{LWS}\to\) 2D \(\mathsf{LWS}^{\mathsf{PT}}\)_, slice rank version_) _There exists an \(O(nd)\) time reduction from 2D \(\mathsf{LWS}_{n}\) with slice rank \(d\), \(w_{1}=w_{2}=w\), to 2D \(\mathsf{LWS}^{\mathsf{PT}}_{2n}\) with slice rank \(d\)._
Proof.: Given a 2D \(\mathsf{LWS}\) instance with an \(n\times n\times n\) tensor \(w\) with slice rank \(d\), by the proof of Theorem 4.1, we know it suffices to construct a \(2n\times 2n\times 2n\) tensor \(w^{\prime}\) such that
* \(w^{\prime}[i,j,k]=0\) when \((i,j)\notin[1,n]\times[n+1,2n]\).
* When \((i,j)\in[n]\times[n+1,2n]\), \(w^{\prime}[i,j,k]=w[i,j-n,k]\) for all \(k\in[n]\) and \(w^{\prime}[i,j,k]=w[i,j-n,k-n]\) for all \(k\in[n+1,2n]\).
Now for each slice in \(w\), there are three possibilities. For each case we convert the slice into a new slice with dimension \(2n\times 2n\times 2n\) such that \(w^{\prime}\) is the sum of them.
* \(f(i,k)\cdot g(j)\): let \[f^{\prime}(i,k)=\begin{cases}f(i-n,k)\text{ if }i\in[1,n],k\in[1,n]\\ f(i-n,k-n)\text{ if }i\in[1,n],k\in[n+1,2n]\\ 0\text{ otherwise}\end{cases},g^{\prime}(j)=\begin{cases}g(j)\text{ if }j\in[n+1,2n]\\ 0\text{ otherwise}.\end{cases}\] As a result, \(f^{\prime}(i,k)\cdot g^{\prime}(j)=f(i,k)\cdot g(j-n)\) when \(i\in[1,n],j\in[n+1,2n]\) and \(k\in[n]\), \(f^{\prime}(i,k)\cdot g^{\prime}(j)=f(i,k-n)\cdot g(j-n)\) when \(i,j\in[n+1,2n]\) and \(k\in[n+1,2n]\), and \(f^{\prime}(i,k)\cdot g^{\prime}(j)=0\) if \((i,j)\notin[1,n]\times[n+1,2n]\). Thus it satisfies the conditions above.
* \(f(i,j)\cdot g(k)\): let \[f^{\prime}(i,j)=\begin{cases}f(i,j-n)\text{ if }i\in[1,n],j\in[n+1,2n]\\ 0\text{ otherwise}\end{cases},g^{\prime}(k)=\begin{cases}g(k)\text{ if }k\in[n]\\ g(k-n)\text{ if }k\in[n+1,2n].\end{cases}\] As a result, \(f^{\prime}(i,j)\cdot g^{\prime}(k)=f(i,j-n)\cdot g(k)\) when \((i,j)\in[1,n]\times[n+1,2n]\) and \(f^{\prime}(i,j)\cdot g^{\prime}(k)=0\) otherwise. Again it satisfies the conditions above.
* \(f(j,k)\cdot g(i)\): let \[f^{\prime}(j,k)=\begin{cases}f(j-n,k)\text{ if }j\in[n+1,2n],k\in[n]\\ f(j-n,k-n)\text{ if }j\in[n+1,2n],k\in[n+1,2n]\\ 0\text{ otherwise}.\end{cases},g^{\prime}(i)=\begin{cases}g(i)\text{ if }i\in[n]\\ 0\text{ otherwise}.\end{cases}\] When \((i,j)\in[n]\times[n+1,2n],k\in[n]\), \(f^{\prime}(j,k)\cdot g^{\prime}(i)=f(j-n,k)\cdot g(i)\), and when \((i,j)\in[n]\times[n+1,2n],k\in[n+1,2n]\), \(f^{\prime}(j,k)\cdot g^{\prime}(i)=f(j-n,k-n)\cdot g(i)\). When \((i,j)\notin[n]\times[n+1,2n]\), \(f^{\prime}(j,k)\cdot g^{\prime}(i)=0\). It satisfies the conditions above.
Therefore, the sum of these slices must also satisfy the conditions of \(w^{\prime}\) imposed in Theorem 4.1, and the reduction takes \(O(nd)\) time.
**Corollary 4.4**.: _Under \(\mathsf{APSP}\) conjecture, there is no truly sub-cubic algorithm for 2D \(\mathsf{LWS}^{\mathsf{PT}}\) with weight function whose slice rank is \(3\) or above._
Proof.: When we reduce \(\mathsf{APSP}\) to 2D \(\mathsf{LWS}\) in Theorem 3.5, the tensors \(\alpha\) that we use are the same, so Theorem 3.5 immediately gives a reduction from \(\mathsf{APSP}\) to 2D \(\mathsf{LWS}\) with \(w_{1}=w_{2}\), which further reduces to 2D \(\mathsf{LWS}^{\mathsf{PT}}\) (preserving slice rank) by Theorem 4.3.
## 5 Applications of \(\mathsf{kD}\) LWS
In Section 3, we have shown that \(\mathsf{kD}\) LWS can solve \(\mathsf{kMin}\)-IP and \(\mathsf{APSP}\) with different tensors. In this section we discuss more applications of \(\mathsf{kD}\) LWS.
### Higher Dimension Airplane Refueling
The airplane refueling problem was given in [12] as an example of \(\mathsf{LWS}\).
**Definition 5.1** (Airplane Refueling).: _Suppose an airplane needs to fly between 2 given airports which are distance \(R\) apart. Suppose there are \(n-1\) different refueling stops at distance \(x_{1},\ldots,x_{n-1}\) from the departure point and all stops lie on the segment between departure and destination points. We can let \(0=x_{0}<x_{1}<\ldots<x_{n}=R\). The cost of flying \(\ell\) miles is \((k-\ell)^{2}\) for some \(k>0\) (we prefer flying close to \(k\) miles), and the goal is to fly from departure point to arrival point while minimizing the cost._
It is not hard to see setting \(w[i,j]=(x_{j}-x_{i}-k)^{2}\) in LWS will solve the problem since \(T[j]\) is always the minimum cost of flying from \(x_{0}\) to \(x_{j}\). \(w\) has rank 4 because
\[w[i,j]=x_{j}^{2}\cdot 1+1\cdot x_{i}^{2}+(-2x_{j})\cdot(x_{i}+k)+k\cdot(2x_{i }+k),\]
and [12] shows that airplane refueling can be solved in linear time.
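For completeness, expanding the four products gives

\[x_{j}^{2}+x_{i}^{2}-2x_{j}x_{i}-2kx_{j}+2kx_{i}+k^{2}=(x_{j}-x_{i}-k)^{2},\]

and each of the four summands is a product of a term depending only on \(j\) with a term depending only on \(i\), so \(w\) indeed has rank at most \(4\) in the sense of Definition 2.16.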
In the real world, it is unlikely that all refueling stops are located on a single line. In addition, the plane can move in multiple directions. The higher dimension airplane refueling problem is motivated by these observations.
**Definition 5.2** (Higher Dimension Airplane Refueling).: _Suppose an airplane needs to fly between two given airports on a \(k\)-dimensional grid with \(n\) points at each dimension. Each point in the grid represents a refueling stop, and the cost of flying from stop \((i_{1},\ldots,i_{\ell-1},j_{\ell},i_{\ell+1},\ldots,i_{k})\) to \((i_{1},\ldots,i_{k})\) is \(c(i_{1},\ldots,i_{k},j_{\ell})\). The problem asks the minimum cost of flying from \((1,\ldots,1)\) to \((n,\ldots,n)\)._
Notice that this is closer to real-world scenarios where trains need to travel on railways, or where we drive in a city with well-organized roads.
Setting \(w[i_{1},\ldots,i_{k+1}]=c[i_{1},\ldots,i_{k+1}]\) in \(\mathsf{kD}\) LWS will solve the problem because \(T[i_{1},\ldots,i_{k}]\) will always be the minimum cost of flying from \((1,\ldots,1)\) to \((i_{1},\ldots,i_{k})\). If we were to follow the cost function suggested in [12], then we have \(c[i_{1},\ldots,i_{k},j_{\ell}]=(L-(i_{\ell}-j_{\ell}))^{2}\), which has constant rank and thus it can be solved in time \(O(n^{k+1-\varepsilon})\) for some \(\varepsilon>0\).
Another natural scenario is that the cost of flying from \((i_{1},\ldots,i_{\ell-1},j_{\ell},i_{\ell+1},\ldots,i_{k})\) to \((i_{1},\ldots,i_{k})\) only depends on \((i_{1},\ldots,i_{k})\). It mimics the scenario where the airplane is charged a fee upon arrival.
**Definition 5.3** (Arrival Fee Airplane Refueling).: _Suppose an airplane needs to fly between two given airports on a \(k\)-dimensional grid with \(n\) points at each dimension. Each point in the grid represents a refueling stop, and the cost of flying from stop \((i_{1},\ldots,i_{\ell-1},j_{\ell},i_{\ell+1},\ldots,i_{k})\) to \((i_{1},\ldots,i_{k})\) is \(c(i_{1},\ldots,i_{k})\). The problem asks the minimum cost of flying from \((1,\ldots,1)\) to \((n,\ldots,n)\)._
In the arrival fee airplane refueling problem with dimension \(k\), the tensor has slice rank 1, so by Theorem 3.6 it can be solved in time \(O(n^{k+1-\varepsilon})\) for some \(\varepsilon>0\).
### Multiple Nested Boxes
Nested boxes problem, or box stacking problem, is a famous example with a DP solution.
**Definition 5.4** (Nested Boxes).: _Given \(n\) boxes in \(d\) dimensions, find the longest chain such that each box fits into the next (without rotation). We say that box \(a\) of size \((a_{1},\ldots,a_{d})\) fits into box \(b\) of size \((b_{1},\ldots,b_{d})\) if \(a_{i}\leq b_{i}\) for all \(1\leq i\leq d\)._
[13] proves that nested boxes is sub-quadratic equivalent to the vector domination problem defined in [11] and both can be solved by \(\mathsf{LWS}\): sort the boxes by volume in increasing order as \(B_{1},\ldots,B_{n}\) and set \(w_{ij}\) to be \(-1\) if \(B_{j}\) contains \(B_{i}\) and \(0\) otherwise.
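For intuition, the following Python sketch (added for illustration) gives the plain quadratic chain DP that these LWS weights encode; it computes the longest chain directly rather than through the \(-1/0\) weights:

```
from math import prod

def longest_nested_chain(boxes):
    """Longest chain of nested boxes via the O(n^2) DP.
    boxes: list of d-tuples; box a fits into box b if a[t] <= b[t] for every t.
    Sorting by volume ensures a box can only fit into a box appearing later."""
    def fits(a, b):
        return all(x <= y for x, y in zip(a, b))

    order = sorted(boxes, key=prod)
    best = [1] * len(order)          # longest chain ending at each box
    for j in range(len(order)):
        for i in range(j):
            if fits(order[i], order[j]):
                best[j] = max(best[j], best[i] + 1)
    return max(best, default=0)

print(longest_nested_chain([(1, 2), (2, 2), (3, 1), (3, 3)]))  # -> 3
```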
It is natural to consider the case where we are allowed to have multiple locations to put the boxes, which motivates our multiple nested boxes problem.
**Definition 5.5** (Multiple Nested Boxes).: _Given \(n\) boxes in \(d\) dimensions and \(k\) piles, find the maximum number of boxes we can use such that in each pile, each box fits into the next (without rotation). We say that box \(a\) of size \((a_{1},\ldots,a_{d})\) fits into box \(b\) of size \((b_{1},\ldots,b_{d})\) if \(a_{i}\leq b_{i}\) for all \(1\leq i\leq d\)._
**Theorem 5.6**.: _Multiple nested boxes with \(k\) piles can be solved in time \(O(n^{k+1-\varepsilon})\) for some \(\varepsilon>0\)._
Proof.: We first sort all the boxes by their volume in increasing order \(B_{1},\ldots,B_{n}\), and let
\[w_{\ell}[i_{1},\ldots,i_{k},j]=\begin{cases}-1&\text{ if }B_{j}\text{ fits into }B_{i_{\ell}}\\ 0&\text{ otherwise.}\end{cases}\]
To see that this indeed solves multiple nested boxes, we claim that \(-T[i_{1},\ldots,i_{k}]\) is the maximum number of boxes over all assignments such that the rank of the outer box in \(t\)-th pile is at most \(i_{t}\). This will solve the multiple nested boxes when \(i_{1}=\ldots=i_{k}=n+1\).
We proceed by induction. When \(i_{1}=\ldots=i_{k}=1\), we have \(T[i_{1},\ldots,i_{k}]=0\). There is no box with rank \(0\) so we cannot put any boxes. Now for general \((i_{1},\ldots,i_{k})\), by recurrence we have
\[T\Big{[}i_{1},\ldots,i_{k}\Big{]}=\min_{1\leq\ell\leq k}\Big{\{}\min_{1\leq j _{\ell}<i_{\ell}}\Big{\{}T\Big{[}i_{1},\ldots,i_{\ell-1},j_{\ell},i_{\ell+1}, \ldots,i_{k}\Big{]}+w_{\ell}\Big{[}i_{1},\ldots,i_{k},j_{\ell}\Big{]}\Big{\}} \Big{\}}.\]
For any assignment \((u_{1},\ldots,u_{k})\) to the piles such that the \(t\)-th pile outer box has rank \(u_{t}\leq i_{t}\), it can be achieved from adding \(B_{u_{\ell}}\) to the assignment \((u_{1},\ldots,u_{\ell-1},u_{\ell}^{\prime},u_{\ell+1},\ldots,u_{k})\) with the guarantee that \(B_{u_{\ell}^{\prime}}\) fits inside \(B_{u_{\ell}}\). This case is covered by the right-hand-side of the equation because by induction hypothesis,
\[-T\Big{[}u_{1},\ldots,u_{\ell-1},u_{\ell}^{\prime},u_{\ell+1},\ldots,u_{k} \Big{]}-w_{\ell}\Big{[}u_{1},\ldots,u_{k},u_{\ell}^{\prime}\Big{]}\]
is the maximum over the assignments under this procedure.
Therefore, the right-hand side of the equation ranges over exactly all possible ways to achieve an assignment \((u_{1},\ldots,u_{k})\) such that \(u_{t}\leq i_{t}\) for all \(t\), so our kD LWS instance indeed solves multiple nested boxes with \(k\) piles. In addition, notice that \(w_{\ell}\) only depends on its \(\ell\)-th and last coordinate, so it can be expressed as a matrix. The same reasoning as in Theorem 3.6 shows that it can be reduced to \([\mathsf{Static}](\mathsf{k}-\mathsf{1})\mathsf{D}\) LWS with rank \(1\), which implies that it can be solved in time \(O(n^{k+1-\varepsilon})\) for some \(\varepsilon>0\).
## 6 Applications of 2D \(\mathsf{LWS}^{\mathsf{PT}}\)
In Section 4, we studied 2D \(\mathsf{LWS}^{\mathsf{PT}}\), which captures the polygon triangulation problem, and established its hardness for different tensors. In this section, we discuss more applications of 2D \(\mathsf{LWS}^{\mathsf{PT}}\).
### Matrix-Chain Multiplication
The matrix-chain multiplication problem was introduced in [1] and is defined as follows:
**Definition 6.1** (Matrix-Chain Multiplication).: _Given a chain of \(n\) matrices \(A_{1},\ldots,A_{n}\) where matrix \(A_{i}\) has dimension \(d_{i-1}\times d_{i}\), find the order of matrix multiplications which minimizes the number of scalar multiplications using the straightforward matrix multiplication algorithm._
Recall that multiplying an \(n\times m\) matrix by an \(m\times p\) matrix using the straightforward algorithm uses \(n\cdot m\cdot p\) scalar multiplications. Moreover, the order in which you multiply a chain of matrices determines the number of scalar multiplications performed. For instance, consider three matrices \(A_{1},A_{2},A_{3}\) with dimensions \((10,20)\), \((20,30)\), and \((30,40)\) respectively. Multiplying \((A_{1}A_{2})A_{3}\) takes \((10\cdot 20\cdot 30)+(10\cdot 30\cdot 40)=18000\) scalar multiplications while multiplying \(A_{1}(A_{2}A_{3})\) takes \((20\cdot 30\cdot 40)+(10\cdot 20\cdot 40)=32000\) scalar multiplications.
Matrix-chain multiplication is a 2D \(\mathsf{LWS}^{\mathsf{PT}}\) problem where \(T[i,j]\) is the cost of multiplying matrices \(A_{i},\ldots,A_{j}\) and we want to find the \(k\) which minimizes the cost of multiplying \((A_{i},\ldots,A_{k})\) by \((A_{k+1},\ldots,A_{j})\). Multiplying matrices \((A_{i},\ldots,A_{k})\) would result in a matrix of dimension \((d_{i-1},d_{k})\) and multiplying \((A_{k+1},\ldots,A_{j})\) would result in a matrix of dimension \((d_{k},d_{j})\). Thus \(T[i,j]\) equals the cost of multiplying all matrices \(A_{i},\ldots,A_{k}\) (i.e. \(T[i,k]\)) and all matrices \(A_{k+1},\ldots,A_{j}\) (i.e. \(T[k,j]\)) plus the cost of multiplying those two resultant matrices together (i.e. \(d_{i-1}d_{k}d_{j}\)). Setting \(w[i,j,k]=d_{i-1}d_{k}d_{j}\) would solve this problem.
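The following Python sketch (added for illustration) implements this DP with the standard one-position index shift, so that the right sub-chain \(A_{k+1},\ldots,A_{j}\) is looked up at entry \([k+1][j]\); on the dimensions from the example above it returns \(18000\):

```
def matrix_chain_cost(dims):
    """Minimum scalar multiplications for A_1 ... A_n, where A_i is
    dims[i-1] x dims[i], with split weight dims[i-1] * dims[k] * dims[j]."""
    n = len(dims) - 1
    T = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(1, n):                    # length = j - i
        for i in range(1, n - length + 1):
            j = i + length
            T[i][j] = min(
                T[i][k] + T[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return T[1][n]

print(matrix_chain_cost([10, 20, 30, 40]))        # -> 18000
```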
Let us construct a vector \(d=[d_{0},d_{1},\ldots,d_{n}]\) with the dimensions of our matrices \(A_{1},\ldots,A_{n}\). Then \(w\) has a tensor rank of \(1\) because it can be represented as the product of different entries of \(d\), namely \(w[i,j,k]=d[i-1]\cdot d[j]\cdot d[k]\). Moreover, there exists an \(O(n\log n)\) time algorithm for this problem [13, 14]. Corollary 4.2 helps explain why this speedup is possible.
### Optimal Binary Search Tree
The optimal binary search tree construction problem was introduced in [14, 15] and is defined as follows:
**Definition 6.2** (Optimal Binary Search Tree).: _Given a sequence of \(n\) distinct keys \(h_{1},\ldots,h_{n}\) in sorted order where the probability of accessing key \(h_{i}\) is \(p_{i}\), construct a binary search tree from these keys which minimizes the expected access time._
This problem is a 2D \(\mathsf{LWS}^{\mathsf{PT}}\) instance where \(T[i,j]\) is the minimum cost binary search tree with keys \(h_{i},\ldots,h_{j}\). We want to choose a key \(h_{k}\) to be the root of the sub-tree containing keys \(h_{i},\ldots,h_{j}\) which minimizes the expected access time. The expected access time for a key \(h_{t}\) is \(p_{t}\cdot(d_{t}+1)\), the key's probability times the number of nodes on the search path to \(h_{t}\), where \(d_{t}\) is its depth in the tree. We can compute this quantity incrementally, adding the probability \(p_{t}\) of key \(h_{t}\) once for every sub-tree of the recursion that contains \(h_{t}\). Thus the cost contributed by the sub-tree on keys \(h_{i},\ldots,h_{j}\) is \(w[i,j,k]=\sum_{t=i}^{j}p_{t}\).
\(w\) has a slice rank of \(1\) because it can be written as \(w[i,j,k]=a[k]\cdot b[i,j]\) where \(a[k]=1\) and \(b[i,j]=\sum_{t=i}^{j}p_{t}\). This observation, together with Corollary 4.4, helps explain why the known \(O(n^{2})\) time algorithm for this problem [11, 12] is possible.
|
2309.08091 | Geodesic flows of compact higher genus surfaces without conjugate points
have expansive factors | In this paper we show that a geodesic flow of a compact surface without
conjugate points of genus greater than one is time-preserving semi-conjugate to
a continuous expansive flow which is topologically mixing and has a local
product structure. As an application we show that the geodesic flow of a
compact surface without conjugate points of genus greater than one has a unique
measure of maximal entropy. This gives an alternative proof of
Climenhaga-Knieper-War Theorem. | Edhin Franklin Mamani | 2023-09-15T01:16:04Z | http://arxiv.org/abs/2309.08091v1 | # Geodesic flows of compact higher genus surfaces without conjugate points have expansive factors
###### Abstract
In this paper we show that a geodesic flow of a compact surface without conjugate points of genus greater than one is time-preserving semi-conjugate to a continuous expansive flow which is topologically mixing and has a local product structure. As an application we show that the geodesic flow of a compact surface without conjugate points of genus greater than one has a unique measure of maximal entropy. This gives an alternative proof of Climenhaga-Knieper-War Theorem.
## 1 Introduction
The fundamental work of Morse [22] is the main motivation for the study of existence of equivalences between geodesic flows of compact higher genus surfaces and hyperbolic geodesic flows. In 1980, Gromov [19] built a semi-conjugacy, not necessarily time-preserving, between the geodesic flow of a higher genus surface of non-positive curvature and the geodesic flow of a hyperbolic surface. Later, Ghys did the same for Anosov geodesic flows (his proof extends to compact surfaces without conjugate points) [18]. In 2018, Gelfert and Ruggiero [16] found a time-preserving semi-conjugacy between geodesic flows of compact surfaces without focal points and expansive flows. This result was extended to the case of compact surfaces without conjugate points and continuous Green bundles [17]. The main contribution is the following.
**Theorem 1.1**.: _Let \(M\) be a compact surface without conjugate points of genus greater than one and \(\phi_{t}\) be its geodesic flow. Then, there exists a continuous expansive flow \(\psi_{t}\) that is time-preserving semi-conjugate to \(\phi_{t}\), acting on a compact metric space \(X\) of topological dimension at least two. Moreover, \(\psi_{t}\) is topologically mixing and has a local product structure._
The good dynamical properties of the factor flow \(\psi_{t}\) allow us to apply Bowen-Franco's classical theory to get the unique measure of maximal entropy of \(\psi_{t}\)[14]. Combining this fact with Buzzi-Fisher-Sambarino-Vasquez's work [5] about extensions of expansive flows we get the following application.
**Theorem 1.2**.: _Let \(M\) be a compact surface without conjugate points of genus greater than one. Then, its geodesic flow \(\phi_{t}\) has a unique measure of maximal entropy._
This theorem was proved by Climenhaga, Knieper and War in 2020 for a family of compact \(n\)-manifolds without conjugate points, including compact higher genus surfaces [6]. Their approach uses an extension of Bowen-Franco's theory done by Climenhaga and Thompson [7]. Gelfert-Ruggiero's approach (assuming no focal points or continuous Green bundles) is different and gives a more direct alternative proof.
Let us briefly explain the new contributions of this work. The construction of the factor flow \(\psi_{t}\) is based on an equivalence relation equivariant by the geodesic flow, which gives rise to a quotient space \(X\) and a quotient flow \(\psi_{t}\). A careful study of a basis of the quotient topology of \(X\) is crucial in Gelfert-Ruggiero's works. This study relies on the structure of the expansive set \(\mathcal{R}_{0}\), i.e., the set of points whose equivalence class is a singleton. Assuming continuity of Green bundles, \(\mathcal{R}_{0}\) forms an open dense subset of the unit tangent bundle. However, in our setting \(\mathcal{R}_{0}\) might not be open; whether \(\mathcal{R}_{0}\) is open is in general an open problem. Despite this, we show that expansive points accumulate on the boundary of every non-trivial equivalence class, which turns out to be enough to get a basis of the quotient topology.
Secondly, many ideas in Gelfert-Ruggiero's papers for the proof of the dynamical properties of \(\psi_{t}\) rely on the fact that \(X\) admits a 3-manifold structure. This allows one to choose a Riemannian metric on \(X\) that can be lifted to the universal covering \(\tilde{X}\) of \(X\). The Riemannian structure of \(\tilde{X}\) is important in the proofs. In our context, we do not know whether \(X\) admits a manifold structure. It is metrizable, but a metric on \(X\) might not be a length metric, so it might not lift to \(\tilde{X}\). We tackle this problem with more general topological arguments.
Finally, we study the possible loss of topological dimension under the quotient and show that the topological dimension of \(\tilde{X}\) is at least two.
The paper is organized as follows. Section 2 contains the preliminaries. In Section 3, we define the equivalence relation that gives rise to the quotient space and flow. Section 4 studies the topological properties of the factor flow. Section 5 deals with the topological dimension of the quotient space. Section 6 is concerned with the dynamical properties of the factor flow, where we complete the proof of Theorem 1.1. Finally, Section 7 is devoted to showing the uniqueness of the measure of maximal entropy of the geodesic flow.
## 2 Preliminaries
### Compact manifolds without conjugate points
In this subsection we give basic definitions and notations that we use throughout the paper. Let \((M,g)\) be a \(C^{\infty}\) compact connected Riemannian manifold, \(TM\) be its tangent bundle and \(T_{1}M\) be its unit tangent bundle. Consider the universal covering \(\tilde{M}\) of \(M\), the covering map \(\pi:\tilde{M}\to M\) and the natural
projection \(d\pi:T\tilde{M}\to TM\). The universal covering \((\tilde{M},\tilde{g})\) is a complete Riemannian manifold with the pullback metric \(\tilde{g}=\pi^{*}g\). A manifold \(M\) has no conjugate points if, for every \(p\in M\), the exponential map \(\exp_{p}\) is non-singular. In particular, \(\exp_{p}\) is a covering map for every \(p\in M\) (p. 151 of [9]).
Denote by \(\nabla\) the Levi-Civita connection induced by \(g\). A geodesic is a smooth curve \(\gamma\subset M\) with \(\nabla_{\dot{\gamma}}\dot{\gamma}=0\). For every \(\theta=(p,v)\in TM\), denote by \(\gamma_{\theta}\) the unique geodesic with initial conditions \(\gamma_{\theta}(0)=p\) and \(\dot{\gamma}_{\theta}(0)=v\). The geodesic flow \(\phi_{t}\) is defined by
\[\phi:\mathbb{R}\times TM\to TM\qquad(t,\theta)\mapsto\phi_{t}(\theta)=\dot{ \gamma}_{\theta}(t).\]
Parameterizing all geodesics by arc-length allows us to restrict \(\phi_{t}\) to \(T_{1}M\).
We now define a Riemannian metric on the tangent bundle \(TM\) (Section 1.3 of [25]). Denote by \(P:TM\to M\) and \(\tilde{P}:T\tilde{M}\to\tilde{M}\) the corresponding canonical projections. For every \(\theta=(p,v)\in TM\), the Levi-Civita connection induces the so-called connection map \(C_{\theta}:T_{\theta}TM\to T_{p}M\). These linear maps provide the linear isomorphism \(T_{\theta}TM\to T_{p}M\oplus T_{p}M\) with \(\xi\mapsto(d_{\theta}P(\xi),C_{\theta}(\xi))\). We define the horizontal subspace by \(\mathcal{H}(\theta)=\ker(C_{\theta})\) and the vertical subspace by \(\mathcal{V}(\theta)=\ker(d_{\theta}P)\). These subspaces decompose the tangent space by \(T_{\theta}TM=\mathcal{H}(\theta)\oplus\mathcal{V}(\theta)\). For every \(\xi,\eta\in T_{\theta}TM\), the Sasaki metric is defined by
\[\langle\xi,\eta\rangle_{s}=\langle d_{\theta}P(\xi),d_{\theta}P(\eta)\rangle_ {p}+\langle C_{\theta}(\xi),C_{\theta}(\eta)\rangle_{p}. \tag{1}\]
This metric induces a Riemannian distance \(d_{s}\) usually called Sasaki distance.
For every \(\theta\in T_{1}M\), denote by \(G(\theta)\subset T_{\theta}T_{1}M\) the subspace tangent to the geodesic flow at \(\theta\). Let \(N(\theta)\subset T_{\theta}T_{1}M\) be the subspace orthogonal to \(G(\theta)\) with respect to the Sasaki metric. For every \(\theta\in T_{1}M\), \(H(\theta)=\mathcal{H}(\theta)\cap N(\theta)\) and \(V(\theta)=\mathcal{V}(\theta)\cap N(\theta)\). From the above decomposition we have
\[T_{\theta}T_{1}M=H(\theta)\oplus V(\theta)\oplus G(\theta)\quad\text{ and }\quad N(\theta)=H(\theta)\oplus V(\theta).\]
So, every \(\xi\in N(\theta)\) has decomposition \(\xi=(\xi_{h},\xi_{v})\in H(\theta)\oplus V(\theta)\). We call \(\xi_{h}\) and \(\xi_{v}\) the horizontal and vertical components of \(\xi\) respectively.
### Horospheres and horocycles
In this subsection we assume that \((M,g)\) is a compact surface without conjugate points and genus greater than one. We introduce important asymptotic objects in the universal covering. We follow [12] and part II of [26]. Let \(\theta\in T_{1}\tilde{M}\) and \(\gamma_{\theta}\) be the geodesic induced by \(\theta\). We define the forward Busemann function by
\[b_{\theta}:\tilde{M}\to\mathbb{R}\qquad p\mapsto b_{\theta}(p)=\lim_{t\to \infty}d(p,\gamma_{\theta}(t))-t.\]
From now on, for every \(\theta=(p,v)\in T_{1}\tilde{M}\) we denote \(-\theta:=(p,-v)\in T_{1}\tilde{M}\). The stable and unstable horosphere of \(\theta\) are defined by
\[H^{+}(\theta)=b_{\theta}^{-1}(0)\subset\tilde{M}\quad\text{ and }\quad H^{-}( \theta)=b_{-\theta}^{-1}(0)\subset\tilde{M}.\]
We lift these horospheres to \(T_{1}\tilde{M}\). Denote by \(\nabla b_{\theta}\) the gradient vector field of \(b_{\theta}\). We define the stable and unstable horocycle of \(\theta\) by
\[\tilde{\mathcal{F}}^{s}(\theta)=\{(p,-\nabla_{p}b_{\theta}):p\in H^{+}(\theta) \}\quad\text{ and }\quad\tilde{\mathcal{F}}^{u}(\theta)=\{(p,\nabla_{p}b_{-\theta}):p\in H^{-}( \theta)\}.\]
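For orientation, we record a standard model computation in the hyperbolic plane; it is only an illustration and is not used in the sequel. In the upper half-plane model \(\mathbb{H}^{2}=\{x+iy:y>0\}\) with the metric \((dx^{2}+dy^{2})/y^{2}\), let \(\theta\) be the unit vector at the point \(i\) pointing vertically upwards, so that \(\gamma_{\theta}(t)=ie^{t}\). A direct computation gives

\[b_{\theta}(p)=\lim_{t\to\infty}d(p,ie^{t})-t=-\log(\mathrm{Im}\,p),\]

so \(H^{+}(\theta)=\{\mathrm{Im}\,p=1\}\) is the horizontal line through \(i\), and \(\tilde{\mathcal{F}}^{s}(\theta)\) consists of the upward unit vectors along this line: the geodesics they generate are the vertical lines, which are all forward asymptotic to \(\gamma_{\theta}\).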
The horocycles project onto the horospheres by the canonical projection \(\tilde{P}\). For every \(\theta\in T_{1}\tilde{M}\), we define the stable and unstable families of horocycles by
\[\tilde{\mathcal{F}}^{s}=(\tilde{\mathcal{F}}^{s}(\theta))_{\theta\in T_{1} \tilde{M}}\quad\text{ and }\quad\tilde{\mathcal{F}}^{u}=(\tilde{\mathcal{F}}^{u}(\theta))_{ \theta\in T_{1}\tilde{M}}.\]
We also define the center stable and center unstable sets of \(\theta\) by
\[\tilde{\mathcal{F}}^{cs}(\theta)=\bigcup_{t\in\mathbb{R}}\tilde{\mathcal{F}}^ {s}(\phi_{t}(\theta))\quad\text{ and }\quad\tilde{\mathcal{F}}^{cu}(\theta)=\bigcup_{t\in\mathbb{R}}\tilde{ \mathcal{F}}^{u}(\phi_{t}(\theta)).\]
We can define the above objects in the case of \(T_{1}M\). For every \(\theta\in T_{1}M\),
\[\mathcal{F}^{*}(\theta)=d\pi(\tilde{\mathcal{F}}^{*}(\tilde{\theta}))\subset T _{1}M\quad\text{ and }\quad\mathcal{F}^{*}=d\pi(\tilde{\mathcal{F}}^{*}),\quad*=s,u,cs,cu;\]
for any lift \(\tilde{\theta}\in T_{1}\tilde{M}\) of \(\theta\). Let us state some properties of these objects.
**Proposition 2.1** ([12, 26]).: _Let \(M\) be a compact surface without conjugate points of genus greater than one. Then, for every \(\theta\in T_{1}\tilde{M}\),_
1. _Busemann functions_ \(b_{\theta}\) _are_ \(C^{1,L}\) _with_ \(L\)_-Lipschitz unitary gradient for a uniform constant_ \(L>0\)__[_20_]__._
2. _Horospheres_ \(H^{+}(\theta),H^{-}(\theta)\subset\tilde{M}\) _and horocycles_ \(\tilde{\mathcal{F}}^{s}(\theta),\tilde{\mathcal{F}}^{u}(\theta)\subset T_{1} \tilde{M}\) _are embedded curves._
3. _The families_ \(\tilde{\mathcal{F}}^{s},\tilde{\mathcal{F}}^{u}\) _and_ \(\mathcal{F}^{s},\mathcal{F}^{u}\) _are continuous foliations of_ \(T_{1}\tilde{M},T_{1}M\) _respectively, and invariant by the geodesic flow: for every_ \(t\in\mathbb{R}\)_,_ \[\tilde{\phi}_{t}(\tilde{\mathcal{F}}^{s}(\theta))=\tilde{\mathcal{F}}^{s}( \tilde{\phi}_{t}(\theta)).\] (2)
### Morse's shadowing and consequences
In 1924, Morse [22] studied a special class of geodesics of closed surfaces of genus greater than one. These surfaces always admit a metric of negative curvature called hyperbolic metric. For this hyperbolic metric, its geodesics are called hyperbolic geodesics.
**Theorem 2.1** ([22]).: _Let \((M,g)\) be a compact surface without conjugate points of genus greater than one and \(\tilde{M}\) be its universal covering. Then, there exists \(R>0\) such that for every geodesic \(\gamma\subset\tilde{M}\) there exists a hyperbolic geodesic \(\gamma^{\prime}\subset\tilde{M}\) with Hausdorff distance between \(\gamma\) and \(\gamma^{\prime}\) bounded above by \(R\)._
Given two geodesics \(\gamma_{1},\gamma_{2}\subset\tilde{M}\), we say that \(\gamma_{1}\) and \(\gamma_{2}\) are asymptotic if \(d(\gamma_{1}(t),\gamma_{2}(t))\leq C\) for every \(t\geq 0\) and for some \(C>0\). If the last inequality holds for every \(t\in\mathbb{R}\), \(\gamma_{1}\) and \(\gamma_{2}\) are called bi-asymptotic. So, Theorem 2.1 says that \(\gamma\) and \(\gamma^{\prime}\) are bi-asymptotic with respect to the hyperbolic distance. This gives a uniform bound between bi-asymptotic geodesics.
**Theorem 2.2**.: _Let \((M,g)\) be a compact surface without conjugate points and genus greater than one. Then there exists \(Q(M)>0\) such that the Hausdorff distance between any two bi-asymptotic geodesics is bounded above by \(Q(M)\)._
For each \(\theta\in T_{1}\tilde{M}\), we define the intersections
\[I(\theta)=H^{+}(\theta)\cap H^{-}(\theta)\subset\tilde{M}\quad\text{ and }\quad \tilde{\mathcal{I}}(\theta)=\tilde{\mathcal{F}}^{s}(\theta)\cap\tilde{ \mathcal{F}}^{u}(\theta)\subset T_{1}\tilde{M}.\]
We call \(\tilde{\mathcal{I}}(\theta)\) the class of \(\theta\). For the canonical projection \(\tilde{P}\) we have \(\tilde{P}(\tilde{\mathcal{I}}(\theta))=I(\theta)\).
We observe that for every \(\eta=(q,w)\in\tilde{\mathcal{I}}(\theta)\) with \(q\in I(\theta)\), the geodesic \(\gamma_{\eta}\) is bi-asymptotic to \(\gamma_{\theta}\). So, we can translate the bounds of Theorem 2.2 from bi-asymptotic geodesics to intersections between horospheres and horocycles. This fact is included in the following proposition.
**Proposition 2.2**.: _Let \(M\) be a compact surface without conjugate points of genus greater than one and \(\tilde{M}\) be its universal covering. Then, for every \(\theta\in T_{1}\tilde{M}\)_
1. \(I(\theta)\) _and_ \(\tilde{\mathcal{I}}(\theta)\) _are compact connected curves of_ \(\tilde{M}\) _and_ \(T_{1}\tilde{M}\) _respectively (Corollary_ 3.3 _of_ _[_27_]__)._
2. \(Diam(I(\theta))\leq Q\) _and_ \(Diam(\tilde{\mathcal{I}}(\theta))\leq\tilde{Q}\) _for some_ \(Q(M),\tilde{Q}(M)>0\)_._
We remark that for every \(\theta\in T_{1}M\) and every lift \(\tilde{\theta}\in T_{1}\tilde{M}\) of \(\theta\), we have
\[d\pi(\tilde{\mathcal{I}}(\tilde{\theta}))=\mathcal{I}(\theta)=\mathcal{F}^{s}(\theta)\cap\mathcal{F}^{u}(\theta).\]
**Definition 2.1**.: Let \(M\) be a compact surface without conjugate points and genus greater than one. We say that \(\theta\in T_{1}M\) is an expansive point and \(\mathcal{I}(\theta)\) is a trivial class if \(\mathcal{I}(\theta)\) is a single point. Otherwise, \(\theta\) is a non-expansive point and \(\mathcal{I}(\theta)\) is a non-trivial class.
The expansive points form the so-called expansive set
\[\mathcal{R}_{0}=\{\theta\in T_{1}M:\mathcal{F}^{s}(\theta)\cap\mathcal{F}^{u}( \theta)=\{\theta\}\}.\]
The complement of \(\mathcal{R}_{0}\) is called the non-expansive set. In addition, note that any non-trivial class \(\mathcal{I}(\theta)\) has two boundary points.
**Corollary 2.1** ([27]).: _Let \(M\) be a compact surface without conjugate points of genus greater than one and \(\tilde{M}\) be its universal covering. For every \(\theta\in T_{1}\tilde{M}\), if \(\eta=(q,w)\in\tilde{\mathcal{F}}^{s}(\theta)\) and \(\gamma_{\eta}\) is bi-asymptotic to \(\gamma_{\theta}\) then_
\[\eta\in\tilde{\mathcal{I}}(\theta)=\tilde{\mathcal{F}}^{s}(\theta)\cap\tilde{\mathcal{F}}^{u}(\theta)\quad\text{ and }\quad q\in I(\theta)=H^{+}(\theta)\cap H^{-}(\theta).\]
### Visibility manifolds
This subsection introduces visibility manifolds and state some of their dynamical and geometric properties. Let \(M\) be a simply connected Riemannian manifold without conjugate points. For every \(x,y\in M\), denote by \([x,y]\) the geodesic segment joining \(x\) to \(y\). For \(z\in M\) we also denote by \(\sphericalangle_{z}(x,y)\) the angle at
\(z\) formed by \([z,x]\) and \([z,y]\). We say that \(M\) is a visibility manifold if for every \(z\in M\) and every \(\epsilon>0\) there exists \(R(\epsilon,z)>0\) such that
\[\text{if }x,y\in M\text{ with }d(z,[x,y])>R(\epsilon,z)\quad\text{ then }\quad\sphericalangle_{z}(x,y)<\epsilon.\]
If \(R(\epsilon,z)\) does not depend on \(z\) then \(M\) is called a uniform visibility manifold.
**Theorem 2.3** ([10]).: _If \(M\) is a compact surface without conjugate points of genus greater than one then its universal covering is a uniform visibility manifold._
In 1973, Eberlein extended some transitivity properties of the geodesic flow to the case of compact manifolds without conjugate points. A foliation is called minimal if each of its leaves is dense.
**Theorem 2.4** ([10, 11]).: _Let \(M\) be a compact surface without conjugate points and genus greater than one. Then_
1. _The horospherical foliations_ \(\mathcal{F}^{s}\) _and_ \(\mathcal{F}^{u}\) _are minimal._
2. _The geodesic flow_ \(\phi_{t}\) _is topologically mixing._
3. _For every_ \(\theta,\xi\in T_{1}\tilde{M}\) _with_ \(\theta\not\in\tilde{\mathcal{F}}^{cu}(\xi)\) _there exist_ \(\eta_{1},\eta_{2}\in T_{1}\tilde{M}\) _such that_ \[\tilde{\mathcal{F}}^{s}(\theta)\cap\tilde{\mathcal{F}}^{cu}(\xi)=\tilde{\mathcal{I}}(\eta_{1})\quad\text{ and }\quad\tilde{\mathcal{F}}^{s}(\xi)\cap\tilde{\mathcal{F}}^{cu}(\theta)=\tilde{\mathcal{I}}(\eta_{2}).\]
We can transform item (3) into intersections of unstable horocycles and center stable sets. There exist \(t_{1},t_{2}\in\mathbb{R}\) such that
\[\tilde{\mathcal{F}}^{cs}(\theta)\cap\tilde{\mathcal{F}}^{u}(\xi)=\tilde{ \mathcal{I}}(\tilde{\phi}_{t_{1}}(\eta_{1}))\quad\text{ and }\quad\tilde{\mathcal{F}}^{cs}(\xi)\cap\tilde{ \mathcal{F}}^{u}(\theta)=\tilde{\mathcal{I}}(\tilde{\phi}_{t_{2}}(\eta_{2})).\]
The above intersections are called the heteroclinic connections of \(\tilde{\phi}_{t}\).
Recall that for a compact manifold of negative curvature, its geodesic flow is uniformly hyperbolic [1]. This provides invariant submanifolds (which agree with the horocycles) with hyperbolic behavior. However, for a general compact manifold without conjugate points, its geodesic flow might not be uniformly hyperbolic. Despite this, the horocycles still have some weak hyperbolicity. From Equation (1) in Subsection 2.1, we recall that \(d_{s}\) is the Sasaki distance restricted to \(T_{1}\tilde{M}\).
**Proposition 2.3** ([10]).: _Let \(M\) be a compact surface without conjugate points of genus greater than one and \(\tilde{M}\) be its universal covering. Then, there exist \(A,B>0\) such that for every \(\theta\in T_{1}\tilde{M}\) and every \(\eta\in\tilde{\mathcal{F}}^{s}(\theta)\),_
\[d_{s}(\tilde{\phi}_{t}(\theta),\tilde{\phi}_{t}(\eta))\leq Ad_{s}(\theta,\eta) +B,\quad\text{ for every }t\geq 0.\]
### Some dynamical and ergodic properties of continuous flows on compact metric spaces
We first introduce the dynamical properties. Let \(\psi_{t}:X\to X\) be a continuous flow acting on a compact metric space \(X\). We say that \(\psi_{t}\) is topologically transitive if there exists a dense orbit. The flow \(\psi_{t}\) is topologically mixing if for every open sets \(A,B\subset X\) there exists \(t_{0}>0\) such that \(\psi_{t}(A)\cap B\neq\emptyset\) for \(|t|\geq t_{0}\). For every \(x\in X\) and every \(\epsilon>0\), we define the
strong stable set of \(x\): \(W^{ss}(x)=\{y\in X:d(\psi_{t}(x),\psi_{t}(y))\to 0\mbox{ as }t\to\infty\}\),
\(\epsilon\)-strong stable set: \(W^{ss}_{\epsilon}(x)=\{y\in W^{ss}(x):d(\psi_{t}(x),\psi_{t}(y))\leq\epsilon, \mbox{ for every }t\geq 0\}\).
The strong unstable \(W^{uu}(x)\) and \(\epsilon\)-strong unstable \(W^{uu}_{\epsilon}(x)\) sets are defined similarly for \(t\leq 0\).
The flow \(\psi_{t}\) has a local product structure if for every \(\epsilon>0\) there exists \(\delta>0\) such that if \(x,y\in X\) satisfy \(d(x,y)\leq\delta\) then there exists a unique \(\tau\in\mathbb{R}\) with
\[|\tau|\leq\epsilon\quad\mbox{ and }\quad W^{ss}_{\epsilon}(x)\cap W^{uu}_{\epsilon}(\psi_{\tau}(y))\neq\emptyset.\]
The orbit of the intersection point follows the orbit of \(x\) in the future and the orbit of \(y\) in the past.
The flow \(\psi_{t}\) is expansive if there exists \(\epsilon>0\) such that if \(x,y\in X\) satisfy
\[d(\psi_{t}(x),\psi_{\rho(t)}(y))\leq\epsilon\mbox{ for every }t\in\mathbb{R}\]
and some reparametrization \(\rho\), then there exists \(\tau\in[-\epsilon,\epsilon]\) with \(y=\psi_{\tau}(x)\). We call \(\epsilon\) a constant of expansivity of \(\psi_{t}\).
In the context of continuous flows without singularities acting on compact manifolds, the above definition is equivalent to Bowen-Walters expansivity definition [4]. We remark that Anosov flows are always expansive.
Let us define a special kind of semi-conjugacy between flows. Let \(\phi_{t}:Y\to Y\) and \(\psi_{t}:X\to X\) be two continuous flows acting on compact topological spaces. A map \(\chi:Y\to X\) is called a time-preserving semi-conjugacy if \(\chi\) is a continuous surjection satisfying \(\chi\circ\phi_{t}=\psi_{t}\circ\chi\) for every \(t\in\mathbb{R}\). In this case, we say that \(\psi_{t}\) is time-preserving semi-conjugate to \(\phi_{t}\) or is a time-preserving factor of \(\phi_{t}\).
We now give the ergodic properties. Let \(\psi_{t}:X\to X\) be a continuous flow acting on a compact metric space \((X,d)\). A Borel set \(Z\subset X\) is invariant by the flow if \(\psi_{t}(Z)=Z\) for every \(t\in\mathbb{R}\). A probability measure \(\nu\) on \(X\) is invariant by the flow if \((\psi_{t})_{*}\nu=\nu\) for every \(t\in\mathbb{R}\). Denote by \(\mathcal{M}(\psi)\) the set of all flow-invariant measures on \(X\). A measure \(\nu\in\mathcal{M}(\psi)\) is ergodic if for every flow-invariant Borel set \(A\subset X\), we have either \(\nu(A)=0\) or \(\nu(A)=1\).
Let \(Z\subset X\) be a flow-invariant Borel set and \(\nu\) be a flow-invariant measure supported on \(Z\). We define the metric entropy \(h_{\nu}(\psi,Z)\) of \(\nu\) with respect to the flow \(\psi\) as the metric entropy \(h_{\nu}(\psi_{1},Z)\) with respect to its time-1 map \(\psi_{1}\)[29]. For \(Z=X\) we write \(h_{\nu}(\psi)\). When \(Z\) is also compact, we define the topological entropy of \(Z\) as follows. For every \(\epsilon,T>0\) and every \(x\in Z\), we define the \((T,\epsilon)\)-dynamical balls by
\[B(x,\epsilon,T)=\{y\in Z:d(\psi_{s}(x),\psi_{s}(y))<\epsilon,s\in[0,T]\} \tag{3}\]
Denote by \(M(T,\epsilon,Z)\) the minimum cardinality of any cover of \(Z\) by \((T,\epsilon)\)-dynamical balls. The topological entropy of \(Z\) with respect to \(\psi\) is
\[h(\psi,Z)=\lim_{\epsilon\to 0}\limsup_{T\to\infty}\frac{1}{T}\log M(T,\epsilon,Z).\]
For \(Z=X\) we write \(h(\psi)\). We remark that \(h(\psi,Z)=h(\psi_{1},Z)\) where \(h(\psi_{1},Z)\) is the topological entropy of \(Z\) with respect to the time-1 map \(\psi_{1}\). For each flow-invariant compact set \(Z\subset X\), the variational principle [8] says
\[h(\psi,Z)=\sup_{\nu}h_{\nu}(\psi,Z), \tag{4}\]
where \(\nu\) varies over all flow-invariant measures supported on \(Z\). We say that \(\mu\in\mathcal{M}(\psi)\) supported on \(Z\) is a measure of maximal entropy if \(h_{\mu}(\psi,Z)\) achieves the maximum in (4). If \(Z=X\) and \(\mu\) is the only measure satisfying this condition then \(\mu\) is the unique measure of maximal entropy for the flow \(\psi\).
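As an orienting example (a classical fact, recalled here only for context and not used below): for the geodesic flow \(\phi_{t}\) of a compact surface of constant curvature \(-1\) one has \(h(\phi)=1\), and the Liouville measure, which in constant negative curvature coincides with the Bowen-Margulis measure, is the unique measure of maximal entropy.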
## 3 The quotient flow
In what follows we will assume that \((M,g)\) is a compact surface without conjugate points and genus greater than one. The main idea of the section is to build a factor flow of the geodesic flow of \((M,g)\). We follow the constructions of Gelfert and Ruggiero [16].
The properties given in Subsection 2.3 suggest that an equivalence relation identifying each curve \(\mathcal{I}(\theta)\) with a single point \([\theta]\) will do the job. Two points \(\theta,\eta\in T_{1}M\) are equivalent, \(\theta\sim\eta\), if and only if \(\eta\in\mathcal{F}^{s}(\theta)\) and for every \(\tilde{\theta},\tilde{\eta}\in T_{1}\tilde{M}\) lifts of \(\theta,\eta\) with \(\tilde{\eta}\in\tilde{\mathcal{F}}^{s}(\tilde{\theta})\), it holds that \(\gamma_{\tilde{\theta}}\) and \(\gamma_{\tilde{\eta}}\) are bi-asymptotic.
**Lemma 3.1**.: _For every \(\eta,\theta\in T_{1}M\), \(\eta\sim\theta\) if and only if \(\eta\in\mathcal{I}(\theta)\)._
Proof.: If \(\eta\sim\theta\) then there exist lifts \(\tilde{\eta},\tilde{\theta}\) of \(\eta,\theta\) such that \(\tilde{\eta}\in\tilde{\mathcal{F}}^{s}(\tilde{\theta})\) and \(\gamma_{\tilde{\eta}},\gamma_{\tilde{\theta}}\) are bi-asymptotic. Corollary 2.1 says that \(\tilde{\eta}\in\tilde{\mathcal{I}}(\tilde{\theta})\) and projecting by \(d\pi\) we get \(\eta\in\mathcal{I}(\theta)\). The reverse implication is straightforward.
This lemma and the properties of the horospherical leaves guarantee that the above relation is an equivalence relation. This relation induces a quotient space \(X\) and a quotient map
\[\chi:T_{1}M\to X\qquad\theta\mapsto\chi(\theta)=[\theta],\]
where \([\theta]\) is the equivalence class of \(\theta\). The geodesic flow and the quotient map induce a quotient flow
\[\psi:\mathbb{R}\times X\to X\qquad(t,[\theta])\mapsto\psi_{t}[\theta]=[\phi_{t }(\theta)]=\chi\circ\phi_{t}(\theta).\]
We endow \(X\) with the quotient topology. We state below some properties of these new objects.
**Lemma 3.2**.: _Let \(M\) be a compact surface without conjugate points of genus greater than one. Then,_
1. _The quotient space_ \(X\) _is a compact space._
2. _The quotient map_ \(\chi\) _is a time-preserving semi-conjugacy hence_ \(\psi_{t}\) _is a time-preserving factor of_ \(\phi_{t}\)_._
Proof.: For item 1, from the definition of the quotient topology we see that \(\chi\) is a continuous surjection. Since \(T_{1}M\) is compact so is \(X\).
For item 2, let us first show that \(\psi_{t}\) is well-defined. Let \([\eta]\in X\) and \(\xi\in T_{1}M\) be any representative of the equivalence class of \(\eta\). So, we have \(\xi\sim\eta\) hence \(\xi\in\mathcal{I}(\eta)=\mathcal{F}^{s}(\eta)\cap\mathcal{F}^{u}(\eta)\) by Lemma 3.1. By the invariance of the horospherical foliations it follows that \(\phi_{t}(\xi)\in\mathcal{F}^{s}(\phi_{t}(\eta))\cap\mathcal{F}^{u}(\phi_{t}( \eta))=\mathcal{I}(\phi_{t}(\eta))\) for every \(t\in\mathbb{R}\). This means that \(\phi_{t}(\xi)\sim\phi_{t}(\eta)\) and hence \(\psi_{t}[\xi]=[\phi_{t}(\xi)]=[\phi_{t}(\eta)]=\psi_{t}[\eta]\). Since \(\chi\) is a continuous surjection, the item follows from the definition of the quotient flow.
The following concept is useful for defining certain open sets of the quotient topology.
**Definition 3.1**.: Let \(A\) be a subset of \(T_{1}M\). We say that \(A\) is saturated with respect to \(\chi\), or simply saturated, if \(\chi^{-1}\circ\chi(A)=A\).
**Lemma 3.3**.: _Let \(M\) be a compact surface without conjugate points of genus greater than one. Then,_
1. _For every_ \(\eta\in T_{1}M\)_,_ \(\chi^{-1}\circ\chi(\eta)=\mathcal{I}(\eta)\)_._
2. _For every open saturated set_ \(U\subset T_{1}M\)_,_ \(\chi(U)\) _is an open set in_ \(X\)_._
Proof.: For item 1, \(\chi^{-1}\circ\chi(\eta)=\chi^{-1}[\eta]\) implies that \(\chi^{-1}\circ\chi(\eta)\) is the equivalence class of \(\eta\) seen as a subset of \(T_{1}M\). By Lemma 3.1, this equivalence class agrees exactly with \(\mathcal{I}(\eta)\). For item 2, by definition \(\chi(U)\) is open in the quotient topology if \(\chi^{-1}(\chi(U))\) is open in \(T_{1}M\). The result follows since \(\chi^{-1}(\chi(U))=U\).
We extend the above construction to the geodesic flow \(\tilde{\phi}_{t}\) of \((\tilde{M},\tilde{g})\). Following Lemma 3.1, we say that \(\eta,\theta\in T_{1}\tilde{M}\) are equivalent if and only if \(\eta\in\tilde{\mathcal{I}}(\theta)\). As before, the equivalence relation analogously induces a quotient space \(\tilde{X}\), a quotient map \(\tilde{\chi}\) and a quotient flow \(\tilde{\psi}_{t}\). In a similar way, \(\tilde{\chi}\) is a time-preserving semi-conjugacy between \(\tilde{\phi}_{t}\) and \(\tilde{\psi}_{t}\). Moreover, since \(T_{1}\tilde{M}\) is a covering space of \(T_{1}M\) with covering map \(d\pi\), we see that \(\tilde{X}\) is also a covering space of \(X\) with a covering map \(\Pi\) induced by \(d\pi\):
\[\Pi:\tilde{X}\to X,\qquad[\tilde{\eta}]\mapsto\Pi[\tilde{\eta}]=\chi\circ d \pi(\tilde{\eta}). \tag{5}\]
It is not hard to show that \(\Pi\) is a well-defined covering map using the map \(d\pi\).
## 4 Topological properties of the quotient flow
In this section we build a special basis of neighborhoods of the quotient topology of \(X\). We extend a construction made by Gelfert and Ruggiero for the case of compact higher genus surfaces without focal points (Section 4 of [16]). As an application, we show that \(X\) is a compact metrizable space.
We highlight that for every \(\eta\in T_{1}M\), the set \(\mathcal{F}^{s}(\eta)\) (\(\mathcal{F}^{u}(\eta)\)) is the orbit of a continuous complete flow: the stable (unstable) horocycle flow. The same holds for \(\tilde{\mathcal{F}}^{s}(\tilde{\eta}),\tilde{\mathcal{F}}^{u}(\tilde{\eta})\) for every lift \(\tilde{\eta}\in T_{1}\tilde{M}\) of \(\eta\). Each of such sets is a Lipschitz embedded curve and can be parametrized by arc-length \(c^{s}_{\eta}:\mathbb{R}\to\mathcal{F}^{s}(\eta),c^{u}_{\eta}:\mathbb{R}\to \mathcal{F}^{u}(\eta),\tilde{c}^{s}_{\eta}:\mathbb{R}\to\tilde{\mathcal{F}}^{ s}(\tilde{\eta})\) and \(\tilde{c}^{u}_{\eta}:\mathbb{R}\to\tilde{\mathcal{F}}^{u}(\tilde{\eta})\). In particular, each connected subset of \(\mathcal{F}^{s,u}(\eta),\tilde{\mathcal{F}}^{s,u}(\tilde{\eta})\) is homeomorphic to an interval.
The construction of a basis for the quotient space requires a better understanding of the set of expansive points \(\mathcal{R}_{0}\subset T_{1}M\). Assuming no focal points or continuous Green bundles, Gelfert and Ruggiero showed that \(\mathcal{R}_{0}\) is open and dense. This might no longer be the case if we drop these hypotheses. Let us start with the following elementary lemma.
**Lemma 4.1**.: _A non-degenerate real interval cannot be written as the union of two or more pairwise disjoint non-degenerate closed intervals._
**Lemma 4.2**.: _Let \(\theta\in T_{1}\tilde{M}\) be a non-expansive point and \(\tilde{\mathcal{I}}(\theta)\) be its non-trivial class. Then the boundary points of \(\tilde{\mathcal{I}}(\theta)\) are accumulated by expansive points._
Proof.: Let \(c:\mathbb{R}\to\tilde{\mathcal{F}}^{s}(\theta)\) be an arc-length parametrization of \(\tilde{\mathcal{F}}^{s}(\theta)\). Let \(a,b\in\mathbb{R}\) with \(a<b\) be such that \(\tilde{\mathcal{I}}(\theta)=c([a,b])\). Suppose that the boundary point \(c(b)\) is not accumulated by expansive points. Then, there exists \(\delta>0\) such that \(c((b,b+\delta))\) contains no expansive points. So, either \(c((b,b+\delta))\) is contained in a single class or it is a disjoint union of distinct non-trivial classes. Since classes are closed, Lemma 4.1 shows that \(c([b,b+\delta])\) is contained in a single class. So, \(c(b)\) is a common point of this class and of \(\tilde{\mathcal{I}}(\theta)\), hence \(c([a,b+\delta])\) is contained in a single class, a contradiction to \(c(b)\) being a boundary point.
Next, for each \(\theta\in T_{1}\tilde{M}\), let us define a family of open neighborhoods \(A_{i}\) of \(\tilde{\mathcal{I}}(\theta)\) such that \(\tilde{\chi}(A_{i})\) are open neighborhoods of \(\tilde{\chi}(\theta)\) as well. For every \(\delta,\epsilon>0\), there exist \(a,b\in\mathbb{R}\) and a map
\[R:(a-\epsilon,b+\epsilon)\times(-\delta,\delta)\to T_{1}\tilde{M},\qquad\text{ satisfying the conditions:}\]
1. Let \((0,s)\mapsto R(0,s)\) be the arc-length parametrization of a \(\delta\)-neighborhood of \(\theta\) in \(V(\theta)\) with respect to the Sasaki metric where \(V(\theta)\) is the vertical subspace of \(\theta\).
2. For each \(s_{0}\in(-\delta,\delta)\), let \((r,s_{0})\mapsto R(r,s_{0})\) be the arc-length parametrization of the continuous curve \(R(\cdot,s_{0})\) in \(\tilde{\mathcal{F}}^{s}(R(0,s_{0}))\).
3. \(R([a,b],0)=\tilde{\mathcal{I}}(\theta)\), \(R(0,0)=\theta\) and \((r,0)\mapsto R(r,0)\) be the arc-length parametrization of a \(\epsilon\)-neighborhood of \(\tilde{\mathcal{I}}(\theta)\) in \(\tilde{\mathcal{F}}^{s}(\theta)\).
The image of \(R\) is denoted by \(\Sigma=\Sigma(\theta,\epsilon,\delta)=R((a-\epsilon,b+\epsilon)\times(-\delta, \delta))\). The continuity of the horospherical foliations ensures that \(R\) is a homeomorphism and \(\Sigma\) is a \(2\)-dimensional section containing \(\tilde{\mathcal{I}}(\theta)\). Note that \(\Sigma\) is foliated by stable horospherical leaves of points in \(V(\theta)\). Since these leaves are topologically transverse to the geodesic flow, \(\Sigma\) is a cross section. For \(\tau>0\), Brouwer's open mapping Theorem gives the following open \(3\)-dimensional neighborhood of \(\tilde{\mathcal{I}}(\theta)\):
\[B=B(\theta,\epsilon,\delta,\tau)=\bigcup_{|t|<\tau}\tilde{\phi}_{t}(\Sigma).\]
We next define a projection map \(Pr:B\to\Sigma\). For every \(\eta\in B\), \(Pr(\eta)\) is the projection of \(\eta\) along the geodesic flow \(\tilde{\phi}_{t}\). From the properties of \(\tilde{\phi}_{t}\), we see that \(Pr\) is a continuous surjection. For every \(\eta\in\Sigma\), we define the stable (unstable) interval and their intersection
\[W^{s}(\eta)=\tilde{\mathcal{F}}^{s}(\eta)\cap\Sigma,\quad W^{u}(\eta)=Pr( \tilde{\mathcal{F}}^{u}(\eta)\cap B)\quad\text{ and }\quad[\xi,\eta]=W^{s}(\xi)\cap W^{u}(\eta).\]
Since \(\Sigma\) is foliated by the stable horospherical foliation \(\tilde{\mathcal{F}}^{s}\), we see that \(W^{s}\) is exactly \(\tilde{\mathcal{F}}^{s}\) while \(W^{u}\) is not the unstable horospherical foliation \(\tilde{\mathcal{F}}^{u}\) because the geodesic flow cannot have a local section that is foliated by both \(\tilde{\mathcal{F}}^{s}\) and \(\tilde{\mathcal{F}}^{u}\).
To build a new cross section let us choose four expansive points \(\theta_{1},\theta_{2},\eta_{1},\eta_{2}\in\Sigma\). Since \(\tilde{\mathcal{I}}(\theta)\) is non-trivial, Lemma 4.2 says that for \(\epsilon>0\) there exist \(c\in(a-\epsilon,a)\) and \(d\in(b,b+\epsilon)\) such that \(\theta_{1}=R(c,0)\) and \(\theta_{2}=R(d,0)\) are expansive points of \(W^{s}(\theta)\). This also implies that
\[\tilde{\mathcal{I}}(\theta)=R([a,b],0)\subset R([c,d],0) \tag{6}\]
We define the upper and lower region of \(\Sigma\) by
\[\Sigma_{+}=\{R(r,s):r\in(a-\epsilon,b+\epsilon),s>0\}\text{ and }\Sigma_{-}=\{R(r,s):r\in(a- \epsilon,b+\epsilon),s<0\}.\]
Pick some expansive points \(\eta_{1}\in W^{u}(\theta_{1})\cap\Sigma_{+}\) and \(\eta_{2}\in W^{u}(\theta_{2})\cap\Sigma_{-}\). The new cross section \(U=U(\theta,\epsilon,\delta,\theta_{1},\theta_{2},\eta_{1},\eta_{2})\subset\Sigma\) is the open \(2\)-dimensional region in \(\Sigma\) bounded by \(W^{u}(\theta_{1})\), \(W^{u}(\theta_{2})\), \(W^{s}(\eta_{1})\) and \(W^{s}(\eta_{2})\).
**Lemma 4.3**.: _The open cross section \(U\subset\Sigma\) is a saturated set containing \(\tilde{\mathcal{I}}(\theta)\)._
Proof.: From relation (6) we see that \(\tilde{\mathcal{I}}(\theta)\) is surrounded by \(\theta_{1},\theta_{2}\) in \(W^{s}(\theta)\) hence it is included in \(U\). Now, suppose by contradiction that there exists a non-expansive \(\eta\in U\) such that \(\tilde{\mathcal{I}}(\eta)\) is not included in \(U\). This implies that \(\tilde{\mathcal{I}}(\eta)\) intersects the boundary of \(U\) at some \(\xi\). Since \(\eta\in\tilde{\mathcal{I}}(\xi)\) it follows that \(\eta\in W^{s}(\xi)\cap W^{u}(\xi)\). Thus \(\eta\) belongs to the boundary of \(U\), a contradiction.
As above, for \(\tau>0\) Brouwer's open mapping Theorem says that
\[A=A(\theta,\epsilon,\delta,\tau,\theta_{1},\theta_{2},\eta_{1},\eta_{2})= \bigcup_{|t|<\tau}\tilde{\phi}_{t}(U)\]
is an open \(3\)-dimensional neighborhood of \(\tilde{\mathcal{I}}(\theta)\). Since \(U\) is saturated so is \(A\). Thus, for every \(\theta\in T_{1}\tilde{M}\), we have built a family
\[\{A(\theta,\epsilon,\delta,\tau,\theta_{1},\theta_{2},\eta_{1},\eta_{2}): \epsilon,\delta,\tau>0,\theta_{1},\theta_{2}\in W^{s}(\theta),\eta_{1}\in W^{u} (\theta_{1}),\eta_{2}\in W^{u}(\theta_{2})\}\]
of saturated neighborhoods of \(\tilde{\mathcal{I}}(\theta)\). Consider the family of quotients of sets
\[\{\tilde{\chi}(A(\theta,\epsilon,\delta,\tau,\theta_{1},\theta_{2},\eta_{1}, \eta_{2})):\epsilon,\delta,\tau>0,\theta_{1},\theta_{2}\in W^{s}(\theta),\eta_ {1}\in W^{u}(\theta_{1}),\eta_{2}\in W^{u}(\theta_{2})\}.\]
**Lemma 4.4**.: _For every \(\theta\in T_{1}\tilde{M}\), the family_
\[\mathcal{A}_{\theta}=\{\tilde{\chi}(A(\theta,\epsilon_{l},\delta_{m},\tau_{ n})):\epsilon_{l}=1/l,\delta_{m}=1/m,\tau_{n}=1/n\text{ with }l,m,n\in\mathbb{N}\}\]
_is a countable basis of neighborhoods of \([\theta]\in\tilde{X}\). Hence \(\tilde{X}\) is first countable and \(\{\mathcal{A}_{\theta}:\theta\in T_{1}\tilde{M}\}\) is a basis for the quotient topology of \(\tilde{X}\)._
Proof.: For every \(\theta\in T_{1}\tilde{M}\), we know that \(A=A(\theta,\epsilon,\delta,\tau,\theta_{1},\theta_{2},\eta_{1},\eta_{2})\) is an open neighborhood of \(\tilde{\mathcal{I}}(\theta)\). Since \(A\) is saturated it follows that \(\tilde{\chi}^{-1}\circ\tilde{\chi}(A)=A\). We see that \(\tilde{\chi}(A)\subset\tilde{X}\) is an open set containing \([\theta]\) and \(\{\tilde{\chi}(A(\theta,\epsilon,\delta,\tau,\theta_{1},\theta_{2},\eta_{1},\eta_{2}))\}\) is a family of open neighborhoods of \([\theta]\in\tilde{X}\).

Choosing \(\epsilon,\delta,\tau>0\) small enough, every neighborhood \(V\) of \(\tilde{\mathcal{I}}(\theta)\) contains some \(A(\theta,\epsilon,\delta,\tau,\theta_{1},\theta_{2},\eta_{1},\eta_{2})\). Given an open set \(U\subset\tilde{X}\) containing \([\theta]\), \(\tilde{\chi}^{-1}(U)\) is an open neighborhood of \(\tilde{\mathcal{I}}(\theta)\). So, there exists \(A(\theta,\epsilon,\delta,\tau,\theta_{1},\theta_{2},\eta_{1},\eta_{2})\subset\tilde{\chi}^{-1}(U)\) and hence \(\tilde{\chi}(A(\theta,\epsilon,\delta,\tau,\theta_{1},\theta_{2},\eta_{1},\eta_{2}))\subset U\). Therefore the collection \(\{\tilde{\chi}(A(\theta,\epsilon,\delta,\tau,\theta_{1},\theta_{2},\eta_{1},\eta_{2}))\}\) is a basis of neighborhoods of \([\theta]\in\tilde{X}\).

This property does not depend on the specific choices of the parameters \(\theta_{1},\theta_{2},\eta_{1},\eta_{2}\in\Sigma\), only on the parameters \(\epsilon,\delta,\tau>0\). Choosing \(\epsilon_{l}=1/l,\delta_{m}=1/m,\tau_{n}=1/n\) with \(l,m,n\in\mathbb{N}\) we still have that \(\mathcal{A}_{\theta}=\{\tilde{\chi}(A(\theta,\epsilon_{l},\delta_{m},\tau_{n})):l,m,n\in\mathbb{N}\}\) is a basis of neighborhoods of \([\theta]\). So, \(\mathcal{A}_{\theta}\) is a countable basis of neighborhoods of \([\theta]\).
This basis is important because it provides an explicit description of basic open sets of the quotient topology. So far, we have a family of neighborhoods for every \(\tilde{\mathcal{I}}(\theta)\subset T_{1}\tilde{M}\) and a basis of neighborhoods of \([\theta]\in\tilde{X}\). From Equation (5) in Section 3, we recall that \(d\pi:T_{1}\tilde{M}\to T_{1}M\) and \(\Pi:\tilde{X}\to X\) are covering maps. Projecting the above families of open neighborhoods by \(d\pi\) and \(\Pi\) respectively, we get corresponding families of open neighborhoods for every \(\mathcal{I}(\theta)\subset T_{1}M\) and every \([\theta]\in X\). Thus, \(X\) is first countable and \(\{\Pi(\mathcal{A}_{\theta}):\theta\in T_{1}\tilde{M}\}\) is a basis for the quotient topology of \(X\).
To show the metrizability of \(X\) we recall a basic topological result [30].
**Proposition 4.1**.: _If \(f:X\to Y\) is a continuous surjection from a compact metric space onto a Hausdorff space, then \(Y\) is metrizable._
**Lemma 4.5**.: _Let \(M\) be a compact surface without conjugate points and genus greater than one. Then, the quotient space \(X\) is a compact metrizable space._
Proof.: Since \(\chi\) is a continuous surjection, \(X\) is compact. We next show that \(X\) is Hausdorff. Choose two distinct points \([\theta],[\eta]\in X\) and suppose that \(\mathcal{F}^{s}(\theta)\cap\mathcal{F}^{s}(\eta)=\emptyset\). Choosing \(\delta\) small enough in Lemma 4.4, we can build disjoint basic
open sets because \(\mathcal{F}^{s}\) is a foliation of \(T_{1}M\). We now suppose that \(\mathcal{F}^{s}(\theta)\cap\mathcal{F}^{s}(\eta)\neq\emptyset\) hence \(\mathcal{F}^{s}(\theta)=\mathcal{F}^{s}(\eta)\). Choosing \(\epsilon\) small enough in Lemma 4.4, the basic open sets of \(\theta\) and \(\eta\) are disjoint because \(\mathcal{F}^{u}\) is a foliation of \(T_{1}M\). The result follows from the application of Proposition 4.1 to our case.
So, we choose some distance on \(X\) that is compatible with the quotient topology. We denote it by \(d\) and call it the quotient distance.
## 5 Topological dimension of the quotient space
This section is devoted to show that the topological dimension of the quotient space is at least two. We first define the topological dimension [23]. Let \(X\) be a topological space and \(\mathcal{U}\) be an open cover of \(X\). The order of \(\mathcal{U}\) is the smallest \(n\in\mathbb{N}\) such that every \(x\in X\) belongs to at most \(n\) sets of \(\mathcal{U}\). An open refinement of \(\mathcal{U}\) is another open cover, each of whose sets is a subset of a set in \(\mathcal{U}\). The topological dimension of \(X\) is the minimum \(n\) such that every \(\mathcal{U}\) has an open refinement of order \(n+1\) or less. We have as standard examples the open sets of \(\mathbb{R}^{n}\). For every open set \(U\subset\mathbb{R}^{n}\), the topological dimension of \(U\) is \(n\).
**Theorem 5.1** ([30]).: _Let \(f:X\to Y\) be a homeomorphism between topological spaces. Then, the topological dimension of \(X\) and \(Y\) are equal._
Let \(X\) be a topological space, \(f:X\to\mathbb{R}^{2}\) be a continuous map and \(U\subset X\) be an open set. We say that \(U\) is a topological surface if the restriction of \(f\) to \(U\) is a homeomorphism. Theorem 5.1 implies that every topological surface has topological dimension \(2\). Let \(\tilde{X}\) and \(X\) be the quotient spaces defined in Section 3. To estimate their topological dimensions we find a topological surface passing through every point.
**Lemma 5.1**.: _Let \(M\) be a compact surface without conjugate points and genus greater than one. Then, for every \([\theta]\in\tilde{X}\) there exists a topological surface \(S_{[\theta]}\) containing \([\theta]\). In particular, \(\tilde{X}\) and \(X\) have topological dimension at least two._
Proof.: Let \(\theta\in T_{1}\tilde{M}\) and \(V_{\theta}\) be the vertical fiber of \(\theta\). Using the geodesic flow we define the set
\[W_{\theta}=\bigcup_{t\in\mathbb{R}}\phi_{t}(V_{\theta})\subset T_{1}\tilde{M}.\]
Since \(V_{\theta}\) is homeomorphic to the circle \(S^{1}\), \(W_{\theta}\) is homeomorphic to a cylinder hence \(W_{\theta}\) is a topological surface. The divergence of geodesic rays guarantees that for any two distinct \(\eta,\xi\in V_{\theta}\), \(\eta\not\in\tilde{\mathcal{I}}(\xi)\). So, the restriction of \(\tilde{\chi}\) to \(W_{\theta}\) is injective hence bijective onto its image. This implies that \(\tilde{\chi}(W_{\theta})\subset\tilde{X}\) is homeomorphic to a cylinder and the result follows. Theorem 5.1 implies that the topological dimension of \(\tilde{X}\) is at least two. This conclusion extends to \(X\) because \(\tilde{X}\) and \(X\) are locally homeomorphic.
In [16, 17], Gelfert and Ruggiero showed that \(X\) and \(\tilde{X}\) are topological \(3\)-manifolds in two cases: compact surfaces without focal points of genus greater than one, and compact surfaces without conjugate points of genus greater than one with continuous Green bundles. It is not known whether the dimension of the quotient space is \(3\) without assuming one of these two conditions.
## 6 Topological dynamics of the quotient flow
Section 3 defines a quotient model: a continuous quotient flow \(\psi_{t}:X\to X\) time-preserving semi-conjugate to the geodesic flow \(\phi_{t}\). Our goal is to show that \(\psi_{t}\) has typical properties of hyperbolic topological dynamics like expansivity, local product structure and topological mixing. Notice that the geodesic flow \(\phi_{t}\) may not be expansive due to the presence of non-trivial strips.
**Theorem 6.1**.: _Let \(M\) be a compact surface without conjugate points of genus greater than one and \(\psi_{t}:X\to X\) be the quotient flow. Then, \(\psi_{t}\) is topologically mixing, expansive and has a local product structure. Moreover, \(\psi_{t}\) has the pseudo-orbit tracing and specification properties._
We will prove Theorem 6.1 in several steps. Since the geodesic flow \(\phi_{t}\) is topologically mixing (Theorem 2.4) and \(\chi\) is a continuous time-preserving semi-conjugacy, we deduce that \(\psi_{t}\) is also topologically mixing.
Recall that for every \(\eta\in T_{1}M\), Lemma 4.4 gives a relationship between neighborhoods of \([\eta]\in X\) and special neighborhoods of \(\mathcal{I}(\eta)\). We use these basic open sets to get a relationship between the Sasaki distance \(d_{s}\) (Equation (1)) and the quotient distance \(d\).
**Lemma 6.1**.: _Let \(Q>0\) be the Morse constant given in Theorem 2.2. Then, there exist \(r_{0},s_{0}>0\) such that for every \([\xi],[\eta]\in X\) with \(d([\xi],[\eta])\leq r_{0}\),_
\[d_{s}(\tilde{\xi},\tilde{\eta})\leq Q+s_{0},\]
_for some lifts \(\tilde{\xi},\tilde{\eta}\in T_{1}\tilde{M}\) of \(\xi,\eta\in T_{1}M\)._
Proof.: We consider the basic open sets \(A(\eta,\epsilon,\delta,\tau)\) provided by Lemma 4.4. For every \(\theta\in T_{1}M\), choose \(\epsilon,\delta,\tau>0\) small enough so that \(A(\theta,\epsilon,\delta,\tau)\) is evenly covered by \(d\pi\). Clearly, the family \(\mathcal{A}=\{\chi(A(\theta,\epsilon,\delta,\tau)):\theta\in T_{1}M\}\) is an open cover of the compact space \(X\). Let \(r_{0}>0\) be a Lebesgue number of \(\mathcal{A}\). Thus, for every \([\eta],[\xi]\in X\) with \(d([\eta],[\xi])\leq r_{0}\), there exists \(\theta\in T_{1}M\) such that
\[[\xi]\in B([\eta],r_{0})\subset\chi(A(\theta,\epsilon,\delta,\tau))\in \mathcal{A},\]
where \(B([\eta],r_{0})\) is the \(r_{0}\)-closed ball centered at \([\eta]\). Since \(A(\theta,\epsilon,\delta,\tau)\) is evenly covered by \(d\pi\), for every lift \(\tilde{\theta}\) of \(\theta\) there exist lifts \(\tilde{A}(\tilde{\theta},\epsilon,\delta,\tau)\) and \(\tilde{\eta},\tilde{\xi}\in\tilde{A}(\tilde{\theta},\epsilon,\delta,\tau)\) of \(A(\theta,\epsilon,\delta,\tau),\eta,\xi\) respectively. As \(\epsilon,\delta,\tau\) are small enough, there exists \(s_{0}>0\) so that \(Diam(A(\tilde{\theta},\epsilon,\delta,\tau))\leq Q+s_{0}\) hence \(d_{s}(\tilde{\eta},\tilde{\xi})\leq Q+s_{0}\).
**Lemma 6.2**.: _For every \(\epsilon>0\) there exists \(\delta>0\) such that if \([\xi]\in X\) satisfies \(d([\xi],\psi_{\tau}[\xi])\leq\delta\) for some \(\tau\in\mathbb{R}\), then \(|\tau|\leq\epsilon\)._
Proof.: By contradiction, suppose there exist \(\epsilon_{0}>0\) and sequences \([\xi_{n}]\in X,\tau_{n}\in\mathbb{R}\) such that for every \(n\geq 1\)
\[d([\xi_{n}],\psi_{\tau_{n}}[\xi_{n}])\leq\frac{1}{n}\quad\text{ and }\quad|\tau_{n}|\geq \epsilon_{0}. \tag{7}\]
Up to subsequences, we can assume that \(\tau_{n}\to T\) and \([\xi_{n}]\to[\xi]\). Since \(\psi_{t}\) is continuous, \(\psi_{\tau_{n}}[\xi_{n}]\to\psi_{T}[\xi]\). On the other hand inequalities (7) say that \([\xi_{n}]\) and \(\psi_{\tau_{n}}[\xi_{n}]\) converge to the same limit \([\xi]=\psi_{T}[\xi]\). This holds if and only if \(T=0\). Thus \(\tau_{n}\to 0\), which contradicts inequalities (7).
**Lemma 6.3**.: _The quotient flow \(\psi_{t}\) is expansive._
Proof.: Let \(r_{0}>0\) be given by Lemma 6.1. We first show that if there are two orbits of \(\psi_{t}\) having Hausdorff distance bounded by \(r_{0}\), then the orbits agree. Let \([\eta],[\xi]\in X\) with \(d(\psi_{t}[\eta],\psi_{\rho(t)}[\xi])\leq r_{0}\) for every \(t\in\mathbb{R}\) and some reparametrization \(\rho\). By Lemma 6.1, there exist lifts \(\tilde{\eta},\tilde{\xi}\in T_{1}\tilde{M}\) of \(\eta,\xi\) such that
\[d_{s}(\phi_{t}(\tilde{\eta}),\phi_{\rho(t)}(\tilde{\xi}))\leq Q+s_{0},\text{ for every }t\in\mathbb{R}.\]
Thus, the orbits of \(\tilde{\eta}\) and \(\tilde{\xi}\) have Hausdorff distance bounded by \(Q+s_{0}\) hence the orbits are bi-asymptotic. This implies that there exists \(\tau\in\mathbb{R}\) so that \(\tilde{\xi}\in\tilde{\mathcal{I}}(\phi_{\tau}(\tilde{\eta}))\) hence \([\xi]=\psi_{\tau}[\eta]\).
Given \(\epsilon>0\), Lemma 6.2 yields \(\delta_{1}>0\) satisfying its statement. Let \(\delta=\min(\delta_{1},r_{0})\). If the orbits of \([\eta]\) and \([\xi]\) have Hausdorff distance bounded by \(\delta\leq r_{0}\) then \([\xi]=\psi_{\tau}[\eta]\) and \(|\tau|\leq\epsilon\) because \(d([\eta],\psi_{\tau}[\eta])=d([\eta],[\xi])\leq\delta\leq\delta_{1}\).
### Local product structure
We now deal with the existence of a local product structure. Though \(\phi_{t}\) has no local product structure in general, it has a related property. To look for strong stable and unstable sets we start with the horospherical leaves of the geodesic flow. These leaves enjoy a sort of weak local product structure provided by their heteroclinic connections (Theorem 2.4): for every \(\eta,\xi\in T_{1}\tilde{M}\) with \(\xi\not\in\tilde{\mathcal{F}}^{cu}(-\eta)\), there exists \(\theta\in T_{1}\tilde{M}\) such that
\[\tilde{\mathcal{F}}^{s}(\eta)\cap\tilde{\mathcal{F}}^{cu}(\xi)=\tilde{ \mathcal{I}}(\theta). \tag{8}\]
Though these intersections always exist, they are generally not unique. This is because \(\theta\) may be non-expansive and in such a case \(\tilde{\mathcal{I}}(\theta)\) would be non-trivial.
The definition of \(\tilde{\chi}\) strongly suggests that the quotients of the horospherical leaves are natural candidates to be the strong stable and unstable sets of \(\psi_{t}\). For every \(\eta\in T_{1}M\), we define
\[V^{s}[\eta]=\chi(\mathcal{F}^{s}(\eta)),\qquad V^{u}[\eta]=\chi(\mathcal{F}^{ u}(\eta)),\]
\[V^{cs}[\eta]=\chi(\mathcal{F}^{cs}(\eta))\quad\text{ and }\quad V^{cu}[\eta]=\chi( \mathcal{F}^{cu}(\eta)).\]
We consider some connected components of \(V^{s}[\eta]\) and \(V^{u}[\eta]\). For every \([\eta]\in X\) and every open set \(U\subset X\) containing \([\eta]\), we denote by \(V^{s}[\eta]\cap U_{c}\) the connected
component of \(V^{s}[\eta]\cap U\) containing \([\eta]\). Similarly, we write \(V^{u}[\eta]\cap U_{c}\), \(V^{cs}[\eta]\cap U_{c}\) and \(V^{cu}[\eta]\cap U_{c}\). Let \([\eta],[\xi]\in X\) be close enough. If there exists an open set \(U\subset X\) with \([\eta],[\xi]\in U\) such that \(V^{s}[\eta]\cap U_{c}\) and \(V^{cu}[\xi]\cap U_{c}\) intersect, then we define
\[V^{s}[\eta]\cap V^{cu}[\xi]=(V^{s}[\eta]\cap U_{c})\cap(V^{cu}[\xi]\cap U_{c})\,. \tag{9}\]
The heteroclinic connections (8) provide \(\theta\in T_{1}M\), \(\tau\in\mathbb{R}\) and lifts \(\tilde{\theta},\tilde{\eta},\tilde{\xi}\in T_{1}\tilde{M}\) of \(\theta,\eta,\xi\) such that \(\tilde{\xi}\not\in\tilde{\mathcal{F}}^{cu}(-\tilde{\eta})\) and
\[\tilde{\mathcal{F}}^{s}(\tilde{\eta})\cap\tilde{\mathcal{F}}^{cu}(\tilde{\xi })=\tilde{\mathcal{F}}^{s}(\tilde{\eta})\cap\tilde{\mathcal{F}}^{u}(\tilde{ \phi}_{\tau}(\tilde{\xi}))=\tilde{\mathcal{I}}(\tilde{\theta}).\]
\[V^{s}[\eta]\cap V^{cu}[\xi]=V^{s}[\eta]\cap V^{u}(\psi_{\tau}[\xi])=\chi\circ d \pi(\tilde{\mathcal{I}}(\tilde{\theta}))=[\mathcal{I}(\theta)]=[\theta].\]
By the definition of the quotient, this intersection is always unique if it exists. Denote by \(B([\xi],r)\) the open ball of radius \(r>0\) centered at \([\xi]\).
**Lemma 6.4**.: _Let \(r_{0}>0\) be given by Lemma 6.1. There cannot exist \(\epsilon_{0}>0\) and sequences \([\eta_{n}],[\xi_{n}]\in X\), \(t_{n}\in\mathbb{R}\) such that \(t_{n}\to\infty\) and for every \(n\geq 1\), \([\eta_{n}]\in V^{s}[\xi_{n}]\), \([\eta_{n}]\) belongs to the connected component of \(V^{s}[\xi_{n}]\cap B([\xi_{n}],r_{0})\) containing \([\xi_{n}]\),_
\[d([\eta_{n}],[\xi_{n}])\leq r_{0}\quad\text{ and }\quad d(\psi_{t_{n}}[\eta_{n} ],\psi_{t_{n}}[\xi_{n}])\geq\epsilon_{0}. \tag{10}\]
_An analogous statement holds for the unstable case._
Proof.: By contradiction suppose there exist the objects of the statement satisfying the inequalities (10). Lemma 6.1 says that there exist lifts \(\tilde{\eta}_{n},\tilde{\xi}_{n}\in T_{1}\tilde{M}\) of \(\eta_{n},\xi_{n}\) such that for every \(n\geq 1\), \(d_{s}(\tilde{\eta}_{n},\tilde{\xi}_{n})\leq Q+s_{0}\).
We claim that for every \(n\geq 1\), \(\tilde{\eta}_{n}\in\tilde{\mathcal{F}}^{s}(\tilde{\xi}_{n})\). Otherwise, for some covering isometry \(T\), \(T(\tilde{\eta}_{n})\in\tilde{\mathcal{F}}^{s}(\tilde{\xi}_{n})\). Hence \([\eta_{n}]=[d\pi(T(\tilde{\eta}_{n}))]\) would not belong to the connected component of \(V^{s}[\xi_{n}]\cap B([\xi_{n}],r_{0})\) containing \([\xi_{n}]\), contradicting our assumption, and the claim is proved. By Proposition 2.3 there exist \(A,B>0\) such that for every \(n\geq 1\),
\[d_{s}(\phi_{t}(\tilde{\eta}_{n}),\phi_{t}(\tilde{\xi}_{n}))\leq Ad_{s}(\tilde{ \eta}_{n},\tilde{\xi}_{n})+B\leq A(Q+s_{0})+B=C,\text{ for every }t\geq 0.\]
\[\text{Hence }\quad d_{s}(\phi_{t}(\phi_{t_{n}}\tilde{\eta}_{n}),\phi_{t}(\phi_{t_{n }}\tilde{\xi}_{n}))\leq C\text{ for every }t\geq-t_{n}. \tag{11}\]
Up to subsequences and using covering isometries we can assume that
\[\phi_{t_{n}}(\tilde{\eta}_{n})\to\tilde{\eta}\quad\text{ and }\quad\phi_{t_{n}}( \tilde{\xi}_{n})\to\tilde{\xi}. \tag{12}\]
Since horospherical foliations are invariant by the action of the geodesic flow, we get \(\phi_{t_{n}}(\tilde{\eta}_{n})\in\tilde{\mathcal{F}}^{s}(\phi_{t_{n}}(\tilde{\xi}_{n}))\). Moreover, the continuity of the horospherical foliations gives \(\tilde{\eta}\in\tilde{\mathcal{F}}^{s}(\tilde{\xi})\). Now, let \(t\in\mathbb{R}\). Since \(t_{n}\to\infty\), we see that \(-t_{n}\leq t\) for \(n\) large enough and inequalities (11) yield \(d_{s}(\phi_{t}(\phi_{t_{n}}\tilde{\eta}_{n}),\phi_{t}(\phi_{t_{n}}\tilde{\xi}_{n}))\leq C\). By continuity we obtain \(d_{s}(\phi_{t}(\tilde{\eta}),\phi_{t}(\tilde{\xi}))\leq C\) for every \(t\in\mathbb{R}\).
As \(\tilde{\eta}\in\tilde{\mathcal{F}}^{s}(\tilde{\xi})\), Corollary 2.1 shows that \(\tilde{\eta}\in\tilde{\mathcal{I}}(\tilde{\xi})\) hence \([\eta]=[\xi]\). Applying the map \(\chi\circ d\pi\) to the sequences (12) we get
\[\chi\circ d\pi(\phi_{t_{n}}(\tilde{\eta}_{n}))\to\chi\circ d\pi(\tilde{\eta}) \quad\text{ and }\quad\chi\circ d\pi(\phi_{t_{n}}(\tilde{\xi}_{n}))\to\chi\circ d\pi(\tilde{ \xi}),\]
\[\psi_{t_{n}}[\eta_{n}]\to[\eta]\quad\text{ and }\quad\psi_{t_{n}}[\xi_{n}]\to[\xi].\]
Thus \(d(\psi_{t_{n}}[\eta_{n}],\psi_{t_{n}}[\xi_{n}])\to 0\) as \(n\to\infty\). This contradicts inequalities (10).
An intermediate result to show the relationship between the pairs \(W^{ss}[\eta]\),\(W^{uu}[\eta]\) and \(V^{s}[\eta]\),\(V^{u}[\eta]\) is the so-called uniform contraction. We prove this contraction for \(V^{s}[\eta]\) and \(V^{u}[\eta]\) but only for distances smaller than \(r_{0}\).
**Lemma 6.5**.: _Let \(r_{0}>0\) be given by Lemma 6.1. For every \(\epsilon>0\) and \(D\in(0,r_{0}]\) there exists \(T>0\) such that if \([\eta]\in V^{s}[\xi]\), \([\eta]\) belongs to the connected component of \(V^{s}[\xi]\cap B([\xi],r_{0})\) containing \([\xi]\) and \(d([\eta],[\xi])\leq D\), then_
\[d(\psi_{t}[\eta],\psi_{t}[\xi])\leq\epsilon\quad\text{ for every }t\geq T.\]
_An analogous result holds for the unstable case._
Proof.: By contradiction suppose there exist \(\epsilon_{0}>0,D_{0}\in(0,r_{0}]\) and sequences \([\eta_{n}],[\xi_{n}]\in X\), \(t_{n}\in\mathbb{R}\), such that \(t_{n}\to\infty\) and for every \(n\geq 1\), \([\eta_{n}]\in V^{s}[\xi_{n}]\), \([\eta_{n}]\) belongs to the connected component of \(V^{s}[\xi_{n}]\cap B([\xi_{n}],r_{0})\) containing \([\xi_{n}]\),
\[d([\eta_{n}],[\xi_{n}])\leq D_{0}\leq r_{0}\quad\text{ and }\quad d(\psi_{t_{n}}[ \eta_{n}],\psi_{t_{n}}[\xi_{n}])\geq\epsilon_{0}.\]
This contradicts Lemma 6.4 and proves the statement.
As an immediate consequence we see that \(V^{s}[\eta]\) and \(V^{u}[\eta]\) agree with the strong sets of \(\psi_{t}\) locally for distances smaller than \(r_{0}\).
**Lemma 6.6**.: _Let \(r_{0}>0\) be given by Lemma 6.1. If \([\eta]\in V^{s}[\xi]\), \([\eta]\) belongs to the connected component of \(V^{s}[\xi]\cap B([\xi],r_{0})\) containing \([\xi]\) and \(d([\eta],[\xi])\leq r_{0}\) then_
\[d(\psi_{t}[\eta],\psi_{t}[\xi])\to 0\quad\text{ as }\quad t\to\infty.\]
_In particular, \([\eta]\in W^{ss}[\xi]\). An analogous statement holds for the unstable case._
Proof.: For every \(n\geq 1\), set \(\epsilon_{n}=1/n\) and \(D=r_{0}\) in Lemma 6.5. So, there exists a sequence \(T_{n}\to\infty\) such that \(d(\psi_{t}[\eta],\psi_{t}[\xi])\leq\frac{1}{n}\) for every \(t\geq T_{n}\). This implies that \(d(\psi_{t}[\eta],\psi_{t}[\xi])\to 0\) as \(t\to\infty\).
The local product requires not only the intersection of \(W^{ss}[\eta]\) and \(W^{uu}[\eta]\) but the intersection of the \(\epsilon\)-strong sets \(W^{ss}_{\epsilon}[\eta]\) and \(W^{uu}_{\epsilon}[\eta]\). The following lemma sets a criterion to identify points of \(W^{ss}_{\epsilon}[\eta]\) and \(W^{uu}_{\epsilon}[\eta]\).
**Lemma 6.7**.: _Let \(r_{0}>0\) be given by Lemma 6.1. For every \(\epsilon>0\) there exists \(\delta\in(0,r_{0}]\) such that if \([\eta]\in V^{s}[\xi]\), \([\eta]\) belongs to the connected component of \(V^{s}[\xi]\cap B([\xi],r_{0})\) containing \([\xi]\) and \(d([\eta],[\xi])\leq\delta\), then_
\[d(\psi_{t}[\eta],\psi_{t}[\xi])\leq\epsilon\quad\text{ for every }t\geq 0.\]
_An analogous result holds for the unstable case._
Proof.: By contradiction suppose there exist \(\epsilon_{0}>0\) and sequences \([\eta_{n}],[\xi_{n}]\in X\), \(\delta_{n}\in(0,r_{0}]\) and \(t_{n}\in\mathbb{R}\), such that \(\delta_{n}\to 0\) and for every \(n\geq 1\), \([\eta_{n}]\in V^{s}[\xi_{n}]\), \([\eta_{n}]\) belongs to the connected component of \(V^{s}[\xi_{n}]\cap B([\xi_{n}],r_{0})\) containing \([\xi_{n}]\),
\[d([\eta_{n}],[\xi_{n}])\leq\delta_{n}\leq r_{0}\quad\text{ and }\quad d(\psi_{t_{n}}[ \eta_{n}],\psi_{t_{n}}[\xi_{n}])\geq\epsilon_{0}. \tag{13}\]
We claim that \(t_{n}\to\infty\). Otherwise \(t_{n}\) is bounded and after choosing a subsequence we have \(t_{n}\to T\in\mathbb{R}\). For suitable subsequences, inequalities (13) imply that \([\eta_{n}]\) and \([\xi_{n}]\) converge to the same limit \([\eta]\in X\). Since \(\psi_{t}\) is continuous, \(\psi_{t_{n}}[\eta_{n}]\) and \(\psi_{t_{n}}[\xi_{n}]\) converge to the same limit \(\psi_{T}[\eta]\). This contradicts inequalities (13) and proves the claim. Since \(t_{n}\to\infty\), inequalities (13) contradict Lemma 6.4 and prove the lemma.
From this and Lemma 6.6, we deduce that \(W^{ss}_{\epsilon}[\eta]\) and \(W^{uu}_{\epsilon}[\eta]\) agree with \(V^{s}[\eta]\) and \(V^{u}[\eta]\) locally. The following lemma states that if \([\eta]\) and \([\xi]\) are close enough then their intersection \([\theta]\) is close to \([\eta]\) and \([\xi]\).
**Lemma 6.8**.: _For every \(\epsilon>0\) there exists \(\delta>0\) such that if \([\eta],[\xi]\in X\), \([\theta]\in V^{s}[\eta]\cap V^{u}(\psi_{\tau}[\xi])\) and \(d([\eta],[\xi])\leq\delta\), then_
\[d([\theta],[\eta])\leq\epsilon,\quad d([\theta],\psi_{\tau}[\xi])\leq\epsilon \quad\text{ and }\quad|\tau|\leq\epsilon.\]
Proof.: By contradiction suppose there exist \(\epsilon_{0}>0\) and sequences \([\eta_{n}],[\xi_{n}],[\theta_{n}]\in X\), \(\tau_{n}\in\mathbb{R}\) such that for every \(n\geq 1\), \([\theta_{n}]\in V^{s}[\eta_{n}]\cap V^{u}(\psi_{\tau_{n}}[\xi_{n}])\), \(|\tau_{n}|\geq\epsilon_{0}\),
\[d([\eta_{n}],[\xi_{n}])\leq\frac{1}{n},\quad d([\theta_{n}],[\eta_{n}])\geq \epsilon_{0}\quad\text{ and }\quad d([\theta_{n}],\psi_{\tau_{n}}[\xi_{n}])\geq \epsilon_{0}. \tag{14}\]
Given \(r_{0}>0\) from Lemma 6.1, for every \(n\geq 1\) large enough, \(d([\eta_{n}],[\xi_{n}])\leq\frac{1}{n}\leq r_{0}\). So, we can choose lifts \(\tilde{\eta}_{n},\tilde{\xi}_{n},\tilde{\theta}_{n}\) of \(\eta_{n},\xi_{n},\theta_{n}\) such that \(d_{s}(\tilde{\eta}_{n},\tilde{\xi}_{n})\leq Q+s_{0}\) and \(\tilde{\theta}_{n}\) belongs to the fundamental domain containing \(\tilde{\eta}_{n}\) and \(\tilde{\xi}_{n}\). We claim that for every \(n\geq 1\),
\[\tilde{\theta}_{n}\in\tilde{\mathcal{F}}^{s}(\tilde{\eta}_{n})\cap\tilde{ \mathcal{F}}^{u}(\phi_{\tau_{n}}(\tilde{\xi}_{n})). \tag{15}\]
Otherwise there exist sequences of covering isometries \(T_{n},T^{\prime}_{n}\) such that \(\tilde{\theta}_{n}\in\tilde{\mathcal{F}}^{s}(T_{n}(\tilde{\eta}_{n}))\cap \tilde{\mathcal{F}}^{u}(\phi_{\tau_{n}}(T^{\prime}_{n}(\tilde{\xi}_{n})))\). Thus, there exists an open set \(U\subset X\) containing \([\eta_{n}]\) such that \([\theta_{n}]\) does not belong to the connected component of \(V^{s}[\eta_{n}]\cap U\) containing \([\eta_{n}]\). Similarly, \([\theta_{n}]\) does not belong to the connected component of \(V^{cu}[\xi_{n}]\cap U\) containing \([\xi_{n}]\). This contradicts the definition of intersection and proves the claim.
So, if we use the same covering isometries for all sequences and choose suitable subsequences, we can assume that \(\tilde{\eta}_{n}\to\tilde{\eta},\tilde{\xi}_{n}\to\tilde{\xi},\tilde{\theta}_ {n}\to\tilde{\theta}\) and \(\tau_{n}\to T\). Since we used the same covering isometries for the sequences, the continuity of the horospherical foliations applied to relation (15) yields
\[\tilde{\theta}\in\tilde{\mathcal{F}}^{s}(\tilde{\eta})\cap\tilde{\mathcal{F}}^ {u}(\phi_{T}(\tilde{\xi})). \tag{16}\]
We claim that \(\tilde{\eta}\in\tilde{\mathcal{I}}(\tilde{\xi})\). Otherwise \(\eta\not\in\mathcal{I}(\xi)\) and \(d([\eta],[\xi])>0\). But applying the limit to inequalities (14), we get \(d([\eta],[\xi])=0\) and the claim is proved.
The claim and relation (16) provide that \(\tilde{\theta}\in\tilde{\mathcal{F}}^{s}(\tilde{\eta})\cap\tilde{\mathcal{F}}^ {u}(\phi_{T}(\tilde{\eta}))\). From this, Corollary 2.1 shows that \(\tilde{\theta}\in\tilde{\mathcal{I}}(\tilde{\eta})\). Therefore \([\theta_{n}]\) and \([\eta_{n}]\) converge to the same limit \([\theta]=[\eta]\), which contradicts inequalities (14). This gives \(\delta_{1}>0\) such that \(d([\theta],[\eta])\leq\epsilon\). A similar reasoning yields \(\delta_{2}>0\) such that \(d([\theta],\psi_{\tau}[\xi])\leq\epsilon\).
Finally, we see that \(\tilde{\theta}\in\tilde{\mathcal{I}}(\tilde{\eta})\subset\tilde{\mathcal{F}}^ {u}(\tilde{\eta})\) hence \(\tilde{\theta}\in\tilde{\mathcal{F}}^{u}(\tilde{\eta})\cap\tilde{\mathcal{F}}^ {u}(\phi_{T}(\tilde{\eta}))\). This holds if and only if \(T=0\). Therefore \(\tau_{n}\to 0\), contradicting \(|\tau_{n}|\geq\epsilon_{0}\). We thus get \(\delta_{3}>0\) such that \(|\tau|\leq\epsilon\). Choosing \(\delta=\min(\delta_{1},\delta_{2},\delta_{3})\) we get the result.
This result is important because it grants the continuity of the local product structure.
**Lemma 6.9**.: _The quotient flow \(\psi_{t}\) has a local product structure._
Proof.: Let \(r_{0}>0\) be given by Lemma 6.1 and \(\epsilon\in(0,r_{0}]\). Consider \([\eta],[\xi]\in X\) and \([\theta]\in V^{s}[\eta]\cap V^{u}(\psi_{\tau}[\xi])\). By Lemma 6.7 there exists \(\delta_{1}>0\) such that if \(d([\theta],[\eta])\leq\delta_{1}\) and \(d([\theta],\psi_{\tau}[\xi])\leq\delta_{1}\) then
\[d(\psi_{t}[\theta],\psi_{t}[\eta])\leq\epsilon\quad\text{ and }\quad d(\psi_{-t}[\theta],\psi_{-t}\psi_{\tau}[\xi])\leq\epsilon,\quad\text{ for }t\geq 0. \tag{17}\]
For the same \(\epsilon>0\), Lemma 6.3 gives \(\delta_{2}>0\) such that expansivity holds. Set \(\delta_{m}=\min(\delta_{1},\delta_{2},\epsilon)\). By Lemma 6.8, for \(\delta_{m}>0\) there exists \(\delta>0\) such that if \([\eta],[\xi]\in X\), \([\theta]\in V^{s}[\eta]\cap V^{u}(\psi_{\tau}[\xi])\) and \(d([\eta],[\xi])\leq\delta\) then
\[d([\theta],[\eta])\leq\delta_{m},\quad d([\theta],\psi_{\tau}[\xi])\leq\delta _{m}\quad\text{ and }\quad|\tau|\leq\delta_{m}\leq\epsilon.\]
From this and \(\delta_{m}\leq\epsilon\leq r_{0}\), Lemma 6.6 implies that \([\theta]\in W^{ss}[\eta]\cap W^{uu}(\psi_{\tau}[\xi])\). Furthermore, since \(\delta_{m}\leq\delta_{1}\), \([\theta],[\eta],[\xi]\) satisfy inequalities (17) hence \([\theta]\in W^{ss}_{\epsilon}[\eta]\cap W^{uu}_{\epsilon}(\psi_{\tau}[\xi])\) and \(|\tau|\leq\epsilon\).
Finally, the pseudo-orbit tracing and specification properties are consequences of the previous dynamical properties. More precisely,
1. By Theorem 7.1 of [28], if \(\psi_{t}\) is expansive and has a local product structure then \(\psi_{t}\) has the pseudo-orbit tracing property.
2. By Proposition 6.2 of [16], if \(\psi_{t}\) is expansive, topological mixing and has the pseudo-orbit tracing property then \(\psi_{t}\) has the specification property.
## 7 Uniqueness of the measure of maximal entropy of the geodesic flow
This section is devoted to the study of the uniqueness of the measure of maximal entropy of the geodesic flow. The existence of such a measure follows from a work by Newhouse [24]. Indeed, the result says that a smooth flow on a compact smooth manifold always has a measure of maximal entropy. By hypothesis, the geodesic flow \(\phi_{t}\) is a smooth flow acting on \(T_{1}M\) and the result follows. We remark that in our case, the geodesic flow has positive topological entropy [15].
The strategy for the proof of uniqueness of the measure of maximal entropy is the following. First, the properties of the factor flow imply that \(\psi_{t}\) has a unique measure of maximal entropy. Second, we will show that the lift of this measure to \(T_{1}M\) is the unique measure of maximal entropy for the geodesic flow.
Recall that the quotient model is a quotient flow \(\psi_{t}\) time-preserving semi-conjugate to the geodesic flow \(\phi_{t}\). Bowen and Franco found a criterion to get the uniqueness of the measure of maximal entropy [2, 14].
**Theorem 7.1**.: _Let \(\phi_{t}:X\to X\) be a continuous flow acting on a compact metric space. If \(\phi_{t}\) is expansive and has the specification property then \(\phi_{t}\) has a unique measure of maximal entropy._
From the previous section, Theorem 6.1 says that \(\psi_{t}\) is expansive and has the specification property. Applying Theorem 7.1 to our case, we see that \(\psi_{t}\) has a unique measure of maximal entropy \(\nu\).
To lift \(\nu\) to \(T_{1}M\) and verify the uniqueness property we rely on an abstract theorem proved by Buzzi-Fisher-Sambarino-Vasquez for discrete systems [5]. They constructed a measure of maximal entropy using a classical argument due to Ledrappier and Walters [21]. We recall the construction for our setting. Let \(\phi_{t}:Y\to Y\) and \(\psi_{t}:X\to X\) be two continuous flows on compact metric spaces, \(\chi:Y\to X\) be a time-preserving semi-conjugacy and \(\nu\) be the measure of maximal entropy of \(\psi_{t}\). Assume that \(\psi_{t}\) is expansive, has the specification property and for every \(x\in X\),
\[h(\phi_{1},\chi^{-1}(x))=0. \tag{18}\]
Let \(\epsilon>0\) be an expansivity constant for \(\psi_{t}\). For each \(T>0\), we define the set
\[Per(T,\epsilon)=\{\chi^{-1}(\gamma)\subset Y:\gamma\text{ is a periodic orbit of }\psi_{t}\text{ with period in }[T-\epsilon,T+\epsilon]\}.\]
By expansivity this set is finite. The following lemma states a non-trivial fact about strips in our setting.
**Lemma 7.1**.: _Let \(M\) be a compact surface without conjugate points of genus greater than one and \(Per(T,\epsilon)\) be the set defined above. Then, every subset \(\chi^{-1}(\gamma)\in Per(T,\epsilon)\) is compact and invariant by the geodesic flow \(\phi_{t}\) and_
\[\chi^{-1}(\gamma)=\{\phi_{s}(\mathcal{I}(\xi)):s\in[0,S]\text{ with }S\in[T- \epsilon,T+\epsilon]\}=\phi_{[0,S]}(\mathcal{I}(\xi)).\]
_In particular, its lift \(\tilde{\phi}_{[0,S]}(\tilde{\mathcal{I}}(\tilde{\xi}))\subset T_{1}\tilde{M}\) is a strip of bi-asymptotic orbits of the geodesic flow \(\tilde{\phi}_{t}\) for any lift \(\tilde{\xi}\in T_{1}\tilde{M}\) of \(\xi\)._
We observe that the strip \(\chi^{-1}(\gamma)\) might not have closed orbits of the geodesic flow \(\phi_{t}\) hence its projection \(P(\chi^{-1}(\gamma))\subset M\) might not have closed geodesics of \((M,g)\). However, for every non-closed \(\phi_{t}\)-orbit \(\beta\subset\chi^{-1}(\gamma)\), Lemma 7.1 implies that \(\beta\) and its accumulation points remain in \(\chi^{-1}(\gamma)\). These properties might help to understand the geometry of these particular strips in future studies. Note that for compact surfaces without focal points, the geometry of strips is well-understood due to the flat strip Theorem [26].
It also follows from Lemma 7.1 that there exists a probability measure \(\mu_{\gamma}\) supported on \(\chi^{-1}(\gamma)\) and invariant by the geodesic flow \(\phi_{t}\). So, we can take the average
\[\mu_{T}=\frac{\sum_{\gamma}\mu_{\gamma}}{\#Per(T,\epsilon)},\]
where \(\gamma\) varies according to \(\chi^{-1}(\gamma)\in Per(T,\epsilon)\). We see that \(\mu_{T}\) is a probability measure on \(Y\) invariant by the flow \(\phi_{t}\). Let \(\mu\in\mathcal{M}(\phi)\) be an accumulation point
of the set \((\mu_{T})_{T>0}\) in the weak\({}^{*}\) topology. So, there exists a sequence \(T_{n}\to\infty\) such that \(\mu_{T_{n}}\to\mu\) weakly.
Notice that for every \(\chi^{-1}(\gamma)\in Per(T,\epsilon)\), \(\chi_{*}\mu_{\gamma}\) is a probability measure supported on \(\gamma\) and invariant by the flow \(\psi_{t}\). It follows that \(\chi_{*}(\mu_{T_{n}})\) is a probability measure supported on the union of periodic orbits of \(\psi_{t}\) with period in \([T-\epsilon,T+\epsilon]\). In this case, Bowen showed that \(\chi_{*}(\mu_{T_{n}})\to\nu\) in the weak\({}^{*}\) topology [3]. The continuity of \(\chi_{*}\) and \(\mu_{T_{n}}\to\mu\) provide that \(\chi_{*}\mu=\nu\).
We verify that \(\mu\) is a measure of maximal entropy. Since \((Y,\phi_{t},\mu)\) is an extension of \((X,\psi_{t},\nu)\), we have \(h_{\nu}(\psi_{1})\leq h_{\mu}(\phi_{1})\). Applying Bowen's formula [2] and assumption (18), we conclude that
\[h(\phi_{1})\leq h(\psi_{1})+\sup_{x\in X}h(\phi_{1},\chi^{-1}(x))=h(\psi_{1}) \quad\text{ hence }\quad h(\phi_{1})=h(\psi_{1}).\]
Since \(\nu\) is a measure of maximal entropy for \(\psi_{t}\), \(h(\phi_{1})=h(\psi_{1})=h_{\nu}(\psi_{1})\leq h_{\mu}(\phi_{1})\). So, every accumulation point \(\mu\in\mathcal{M}(\phi)\) of the set \((\mu_{T})_{T>0}\) satisfies:
\[\mu\text{ is a measure of maximal entropy for }\phi_{t}\quad\text{ and }\quad\chi_{*}\mu=\nu. \tag{19}\]
We state Buzzi-Fisher-Sambarino-Vasquez's Theorem for continuous systems. The proof is analogous to the discrete case with minor changes.
**Proposition 7.1**.: _Let \(\phi_{t}:Y\to Y\) and \(\psi_{t}:X\to X\) be two continuous flows on compact metric spaces, \(\chi:Y\to X\) be a time-preserving semi-conjugacy and \(\nu\) be the measure of maximal entropy of \(\psi_{t}\). Assume that \(\psi_{t}\) is expansive, has the specification property and_
1. \(h(\phi_{1},\chi^{-1}(x))=0\) _for every_ \(x\in X\)_._
2. \(\nu\bigg{(}\{\chi(y):\chi^{-1}\circ\chi(y)=\{y\}\}\bigg{)}=1\)_._
_Then, there exists a unique measure of maximal entropy \(\mu\) of \(\phi_{t}\) with \(\chi_{*}\mu=\nu\)._
We apply this proposition to our context. Let \(Y=T_{1}M\), \(\phi_{t}\) be the geodesic flow, \(X\) be the quotient space, \(\psi_{t}\) be the quotient flow, \(\chi\) be the quotient map and \(\nu\) be the unique measure of maximal entropy of \(\psi_{t}\). With these choices, the assumptions of Proposition 7.1 are satisfied except possibly for Hypotheses 1 and 2. Regarding Hypothesis 1, we see that for every \([\eta]\in X\),
\[\chi^{-1}[\eta]=\chi^{-1}\circ\chi(\eta)=\mathcal{I}(\eta). \tag{20}\]
For compact surfaces without conjugate points and genus greater than one, Gelfert and Ruggiero [17] proved that \(h(\phi_{1},\mathcal{I}(\eta))=0\) for every \(\eta\in T_{1}M\). Therefore, Hypothesis 1 is satisfied. Moreover, condition (19) says that there exists a measure of maximal entropy \(\mu\) for the geodesic flow \(\phi_{t}\) such that \(\chi_{*}\mu=\nu\). So, to show the uniqueness it only remains to prove Hypothesis 2.
We express Hypothesis 2 of Proposition 7.1 in our context. By identity (20), this hypothesis has the following form
\[\{\chi(y):\chi^{-1}\circ\chi(y)=\{y\}\}=\{\chi(\eta)\in X:\mathcal{I}(\eta)=\{ \eta\}\}=\chi(\mathcal{R}_{0}).\]
Consequently, Hypothesis 2 becomes
\[\nu(\chi(\mathcal{R}_{0}))=1. \tag{21}\]
To prove this condition, we use Proposition 3.3 of Climenhaga-Knieper-War's work [6]. This proposition restates a classical result of Katok in the context of geodesic flows of surfaces.
**Lemma 7.2**.: _Let \(M\) be a surface without conjugate points of genus greater than one and \(\mu\) be an ergodic measure on \(T_{1}M\) invariant by the geodesic flow._
\[\text{If }\quad h_{\mu}(\phi_{1})>0\quad\text{ then }\quad\mu(\mathcal{R}_{0})=1.\]
We now prove condition (21), and hence the uniqueness of the measure of maximal entropy for the geodesic flow \(\phi_{t}\).
Proof.: As remarked above, by condition (19), \(\mu\) is a measure of maximal entropy and hence \(h_{\mu}(\phi_{1})=h(\phi_{1})>0\). Ergodic decomposition of \(\mu\) provides an ergodic component \(\tau\) with \(h_{\tau}(\phi_{1})>0\). Lemma 7.2 implies that \(\tau(\mathcal{R}_{0})=1\) hence \(\mu(\mathcal{R}_{0})>0\). So, we have
\[\nu(\chi(\mathcal{R}_{0}))=\chi_{*}\mu(\chi(\mathcal{R}_{0}))=\mu(\chi^{-1} \chi\mathcal{R}_{0})=\mu(\mathcal{R}_{0})>0.\]
Since \(\nu\) is ergodic and \(\chi(\mathcal{R}_{0})\) is invariant by \(\psi_{t}\), we get \(\nu(\chi(\mathcal{R}_{0}))=1\).
Finally, we remark that Climenhaga-Knieper-War [6] also showed that the unique measure of maximal entropy has full support. This property can be proven by our methods assuming that the expansive set \(\mathcal{R}_{0}\) is dense. For this, we first restate Proposition 7.3.15 of [13] in our context.
**Proposition 7.2**.: _Let \(X\) be a compact metric space, \(\psi_{t}:X\to X\) be a continuous expansive flow with the specification property and \(\nu\) be its unique measure of maximal entropy. Then, for every \(\epsilon>0\) there exist \(A_{\epsilon},B_{\epsilon}>0\) such that for every \(x\in X\) and every \(T>0\), we have \(A_{\epsilon}\leq e^{Th(\psi_{1})}\nu(B(x,\epsilon,T))\leq B_{\epsilon}\) where \(B(x,\epsilon,T)\) is the \((T,\epsilon)\)-dynamical ball defined in Equation (3) in Subsection 2.5._
**Proposition 7.3**.: _Let \(M\) be a compact surface without conjugate points of genus greater than one and \(\mu\) be its unique measure of maximal entropy. If \(\mathcal{R}_{0}\) is dense in \(T_{1}M\) then \(\mu\) has full support._
Proof.: For \(T=0\) and every \(\epsilon>0\), apply Proposition 7.2 to the quotient flow \(\psi_{t}:X\to X\) and its unique measure of maximal entropy \(\nu\). So, we have \(0<A_{\epsilon}\leq\nu(B(x,\epsilon,0))\leq B_{\epsilon}\) for every \(\epsilon>0\) and every \(x\in X\). Therefore \(\nu\) has full support on \(X\) since \(B(x,\epsilon,0)\) is just an open ball of radius \(\epsilon\) centered at \(x\). Now, let \(U\) be any open set of \(T_{1}M\). By density of \(\mathcal{R}_{0}\), there is an expansive point \(\xi\in U\). Note that the family of open saturated neighborhoods around \(\mathcal{I}(\xi)=\xi\) defined in Section 4 forms a basis of neighborhoods at \(\xi\). Hence there exists an open saturated set \(A=A(\xi,\epsilon^{\prime},\delta^{\prime},\tau^{\prime})\) included in \(U\). Since \(\nu\) has full support and \(\chi(A)\) is an open set of \(X\), the conclusion follows from
\[\mu(U)\geq\mu(A)=\mu(\chi^{-1}\chi(A))=\chi_{*}\mu(\chi(A))=\nu(\chi(A))>0.\]
Although we assumed that \(\mathcal{R}_{0}\) is dense, we believe that this actually holds in our setting. For the moment, the density in this more general case is left for future work. This property holds, for example, for compact higher genus surfaces without conjugate points and with continuous Green bundles, which include the case of surfaces without focal points.
## 8 Acknowledgments
I would like to thank my advisor Rafael Ruggiero for useful discussions. I appreciate the financial support of CAPES and FAPERJ funding agencies during the work. This article was supported in part by INCTMat under the project INCTMat-Faperj (E26/200.866/2018).
|
2306.00248 | TransAct: Transformer-based Realtime User Action Model for
Recommendation at Pinterest | Sequential models that encode user activity for next action prediction have
become a popular design choice for building web-scale personalized
recommendation systems. Traditional methods of sequential recommendation either
utilize end-to-end learning on realtime user actions, or learn user
representations separately in an offline batch-generated manner. This paper (1)
presents Pinterest's ranking architecture for Homefeed, our personalized
recommendation product and the largest engagement surface; (2) proposes
TransAct, a sequential model that extracts users' short-term preferences from
their realtime activities; (3) describes our hybrid approach to ranking, which
combines end-to-end sequential modeling via TransAct with batch-generated user
embeddings. The hybrid approach allows us to combine the advantages of
responsiveness from learning directly on realtime user activity with the
cost-effectiveness of batch user representations learned over a longer time
period. We describe the results of ablation studies, the challenges we faced
during productionization, and the outcome of an online A/B experiment, which
validates the effectiveness of our hybrid ranking model. We further demonstrate
the effectiveness of TransAct on other surfaces such as contextual
recommendations and search. Our model has been deployed to production in
Homefeed, Related Pins, Notifications, and Search at Pinterest. | Xue Xia, Pong Eksombatchai, Nikil Pancha, Dhruvil Deven Badani, Po-Wei Wang, Neng Gu, Saurabh Vishwas Joshi, Nazanin Farahpour, Zhiyuan Zhang, Andrew Zhai | 2023-05-31T23:45:29Z | http://arxiv.org/abs/2306.00248v1 | # TransAct: Transformer-based Realtime User Action Model for Recommendation at Pinterest
###### Abstract.
Sequential models that encode user activity for next action prediction have become a popular design choice for building web-scale personalized recommendation systems. Traditional methods of sequential recommendation either utilize end-to-end learning on realtime user actions, or learn user representations separately in an offline batch-generated manner. This paper (1) presents Pinterest's ranking architecture for Homefeed, our personalized recommendation product and the largest engagement surface; (2) proposes TransAct, a sequential model that extracts users' short-term preferences from their realtime activities; (3) describes our hybrid approach to ranking, which combines end-to-end sequential modeling via TransAct with batch-generated user embeddings. The hybrid approach allows us to combine the advantages of responsiveness from learning directly on realtime user activity with the cost-effectiveness of batch user representations learned over a longer time period. We describe the results of ablation studies, the challenges we faced during productionization, and the outcome of an online A/B experiment, which validates the effectiveness of our hybrid ranking model. We further demonstrate the effectiveness of TransAct on other surfaces such as contextual recommendations and search.
Upon visiting Pinterest, users are immediately presented with the Homefeed page as shown in Figure 1, which serves as the primary source of inspiration and accounts for the majority of overall user engagement on the platform. The Homefeed page is powered by a 3-stage recommender system that retrieves, ranks, and blends content based on user interests and activities. At the retrieval stage, we filter billions of pins created on Pinterest to thousands, based on a variety of factors such as user interests, followed boards, etc. Then we use a pointwise ranking model to rank candidate pins by predicting their personalized relevance to users. Finally, the ranked result is adjusted using a blending layer to meet business requirements.
Realtime recommendation is crucial because it provides a quick and up-to-date recommendation to users, improving their overall experience and satisfaction. The integration of realtime data, such as recent user actions, results in more accurate recommendations and increases the probability of users discovering relevant items (Beng et al., 2017; Wang et al., 2018).
Longer user action sequences result in improved user representation and hence better recommendation performance. However, using long sequences in ranking poses challenges to infrastructure, as they require significant computational resources and can result in increased latency. To address this challenge, some approaches have utilized hashing and nearest neighbor search in long user sequences (Wang et al., 2018). Other work encodes users' past actions over an extended time frame into a user embedding (Wang et al., 2018) to represent long-term user interests. User embedding features are often generated as _batch_ features (e.g. generated daily), which are cost-effective to serve across multiple applications with low latency. The limitation of existing sequential recommendation methods is that they use either only realtime user actions or only a batch user representation learned from long-term user action history.
We introduce a novel realtime-batch hybrid ranking approach that combines both _realtime_ user action signals and _batch_ user representations. To capture the realtime actions of users, we present TransAct - a new transformer-based module designed to encode recent user action sequences and comprehend users' immediate preferences. For user actions that occur over an extended period of time, we transform them into a batch user representation (Wang et al., 2018).
By combining the expressive power of TransAct with batch user embeddings, the hybrid ranking model offers users realtime feedback on their recent actions, while also accounting for their long-term interests. The realtime component and batch component complement each other for recommendation accuracy. This leads to an overall improvement in the user experience on the Homefeed page.
The major contributions of this paper are summarized as follows:
* We describe Pinnability, the architecture of Pinterest's Homefeed production ranking system. The Homefeed personalized recommendation product accounts for the majority of the overall user engagement on Pinterest.
* We propose TransAct, a transformer-based realtime user action sequential model that effectively captures users' short-term interests from their recent actions. We demonstrate that combining TransAct with daily-generated user representations (Wang et al., 2018) into a hybrid model leads to the best performance in Pinnability. This design choice is justified through a comprehensive ablation study. Our code implementation is publicly available1. Footnote 1: Our code is available on Github: [https://github.com/pinterest/transformer_user_action](https://github.com/pinterest/transformer_user_action)
* We describe the serving optimizations implemented in Pinnability to accommodate the 65-fold increase in computational complexity introduced by adding TransAct to the Pinnability model. Specifically, optimizations are done to enable GPU serving of our prior CPU-based model.
* We describe online A/B experiments on a real-world recommendation system using TransAct. We demonstrate some practical issues in the online environment, such as recommendation diversity drop and engagement decay, and propose solutions to address these issues.
The remainder of this paper is organized as follows: Related work is reviewed in Section 2. Section 3 describes the design of TransAct and the details of bringing it to production. Experiment results are reported in Section 4. We discuss some findings beyond experiments in Section 5. Finally, we conclude our work in Section 6.
## 2. Related Work
### Recommender System
Collaborative filtering (CF) (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) makes recommendations based on the assumption that a user will prefer an item that other similar users prefer. It uses the user behavior history to compute the similarity between users and items and recommend items based on similarity. This approach suffers from the sparsity of the user-item matrix and cannot handle users who have never interacted with any items. Factorization machines (Wang et al., 2018; Wang et al., 2018), on the other hand, are able to handle sparse matrices.
Figure 1. Pinterest Homefeed Page

More recently, deep learning (DL) has been used in click-through rate (CTR) prediction tasks. For example, Google uses Wide & Deep (Bird et al., 2016) models for application recommendation. The wide component achieves memorization by capturing the interaction between features, while the deep component helps with generalization by learning the embedding of categorical features using a feed forward network. DeepFM (Chen et al., 2017) makes improvements by learning both low-order and high-order feature interactions automatically. DCN (Deng et al., 2017) and its upgraded version DCN v2 (Deng et al., 2018) both aim to automatically model the explicit feature crosses. The aforementioned recommender systems do not work well in capturing the short-term interests of users since only the static features of users are utilized. These methods also tend to ignore the sequential relationship within the action history of a user, resulting in an inadequate representation of user preferences.
### Sequential Recommendation
To address this problem, sequential recommendation has been widely studied in both academia and the industry. A sequential recommendation system uses a behavior history of users as input and applies recommendation algorithms to suggest appropriate items to users. Sequential recommendation models are able to capture users' long-term preferences over an extended period of time, similar to traditional recommendation methods. Additionally, they also have the added benefit of being able to account for users' evolving interests, which enables higher quality recommendations.
Sequential recommendation is often viewed as a next item prediction task, where the goal is to predict a user's next action based on their past action sequence. We are inspired by the previous sequential recommendation method (Beng et al., 2017) in terms of encoding users' past action into a dense representation. Some early sequential recommendation systems use machine learning techniques, such as Markov Chain (Deng et al., 2018) and session-based K nearest neighbors (KNN) (Koh et al., 2018) to model the temporal dependencies among interactions in users' action history. These models are criticized for not being able to fully capture the long-term patterns of users by simply combining information from different sessions. Recently, deep learning techniques such as recurrent neural networks (RNN) (Li et al., 2019) have shown great success in natural language processing and have become increasingly popular in sequential recommendation. As a result, many DL-based sequential models (Deng et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) have achieved outstanding performance using RNNs. Convolutional neural networks (CNNs) (Wang et al., 2018) are widely used for processing time-series data and image data. In the context of sequential recommendation, CNN-based models can effectively learn dependency within a set of items users recently interacted with, and make recommendations accordingly (Wang et al., 2018; Wang et al., 2018). Attention mechanism is originated from the neural machine translation task, which models the importance of different parts of the input sentences on the output words (Beng et al., 2018). Self-attention is a mechanism known to weigh the importance of different parts of an input sequence (Wang et al., 2018). There have been more recommender systems that use attention (Wang et al., 2018) and self-attention (Beng et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018; Wang et al., 2018).
Many previous works (Wang et al., 2018; Wang et al., 2018; Wang et al., 2018) only perform offline evaluations using public datasets. However, the online environment is more challenging and unpredictable. Our method is not directly comparable to these works due to differences in the problem formulation. Our approach resembles a Click-through Rate (CTR) prediction task. Deep Interest Network (DIN) uses an attention mechanism to model the dependency within users' past actions in CTR prediction tasks. Alibaba's Behavior Sequence Transformer (BST) (Beng et al., 2017) is the improved version of DIN and is closely related to our work. They propose to use Transformer to capture the user interest from user actions, emphasizing the importance of the action order. However, we found that positional information does not add much value. We find other designs like better early fusion and action type embedding are effective when dealing with sequence features.
## 3. Methodology
In this section, we introduce TransAct, our realtime-batch hybrid ranking model. We will start with an overview of the Pinterest Homefeed ranking model, Pinnability. We then describe how to use TransAct to encode the realtime user action sequence features in Pinnability for the ranking task.
### Preliminary: Homefeed Ranking Model
In Homefeed ranking, we model the recommendation task as a pointwise multi-task prediction problem, which can be defined as follows: given a user \(u\) and a pin \(p\), we build a function to predict the probabilities of user \(u\) performing different actions on the candidate pin \(p\). The set of different actions contains both positive and negative actions, e.g. click, repin2 and hide.
Footnote 2: A "repin" on Pinterest refers to the action of saving an existing pin to another board by a user.
We build _Pinnability_, Pinterest's Homefeed ranking model, to approach the above problem. The high-level architecture is a Wide and Deep learning (WDL) model (Beng et al., 2017). The Pinnability model utilizes various types of input signals, such as user signals, pin signals, and context signals. These inputs can come in different formats, including categorical, numerical, and embedding features.
We use embedding layers to project categorical features to dense features, and perform batch normalization on numerical features. We then apply a feature cross using a full-rank DCN V2 (Deng et al., 2018) to explicitly model feature interactions. At last, we use fully connected layers with a set of output action heads \(\mathbf{H}=\{h_{1},h_{2},\dots,h_{k}\}\) to predict the user actions on the candidate pin \(p\). Each head maps to one action. As shown in Figure 2, our model is a realtime-batch hybrid model that encodes the user action history features by both realtime (TransAct) and batch (PinnerFormer) approaches and optimizes for the ranking task (Wang et al., 2018).
Figure 2. Pinterest Homefeed ranking model (Pinnability)

Each training sample is \((\mathbf{x},\mathbf{y})\), where \(\mathbf{x}\) represents a set of features, and \(\mathbf{y}\in\{0,1\}^{|\mathbf{H}|}\). Each entry in \(\mathbf{y}\) corresponds to the label of an action head in \(H\). The loss function of Pinnability is a weighted cross-entropy loss, designed to optimize for multi-label classification tasks. We formulate the loss function as:
\[\mathcal{L}=w_{u}\sum_{h\in H}\left\{-w_{h}\left[y_{h}\log f(\mathbf{x})_{h}+(1-y_{h})\log\left(1-f(\mathbf{x})_{h}\right)\right]\right\} \tag{1}\]
where \(f(\mathbf{x})\in(0,1)^{H}\), and \(f(\mathbf{x})_{h}\) is the output probability of head \(h\). \(y_{h}\in\{0,1\}\) is the ground truth on head \(h\).
A weight \(w_{h}\) is applied on the cross entropy of each head's output \(f(\mathbf{x})_{h}\). \(w_{h}\) is calculated using the ground truth \(\mathbf{y}\) and a label weight matrix \(\mathbf{M}\in\mathbb{R}^{|H|*|H|}\) as follows:
\[w_{h}=\sum_{a\in H}M_{h,a}\times y_{a} \tag{2}\]
The label weight matrix \(\mathbf{M}\) acts as a controlling factor for the contribution of each action to the loss term of each head3. Note that if \(\mathbf{M}\) is a diagonal matrix, Eq (1) reduces to a standard multi-head binary cross entropy loss. But selecting empirically determined label weights \(\mathbf{M}\) improves performance considerably.
Footnote 3: For more details, see Appendix A
In addition, each training example is weighted by a user-dependent weight \(w_{u}\), which is determined by user attributes, such as the user state4, gender and location. We compute \(w_{u}\) by multiplying user state weight, user gender weight, and user location weight: \(w_{u}=w_{\text{state}}\times w_{\text{location}}\times w_{\text{gender}}\). These weights are adjusted based on specific business needs.
Footnote 4: User states are used to group users of different behavior patterns, for example, users who engage daily are in one group, while those who engage once a month have a different user state
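The following is a minimal PyTorch sketch of the weighted multi-head cross-entropy in Eq. (1)-(2). The tensor shapes, the sigmoid applied to the head logits, and the reduction to a batch mean are our assumptions rather than details stated in the paper.

```python
import torch

def pinnability_loss(logits, labels, M, user_weight):
    """Sketch of Eq. (1)-(2): weighted multi-head cross-entropy.

    logits:      (B, |H|) raw head outputs; f(x) = sigmoid(logits)
    labels:      (B, |H|) binary ground truth y
    M:           (|H|, |H|) label weight matrix with entries M[h, a]
    user_weight: (B,) per-example weight w_u
    """
    labels = labels.float()
    probs = torch.sigmoid(logits).clamp(1e-7, 1 - 1e-7)      # f(x) in (0, 1)^|H|
    # w_h = sum_a M[h, a] * y_a, computed per example and per head
    head_weights = labels @ M.T                               # (B, |H|)
    bce = -(labels * probs.log() + (1 - labels) * (1 - probs).log())
    per_example = (head_weights * bce).sum(dim=1) * user_weight
    return per_example.mean()
```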
### Realtime User Action Sequence Features
User's past action history is naturally a variable length feature - different users have different amounts of past actions on the platform.
Although a longer user action sequence usually means a more accurate user interest representation, in practice it is infeasible to include all user actions, because the time needed to fetch user action features and perform ranking model inference would grow substantially, which in turn hurts user experience and system efficiency. Considering infrastructure cost and latency requirements, we choose to include each user's most recent 100 actions in the sequence. For users with fewer than 100 actions, we pad the feature to the length of 100 with 0s. The user action sequence features are sorted by timestamp in descending order, i.e. the first entry is the most recent action.
All actions in the user action sequence are pin-level actions. For each action, we use three primary features: the timestamp of the action, action type, and the 32-dimensional PinSage embedding (Song et al., 2018) of the pin. PinSage is a compact embedding that encodes a pin's content information.
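For concreteness, the sketch below shows how a fixed-length sequence of the most recent 100 actions could be assembled from the three per-action features described above. The field names and the in-memory layout are assumptions; only the length, the zero padding, and the descending timestamp order come from the text.

```python
import torch

SEQ_LEN = 100  # most recent actions kept per user

def build_user_sequence(actions):
    """actions: list of dicts with keys 'ts', 'action_type', 'pinsage' (32-d list),
    already sorted by timestamp in descending order (most recent first)."""
    actions = actions[:SEQ_LEN]
    pad = SEQ_LEN - len(actions)
    ts = torch.tensor([a["ts"] for a in actions] + [0] * pad, dtype=torch.float)
    action_type = torch.tensor([a["action_type"] for a in actions] + [0] * pad)
    pinsage = torch.tensor([a["pinsage"] for a in actions] + [[0.0] * 32] * pad)
    return ts, action_type, pinsage    # shapes: (100,), (100,), (100, 32)
```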
### Our Approach: TransAct
Unlike static features, the realtime user action sequence feature \(S(\mathbf{u})=[a_{1},a_{2},...,a_{n}]\) is handled using a specialized sub-module called TransAct. TransAct extracts sequential patterns from the user's historical behavior and predicts \((u,p)\) relevance scores.
#### 3.3.1. Feature encoding
The relevance of pins that a user has engaged with can be determined by the types of actions taken on them in the user's action history. For example, a pin repinned to a user's board is typically considered more relevant than one that the user only viewed. If a pin is hidden by the user, the relevance should be very low. To incorporate this important information, we use trainable embedding tables to project action types to low-dimensional vectors. The user action type sequence is then projected to a user action embedding matrix \(\mathbf{W}_{actions}\in\mathbb{R}^{|S|\times d_{action}}\), where \(d_{action}\) is the dimension of action type embedding.
As mentioned earlier, the content of pins in the user action sequence is represented by PinSage embeddings (Song et al., 2018). Therefore, the content of all pins in the user action sequence is a matrix \(\mathbf{W}_{pins}\in\mathbb{R}^{|S|\times d_{PinSage}}\). The final encoded user action sequence feature is \(\text{CONCAT}(\mathbf{W}_{actions},\mathbf{W}_{pins})\in\mathbb{R}^{|S|\times(d_{PinSage }+d_{action})}\).
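A sketch of this encoding step is shown below: a trainable embedding table projects action types to \(d_{action}=32\) dimensions and the result is concatenated with the PinSage embeddings. The vocabulary size of the action-type table is a placeholder.

```python
import torch
import torch.nn as nn

NUM_ACTION_TYPES = 20   # hypothetical vocabulary size (index 0 reserved for padding)
D_ACTION = 32           # d_action
action_emb = nn.Embedding(NUM_ACTION_TYPES, D_ACTION, padding_idx=0)

def encode_sequence(action_type, pinsage):
    """action_type: (B, 100) integer ids; pinsage: (B, 100, 32) PinSage embeddings."""
    w_actions = action_emb(action_type)              # (B, 100, d_action)
    return torch.cat([w_actions, pinsage], dim=-1)   # (B, 100, d_action + d_PinSage)
```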
#### 3.3.2. Early fusion
One of the unique advantages of using user action sequence features directly in the ranking model is that we can explicitly model the interactions between the candidate pin and the user's engaged pins. Early fusion in recommendation tasks refers to merging user and item features at an early stage of the recommendation model. Through experiments, we find that early fusion is an important factor to improve ranking performance. Two early fusion methods are evaluated:
* append: Append candidate pin's PinSage embedding to user action sequence as the last entry of the sequence, similar to BST (Beng et al., 2018). Use a zero vector to serve as a dummy action type for candidate pin.
* concat: For each action in the user action sequence, concatenate the candidate pin's PinSage embedding with user action features.
We choose concat as our early fusion method based on the offline experiment results. The resulting sequence feature with early fusion is a 2-d matrix \(\mathbf{U}\in\mathbb{R}^{|S|\times d}\), where \(d=(d_{action}+2d_{PinSage})\).
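A minimal sketch of the concat early fusion: the candidate pin's PinSage embedding is broadcast to every position of the encoded sequence, giving the \(|S|\times(d_{action}+2d_{PinSage})\) input described above. Tensor shapes are assumptions.

```python
import torch

def early_fusion_concat(encoded_seq, candidate_pinsage):
    """encoded_seq: (B, 100, d_action + d_PinSage); candidate_pinsage: (B, d_PinSage)."""
    candidate = candidate_pinsage.unsqueeze(1).expand(-1, encoded_seq.size(1), -1)
    return torch.cat([encoded_seq, candidate], dim=-1)  # (B, 100, d_action + 2*d_PinSage)
```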
#### 3.3.3. Sequence Aggregation Model
With the user action sequence feature \(\mathbf{U}\) prepared, the next challenge is to efficiently aggregate all the information in the user action sequence to represent the user's short-term preference. Some popular model architectures for sequential modeling in the industry include CNN (Wang et al., 2019), RNN, LSTM, and transformers. Based on the offline comparison in Section 4.3.2, we adopt a transformer encoder as the sequence aggregation model.
#### 3.3.4. Random Time Window Mask
During each training forward pass, a random time window \(T\) is sampled uniformly from 0 to 24 hours. All actions taken within \((t_{request}-T,t_{request})\) are masked, where \(t_{request}\) stands for the timestamp of receiving the ranking request. It is important to note that the random time window mask is only applied during training, while at inference time, the mask is not used.
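The sketch below illustrates the training-only random time window mask. How the resulting boolean mask is fed into the encoder (e.g. as a key padding mask) and the handling of padded positions are our assumptions.

```python
import torch

def random_time_window_mask(action_ts, request_ts, max_window_sec=24 * 3600):
    """action_ts: (B, 100) action timestamps; request_ts: (B,) ranking request timestamps.
    Returns True where an action should be ignored, i.e. it falls in (t_request - T, t_request)."""
    T = torch.rand(action_ts.size(0), 1, device=action_ts.device) * max_window_sec
    return action_ts > (request_ts.unsqueeze(1) - T)
```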
#### 3.3.5. Transformer Output Compression
The output of the transformer encoder is a matrix \(\mathbf{O}=(\mathbf{\omega}_{0}:\mathbf{o}_{|\mathbf{S}|-1})\in\mathbb{R}^{|\mathcal{S}|\times d}\). We only take the first \(K\) columns \((\mathbf{\omega}_{0}:\mathbf{\alpha}_{K-1})\), concatenated them with the max pooling vector \(\texttt{MAXPOOL}(\mathbf{O})\in\mathbb{R}^{d}\), and flattened it to a vector \(\mathbf{z}\in\mathbb{R}^{(K+1)\times d}\). The first \(K\) output columns capture users' most recent interests and \(\texttt{MAXPOOL}(\mathbf{O})\) represents users' longer-term preference over \(S(\mathbf{u})\). Since the output is compact enough, it can be easily integrated into the Pinnability framework using the DCN v2 (Srivastava et al., 2017) feature crossing layer.
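A sketch of this output compression follows. The paper takes the first \(K\) output columns plus a max pooling over the sequence; the value of \(K\) is not given in the text shown here, so the one below is a placeholder.

```python
import torch

def compress_transformer_output(O, k=10):
    """O: (B, |S|, d) transformer encoder output; k is a placeholder value for K."""
    recent = O[:, :k, :].flatten(start_dim=1)    # first K columns: most recent interests
    pooled = O.max(dim=1).values                 # MAXPOOL(O): longer-term preference
    return torch.cat([recent, pooled], dim=1)    # z of shape (B, (k + 1) * d)
```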
### Model Productionization
#### 3.4.1. Model Retraining
Retraining is important for recommender systems because it allows the system to continuously adapt to changing user behavior and preferences over time. Without retraining, a recommender system's performance can degrade as the user's behavior and preferences change, leading to less accurate recommendations (Srivastava et al., 2017). This holds especially true when we use realtime features in ranking. The model is more time sensitive and requires frequent retraining. Otherwise, the model can become stale in a matter of days, leading to less accurate predictions. We retrain Pinnability from scratch twice per week. We find that this retraining frequency is essential to ensure a consistent engagement rate and still maintain a manageable training cost. We will dive into the importance of retraining in Section 4.4.3.
#### 3.4.2. GPU serving
Pinnability with TransAct is 65 times more computationally complex compared to its predecessors in terms of floating point operations. Without any breakthroughs in model inference, our model serving cost and latency would increase by the same scale. GPU model inference allows us to serve Pinnability with TransAct at neutral latency and cost6.
Footnote 6: For more details about model efficiency, see Appendix C.
The main challenge to serve Pinnability on GPUs is the CUDA kernel launch overhead. The CPU cost of launching operations on the GPU is very high, but it is often overshadowed by the prolonged GPU computation time. However, this is problematic for Pinnability GPU model serving in two ways. First, Pinnability and recommender models in general process hundreds of features, which means that there is a large number of CUDA kernels. Second, the batch size during online serving is small and hence each CUDA kernel requires little computation. With a large number of small CUDA kernels, the launching overhead is much more expensive than the actual computation. We solved the technical challenge through the following optimizations:
**Fuse CUDA kernels.** An effective approach is to fuse operations as much as possible. We leverage standard deep learning compilers such as nvFuser7 but often found human intervention is needed for many of the remaining operations. One example is our embedding table lookup module, which consists of two computation steps: raw id to table index lookup and table index to embedding lookup. This is repeated hundreds of times due to the large number of features. We significantly reduce the number of operations by leveraging cuCollections8 to support hash tables for the raw ids on GPUs and implementing a custom consolidated embedding lookup module to merge the lookup for multiple features into one lookup. As a result, we reduced hundreds of operations related to sparse features into one.
Footnote 7: [https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch](https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch).
Footnote 8: [https://github.com/NVIDIA/cuCollections](https://github.com/NVIDIA/cuCollections)
**Combine memory copies.** For every inference, hundreds of features are copied from the CPU to the GPU memory as individual tensors. The overhead of scheduling hundreds of tensor copies becomes the bottleneck. To decrease the number of tensor copy operations, we combine multiple tensors into one continuous buffer before transferring them from CPU to GPU. This approach reduces the scheduling overhead of transferring hundreds of tensors individually to transferring one tensor.
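As a rough illustration of this optimization, the sketch below packs many small host tensors (assumed to share one dtype) into a single pinned buffer, issues one host-to-device copy, and re-slices views on the GPU. It is a simplified stand-in for the production implementation.

```python
import torch

def combined_host_to_device_copy(features, device="cuda"):
    """features: list of CPU tensors of the same dtype. Returns GPU views of each."""
    flat = torch.cat([t.reshape(-1) for t in features]).pin_memory()
    flat_gpu = flat.to(device, non_blocking=True)   # one transfer instead of hundreds
    out, offset = [], 0
    for t in features:
        out.append(flat_gpu[offset:offset + t.numel()].view(t.shape))
        offset += t.numel()
    return out
```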
**Form larger batches.** For CPU-based inference, smaller batches are preferred to increase parallelism and reduce latency. However, for GPU-based inference, larger batches are more efficient (Srivastava et al., 2017). This led us to re-evaluate our distributed system setup. Initially, we used a scatter-gather architecture to split requests into small batches and run them in parallel on multiple leaf nodes for better latency. However, this setup did not work well with GPU-based inference. Instead, we use the larger batches in the original requests directly. To compensate for the loss of cache capacity, we implemented a hybrid cache that uses both DRAM and SSD.
Figure 3. TransAct architecture. Note that this is a submodule that can be plugged into any similar architecture like Pinnability

**Utilize CUDA graphs.** We relied on CUDA Graphs9 to completely eliminate the remaining small operations overhead. CUDA Graphs capture the model inference process as a static graph of operations instead of individually scheduled ones, allowing the computation to be executed as a single unit without any kernel launching overheads.
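The snippet below is a minimal sketch of the standard PyTorch CUDA Graphs capture-and-replay pattern (PyTorch >= 1.10); the toy model, batch shape, and warm-up count are placeholders and not taken from the production system.

```python
import torch

batch_size, num_features = 256, 64                      # placeholder shapes
model = torch.nn.Linear(num_features, 1).cuda().eval()  # stand-in for the ranking model
static_in = torch.zeros(batch_size, num_features, device="cuda")

# Warm up on a side stream before capture, as recommended for CUDA graph capture.
s = torch.cuda.Stream()
s.wait_stream(torch.cuda.current_stream())
with torch.cuda.stream(s), torch.no_grad():
    for _ in range(3):
        model(static_in)
torch.cuda.current_stream().wait_stream(s)

graph = torch.cuda.CUDAGraph()
with torch.cuda.graph(graph), torch.no_grad():
    static_out = model(static_in)    # kernels are recorded once, not launched per request

# Serving: copy the incoming batch into the static buffer and replay the whole graph.
static_in.copy_(torch.randn(batch_size, num_features, device="cuda"))
graph.replay()
predictions = static_out.clone()
```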
#### 3.4.3. Realtime Feature Processing
When a user takes an action, a realtime feature processing application based on Flink10 consumes user action Kafka11 streams generated from front-end events. It validates each action record, detects and combines duplicates, and manages any time discrepancies from multiple data sources. The application then materializes the features and stores them in Rockstor (Rockstor, 2017). At serving time, each Homefeed logging/serving request triggers the processor to convert sequence features into a format that can be utilized by the model.
Footnote 10: [https://flink.apache.org/](https://flink.apache.org/)
## 4. Experiment
In this section, we will present extensive offline and online A/B experiment results of TransAct. We compare TransAct with baseline models using Pinterest's internal training data.
### Experiment Setup
#### 4.1.1. Dataset
We construct the offline training dataset from three weeks of Pinterest Homefeed view log (FVL). The model is trained on the first two weeks of FVL and evaluated on the third week. The training data is sampled based on user state and labels. For example, we design the sampling ratio for different label actions based on their statistical distribution and importance. In addition, since users only engage with a small portion of pins shown on their Homefeed page, most of the training samples are negative samples. To balance the highly skewed dataset and improve model accuracy, we employ downsampling on the negative samples and set a fixed ratio between the positive and negative samples. Our training dataset contains 3 billion training instances of 177 million users and 720 million pins.
In this paper, we conduct all experiments with the Pinterest dataset. We do not use public datasets as they lack the necessary realtime user action sequence metadata features, such as item embeddings and action types, required by TransAct. Furthermore, they are incompatible with our proposal of realtime-batch hybrid model, which requires both realtime and batch user features. And they cannot be tested in online A/B experiments.
#### 4.1.2. Hyperparameters
Realtime user sequence length is \(|S|=100\) and the dimension of action embedding \(d_{action}=32\). The encoded sequence feature is passed through a transformer encoder composed of 2 transformer blocks, with a default dropout rate of 0.1. The feed forward network in the transformer encoder layer has a dimension of \(d_{hidden}=32\), and positional encoding is not used. The implementation is done using PyTorch. We use an Adam (Kingmaa et al., 2014) optimizer with a learning rate scheduler. The learning rate begins with a warm-up phase of 5000 steps, gradually increasing to 0.0048, and finally reduced through cosine annealing. The batch size is 12000.
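A sketch of the sequence encoder configuration implied by these hyperparameters is given below; the number of attention heads is not stated in the text, so the value used here is an assumption.

```python
import torch.nn as nn

d_model = 32 + 2 * 32          # d_action + 2 * d_PinSage after concat early fusion
encoder_layer = nn.TransformerEncoderLayer(
    d_model=d_model,
    nhead=1,                   # number of attention heads is not given; 1 is an assumption
    dim_feedforward=32,        # d_hidden of the feed forward network
    dropout=0.1,
    batch_first=True,          # inputs shaped (B, |S|, d)
)
sequence_encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
```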
### Offline Experiment
#### 4.2.1. Metrics
The offline evaluation data, unlike training data, is randomly sampled from FVL to represent the true distribution of the real-world traffic. With this sampling strategy, the offline evaluation data is representative of the entire population, reducing the variance of evaluation results.
In addition to sampling bias, we also eliminate position bias in offline evaluation data. Position bias refers to the tendency for items at the top of a recommendation to receive more attention and engagement than the items lower down the list. This can be a problem when evaluating a ranking model, as it can distort the evaluation results and make it difficult to accurately assess the model's performance. To avoid position bias, we randomize the order of pins in a very small portion of Homefeed recommendation sessions. This is done by shuffling the recommendations before presenting them to users. We gather the FVL for those randomized sessions and only use randomized data to perform the offline evaluation.
Our model is evaluated on HIT@3. A chunk \(c=[p_{1},p_{2},\dots,p_{n}]\) refers to a group of pins that are recommended to a user at the same time. Each input instance to the ranking model is associated with a user id u_id, a pin id p_id, and a chunk id c_id. The evaluation output is grouped by (u_id, c_id) so that it contains the model output from the same ranking request. We sort the pins from the same ranking request by a final ranking score \(\mathcal{S}\), which is a linear combination of Pinnability output heads \(f(\mathbf{x})\).
\[\mathcal{S}=\sum_{h\in H}u_{h}f(\mathbf{x})_{h} \tag{3}\]
Then we take the top \(K\) ranked pins in each chunk and calculate the hit@K for all heads, denoted by \(\beta_{c,h}\), which is defined as the number of topK-ranked pins whose labels of \(h\) are 1. For example, if a chunk \(c=[p_{1},p_{2},p_{3},\dots,p_{n}]\) is sorted by \(\mathcal{S}\), and the user repins \(p_{1}\) and \(p_{4}\), then hit@K of repin \(\beta_{c,repin}=1\) when \(K=3\).
We calculate the aggregated HIT@3 for each head \(h\) as follows:
\[HIT\text{@}3/h=\frac{\sum_{u\in U}\sum_{c\in C_{u}}\beta_{c,h}}{|U|} \tag{4}\]
It is important to note that for actions indicating positive engagement, such as repin or click, a higher HIT@K score means better model performance. Conversely, for actions indicating negative engagement, such as hide, a lower HIT@K/hide score is desirable.
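A minimal pandas sketch of this HIT@K computation (Eq. (3)-(4)) is shown below; the column names and the dictionary of head weights are hypothetical.

```python
import pandas as pd

def hit_at_k(df, head_weights, label_col, k=3):
    """df columns (hypothetical names): u_id, c_id, one score column per head in H,
    plus binary label columns such as 'repin' or 'hide'."""
    df = df.copy()
    df["S"] = sum(w * df[h] for h, w in head_weights.items())       # final ranking score
    top_k = (df.sort_values("S", ascending=False)
               .groupby(["u_id", "c_id"], sort=False)
               .head(k))                                            # top-K pins per request
    beta = top_k.groupby(["u_id", "c_id"])[label_col].sum()         # beta_{c, h}
    return beta.groupby(level="u_id").sum().mean()                  # HIT@K/h averaged over users
```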
At Pinterest, a non-core user is defined as a user who has not actively saved pins to boards within the past 28 days. Non-core users tend to be less active and therefore pose a challenge in terms of improving their recommendation relevance due to their limited historical engagement. This is also referred to as the cold-start user problem in recommendation (Kafka, 2017). Despite the challenges, it is important to retain non-core users as they play a crucial role in maintaining a diverse and thriving community, contributing to long-term platform growth.
All reported results are statistically significant (p-value \(<0.05\)) unless stated otherwise.
#### 4.2.2. Results
We compare TransAct with existing methods of sequential recommendation. The first baseline is the WDL model (Beng et al., 2017) that incorporates sequence features as part of its wide features. Due
to the large size of the sequence features, the number of parameters in the feature cross layer would grow quadratically, making it infeasible for both training and online serving. Therefore, we used average pooling of the PinSage embeddings of user actions to encode the sequence. The second baseline is Alibaba's behavior sequence transformer (BST) model (Beng et al., 2017). We trained two BST model variants: one with only positive actions in the user sequence, the other with all actions. We opted not to compare our results with DIN (Zhu et al., 2019) as BST has already demonstrated its superiority over DIN. Additionally, we did not compare with variants like BERT4Rec (Krishnan et al., 2017) as the problem formulations are different and a direct comparison is not feasible.
The results of the model comparison are presented in Table 1. It is evident that BST and TransAct outperform the WDL model, demonstrating the necessity of using a specialized sequential model to effectively capture short-term user preferences through real-time user action sequence features. BST performs well when only positive actions are encoded; however, it struggles to distinguish negative actions. In contrast, TransAct outperforms BST, particularly in terms of hide prediction, due to its ability to distinguish between different actions by encoding action types. Furthermore, TransAct also exhibits improved performance in HIT@3/repin compared to BST, which can be attributed to its effective early fusion and output compression design. A common trend across all groups is that the performance for non-core users is better than for all users; this is because realtime user action features are crucial for users with limited engagement history on the platform, as they provide the only source of information for the model to learn their preferences.
### Ablation Study
#### 4.3.1. Hybrid ranking model
First, we investigate the effect of the realtime-batch hybrid design by examining the individual impact of TransAct(realtime component) and Pinnerformer(batch component). Table 2 shows the relative decrease in offline performance from the model containing all user features as we remove each component. TransAct captures users' immediate interests, which contribute the most to the user's overall engagement, while PinnerFormer (PF) (Pang et al., 2017) extracts users' long-term preferences from their historical behavior. We observe that TransAct is the most important user understanding feature in the model, but we still see value from the large-scale training and longer-term interests captured by PinnerFormer, showing that longer-term batch user understanding can complement a realtime engagement sequence for recommendations. In the last row of Table 2, we show that removing all user features other than TransAct and PinnerFormer only leads to a relatively small drop in performance, demonstrating the effectiveness of our combination of a realtime sequence model with a pre-trained batch model.
#### 4.3.2. Base sequence encoder architecture
We perform an offline evaluation on different sequential models that process realtime user sequence features. We use different architectures to encode the PinSage embedding sequence from users' realtime actions.
**Average Pooling**: use the average of PinSage embeddings in user sequence to present the user's short-term interest
**CNN**: use a 1-d CNN with 256 output channels to encode the sequence. Kernel size is 4 and stride is 1.
**RNN**: use 2 RNN layers with a hidden dimension of 256, to encode a sequence of PinSage embeddings.
**LSTM**: use Long Short-Term Memory (LSTM) (Hochreiter et al., 2015), a more sophisticated version of RNN that better captures longer-term dependencies by using memory cells and gating. We use 2 LSTM layers with the hidden size of 256.
**Vanilla Transformer**: encodes only PinSage embeddings sequence directly using the Transformer encoder module. We use 2 transformer encoder layers with a hidden dimension of 32.
The baseline group is the Pinnability model without realtime user sequence feature. From Table 3, we learned that using realtime user sequence features, even with a simple average pooling method, improves engagement. Surprisingly, more complex architectures like RNN, CNN, and LSTM do not always perform better than average pooling. However, the best performance is achieved with the use of a vanilla transformer, as it significantly reduces HIT@3/hide and improves HIT@3/repin.
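To make the comparison concrete, the following is a minimal sketch of the vanilla-transformer encoder variant (PyTorch); the dimensions, the projection layer, and the final pooling are illustrative assumptions rather than the production implementation.

```python
# Minimal sketch of the "vanilla transformer" sequence encoder compared above.
# All dimensions and module choices are illustrative assumptions.
import torch
import torch.nn as nn

class SeqEncoder(nn.Module):
    def __init__(self, pinsage_dim=256, d_model=32, n_layers=2, n_heads=2):
        super().__init__()
        self.proj = nn.Linear(pinsage_dim, d_model)          # project PinSage embeddings
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, pinsage_seq):
        # pinsage_seq: (batch, seq_len, pinsage_dim) realtime user action embeddings
        out = self.encoder(self.proj(pinsage_seq))           # (batch, seq_len, d_model)
        return out.mean(dim=1)                               # pooled short-term interest

emb = torch.randn(8, 100, 256)            # 100 most recent actions for 8 users
print(SeqEncoder()(emb).shape)            # torch.Size([8, 32])
```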
#### 4.3.3. Early fusion and sequence length selection
As discussed in Section 3.3.2, early fusion plays a crucial role in the ranking model. By incorporating early fusion, the model can not only take into account the dependency between different items in the user's action history but also explicitly learn the relationship between the ranking candidate pin and each pin that the user has engaged with in the past.
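As a rough illustration of the two early-fusion variants evaluated below (all shapes and names are assumptions, not the paper's code), concatenation attaches the candidate pin embedding to every engaged-pin embedding, while appending adds the candidate as one extra sequence element:

```python
# Sketch of the two early fusion options; shapes are illustrative assumptions.
import torch

batch, seq_len, d = 8, 100, 256
user_seq = torch.randn(batch, seq_len, d)     # engaged-pin PinSage embeddings
candidate = torch.randn(batch, d)             # candidate-pin PinSage embedding

# "concat": candidate embedding is concatenated to every engaged pin
concat_fused = torch.cat(
    [user_seq, candidate.unsqueeze(1).expand(-1, seq_len, -1)], dim=-1
)                                             # (batch, seq_len, 2d)

# "append": candidate embedding is appended as one extra sequence position
append_fused = torch.cat([user_seq, candidate.unsqueeze(1)], dim=1)  # (batch, seq_len+1, d)

print(concat_fused.shape, append_fused.shape)
```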
Longer user action sequences naturally are more expressive than short sequences. To learn the effect of input sequence length on
\begin{table}
\begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{Methods} & \multicolumn{2}{c}{HIT@3/repin} & \multicolumn{2}{c}{HIT@3/hide} \\ \cline{2-5} & all & non-core & all & non-core \\ \hline WDL + seq & +0.21\% & +0.35\% & -1.61\% & -1.55\% \\ BST (all actions) & +4.41\% & +5.09\% & +2.33\% & +3.59\% \\ BST (positive actions) & +7.34\% & +8.16\% & -1.12\% & -3.14\%* \\ TransAct & **+9.40\%** & **+10.42\%** & **-14.86\%** & **-13.54\%** \\ \hline \hline \end{tabular}
\end{table}
Table 1. Offline evaluation of comparing existing methods with TransAct. (* statistically insignificant)
\begin{table}
\begin{tabular}{c c c} \hline \hline Sequence Encoder & HIT@3/repin & HIT@3/hide \\ \hline Average Pooling & +0.21\% & -1.61\% \\ CNN & +0.08\% & -1.29\% \\ RNN & -1.05\% & -2.46\% \\ LSTM & -0.75\% & -2.98\% \\ Vanilla Transformer & **+1.56\%** & **-8.45\%** \\ \hline \hline \end{tabular}
\end{table}
Table 3. Offline evaluation of sequence encoder architecture
\begin{table}
\begin{tabular}{c c c} \hline \hline \multirow{2}{*}{TransAct} & \multirow{2}{*}{PF} & Other User \\ & Features & HIT@3/repin \\ \hline ✓ & ✓ & ✓ & \(-\) \\ ✓ & ✕ & ✓ & -2.46\% \\ ✕ & ✓ & ✓ & -8.59\% \\ ✓ & ✓ & ✕ & -0.67\% \\ \hline \hline \end{tabular}
\end{table}
Table 2. Ablation study of realtime-batch hybrid model
the model performance, we evaluate the model on different lengths of user sequence input.
An analysis of Figure 4 reveals that there is a positive correlation between sequence length and performance. The performance improvement increases at a rate that is sub-linear with respect to the sequence length. The use of concatenation as the early fusion method was found to be superior to the use of appending. Therefore, the optimal engagement gain can be achieved by utilizing the maximum available sequence length and employing concatenation as the early fusion method.
#### 4.3.4. Transformer hyperparameters
We optimized TransAct's transformer encoder by adjusting its hyperparameters. As shown in Figure 5, increasing the number of transformer layers and feed forward dimension leads to higher latency and also better performance. While the best performance was achieved using 4 transformer layers and 384 as the feed forward dimension, this came at the cost of a 30% increase in latency, which does not meet the latency requirement. To balance performance and user experience, we chose 2 transformer layers and 32 as the hidden dimension.
#### 4.3.5. Transformer output compression
The transformer encoder produces \(\mathbf{O}\in\mathbb{R}^{d\times|S|}\), with each column corresponding to an input user action. However, directly using \(\mathbf{O}\) as input to the DCN v2 layers for feature crossing would result in excessive time complexity, which is quadratic in the input size.
To address this issue, we explored several approaches to compress the transformer output. Table 4 shows that the highest HIT@3/repin is achieved by combining the first K columns and applying max pooling to the entire sequence. The first K columns represent the most recently engaged pins, and the max pooling is an aggregated representation of the entire sequence. Although using all columns improved HIT@3/hide slightly, the combination of the first K columns and max pooling provided a good balance between performance and latency. We use K=10 for TransAct.
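A minimal sketch of this compression step is shown below; the tensor layout (actions along the sequence dimension, most recent first) is an assumption, and K = 10 follows the choice stated above.

```python
# Sketch of the "first K columns + max pooling" output compression.
import torch

batch, seq_len, d, K = 8, 100, 32, 10
O = torch.randn(batch, seq_len, d)                    # transformer encoder output

first_k = O[:, :K, :].reshape(batch, K * d)           # K most recently engaged pins
max_pool = O.max(dim=1).values                        # aggregate of the whole sequence
compressed = torch.cat([first_k, max_pool], dim=1)    # (batch, (K+1)*d)
print(compressed.shape)                               # torch.Size([8, 352])
```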
### Online Experiment
Compared with offline evaluation, one advantage of online experiments in recommendation tasks is that they can be run on live user data, allowing the model to be tested in a more realistic and dynamic environment. For the online experiment, we serve the ranking model trained on the 2-week offline training dataset. We set the control group to be the Pinnability model without any realtime user sequence features. The treatment group is Pinnability model with TransAct. Each experiment group serves 1.5% of the total users who visit Homefeed page.
#### 4.4.1. Metrics
On Homefeed, one of the most important metrics is **Homefeed repin volume**. Repin is the strongest indicator that users find the recommended pins relevant, and is usually positively correlated to the amount of time users spend on Pinterest. Empirically, we found that offline HIT@3/repin usually aligns very well with Homefeed online repin volume. Another important metric is **Homefeed hide volume**, which measures the proportion of recommended items that users choose to hide or remove from their recommendations. High hide rates indicate that the system is recommending items that users do not find relevant, which can lead to a poor user experience. Conversely, low hide rates indicate that the system is recommending items that users find relevant and engaging, which can lead to a better user experience.
#### 4.4.2. Online engagement
We observe significant online metric improvement with TransAct introduced to ranking. Table 5 shows that we improved the Homefeed repin volume by 11%. It's worth noting that engagement gains for non-core users are higher because they do not have a well-established user action history, and realtime features can capture their interests in a short time. Using TransAct, the Homefeed page is able to respond quickly and adjust the ranking results in a timely manner. We also see that hide volume dropped and that the overall time spent on Pinterest increased.
#### 4.4.3. Model retrain
One challenge observed in the TransAct group was the decay of engagement metrics over time for a given user. As shown in Figure 6, we compare the Homefeed repin volume gain of TransAct to the baseline, with both groups either fixed or retrained. We observed that if TransAct was not retrained, despite having a significantly higher engagement on the first day of the
\begin{table}
\begin{tabular}{c c c c} \hline \hline Output Compression & Size & HIT@3/repin & HIT@3/hide \\ \hline a random col & \(d\) & +6.80\% & -10.96\% \\ first col & \(d\) & +7.82\% & -11.28\% \\ random K cols & \(Kd\) & +7.42\% & -12.12\% \\ first K cols & \(Kd\) & +9.38\% & -14.33\% \\ all cols & \(|S|d\) & +8.86\% & **-15.70\%** \\ max pooling & \(d\) & +6.38\% & -14.15\% \\
**first K cols + max pool** & \((K+1)d\) & **+9.41\%** & -14.86\% \\ all cols + max pool & \((|S|+1)d\) & +8.67\% & -12.64\% \\ \hline \hline \end{tabular}
\end{table}
Table 4. Ablation study of transformer output compression
Figure 4. Effect of early fusion and sequence length on ranking model performance (HIT@3/repin, HIT@3/hide)
Figure 5. Effect of transformer hyperparameters on model performance and latency
experiment, it gradually decreased to a lower level over the course of two weeks. However, when TransAct was retrained on fresh data, there was a noticeable increase in engagement compared to not retraining the model. This suggests that TransAct, which utilizes realtime features, is highly sensitive to changes in user behavior and requires frequent retraining. Therefore, a high retrain frequency is desirable when using TransAct. In our production setting, we retrain the model twice a week, and this retrain frequency has proven sufficient to keep the engagement rate stable.
#### 4.4.4. Random time window masking
Another challenge observed was dropping diversity in recommendations. Diversity measures the broadness and variety of the items being recommended to a user. Previous literature(Wang et al., 2017) finds diversity is associated with increasing user visiting frequency. However, diversity is not always desirable as it can lead to a drop in relevance. Therefore, it is crucial to find the right balance between relevance and diversity in recommendations.
At Pinterest, we have a 28k-node hierarchical interest taxonomy (Pinterest, 2015) that classifies all the pins. The top-level interests are coarse. Some examples of top-level interests are art, beauty, and sport. Here, we measure the **impression diversity** as the summation of the number of unique top-level interests viewed per user. We observe that with TransAct introduced to Homefeed ranking, the impression diversity dropped by 2% to 3%. The interpretation is that by adding the user action sequence feature, the ranking model learns to optimize for the user's short-term interest. And by focusing on mainly short-term interest, the diversity of the recommendation dropped.
We mitigate the diversity drop by using a random time window mask in the transformer as mentioned in Section 3.3.3. This random masking encourages the model to focus on content other than only the most recent items a user engaged with. With this design, the diversity metric drop was brought back to only -1% without influencing relevance metrics like repin volume. We also tried using a higher dropout rate in the transformer encoder layer and randomly masking out a fixed percentage of actions in the user action sequence input. However, neither of these methods yielded better results than using random time window masking. They increased the diversity at the cost of engagement drop.
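A rough sketch of what such a mask could look like is given below; the 24-hour upper bound and the timestamp fields are illustrative assumptions, since the exact scheme is specified in Section 3.3.3.

```python
# Rough sketch of a random time window mask: for each example, actions within a
# randomly sampled window before the request time are hidden from the model.
import torch

def random_time_window_mask(action_ts, request_ts, max_window_sec=24 * 3600):
    # action_ts: (batch, seq_len) action timestamps; request_ts: (batch,) request time
    window = torch.rand(action_ts.size(0), 1) * max_window_sec
    return (request_ts.unsqueeze(1) - action_ts) < window   # True = action is masked

ts = torch.rand(8, 100) * 7 * 86400          # actions within the last 7 days
req = torch.full((8,), 7 * 86400.0)          # request timestamp
print(random_time_window_mask(ts, req).float().mean())  # fraction of masked actions
```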
## 5. Discussion
### Feedback Loop
An interesting finding from our online experiment is that the true potential of TransAct is not fully captured. We observed a greater improvement in performance when the model was deployed as the production Homefeed ranking model for full traffic. This is due to the effect of a positive feedback loop: as users experience a more responsive Homefeed built on TransAct, they tend to engage with more relevant content, leading to changes in their behavior (such as more clicks or repins). These changes in behavior lead to shifts in the realtime user sequence feature, which are then used to generate new training data. Retraining the Homefeed ranking model with this updated data results in a positive compounding effect, leading to a higher engagement rate and a stronger feedback loop. This phenomenon is similar to the "direct feedback loops" described in the literature (Zhou et al., 2017), which refer to a model that directly influences the selection of its own future training data; such loops are more difficult to detect if they develop gradually over time.
### TransAct in Other Tasks
The versatility of TransAct extends beyond just ranking tasks. It has been successfully applied in the contextual recommendation and search ranking scenarios as well. TransAct is used in **Related Pins**(Zhou et al., 2017) ranking, a contextual recommendation model to provide personalized recommendations of pins based on a given query pin. TransAct is also applied in Pinterest's **Search** ranking (Beng et al., 2017) system and **notification** ranking (Wang et al., 2017). Table 6 showcases the effectiveness of TransAct in a variety of use cases and its potential to drive engagement in more real-world applications.
## 6. Conclusions
In this paper, we present TransAct, a transformer-based realtime user action model that effectively captures users' short-term interests by encoding their realtime actions. Our novel hybrid ranking model merges the strengths of both realtime and batch approaches of encoding user actions, and has been successfully deployed in the Homefeed recommendation system at Pinterest. The results of our offline experiments indicate that TransAct significantly outperforms state-of-the-art recommender system baselines. In addition, we have discussed and provided solutions for the challenges faced during online experimentation, such as high serving complexity, diversity decrease, and engagement decay. The versatility and effectiveness of TransAct make it applicable for other tasks, such as contextual recommendations and search ranking.
\begin{table}
\begin{tabular}{c c c} \hline \hline Application & Metrics & \(\Delta\) \\ \hline Related Pins & Repin Volume & +2.8\% \\ \hline Search & Repin Volume & +2.3\% \\ \hline \multirow{2}{*}{Notification} & Email CTR & +1.4\% \\ & Push Open Rate & +1.9\% \\ \hline \hline \end{tabular}
\end{table}
Table 6. TransAct’s impact on other applications
Figure 6. Effect of retraining on TransAct
\begin{table}
\begin{tabular}{c c c} \hline \hline Online Metrics & All Users & Non-core Users \\ \hline Homefeed Repin Volume & +11.0\% & +17.0\% \\ Homefeed Hide Volume & -10.0\% & -10.5\% \\ Overall Time Spent & +2.0\% & +1.5\% \\ \hline \hline \end{tabular}
\end{table}
Table 5. Online evaluation of TransAct |
2309.06916 | Holographic description of the dissipative unified dark fluid model with
axion field | In this article we extend an axion F(R) gravity model, and apply the
holographic principle to describe in a unifying manner the early and the
late-time universe when the general equation of state (EoS) contains a bulk
viscosity. We assume a spatially flat Friedmann-Robertson-Walker (FRW) universe
model. We use a description based on the generalized infrared-cutoff
holographic dark energy proposed by Nojiri and Odintsov (2006, 2017), and
explore the evolution of the universe when the EoS describes the asymptotic
behavior between the dust in the early universe and the late universe. We
explore various forms of the bulk viscosity, and calculate analytical
expressions for the infrared cutoffs in terms of the particle horizon. In this
way we obtain a unifying description of the early and the late-time universe in
the presence of axion matter, via a viscous holographic fluid model. | I. Brevik, A. V. Timoshkin | 2023-09-13T12:29:25Z | http://arxiv.org/abs/2309.06916v1 | # Holographic description of the dissipative unified dark fluid model with axion field
###### Abstract
In this article we extend an axion F(R) gravity model, and apply the holographic principle to describe in a unifying manner the early and the late-time universe when the general equation of state (EoS) contains a bulk viscosity. We assume a spatially flat Friedmann-Robertson-Walker (FRW) universe model. We use a description based on the generalized infrared-cutoff holographic dark energy proposed by Nojiri and Odintsov (2006, 2017), and explore the evolution of the universe when the EoS describes the asymptotic behavior between the dust in the early universe and the late universe. We explore various forms of the bulk viscosity, and calculate analytical expressions for the infrared cutoffs in terms of the particle horizon. In this way we obtain a unifying description of the early and the late-time universe in the presence of axion matter, via a viscous holographic fluid model.
Keywords: viscous dark fluid, holographic principle, axion matter. Mathematics Subject Classification 2020: 83C55, 83C56, 83F05
## I Introduction
The holographic principle [1] is one of the current approaches aiming at describing the evolution of the universe. A generalized form of the cutoff holographic dark energy (HDE) model was proposed by Nojiri and Odintsov [2; 3]. The model can be applied to give a unified description of the early and the late-time accelerated universe. The theory of the holographic principle is associated with the thermodynamics of black holes and string theory [4; 5]. The infrared cutoff can be represented as a combination of various FRW universe parameters: the Hubble function, the particle and the future event horizons, the cosmological constant, and the finite life time of the universe. In the general case, the infrared cutoff may be constructed as an arbitrary combination of all the above quantities and their derivatives. If the life time of the universe is finite due to singularities of various types, the infrared radius depends also on the singularity time. Various versions of the holographic cutoffs were considered in [6; 7; 8; 9; 10; 11; 12; 13]. Earlier, as was shown in [14-16], all the known holographic dark energy models represent subclasses of the Nojiri-Odintsov HDE. The holographic theory of the universe is well confirmed by astronomical observations [17; 18; 19; 20; 21; 22]. In the present article we describe both the early-time and the late-time cosmic accelerating expansion in a single cosmological model. In Ref. [23], a unifying approach was proposed to describe both the early-time and the late-time universe based on phantom cosmology. A unified model of dark energy and dark matter in standard FRW cosmology was suggested in the works [24-27]. We will here suppose that the main component of cold dark matter in the universe is axion matter [28-34].
## II Holographic description of the accelerated universe with axion matter
Let us highlight the main aspects of the holographic principle, following the terminology given by Li in [1]. The holographic principle states that all physical quantities within the universe, including the dark energy density, can be described by fixing some quantities at the boundary of the universe [29]. In the holographic description the main component in this context is the cutoff radius of the horizon. According to the generalized model referred to earlier [2], the holographic energy density is taken to be inversely proportional to the squared infrared cutoff
\[\rho_{\rm hol}=3c^{2}k^{-2}L_{\rm IR}^{-2} \tag{1}\]
where \(k^{2}=8\pi G\) is Einstein's gravitational constant, \(G\) is Newton's gravitational constant, and \(c>0\) is a nondimensional constant. If the dark energy is described in this manner, the horizon cutoff radius corresponds to the infrared cutoff. Although there is no unique recipe for choosing this parameter, the most suitable choice for describing the accelerated expansion of the universe is to equate it to the particle horizon or, alternatively, to the future event
horizon, defined respectively by [3]
\[L_{p}(t)=a(t)\int_{0}^{t}\frac{dt^{\prime}}{a(t^{\prime})},\quad L_{f}(t)=a(t) \int_{t}^{\infty}\frac{dt^{\prime}}{a(t^{\prime})}, \tag{2}\]
where \(a(t)\) is the scale factor. It should be noted that not all choices of the cutoff infrared radius lead to an accelerated expansion of the universe. The choice of a cutoff radius is not arbitrary. One way of obtaining a useful description of the era of inflation and the era of dark energy, is to introduce a model of axion F(R) gravity. Within this model, it is possible to combine the era of inflation with the era of dark energy. We will consider a F(R) gravity model in the presence of a misalignment axion canonical scalar field \(\phi\) with the approximate scalar potential \(V(\phi)\approx\frac{1}{2}m_{a}^{2}\phi_{i}^{2}\), where \(m_{a}\) is the axion mass and \(\phi_{i}\) is the axion scalar. First of all, we will show how it is possible to consider the axion scalar as constituting a cold dark matter perfect fluid.
Let us consider the canonical equation of motion for the axion with scalar potential [30]
\[\ddot{\phi}+3H\dot{\phi}+m_{a}^{2}\phi=0. \tag{3}\]
Since the second term in this equation describes friction, we are dealing with decaying oscillations. The axion field behaves as a damped oscillator whose oscillations approximately begin when \(H\sim m_{a}\) and last for as long as \(m_{a}\gg H\). Let us suppose that the oscillatory solution of the axion equation (3) has the form
\[\phi(t)=\phi_{i}A(t)\cos m_{a}t, \tag{4}\]
where \(\phi_{i}\) is the initial value of the axion field after the end of inflation and \(A(t)\) is a slowly varying function. The function is monotonic due to the conditions
\[\frac{\dot{A}}{m_{a}A}\sim\frac{H}{m_{a}}\approx\varepsilon\ll 1, \tag{5}\]
which are valid at cosmic times for which \(m_{a}\gg H\). Using (4) together with conditions (5), one obtains the equation of motion (3) in the form
\[\frac{dA}{A}=-\frac{3}{2}\frac{da}{a}, \tag{6}\]
which has the solution
\[A\sim a^{-3/2}. \tag{7}\]
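For completeness, a sketch of the step leading to (6): inserting the ansatz (4) into (3) and discarding the terms \(\ddot{A}\) and \(H\dot{A}\), which are of second order in \(\varepsilon\) by (5), leaves

\[-\left(2\dot{A}+3HA\right)m_{a}\sin m_{a}t\approx 0\quad\Rightarrow\quad\frac{\dot{A}}{A}\approx-\frac{3}{2}\frac{\dot{a}}{a},\]

which is (6) in differential form and integrates to (7).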
Let us write the expressions for the energy density and pressure of the axion field. They are equal to
\[\rho_{a}=\frac{1}{2}\dot{\phi}^{2}+V(\phi) \tag{8}\]
\[P_{a}=\frac{1}{2}\dot{\phi}^{2}-V(\phi), \tag{9}\]
showing that the misaligned axion field can be considered as a canonical scalar field.
Calculating the term \(\frac{1}{2}\dot{\phi}^{2}\) with the approximation (5), we obtain
\[\frac{1}{2}\dot{\phi}^{2}\approx\frac{1}{2}m_{a}^{2}\phi_{i}^{2}A^{2}\sin^{2} m_{a}t. \tag{10}\]
Then the axion potential is equal to
\[V(\phi)=\frac{1}{2}m_{a}^{2}\phi_{i}^{2}A^{2}\cos^{2}m_{a}t. \tag{11}\]
Using equations (10), (11) and the formula for the axion energy density (8), we obtain
\[\rho_{a}\approx\frac{1}{2}(m_{a}\phi_{i}A)^{2}. \tag{12}\]
In this case, taking into account (7), the expression for the axion energy density becomes [29]
\[\rho_{a}\approx\rho_{m}^{(0)}a^{-3}, \tag{13}\]
where \(\rho_{m}^{(0)}=\frac{1}{2}(m_{a}\phi_{i})^{2}\).
In summary, we conclude that the axion energy density behaves as \(\rho_{a}\sim a^{-3}\) for all cosmic times, assuming that \(m_{a}\gg H\). Hence, we have seen that the axion scalar can be considered as the constituent of a cold dark matter perfect fluid.
Let us also calculate the pressure of the axion scalar, using (10) and (11) together with (9),
\[P_{a}\approx-\frac{1}{2}(m_{a}\phi_{i}A)^{2}\cos 2m_{a}t. \tag{14}\]
Writing the equation-of-state thermodynamic parameter for the axion scalar as \(\omega_{a}=P_{a}/\rho_{a}\), we obtain
\[\omega_{a}=-\cos 2m_{a}t, \tag{15}\]
whose average value is zero. This is a consequence of our model of the axion being a cold dark matter particle.
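Explicitly, averaging (15) over one oscillation period \(T_{a}=\pi/m_{a}\) gives

\[\langle\omega_{a}\rangle=-\frac{m_{a}}{\pi}\int_{0}^{\pi/m_{a}}\cos 2m_{a}t\,dt=-\frac{\sin 2\pi}{2\pi}=0.\]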
## III Dissipative unified dark fluid model
Let us consider the universe filled with a viscous dark fluid in the presence of axion matter in a homogeneous and isotropic spatially flat Friedmann-Robertson-Walker (FRW) metric,
\[ds^{2}=-dt^{2}+a^{2}(t)\sum_{i=1}^{3}(dx^{i})^{2}. \tag{16}\]
The modified Friedmann equation is [29]
\[H^{2}=\frac{1}{3}k^{2}(\rho+\rho_{a}) \tag{17}\]
where \(H=\dot{a}/a\) is the Hubble function and \(\rho\) is the holographic dark energy.
We will describe the system, which contains a viscous dark fluid in the presence of axion matter, in terms of the parameters appearing in the effective inhomogeneous equation of state (EoS) in flat FRW space-time [35; 36]
\[p=\omega(\rho,t)\rho+f(\rho)-3H\zeta(H,t). \tag{18}\]
where \(\omega(\rho,t)\) is the thermodynamic parameter and \(\zeta(H,t)\) is the bulk viscosity, which in general depends on both the Hubble function and on the time t. From thermodynamic considerations we take the bulk viscosity to be positive.
Let us consider the model of a unified description of the early and late universe. For this purpose, we choose the function in the form [25]
\[f(\rho)=\frac{\gamma\rho^{n}}{1+\delta\rho^{m}}. \tag{19}\]
where \(\gamma,\delta,n,m\) are free parameters. The function \(f(\rho)\) in the formula (19) of the EoS provides the description of the unified early and late universe. Using interpolation between different powers in the expression (19), we can describe the asymptotic behavior between the dust in the early universe and the late universe [26].
Dissipative processes are described with the bulk viscosity in the form [36]
\[\zeta(H,t)=\xi_{1}(t)(3H)^{p}, \tag{20}\]
where the parameter \(p\) is positive.
The energy conservation law takes the standard form
\[\dot{\rho}+3H(\rho+p)=0. \tag{21}\]
We will now distinguish between two cases.
**Case 1.**
First, we consider the simplest case, when the thermodynamic parameter is \(\omega=\omega_{0}\) and the bulk viscosity is \(\zeta(H,t)=\zeta_{0}\), both constants. We restrict ourselves to the values \(n=\frac{3}{2}\) and \(m=1\), which corresponds to \(n-m=\frac{1}{2}\). Then the equation of state (18) reads
\[p=\omega_{0}\rho+\frac{\gamma\rho^{3/2}}{1+\delta\rho^{1/2}}-3\zeta_{0}H. \tag{22}\]
The Friedmann equation (17), when the axion energy density (13) is taken into account, reads
\[\rho=\frac{3}{k^{2}}H^{2}-\frac{\rho_{m}^{(0)}}{a^{3}}. \tag{23}\]
Using (22) and (23) in the approximation of large \(\rho\), one obtains from (21)
\[\frac{2}{k^{2}}\dot{H}+\frac{3}{k^{2}}\left(\omega_{0}+\frac{\gamma}{\delta}+1 \right)H^{2}-\left(\omega_{0}+\frac{\gamma}{\delta}\right)\frac{\rho_{m}^{(0) }}{a^{3}}-3\zeta_{0}H=0. \tag{24}\]
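To see where (24) comes from, note that for large \(\rho\) the interpolating term in (22) behaves as \(\frac{\gamma\rho^{3/2}}{1+\delta\rho^{1/2}}\approx\frac{\gamma}{\delta}\rho\), so that \(\rho+p\approx\left(1+\omega_{0}+\frac{\gamma}{\delta}\right)\rho-3\zeta_{0}H\). Substituting this together with (23) into the conservation law (21) gives

\[\frac{6}{k^{2}}H\dot{H}+3\rho_{m}^{(0)}a^{-3}H+3H\left[\left(1+\omega_{0}+\frac{\gamma}{\delta}\right)\left(\frac{3}{k^{2}}H^{2}-\rho_{m}^{(0)}a^{-3}\right)-3\zeta_{0}H\right]=0,\]

and dividing by \(3H\) yields (24).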
We write this equation in terms of the scale factor as
\[\ddot{a}a+\left[\frac{3}{2}\left(\omega_{0}+\frac{\gamma}{\delta}\right)- \frac{1}{2}\right]\dot{a}^{2}-\frac{3}{2}\zeta_{0}k^{2}\dot{a}a-\frac{1}{2}k^ {2}\rho_{m}^{(0)}\left(\omega_{0}+\frac{\gamma}{\delta}\right)\frac{1}{a}=0. \tag{25}\]
If we take \(\omega_{0}=-\frac{1}{3}-\frac{\gamma}{\delta}\), equation (25) simplifies to
\[\ddot{a}-\frac{3}{2}\zeta_{0}k^{2}\dot{a}+\frac{k^{2}}{6}\rho_{m}^{(0)}\frac{1 }{a^{2}}=0. \tag{26}\]
When the viscosity \(\zeta_{0}\to 0\) the solution of (26) becomes
\[\frac{1}{C_{1}}\sqrt{a\left(C_{1}a+\frac{1}{3}k^{2}\rho_{m}^{(0)}\right)}+ \frac{1}{3C_{1}^{3/2}}k^{2}\rho_{m}^{(0)}\ln\Big{|}\frac{\sqrt{3C_{1}a}}{k \sqrt{\rho_{m}^{(0)}}}\left(1-\sqrt{\frac{1}{3C_{1}}k^{2}\rho_{m}^{(0)}\frac{1 }{a}+1}\,\right)\Big{|}=t+C_{2}, \tag{27}\]
where \(C_{1}\neq 0\) and \(C_{2}\) is arbitrary.
Let us consider the particular case when \(C_{1}=C_{2}=0\). Then the solution of the equation of motion becomes
\[a(t)=\rho_{a}^{(0)}t^{\frac{2}{3}}, \tag{28}\]
where \(\rho_{a}^{(0)}=\left(\frac{1}{3}k^{2}\rho_{m}^{(0)}\right)^{1/3}\).
Correspondingly, the Hubble function becomes
\[H(t)=\frac{2}{3t}, \tag{29}\]
and the particle horizon \(L_{p}\) becomes
\[L_{p}=3t. \tag{30}\]
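This is consistent with the definition (2): for \(a(t)\propto t^{2/3}\),

\[L_{p}(t)=a(t)\int_{0}^{t}\frac{dt^{\prime}}{a(t^{\prime})}=t^{2/3}\int_{0}^{t}t^{\prime\,-2/3}dt^{\prime}=t^{2/3}\cdot 3t^{1/3}=3t.\]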
Let us now interpret equation (26) from a holographic point of view. From [2], the Hubble function \(H\) can be expressed in terms of the particle horizon and its time derivative,
\[H=\frac{\dot{L}_{p}-1}{L_{p}},\quad\dot{H}=\frac{\ddot{L}_{p}}{L_{p}}-\frac{ \dot{L_{p}}^{2}}{L_{p}^{2}}+\frac{\dot{L}_{p}}{L_{p}^{2}}. \tag{31}\]
In our case it is necessary to express the scale factor and its time derivative in terms of the particle horizon and its derivative,
\[a=\rho_{a}^{(0)}\left(\frac{\dot{L}_{p}}{L_{p}}\right)^{-\frac{2}{3}},\quad \dot{a}=\frac{2}{3}\rho_{a}^{(0)}\left(\frac{\dot{L}_{p}}{L_{p}}\right)^{\frac {1}{3}},\quad\ddot{a}=-\frac{2}{9}\rho_{a}^{(0)}\left(\frac{\dot{L}_{p}}{L_{p} }\right)^{\frac{4}{3}}. \tag{32}\]
Thus, by using (32) we can rewrite the energy conservation equation (26) in the holographic form
\[\rho_{a}^{(0)}\left(\frac{2}{9}\frac{\dot{L}_{p}}{L_{p}}+\zeta_{0}k^{2}\right)- \frac{k^{2}}{6}\frac{\rho_{m}^{(0)}}{\rho_{a}^{(0)}}\left(\frac{\dot{L_{p}}}{L_ {p}}\right)^{\frac{1}{3}}=0. \tag{33}\]
Thereby, we have applied the holographic principle to this model. Equation (33) shows the holographic description of the viscous dark fluid model with axion matter.
**Case 2.**
Next, we will consider the case when the thermodynamic parameter is constant and assume the bulk viscosity to be linearly proportional to the Hubble function, \(\zeta(H,t)=3\tau H\), where the viscosity parameter \(\tau\) is positive (in natural units, where the fundamental length is cm, the dimension of \(\zeta\) is \(\mathrm{cm}^{-3}\), and since the dimension of \(H\) is \(\mathrm{cm}^{-1}\), the dimension of \(\tau\) is \(\mathrm{cm}^{-2}\)). We will work in the same approximation and with the same values of the parameters \(n\) and \(m\) as in the previous case. The EoS (18) becomes
\[p=\omega_{0}\rho+\frac{\gamma\rho^{3/2}}{1+\delta\rho^{1/2}}-9\tau H^{2}. \tag{34}\]
Using (21), (23), and (34) in the approximation of large \(\rho\), one obtains the differential equation of motion
\[\frac{2}{k^{2}}\dot{H}+\frac{3}{k^{2}}\left(\omega_{0}+\frac{\gamma}{\delta}- 9\tau+1\right)H^{2}-\left(\omega_{0}+\frac{\gamma}{\delta}\right)\frac{\rho_{ m}^{(0)}}{a^{3}}=0. \tag{35}\]
Let us rewrite this equation in terms of the scale factor,
\[\ddot{a}a+\frac{1}{2}\left[3\left(\omega_{0}+\frac{\gamma}{\delta}\right)-9\tau k^{2}+1\right]\dot{a}^{2}-\frac{1}{2}k^{2}\rho_{m}^{(0)}\left(\omega_{0}+\frac{\gamma}{\delta}\right)\frac{1}{a}=0. \tag{36}\]
Let us take \(\omega_{0}=-\gamma/\delta\); then the axion dark matter contribution drops out and we obtain a description in terms of the viscous dark fluid alone. The equation simplifies and takes the form
\[\ddot{a}a+\frac{1}{2}(1-9\tau k^{2})\dot{a}^{2}=0. \tag{37}\]
The solution of (37) becomes
\[\frac{1}{b+1}a^{b+1}=(C_{1}t+C_{2}),\quad b\neq-1, \tag{38}\]
where \(b=\frac{1}{2}(1-9\tau k^{2})\) and \(C_{1}\), \(C_{2}\) are arbitrary constants. If \(b=-1\), which corresponds to the value \(\tau=\frac{1}{3k^{2}}\) of the viscous parameter, we obtain the solution of (37) in the form
\[a(t)=C_{2}e^{C_{1}t}. \tag{39}\]
Next, we calculate the particle horizon
\[L_{p}=\frac{1}{C_{1}}\left(e^{C_{1}t}-1\right). \tag{40}\]
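As a check against the definition (2), the exponential scale factor (39) gives

\[L_{p}(t)=C_{2}e^{C_{1}t}\int_{0}^{t}\frac{dt^{\prime}}{C_{2}e^{C_{1}t^{\prime}}}=e^{C_{1}t}\,\frac{1-e^{-C_{1}t}}{C_{1}}=\frac{1}{C_{1}}\left(e^{C_{1}t}-1\right),\]

in agreement with (40).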
Using (39) and (40), we express the scale factor in terms of the particle horizon and its derivatives,
\[a=C_{2}(C_{1}L_{p}+1),\quad\dot{a}=C_{1}C_{2}\dot{L}_{p},\quad\ddot{a}=C_{1}C_{2}\ddot{L}_{p}, \tag{41}\]
then the holographic representation of the motion equation (37) is
\[(1+C_{1}L_{p})\ddot{L}_{p}+\frac{C_{1}}{2}(1-9\tau k^{2})\dot{L}_{p}^{2}=0. \tag{42}\]
The equation (42) represents a reconstruction of the energy conservation equation, according to the holographic principle. Thus, we applied the holographic principle in the unified dissipative dark fluid model to obtain the appropriate energy conservation law.
## IV Conclusion
In the present paper we have considered a unified model of the early and the late-time universe, in a homogeneous and isotropic spatially flat Friedmann-Robertson-Walker metric, from a holographic point of view. We assumed that the universe is filled with a viscous dark fluid in the presence of axion matter, and presented the energy conservation equation in a holographic language. We showed the equivalence between viscous fluid cosmology and holographic fluid cosmology assuming the cutoff model introduced by Nojiri and Odintsov [2; 3].
For this, we identified the infrared radius \(L_{\rm IR}\) with the particle horizon \(L_{p}\). We assumed a general EoS for the viscous dark fluid in the presence of axion matter. We applied the holographic principle to cosmological models with a constant value of the thermodynamic parameter \(\omega(\rho,t)\) and considered different forms of the bulk viscosity \(\zeta(H,t)\). In every model the infrared radius, in the form of a particle horizon, was calculated in order to obtain the energy conservation equation. Despite the fact that in the inflationary scenario the contribution of bulk viscosity is usually insignificant, with an influence that increases only with the development of the universe, we described the holographic picture using a viscous fluid. Thus, equivalence is established between the description of the unified model of the early and late universe with the help of a viscous dark fluid, and its holographic description based on a selection of the infrared radius.
One may ask if there is an agreement between the holographic theory and astronomical observations. A comparative analysis was given in [37] examining the holographic dark energy model on the brane. The analysis was carried out for relationships between apparent magnitudes and redshifts for distant supernova Ia, Hubble parameters for different redshifts, and baryon acoustic oscillations. For a wide range of the parameters, the observational data were found to be in good agreement with theoretical predictions.
**Acknowledgment**
This work was supported by Russian Foundation for Basic Research; Project No. 20-52-05009.
|
2303.17840 | Large deviation for small noise path-dependent stochastic differential
equations | In this paper, we study the asymptotic behavior of randomly perturbed
path-dependent stochastic differential equations with small parameter
$\vartheta_{\varepsilon}$, when $\varepsilon \rightarrow 0$,
$\vartheta_\varepsilon$ goes to $0$. When $\varepsilon \rightarrow 0$, we
establish large deviation principle. The proof of the results relies on the
weak convergence approach. As an application, we establish the large deviation
for functionals of path-dependent SDEs in small time intervals. | Liu Xiangdong, Hong Shaopeng | 2023-03-31T07:04:45Z | http://arxiv.org/abs/2303.17840v1 | # Large deviation for small noise path-dependent stochastic differential equations
###### Abstract
In this paper, we study the asymptotic behavior of randomly perturbed path-dependent stochastic differential equations with small parameter \(\vartheta_{\varepsilon}\), when \(\varepsilon\to 0\), \(\vartheta_{\varepsilon}\) goes to \(0\). When \(\varepsilon\to 0\), we establish large deviation principle. The proof of the results relies on the weak convergence approach. As an application, we establish the large deviation for functionals of path-dependent SDEs in small time intervals.
keywords: Path-dependent stochastic differential equations, Large deviation principle, Weak convergence Msc: [2010] 60H10, 60F05, 60F10
## 1 Introduction
This paper sheds new light on the asymptotic behaviour of the class of path-dependent stochastic differential equations (PSDEs).
\[X(t)=X_{0}+\int_{0}^{t}b(s,X_{s})ds+\int_{0}^{t}\sigma(s,X_{s})dW(s)\quad t\in[ 0,T] \tag{1}\]
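To fix ideas, a minimal Euler-Maruyama sketch of an equation of the form (1) is given below; the moving-average drift, the constant diffusion and all parameter values are illustrative assumptions only and play no role in the analysis.

```python
# Minimal Euler-Maruyama sketch for a path-dependent SDE of the form (1).
# The drift b(t, X_t) = -(mean of the path up to t) and the constant diffusion
# coefficient are illustrative assumptions only.
import numpy as np

def simulate_psde(x0=1.0, T=1.0, n=1000, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for i in range(n):
        drift = -np.mean(x[: i + 1])          # depends on the whole past path
        x[i + 1] = x[i] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return x

print(simulate_psde()[-1])
```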
PSDEs have received increasing attention from researchers; they are much more involved than classical SDEs, as the drift and diffusion coefficients depend on the path of the solution. In a nutshell, this kind of equation plays an important role in characterising non-Markov partial differential equations (PDEs for short). Ekren et al. [6] obtained the viscosity solutions of path-dependent semi-linear parabolic PDEs using backward PSDEs and non-anticipative analysis [5; 3], and subsequently extended the results to fully nonlinear forms of path-dependent PDEs [7].
It is well known that the key point of the large deviation principle (LDP for short) is to quantify the probabilities of rare events. Small noise LDP for SDEs has a long history. The pioneering work of [8] considered rare events induced by Markov diffusions. Recently, an important contribution by [1] was to use the weak convergence method to obtain a significantly
simplified approach. Their approach avoided proving exponential continuity and tightness estimates. Weak convergence methods are widely used in proving large deviations for stochastic differential equations and stochastic partial differential equations; see [11; 13; 14] and references therein.
There have been some studies on large deviations of path-dependent SDEs. For example, Gao and Liu [9] studied such a problem via the sample path LDP method of Freidlin-Wentzell and showed the LDP under (r,q)-capacity, and Ma et al. [12] obtained the LDP of path-dependent SDEs based on a PDE method. In this paper, we use a different line of argument, adapting the weak convergence approach of Budhiraja and Dupuis [1] to the path-dependent case.
Compared with the results mentioned above, the contribution of this paper is to study the LDP when the coefficients of the PSDEs all depend on \(\varepsilon\), i.e., the solutions of the PSDEs may degenerate. As an application, we establish the large deviation for functionals of PSDEs in small time intervals.
The paper is organized as follows. In Section 2, we state the weak convergence method for the large deviation principle given in Budhiraja and Dupuis [1]. We give the main theorem and prove it in Section 3. Finally, in Section 4, we show the large deviation principle for the functional of PSDEs in small time interval.
We end this section with some notations. We consider a fixed time horizon \(T>0\), and denote \(\mathbb{T}:=[0,T]\). Let \(C([0,T];\mathbb{R}^{d})\) be the Banach space of continuous functions \(\psi:[0,T]\to\mathbb{R}^{d}\) equipped with the sup-norm \(\|\psi\|:=\sup_{t\in[0,T]}|\psi(t)|\), let \(\mathcal{C}^{1}_{0}([0,T];\mathbb{R}^{d})\) be the space of continuously differentiable functions on \([0,T]\) with initial value \(0\), and let \(\mathcal{C}^{1}_{b}([0,T];\mathbb{R}^{d})\) be the space of bounded continuously differentiable functions on \([0,T]\) with initial value \(0\). \(L^{2}\) stands for \(L^{2}(\mathbb{T})\) and \(\|\cdot\|_{2}\) is the usual \(L^{2}\) norm.
## 2 Preliminaries
### Framework
We consider small-noise convolution PSDEs
\[X^{\varepsilon}(t)=X^{\varepsilon}_{0}+\int_{0}^{t}b_{\varepsilon}(s,X^{ \varepsilon}_{s})ds+\vartheta_{\varepsilon}\int_{0}^{t}\sigma_{\varepsilon}(s,X^{\varepsilon}_{s})dW(s)\quad t\in[0,T] \tag{2}\]
taking values in \(\mathbb{R}^{d}\) with \(d\geq 1\), where \(\varepsilon>0\) and \(\vartheta_{\varepsilon}>0\) tends to zero as \(\varepsilon\) goes to zero. For each \(\varepsilon>0\), \(X^{\varepsilon}_{0}\in\mathbb{R}^{d}\), \(b_{\varepsilon}:\mathbb{T}\times\mathcal{C}\left(\mathbb{T},\mathbb{R}^{d}\right)\to\mathbb{R}^{d}\), \(\sigma_{\varepsilon}:\mathbb{T}\times\mathcal{C}\left(\mathbb{T},\mathbb{R}^{d}\right)\to\mathbb{R}^{d\times m}\) are two product measurable maps that are non-anticipative in the sense that they satisfy \(b_{\varepsilon}(t,x)=b_{\varepsilon}(t,x_{t})\) and \(\sigma_{\varepsilon}(t,x)=\sigma_{\varepsilon}(t,x_{t})\) for all \(t\in\mathbb{T}\) and each \(x\in\mathcal{C}\left(\mathbb{T},\mathbb{R}^{d}\right)\), where \(x_{t}\) denotes the path \(x\) stopped at time \(t\). \(W(s)\) is an m-dimensional Brownian motion on the filtered probability space \(\left(\Omega,\mathcal{F},\left\{\mathcal{F}_{t}\right\}_{t\in\mathbb{T}},\mathbb{P}\right)\) satisfying the usual conditions. We make the following assumptions about the coefficients:
1. \(X^{\varepsilon}_{0}\) converges to \(x_{0}\in\mathbb{R}^{d}\) as \(\varepsilon\) tends to zero.
2. For all \(\varepsilon>0\) small enough, the coefficients \(b_{\varepsilon}\) and \(\sigma_{\varepsilon}\) are measurable maps on \(\mathbb{T}\times\mathcal{C}\left(\mathbb{T}:\mathbb{R}^{d}\right)\) and converge pointwise to \(b\) and \(\sigma\) as \(\varepsilon\) goes to zero. Moreover, \(b(t,\cdot)\) and \(\sigma(t,\cdot)\) are continuous on \(\mathbb{R}^{d}\), uniformly in \(t\in\mathbb{T}\).
**A.3**: For all \(\varepsilon>0\) small enough, \(b_{\varepsilon}\) and \(\sigma_{\varepsilon}\) have linear growth uniformly in \(\varepsilon\) and in \(t\in\mathbb{T}\): for some \(M>0\),
\[|b_{\varepsilon}(t,\omega)|+|\sigma_{\varepsilon}(t,\omega)|\leq M\left(1+\sup_ {s\leq t}|\omega(s)|+|t|\right) \tag{3}\]
**A.4**: For all \(\varepsilon>0\) small enough, the coefficients \(b_{\varepsilon}\) and \(\sigma_{\varepsilon}\) are locally Lipschitz continuous: for any \(R>0\), there exists \(L_{R}>0\) such that, for all \(t\in\mathbb{T}\) and all \(\omega,\omega^{\prime}\) with \(\sup_{s\leq t}|\omega(s)|\leq R\) and \(\sup_{s\leq t}|\omega^{\prime}(s)|\leq R\),
\[|b_{\varepsilon}(t,\omega)-b_{\varepsilon}(t,\omega^{\prime})|+|\sigma_{ \varepsilon}(t,\omega)-\sigma_{\varepsilon}(t,\omega^{\prime})|\leq L_{R}( \sup_{s\leq t}|\omega(s)-\omega^{\prime}(s)|) \tag{4}\]
### Abstract sufficient conditions for large deviations
**Definition 1 (Large deviation [4]).**_A family \(\left\{X^{\varepsilon}\right\}_{\varepsilon>0}\) of \(\mathcal{E}\)-valued random variables is said to satisfy the large deviation principle on \(\mathcal{E}\), with the good rate function \(I\) and with the speed function \(\lambda(\varepsilon)\), which is a sequence of positive numbers tending to \(+\infty\) as \(\varepsilon\to 0\), if the following conditions hold:_
1. _for each_ \(M<\infty\)_, the level set_ \(\left\{x\in\mathcal{E}:I(x)\leq M\right\}\) _is a compact subset of_ \(E\)_;_
2. _for each closed subset_ \(F\) _of_ \(\mathcal{E},\limsup_{\varepsilon\to 0}\frac{1}{\lambda(\varepsilon)}\log \mathbb{P}\left(X^{\varepsilon}\in F\right)\leq-\inf_{x\in F}I(x)\)_;_
3. _for each open subset_ \(G\) _of_ \(\mathcal{E},\liminf_{\varepsilon\to 0}\frac{1}{\lambda(\varepsilon)}\log \mathbb{P}\left(X^{\varepsilon}\in G\right)\geq-\inf_{x\in G}I(x)\)_._
We recall here several results from Budhiraja and Dupuis [1] which give an abstract framework for the LDP.
Let \(\mathcal{A}\) denote the class of real-valued \(\left\{\mathcal{F}_{t}\right\}\)-predictable processes \(\nu\) belonging to \(L^{2}\) a.s. For each \(N\) the spaces of bounded deterministic and stochastic controls
\[S_{N}:=\left\{\nu\in L^{2};\int_{0}^{T}|\nu(s)|^{2}ds\leq N\right\}.\]
\(S_{N}\) is endowed with the weak topology induced from \(L^{2}(\mathbb{T}\times\Omega)\). Define
\[\mathcal{A}_{N}:=\left\{\nu\in\mathcal{A};\nu(s)\in S_{N},\mathbb{P}\text{-a. s. }\right\}.\]
**Theorem 2 (Budhiraja and Dupuis [1]).**_For any \(\varepsilon>0\), let \(\mathcal{G}^{\varepsilon}\) be a measurable mapping from \(C([0,T];\mathbb{R})\) into \(E\). Suppose that \(\left\{\mathcal{G}^{\varepsilon}\right\}_{\varepsilon>0}\) satisfies the following assumptions: there exists a measurable map \(\mathcal{G}^{0}:C([0,T];\mathbb{R})\longrightarrow\mathcal{E}\) such that_
1. _for every_ \(N<+\infty\) _and any family_ \(\left\{\nu^{\varepsilon};\varepsilon>0\right\}\subset\mathcal{A}_{N}\) _satisfying that_ \(\nu^{\varepsilon}\) _converge in distribution as_ \(S_{N}\)_-valued random elements to_ \(\nu\) _as_ \(\varepsilon\to 0,\mathcal{G}^{\varepsilon}\left(W_{\cdot}+\frac{1}{ \sqrt{\varepsilon}}\int_{0}^{\cdot}\nu^{\varepsilon}(s)ds\right)\) _converges in distribution to_ \(\mathcal{G}^{0}\left(\int_{0}^{\cdot}\nu(s)ds\right)\) _as_ \(\varepsilon\to 0\)_;_
2. _for every_ \(N<+\infty\)_, the set_ \(\left\{\mathcal{G}^{0}\left(\int_{0}^{\cdot}\nu(s)ds\right);\nu\in S_{N}\right\}\) _is a compact subset of_ \(E\)_._
_Then the family \(\left\{\mathcal{G}^{\varepsilon}(W(\cdot))\right\}_{\varepsilon>0}\) satisfies a large deviation principle with the good rate function I given by_
\[I(g):=\inf_{\left\{\nu\in\mathcal{H};g=\mathcal{G}^{0}\left(\int_{0}^{\cdot} \nu(s)ds\right)\right\}}\left\{\frac{1}{2}\int_{0}^{T}|\nu(s)|^{2}ds\right\} \quad\text{ for }g\in\mathcal{E},\]
_with the convention \(\inf\emptyset=\infty\)._
## 3 Main Result and Proof
Under **A.1**-**A.4**, define the functional \(\mathcal{G}^{\varepsilon}\) as the Borel-measurable map associating the multidimensional Brownian motion \(W\) to the solution of the path-dependent stochastic differential system (2), that is, \(\mathcal{G}^{\varepsilon}\left(W\right)=X^{\varepsilon}\). For any control \(\nu\in\mathcal{A}_{N}\), \(N>0\) and any \(\varepsilon>0\), the process \(\widetilde{W}=W+\vartheta_{\varepsilon}^{-1}\int_{0}^{\cdot}\nu(s)ds\) is a \(\widetilde{\mathbb{P}}-\)Brownian motion by Girsanov's theorem, where
\[\frac{d\widetilde{\mathbb{P}}}{d\mathbb{P}}:=\exp\left\{-\frac{1}{\vartheta_{ \varepsilon}}\sum_{i=1}^{m}\int_{0}^{T}\nu^{(i)}(s)dW^{(i)}(s)-\frac{1}{2 \vartheta_{\varepsilon}^{2}}\int_{0}^{T}|v(s)|^{2}ds\right\}. \tag{5}\]
Hence the shifted version \(X^{\varepsilon,v}:=\mathcal{G}^{\varepsilon}(\tilde{W})\) appearing in Theorem 2 (1) is the strong unique solution of (2) under \(\widetilde{\mathbb{P}}\), with \(X^{\varepsilon}\) and \(W\) replaced by \(X^{\varepsilon,v}\) and \(\widetilde{W}\). Because \(\mathbb{P}\) and \(\widetilde{\mathbb{P}}\) are equivalent, \(X^{\varepsilon,v}\) is also the unique strong solution, under \(\mathbb{P}\), of the controlled equation
\[X^{\varepsilon,v}(t)=X_{0}^{\varepsilon}+\int_{0}^{t}\left[b_{\varepsilon} \left(s,X_{s}^{\varepsilon,v}\right)+\sigma_{\varepsilon}\left(s,X_{s}^{ \varepsilon,v}\right)v(s)\right]\mathrm{d}s+\vartheta_{\varepsilon}\int_{0}^{ t}\sigma_{\varepsilon}\left(s,X_{s}^{\varepsilon,v}\right)\mathrm{d}W(s) \tag{6}\]
Taking \(\varepsilon\to 0\), the system (6) reduces to the deterministic path dependent ODE
\[\phi(t)=x_{0}+\int_{0}^{t}\left[b(s,\phi_{s})+\sigma(s,\phi_{s})\nu(s)\right]ds. \tag{7}\]
**Theorem 3**: _Under **A.1**-**A.4**, the family \(\left\{X^{\varepsilon}\right\}_{\varepsilon>0}\), unique solution of (2), satisfies a large deviation principle with rate function \(I\) and speed \(\vartheta_{\varepsilon}^{-2}\), where \(\mathcal{G}^{0}\) is the solution map of (7)._
**Remark 1**: _Theorem 3 generalizes the results in Chiarini and Fischer [2]. When the coefficients \(b_{\varepsilon}\) and \(\sigma_{\varepsilon}\) do not depend on the path of the process \(X_{\varepsilon}\), Theorem 3 and Chiarini and Fischer [2, Theorem 3] are equivalent._
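For orientation, when \(d=m\) and \(\sigma(s,\cdot)\) is invertible (an extra assumption, not required by Theorem 3), the unique control driving (7) to a given absolutely continuous path \(\phi\) with \(\phi(0)=x_{0}\) is \(\nu(s)=\sigma^{-1}(s,\phi_{s})\left(\dot{\phi}(s)-b(s,\phi_{s})\right)\), and the rate function reduces to the familiar Freidlin-Wentzell form

\[I(\phi)=\frac{1}{2}\int_{0}^{T}\left|\sigma^{-1}(s,\phi_{s})\left(\dot{\phi}(s)-b(s,\phi_{s})\right)\right|^{2}ds,\]

with \(I(\phi)=+\infty\) for paths that are not absolutely continuous or do not start at \(x_{0}\).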
We first show the unique solution of (7) and a uniform estimation.
**Lemma 4**: _Under **A.1**-**A.4**, given any \(\nu\in L^{2}\), there is a unique solution \(\phi\in\mathcal{C}\left([0,T];\mathbb{R}^{n}\right)\) of (7). Moreover, for \(\phi\), we have the growth estimate_
\[\sup_{0\leq s\leq t}|\phi(s)|^{2}\leq\left(3|x_{0}|^{2}+9M^{2}t(t+\|\nu\|^{2})+3M^{2}t^{3}(t+\|\nu\|^{2})\right)e^{9M^{2}(t+\|\nu\|^{2})} \tag{8}\]
Let \(\phi,\varphi\in\mathcal{C}\left([0,T];\mathbb{R}^{d}\right)\) be solutions of (7). We have
\[|\phi(t)-\varphi(t)|\leq\int_{0}^{t}\left|b(s,\phi_{s})-b(s,\varphi_{s})\right| ds+\int_{0}^{t}\left|\sigma(s,\phi_{s})-\sigma(s,\varphi_{s})\right|\left|\nu \right|ds \tag{9}\]
By assumption **A.4**, we have for large enough \(R>0\)
\[\left|\phi(t)-\varphi(t)\right|^{2}\leq 2L_{R}^{2}\left(T+\|\nu\|^{2}\right)\int_{0}^{t}\sup_{0\leq u\leq s}\left|\phi(u)-\varphi(u)\right|^{2}ds\]
Gronwall's inequality now entails that \(\|\phi(t)-\varphi(t)\|=0\), which yields uniqueness.
By using assumption **A.3**, we can get that
\[\left|\phi(t)\right|^{2} \leq 3|x_{0}|^{2}+3t\int_{0}^{t}\left|b(s,\phi_{s})\right|^{2}ds+3\left(\int_{0}^{t}\left|\sigma(s,\phi_{s})\right|\left|\nu\right|ds\right)^{2} \tag{10}\] \[\leq 3|x_{0}|^{2}+9M^{2}\left(t+\|\nu\|^{2}\right)\int_{0}^{t}\left(1+\sup_{0\leq u\leq s}\left|\phi(u)\right|^{2}+\left|s\right|^{2}\right)ds\] \[\leq 3|x_{0}|^{2}+9M^{2}t(t+\|\nu\|^{2})+3M^{2}t^{3}(t+\|\nu\|^{2})+9M^{2}(t+\|\nu\|^{2})\int_{0}^{t}\sup_{0\leq u\leq s}|\phi(u)|^{2}du\]
By Gronwalls' inequality, we can deduce
\[\sup_{0\leq s\leq t}|\phi(s)|^{2}\leq\left(3|x_{0}|^{2}+9M^{2}t(t+\|\nu\|^{2})+3M^{2}t^{3}(t+\|\nu\|^{2})\right)e^{9M^{2}(t+\|\nu\|^{2})}\]
We need some technical preliminary results.
**Lemma 5**: _Under **A.1**-**A.4**, for all \(p\geq 2\), \(N>0\), \(\nu\in\mathcal{A}_{N}\) and \(\varepsilon>0\) small enough, there exists a constant \(c>0\) independent of \(\varepsilon\), \(\nu\), \(t\) such that_
\[\mathbb{E}\left[\sup_{t\in\mathbb{T}}|X^{\varepsilon,\nu}(t)|^{p}\right]\leq c \tag{11}\]
Let us fix \(p\geq 2\), \(N>0\), \(\nu\in\mathcal{A}_{N}\) and \(t\in\mathbb{T}\). Let \(\tau_{n}\) be the stopping time defined by
\[\tau_{n}=\inf\left\{t\geq 0:|X^{\varepsilon,\nu}(t)|\geq n\right\}\wedge T\]
We write \(b_{s}^{n}:=b_{\varepsilon}(s,X_{s}^{\varepsilon,\nu}\mathbbm{1}_{\left\{s\leq \tau_{n}\right\}})\) and \(\sigma_{s}^{n}:=\sigma_{\varepsilon}\left(s,X_{s}^{\varepsilon,\nu} \mathbbm{1}_{\left\{s\leq\tau_{n}\right\}}\right)\).
We fix \(n\in\mathbb{N}\) and observe that, almost surely:
\[\mathbb{E}\left[\|X^{\varepsilon,\nu}(t)\mathbbm{1}_{t\leq\tau_ {n}}\|^{p}\right] \leq 4^{p-1}\left|X_{0}^{\varepsilon}\right|^{p}+4^{p-1}\mathbb{E} \left\{\left[\int_{0}^{t}b_{s}^{n}ds\right]^{p}\right\}+4^{p-1}\mathbb{E} \left\{\left[\int_{0}^{t}\sigma_{s}^{n}\nu(s)ds\right]^{p}\right\} \tag{12}\] \[+4^{p-1}\vartheta_{\varepsilon}^{p}\mathbb{E}\left\{\left[\int_{ 0}^{t}\sigma_{s}^{n}dW(s)\right]^{p}\right\}\] \[=:4^{p-1}\left(|X_{0}^{\varepsilon}|^{p}+I_{1}+I_{2}+I_{3}\right)\]
For \(\varepsilon\) small enough we can bound \(|X_{0}^{\varepsilon}|\) by \(2\left|X_{0}\right|\) and \(\vartheta_{\varepsilon}\) by \(1\). Using Holder's and Jensen's inequalities, we obtain the following estimates almost surely:
\[I_{1}\leq t^{p-1}\,\mathbb{E}\left[\int_{0}^{t}|b_{s}^{n}|^{p}ds\right] \tag{13}\]
and
\[I_{2}\leq N^{\frac{p}{2}}\mathbb{E}\left\{\left[\int_{0}^{t}(\sigma_{s}^{n})^ {2}ds\right]^{\frac{p}{2}}\right\}\leq N^{\frac{p}{2}}\mathbb{E}\left\{\int_{ 0}^{t}\left(\sigma_{s}^{n}\right)^{p}ds\right\} \tag{14}\]
By Burkholder-Davis-Gundy (B-D-G) inequality, there exists \(C_{p}>0\) such that
\[I_{3}\leq C_{p}\,\mathbb{E}\left\{\int_{0}^{t}|\sigma_{s}^{n}|^{p}ds\right\} \tag{15}\]
From the linear growth condition on \(b_{\varepsilon}\) and \(\sigma_{\varepsilon}\) we deduce that there exists \(C_{1}>0\) independent of \(\varepsilon\), \(\nu\), \(n\) and \(t\) such that for all \(n\in\mathbb{N}\)
\[\mathbb{E}\left[\left\|X^{\varepsilon,\nu}(t)\mathbb{1}_{\,t\leq\tau_{n}}\right\| ^{p}\right]\leq C_{1}+C_{1}\int_{0}^{t}\mathbb{E}\left[\left\|X^{\varepsilon, \nu}(s)\mathbb{1}_{s\leq\tau_{n}}\right\|^{p}\right]ds \tag{16}\]
Taking \(n\) goes to infinity and using Gronwall's lemma, we prove this bound.
**Lemma 6**: \(\left\{X^{\varepsilon,\nu^{\varepsilon}}\right\}_{\varepsilon>0}\) _is tight_
Proof. In view of the Kolmogorov tightness criterion, it suffices to show that there exist strictly positive constants \(\alpha\), \(\beta\) and \(\gamma\) such that for all \(t\), \(s\in[0,T]\),
\[\sup_{\nu\in\mathcal{S}_{N}}\mathbb{E}\left[\left|X^{\varepsilon,\nu^{ \varepsilon}}(t)-X^{\varepsilon,\nu^{\varepsilon}}(s)\right|^{\alpha}\right] \leq\beta\left|t-s\right|^{\gamma}\]
Without loss of generality, let \(s<t\). We will write \(b_{s}^{n}:=b_{\varepsilon}(s,X_{s}^{\varepsilon,\nu^{\varepsilon}}\mathbb{1}_ {\{s\leq\tau_{n}\}})\) and \(\sigma_{s}^{n}:=\sigma_{\varepsilon}\left(s,X_{s}^{\varepsilon,\nu^{ \varepsilon}}\mathbb{1}_{\,\{s\leq\tau_{n}\}}\right)\).
\[\mathbb{E}\left[\left|X^{\varepsilon,\nu^{\varepsilon}}(t)-X^{ \varepsilon,\nu^{\varepsilon}}(s)\right|^{\alpha}\right] \leq 3^{p-1}(t-s)^{p-1}\mathbb{E}\left\{\int_{s}^{t}\left|b_{u}^{ n}\right|^{p}du\right\}+3^{p-1}\mathbb{E}\left[\left(\int_{s}^{t}\left| \sigma_{u}^{n}\right|\left|\nu(u)\right|du\right)^{p}\right] \tag{17}\] \[+3^{p-1}\vartheta_{\varepsilon}^{P}\mathbb{E}\left[\left|\int_{s} ^{t}\sigma_{u}^{n}dW(u)\right|^{p}\right]\] \[\leq 3^{p-1}(t-s)^{p-1}\mathbb{E}\left\{\int_{s}^{t}\left|b_{u}^{ n}\right|^{p}du\right\}+3^{p-1}N^{\frac{p}{2}}\mathbb{E}\left[\int_{s}^{t} \left|\sigma_{u}^{n}\right|^{p}du\right]\] \[+3^{p-1}C_{p}\mathbb{E}\left[\int_{s}^{t}\left|\sigma_{u}^{n} \right|^{\frac{p}{2}}du\right]\] \[\leq 3^{p-1}(t-s)^{p-1}\mathbb{E}\left\{\int_{s}^{t}\left|b_{u}^{ n}\right|^{p}du\right\}+3^{p-1}N^{\frac{p}{2}}\mathbb{E}\left[\int_{0}^{t} \left|\sigma_{u}^{n}\right|^{\frac{p}{2}}du\right]\] \[+3^{p-1}C_{p}\mathbb{E}\left[\int_{s}^{t}\left|\sigma_{u}^{n} \right|^{\frac{p}{2}}du\right]\]
From the linear growth condition on \(b_{\varepsilon}\) and \(\sigma_{\varepsilon}\) we deduce that the above bound holds with \(\alpha=p\), \(\gamma=p-1\) and a sufficiently large \(\beta\). The hypotheses of Kolmogorov's criterion are therefore satisfied.
**Lemma 7**: _For any positive \(N<\infty\), the set_
\[K_{N}:=\left\{\mathcal{G}^{0}\left(\int_{0}^{\cdot}\nu(s)ds,\nu\in\mathcal{S }_{N}\right)\right\}\]
_is a compact set in \(\mathcal{C}\left([0,T];\mathbb{R}^{n}\right)\)_
Proof. We first prove that \(\mathcal{G}^{0}\) is a continuous map from \(\mathcal{S}_{N}\) to \(\mathcal{C}\left([0,T];\mathbb{R}^{n}\right)\). For any positive \(N<\infty\), \(\mathcal{S}_{N}\) is a compact set in the weak topology, so since \(\mathcal{G}^{0}\) is continuous, \(K_{N}\) is a compact set in \(\mathcal{C}\left([0,T];\mathbb{R}^{n}\right)\).
Taking \(\{\nu^{n}(s)\}\in\mathcal{S}_{N}\), \(\nu^{n}\to\nu\) weakly, let \(\varphi^{n}=\mathcal{G}^{0}(\nu^{n})\), \(\varphi=\mathcal{G}^{0}(\nu)\). Then, for \(t\in[0,T]\),
\[\begin{split}\varphi^{n}(t)-\varphi(t)&=\int_{0}^{t }\left(b(s,\varphi^{n})-b(s,\varphi)\right)ds+\int_{0}^{t}(\sigma(s,\varphi^{n })-\sigma(s,\varphi))\nu^{n}(s)ds\\ &+\int_{0}^{t}\sigma(s,\varphi_{s})(\nu^{n}(s)-\nu(s))ds\end{split} \tag{18}\]
Since \(\|\nu^{n}\|\leq N\), it follows from (8) that \(R:=\sup_{n\in\mathbb{N}}\|\varphi\|\vee\|\varphi^{n}\|\) is finite. Therefore, using assumption **A.4**,
\[\begin{split}\sup_{0\leq s\leq t}|\varphi^{n}(s)-\varphi(s)|& \leq L_{R}\int_{0}^{t}\sup_{0\leq u\leq s}|\varphi^{n}(u)-\varphi (u)|\,ds+L_{R}\int_{0}^{t}\sup_{0\leq u\leq s}|\varphi^{n}(u)-\varphi(u)|\, \nu^{n}(s)ds\\ &+\sup_{0\leq u\leq T}\left|\int_{0}^{u}\sigma(s,\varphi_{s}) \left(\nu^{n}(s)-\nu(s)\right)ds\right|\end{split} \tag{19}\]
Let \(\Delta_{\sigma}^{n}=\sup_{0\leq u\leq T}\left|\int_{0}^{u}\sigma(s,\varphi_{s} )\left(\nu^{n}(s)-\nu(s)\right)ds\right|\). By Holder's inequality and since \(\|\nu^{n}\|^{2}\leq N\) for all \(n\in\mathbb{N}\), it follows that
\[\sup_{0\leq s\leq t}|\varphi^{n}(s)-\varphi(s)|^{2}\leq 3L_{R}^{2}(t+N)\int_{0}^{t}\sup_{0\leq u\leq s}|\varphi^{n}(u)-\varphi(u)|^{2}\,ds+3(\Delta_{\sigma}^{n})^{2}\]
By Gronwall's lemma, we can deduce that
\[\left\|\mathcal{G}^{0}(\nu^{n})-\mathcal{G}^{0}(\nu)\right\|^{2}=\sup_{0\leq t\leq T}|\varphi^{n}(t)-\varphi(t)|^{2}\leq 3\left(\Delta_{\sigma}^{n}\right)^{2}e^{3L_{R}^{2}T(T+N)}\]
In order to establish continuity of \(\mathcal{G}^{0}\) on \(\mathcal{S}_{N}\), it remains to check that \(\Delta_{\sigma}^{n}\) goes to \(0\) as \(n\to\infty\). By **A.3**, it follows that \(\sigma(\cdot,\varphi)\nu^{n}\) converges weakly to \(\sigma\left(\cdot,\varphi\right)\nu\) in \(L^{2}\). Moreover, the family \(\left\{\sigma(\cdot,\varphi)\nu^{n}\right\}_{n\in\mathbb{N}}\) is bounded in \(L^{2}\) with respect to the \(L^{2}-\)norm. Hence,
\[\int_{0}^{t}\sigma(s,\varphi_{s})\nu^{n}(s)ds\to\int_{0}^{t}\sigma\left(s, \varphi\right)\nu(s)ds\quad\text{as }n\to\infty\]
which implies that \(\Delta_{\sigma}^{n}\to 0\) as \(n\to\infty\).
**Lemma 8**.: _Under **A.1**-**A.4**, for every \(N<+\infty\) and any family \(\left\{\nu^{\varepsilon}\right\}_{\varepsilon>0}\in\mathcal{A}_{N}\) satisfying that \(\nu^{\varepsilon}\) converge in distribution as \(\mathcal{S}_{N}-\)valued random elements to \(\nu\) as \(\varepsilon\to 0\), \(\mathcal{G}^{\varepsilon}\left(W_{\cdot}+\frac{1}{\vartheta_{\varepsilon}}\int_{0}^{\cdot}\nu^{\varepsilon}(s)ds\right)\) converges in distribution to \(\mathcal{G}^{0}\left(\int_{0}^{\cdot}\nu(s)ds\right)\) as \(\varepsilon\to 0\)._
Proof.: By Skorohod representation theorem we can work with almost sure convergence for the purpose of identifying the limit. We follow the technique in Chiarini and Fischer [2].
For \(t\in[0,T]\), define \(\Phi_{t}:\mathcal{S}_{N}\times\mathcal{C}\left([0,T];R^{n}\right)\) as
\[\Phi_{t}(\omega,f):=\left|\omega(t)-x_{0}-\int_{0}^{t}b(s,\omega_{s})ds-\int_ {0}^{t}\sigma(s,\omega_{s})f(s)ds\right|\wedge 1\]
\(\Phi_{t}\) is bounded and we show that it is also continuous. Let \(\omega^{n}\rightarrow\omega\) in \(\mathcal{C}\left([0,T];R^{n}\right)\) and \(f^{n}\to f\) in \(\mathcal{S}_{N}\) with respect to the weak topology. **A.2** implies the existence of continuous moduli of continuity \(\rho_{b}\) and \(\rho_{\sigma}\) for both coefficients such that \(\left|b(t,\varphi_{t})-b(t,\phi_{t})\right|\leq\rho_{b}\left(\|\varphi-\phi\|\right)\) and \(\left|\sigma(t,\varphi_{t})-\sigma(t,\phi_{t})\right|\leq\rho_{\sigma}\left( \|\varphi-\phi\|\right)\). Using Holder's inequality we find that
\[\left|\Phi_{t}(\omega^{n},f^{n})-\Phi_{t}(\omega^{n},f)\right| \leq\left|\omega^{n}(t)-\omega(t)\right|+\int_{0}^{t}\left|b(s, \omega_{s}^{n})-b(s,\omega_{s})\right|ds \tag{20}\] \[+\int_{0}^{t}\left|\sigma(s,\omega_{s}^{n})-\sigma(s,\omega_{s}) \right|\left|f^{n}(s)\right|ds+\left|\int_{0}^{t}\sigma(s,\omega_{s})(f^{n}(s )-f(s))ds\right|\] \[\leq\left\|\omega^{n}-\omega\right\|+T\rho_{b}\left(\left\| \omega^{n}-\omega\right\|\right)+\sqrt{NT}\rho_{\sigma}\left(\left\|\omega^ {n}-\omega\right\|\right)\] \[+\left\|\sigma(\cdot,\omega)\right\|\left|\int_{0}^{t}\left(f^{n }(s)-f(s)\right)ds\right|\]
Since \(f^{n}\) tends to \(f\) weakly in \(L^{2}\), the last integral converges to zero as \(n\) goes to infinity. Moreover \(\lim\limits_{n\uparrow\infty}\left\|\omega^{n}-\omega\right\|=0\), which proves that \(\Phi_{t}\) is continuous, and therefore
\[\lim\limits_{n\uparrow\infty}\mathbb{E}\left[\Phi_{t}(X^{n},\nu^{n})\right]= \mathbb{E}\left[\Phi_{t}(X,\nu)\right]\]
Define \(b_{\varepsilon}^{R}:[0,T]\times\mathcal{C}([0,T];\mathbb{R}^{d})\to\mathbb{R}^{d}\) and \(\sigma_{\varepsilon}^{R}:[0,T]\times\mathcal{C}([0,T];\mathbb{R}^{d})\to\mathbb{R}^{d\times m}\) by
\[b_{\varepsilon}^{R}(s,\omega_{s})=\begin{cases}b_{\varepsilon}(s,\omega_{s}) \quad\text{if}\|\omega\|\leq R\\ b_{\varepsilon}(s,\frac{R}{\|\omega\|}\omega_{s})\quad\text{otherwise}\end{cases} \sigma_{\varepsilon}^{R}(s,\omega_{s})=\begin{cases}\sigma_{\varepsilon}(s, \omega_{s})\quad\text{if}\|\omega\|\leq R\\ \sigma_{\varepsilon}(s,\frac{R}{\|\omega\|}\omega_{s})\quad\text{otherwise} \end{cases}\]
It is clear that the functions \(b_{\varepsilon}^{R}\) and \(\sigma_{\varepsilon}^{R}\) are globally Lipschitz and bounded. By assumption **A.2**, \(b_{\varepsilon}^{R}\to b^{R}\) and \(\sigma_{\varepsilon}^{R}\rightarrow\sigma^{R}\) uniformly on \([0,T]\times\mathcal{C}\left([0,T];\mathbb{R}^{d}\right)\). In analogy with \(\Phi_{t}\), set
\[\Phi_{t}^{R}(\omega,f):=\left|\omega(t)-x_{0}-\int_{0}^{t}b^{R}(s,\omega_{s}) ds-\int_{0}^{t}\sigma^{R}(s,\omega_{s})f(s)ds\right|\wedge 1\]
Consider the family \(\left\{X^{R,\varepsilon,\nu}\right\}\) of solutions to the PSDE
\[X^{R,\varepsilon,\nu}(t)=X_{0}^{\varepsilon}+\int_{0}^{t}\left[b_{\varepsilon}^{R}\left(s,X_{s}^{R,\varepsilon,\nu}\right)+\sigma_{\varepsilon}^{R}\left(s,X_{s}^{R,\varepsilon,\nu}\right)\nu(s)\right]\mathrm{d}s+\vartheta_{\varepsilon}\int_{0}^{t}\sigma_{\varepsilon}^{R}\left(s,X_{s}^{R,\varepsilon,\nu}\right)\mathrm{d}W(s)\]
We will show
\[\lim\limits_{\varepsilon\to 0}\mathbb{E}\left[\Phi_{t}^{R}\left(X^{R, \varepsilon,\nu^{\varepsilon}},\nu\right)\right]=0\]
\[\mathbb{E}\left[\Phi_{t}^{R}\left(X^{R,\varepsilon,\nu^{\varepsilon}}, \nu\right)\right] \leq|X_{0}^{\varepsilon}-X_{0}|+\mathbb{E}\left[\int_{0}^{t} \left|b_{\varepsilon}^{R}(s,X^{R,\varepsilon,\nu^{\varepsilon}})-b^{R}(s,X^{R, \varepsilon,\nu^{\varepsilon}})\right|ds\right] \tag{21}\] \[+\mathbb{E}\left[\int_{0}^{t}\left|\sigma_{\varepsilon}^{R}(s,X^ {R,\varepsilon,\nu^{\varepsilon}})-\sigma^{R}(s,X^{R,\varepsilon,\nu^{ \varepsilon}})\right||\nu(s)|ds\right]\] \[+\vartheta_{\varepsilon}\mathbb{E}\left[\left|\int_{0}^{t} \sigma_{\varepsilon}^{R}\left(s,X^{R,\varepsilon,\nu^{\varepsilon}}\right)dW( s)\right|\right]\] \[\leq|X_{0}^{\varepsilon}-X_{0}|+t\|b_{\varepsilon}^{R}-b^{R}\|+ \|\sigma_{\varepsilon}^{R}-\sigma^{R}\|\mathbb{E}\left[\int_{0}^{T}|\nu(s)| ds\right]\] \[+\vartheta_{\varepsilon}\sqrt{\int_{0}^{t}\mathbb{E}\left[ \sigma_{\varepsilon}^{R}(s,X^{R,\varepsilon,\nu^{\varepsilon}})^{2}\right]ds}\]
The last term in the above display tends to \(0\) since
\[\sup_{\nu\in\mathcal{S}_{N}}\int_{0}^{t}\mathbb{E}\left[\sigma_{ \varepsilon}^{R}(s,X^{R,\varepsilon,\nu^{\varepsilon}})^{2}\right]ds \leq 2\sup_{\nu\in\mathcal{S}_{N}}\int_{0}^{T}\mathbb{E}\left[ \left|\sigma^{R}(s,X^{R,\varepsilon,\nu^{\varepsilon}})\right|^{2}\right]ds \tag{22}\] \[+2\sup_{\nu\in\mathcal{S}_{N}}\int_{0}^{T}\mathbb{E}\left[\left| \sigma_{\varepsilon}^{R}(s,X^{R,\varepsilon,\nu^{\varepsilon}})-\sigma^{R}(s,X ^{R,\varepsilon,\nu^{\varepsilon}})\right|^{2}\right]ds\] \[\leq 2T\sup_{\nu\in\mathcal{S}_{N}}\|\sigma_{\varepsilon}^{R}- \sigma^{R}\|^{2}+2\sup_{\nu\in\mathcal{S}_{N}}\int_{0}^{T}\mathbb{E}\left[ \left|\sigma^{R}(S,X^{R,\varepsilon,\nu^{\varepsilon}})\right|^{2}\right]ds\] \[<\infty\]
Then, we have
\[\lim_{\varepsilon\to 0}\mathbb{E}\left[\Phi_{t}^{R}\left(X^{R,\varepsilon,\nu^{\varepsilon}},\nu\right)\right]=0\]
For \(R>0\) and \(\nu\in\mathcal{S}_{N}\), let \(\tau^{R}\) be the stopping time defined by
\[\tau^{R}=\inf\left\{t\geq 0:\left|X^{\varepsilon,\nu}(t)\right|\geq R\right\}\]
We have
\[\mathbb{P}\left(X^{R,\varepsilon,\nu^{\varepsilon}}(t)\mathbb{1}_{t\leq\tau^{R}}=X^{\varepsilon,\nu^{\varepsilon}}(t)\mathbb{1}_{t\leq\tau^{R}}\right)=1\]
It follows that
\[\mathbb{E}\left[\Phi_{t}(X^{\varepsilon,\nu},\nu)\right] =\mathbb{E}\left[\mathbb{1}_{t<\tau^{R}}\Phi_{t}(X^{\varepsilon,\nu},\nu)\right]+\mathbb{E}\left[\mathbb{1}_{t\geq\tau^{R}}\Phi_{t}(X^{\varepsilon,\nu},\nu)\right] \tag{23}\] \[\leq\mathbb{E}\left[\Phi_{t}^{R}(X^{R,\varepsilon,\nu^{\varepsilon}},\nu)\right]+\mathbb{P}\left(t\geq\tau^{R}\right)\]
For all \(\nu\in\mathcal{S}_{N}\), by Markov's inequality we have
\[\mathbb{P}\left(t\geq\tau^{R}\right)=\mathbb{P}\left(\sup_{0\leq s\leq t}\left|X^{R,\varepsilon,\nu^{\varepsilon}}(s)\right|\geq R\right)\leq\frac{c}{R^{2}}\]
Taking upper limits on both sides of (23) and using that the first term on the right-hand side vanishes, we obtain
\[\limsup_{\varepsilon\to 0}\mathbb{E}\left[\Phi_{t}(X^{\varepsilon,\nu^{\varepsilon}},\nu^{\varepsilon})\right]\leq\limsup_{\varepsilon\to 0}\mathbb{P}\left(t\geq\tau^{R}\right)\leq\frac{c}{R^{2}}\]
Since \(R>0\) has been chosen arbitrarily, it follows that
\[\lim_{\varepsilon\to 0}\mathbb{E}\left[\Phi_{t}(X^{\varepsilon,\nu^{\varepsilon}}, \nu^{\varepsilon})\right]=0\]
Proof of Theorem 3. Combining Theorem 2 with Lemmas 5, 6, 7, and 8, we see that Theorem 3 holds.
## Application: Small-time large deviation principle for path-dependent stochastic differential equations
In this section, we study the LDP for functionals of PSDEs over small time intervals: \(\{X(t),t\in\mathbb{T}\}\) as \(t\to 0\), where
\[X(t)=x_{0}+\int_{0}^{t}b(s,X_{s})ds+\int_{0}^{t}\sigma(s,X_{s})dW(s)\]
We rescale the small time problem to a small perturbation problem.
\[\begin{split} X(\varepsilon t)&=x_{0}+\int_{0}^{\varepsilon t}b(s,X_{s})ds+\int_{0}^{\varepsilon t}\sigma(s,X_{s})dW(s)\\ &=x_{0}+\varepsilon\int_{0}^{t}b(\varepsilon s,X_{\varepsilon s})ds+\sqrt{\varepsilon}\int_{0}^{t}\sigma(\varepsilon s,X_{\varepsilon s})d\widehat{W}(s)\end{split} \tag{24}\]
where \(\widehat{W}(s)=\frac{1}{\sqrt{\varepsilon}}W(\varepsilon s)\). Let \(U(t)=X(\varepsilon t)\); by (24), we have
\[U(t)=x_{0}+\int_{0}^{t}b_{\varepsilon}(s,U_{s})ds+\sqrt{\varepsilon}\int_{0}^{t}\sigma(s,U_{s})d\widehat{W}(s)\]
Now, we can use Theorem 3 to obtain the LDP for small time PSDEs.
**Theorem 9**.: _The process \(X(\varepsilon t)\) satisfies an LDP as \(\varepsilon\to 0\) with rate function \(J\) and speed \(\varepsilon\), where_
\[J(g)=\inf_{\nu\in L^{2};\,g=\mathcal{G}^{0}(\int_{0}^{\cdot}\nu(s)ds)}\left\{\frac{1}{2}\int_{0}^{T}|\nu(s)|^{2}ds\right\}\]
\(\mathcal{G}^{0}\) _is the solution map of (25)_
\[\phi(t)=x_{0}+\int_{0}^{t}\sigma(s,\phi_{s})\nu(s)ds \tag{25}\]
For general functionals of \(X(\varepsilon t)\), we have the following result.
**Theorem 10**.: _Let \(f\in\mathcal{C}^{1}_{b}(\mathbb{R}^{d};\mathbb{R}^{m})\). Then the process \(f(X(\varepsilon t))\) satisfies an LDP as \(\varepsilon\to 0\) with rate function \(J^{f}\) and speed \(\varepsilon\)._
\[J^{f}(g)=\inf_{\{Df(x_{0})\varphi=g\}}J(\varphi) \tag{26}\]
Proof. The proof is based on Theorem 3 and the delta method for large deviations [10].
For any \(f\in C_{b}^{1}\left(\mathbb{R}^{d};\mathbb{R}^{m}\right)\), define \(\Phi:C_{0}^{1}\left([0,T],\mathbb{R}^{d}\right)\to C_{0}^{1}\left([0,T], \mathbb{R}^{m}\right)\) as follows:
\[\Phi(\varphi)(t)=f(\varphi(t)),\quad t\in[0,T].\]
\(\Phi\) is Hadamard differentiable and its Hadamard differential at constant function \(\varphi\equiv f\left(x_{0}\right)\) is
\[\Phi_{f(x_{0})}^{\prime}(\psi)=\left(Df\right)\left(x_{0}\right)\psi,\quad \psi\in C_{0}^{\alpha}\left([0,T],\mathbb{R}^{d}\right)\]
Then the result follows from the delta method.
|
2309.04867 | Finite-sample analysis of rotation operator under $l_2$ norm and
$l_\infty$ norm | In this article, we consider a special operator called the two-dimensional
rotation operator and analyze its convergence and finite-sample bounds under
the $l_2$ norm and $l_\infty$ norm with constant step size. We then consider
the same problem with stochastic noise with affine variance. Furthermore,
simulations are provided to illustrate our results. Finally, we conclude this
article by proposing some possible future extensions. | Mi Zhou | 2023-09-09T19:37:15Z | http://arxiv.org/abs/2309.04867v1 | # Finite-sample analysis of rotation operator under \(l_{2}\) norm and \(l_{\infty}\) norm
###### Abstract
In this article, we consider a special operator called the two-dimensional rotation operator and analyze its convergence and finite-sample bounds under the \(l_{2}\) norm and \(l_{\infty}\) norm with constant step size. We then consider the same problem with stochastic noise with affine variance. Furthermore, simulations are provided to illustrate our results. Finally, we conclude this article by proposing some possible future extensions.
## I Introduction
Looking for the fixed points of a non-expansive mapping (i.e., \(T(x)=x\)) is an important topic in nonlinear mapping theory and has applications in image recovery and signal processing. A large body of research has been devoted to the properties of non-expansive operators. While the Banach fixed-point theorem establishes the existence and uniqueness of the fixed point under a contractive mapping, the fixed-point set of a nonexpansive operator can be empty or contain multiple points. As a direct consequence of non-expansiveness, it is not enough to directly iterate the operator \(T\) to find a fixed point. Instead, one may iterate using the averaged operator \(T_{\alpha}=(1-\alpha)I+\alpha T\). Such an iteration is also known as the Krasnosel'skii-Mann (KM) iteration [1], and the update rule is given as follows:
\[x_{k+1}=(1-\alpha_{k})x_{k}+\alpha_{k}T(x_{k}),\]
where \(\{\alpha_{k}\}\) is the step size sequence. Convergence of \(x_{k}\) to a fixed point was proved in [1, 2, 3] under the bounded orbit assumption. Under the non-empty fixed-point set assumption, the convergence result is analyzed in [4, 5]. The optimal convergence rate \(O(1/\sqrt{k})\) is obtained in [6, 7, 8, 9] for arbitrary norms.
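As a concrete illustration, the following minimal sketch (our own example, not taken from the cited works) runs the KM iteration with a constant step size for a two-dimensional rotation operator \(T(x)=Rx\); the rotation angle, step size, and starting point are arbitrary illustrative choices.

```python
# Minimal sketch (illustrative only): KM iteration with a constant step size
# applied to the two-dimensional rotation operator T(x) = R x.
import numpy as np

theta = np.pi / 4                                  # illustrative rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

def T(x):
    return R @ x                                   # non-expansive under the l2 norm

alpha = 0.5                                        # constant step size
x = np.array([1.0, 1.0])                           # arbitrary starting point
for k in range(200):
    x = (1 - alpha) * x + alpha * T(x)             # KM update

print(x)  # approaches the unique fixed point x* = 0 of the rotation
```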
Despite the above works in the deterministic case, work on the KM iteration of non-expansive operators with stochastic noise is sparse. In [5], the authors derived relaxed finite-sample bounds for non-expansive operators under the \(l_{2}\) norm with bounded variance. However, the rotation map with some specific rotation angles is neither contractive nor non-expansive under the \(l_{\infty}\) norm, which makes all the existing works inapplicable. Furthermore, work on the finite-sample analysis of non-expansive operators with affine noise is lacking. In this work, we aim to analyze the properties and finite-sample bounds of two-dimensional rotation operators under both the \(l_{2}\) and \(l_{\infty}\) norms, with and without affine stochastic noise. We expect this work to give some insight for future studies of finite-sample bounds for general non-expansive operators with and without noise.
This paper is organized as follows: in Section II, we first introduce some preliminaries on normed linear spaces and non-expansiveness. We then formulate our problem by constructing two KM iterations, under the \(l_{2}\) norm and the \(l_{\infty}\) norm respectively. We analyze their finite-sample bounds and provide a rigorous theoretical proof for each case. In Section III, we consider the same problem with noise and analyze its finite-sample bound. Section IV presents simulation results that illustrate the theoretical results of Sections II and III. Finally, we conclude the article in Section V.
## II Problem Formulation
In this section, we will first introduce some preliminaries and then formulate our problem under different norms.
### _Preliminaries_
**Definition 1** (Normed space): _A norm on the vector space \(V\) is a function \(||\cdot||\) that assigns to each vector \(v\in V\) a real number \(||v||\) such that for \(c\) a scalar and \(u,v\in V\), the following hold:_
1. \(||u||\geq 0\) _with equality hold if and only if_ \(u=0\)_._
2. \(||cu||=|c|\,||u||\)_._
3. _(Triangle Inequality)_ \(||u+v||\leq||u||+||v||\)_._
A vector space \(V\), together with a norm \(||\cdot||\) on the space \(V\), is called normed space. The distance between \(u\) and \(v\) is \(d(u,v)=||u-v||\).
**Definition 2**: _Let \(V\) be one of the standard spaces \(\mathbb{R}^{n}\) and \(p\geq 1\) is a real number. The \(p\)-norm of a vector in \(V\) is defined by_
\[||z||_{p}=(\sum_{i=1}^{n}|z_{i}|^{p})^{\frac{1}{p}}.\]
_Specifically, when \(p=2\), we have the familiar \(l_{2}\) norm. The \(l_{\infty}\) norm of a vector in \(V\) is defined as_
\[||z||_{\infty}=\max\{|z_{i}|,\,i=1,\cdots,n\}.\]
**Definition 3** (matrix norm): _For a matrix \(A\in\mathbb{R}^{m\times n}\), the operator norm is defined as_
\[||A||_{2}^{2}=\lambda_{\max}(A^{\top}A),\quad||A||_{\infty}=\max_{1\leq i\leq m}\sum_{j=1}^{n}|a_{ij}|,\]
_and the following inequality holds for the matrix norm:_
\[||Ax||\leq||A||||x||,\,\forall x\in\mathbb{R}^{n}. \tag{1}\]
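For instance, the snippet below (an illustrative example of ours) evaluates both operator norms for a two-dimensional rotation matrix: its \(l_{2}\) operator norm equals \(1\), whereas its \(l_{\infty}\) operator norm \(|\cos\theta|+|\sin\theta|\) can exceed \(1\).

```python
# Illustrative example: operator norms of a 2-D rotation matrix,
# following the definitions of the matrix norms given above.
import numpy as np

theta = np.pi / 4
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

l2_norm = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))   # ||A||_2
linf_norm = np.max(np.sum(np.abs(A), axis=1))            # ||A||_inf (max absolute row sum)

print(l2_norm)    # 1.0: the rotation is non-expansive in the l2 norm
print(linf_norm)  # |cos(theta)| + |sin(theta)| ~ 1.414 > 1 in the l_inf norm
```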
**Definition 4** ([10]): _Let \(C\) be a nonempty subset of a real Banach space \(X\) and \(T\) a self-mapping of \(C\). Denote \(F(T)\) as the set of fixed points of \(T\). The mapping \(T\) is said to be_
1. non-expansive if \(||T(x)-T(y)||\leq||x-y||\) for all \(x,y\in C\) |
2306.17720 | Conformal duality of the nonlinear Schrödinger equation: Theory and
applications to parameter estimation | The nonlinear Schr\"odinger equation (NLSE) is a rich and versatile model,
which in one spatial dimension has stationary solutions similar to those of the
linear Schr\"odinger equation as well as more exotic solutions such as solitary
waves and quantum droplets. Here we present the unified theory of the NLSE,
showing that all stationary solutions of the local one-dimensional
cubic-quintic NLSE can be classified according to a single number called the
cross-ratio. Any two solutions with the same cross-ratio can be converted into
one another using a conformal transformation, and the same also holds true for
traveling wave solutions. Further, we introduce an optimization afterburner
that relies on this conformal symmetry to substantially improve NLSE parameter
estimation from noisy empirical data. The new method therefore should have far
reaching practical applications for nonlinear physical systems. | David B. Reinhardt, Dean Lee, Wolfgang P. Schleich, Matthias Meister | 2023-06-30T15:03:51Z | http://arxiv.org/abs/2306.17720v3 | # Unified theory of the nonlinear Schrodinger equation
###### Abstract
The nonlinear Schrodinger equation (NLSE) is a rich and versatile model, which in one spatial dimension has stationary solutions similar to those of the linear Schrodinger equation as well as more exotic solutions such as solitary waves and quantum droplets. We present a unified theory of the NLSE, showing that all stationary solutions of the cubic-quintic NLSE can be classified according to a single number called the cross-ratio. Any two solutions with the same cross-ratio can be converted into one another using a conformal transformation, and the same also holds true for traveling wave solutions. In this way we demonstrate a conformal duality between solutions of cubic-quintic NLSEs and lower-order NLSEs. The same analysis can be applied to the Newtonian dynamics of classical particles with polynomial potentials. Our framework provides a deeper understanding of the connections between the physics of the NLSE and the mathematics of algebraic curves and conformal symmetry.
_Introduction -_ The nonlinear Schrodinger equation (NLSE) is ubiquitous in physics, where it plays a key role in plasma physics [1; 2; 3], hydrodynamics [4; 5; 6], degenerate quantum gases [7; 8] and light propagation in nonlinear fiber optics [9; 10; 11; 12]. Understanding the possible solutions of the NLSE is therefore of great importance for a large variety of purposes whether they are application-oriented or fundamental. In this Letter we point out a conformal duality between different classes of solutions and even different orders of the NLSE. This conformal mapping provides a unified picture of the cubic- and the cubic-quintic NLSE and even establishes a direct link to the linear Schrodinger equation. Moreover, our method allows us to systematically classify the complete solution spaces of these equations.
The linear Schrodinger equation typically features oscillating and constant-amplitude solutions which have their counterparts in the NLSE. However, there also exist solutions which are uniquely nonlinear such as solitary waves [13; 14; 15; 16; 17; 18; 19] which are of versatile interest in physics [20; 21; 22; 23; 24]. Considering (multiple) higher-order self-modulating terms like in the cubic-quintic NLSE drastically expands the solution space allowing for instance for bright and dark soliton pairs [25] and solitons with power law tail decay [26]. Although the different polynomial NLSEs have been studied in great detail [27; 28; 29; 30; 31; 32; 33; 34; 35; 36] there exists so far no unified theory linking their solution spaces.
In this work, we identify a large family of conformal dualities for the one-dimensional time-independent cubic-quintic NLSE. These dualities allow us to establish conformal maps between solutions of the cubic-quintic NLSE and even to conformally reduce the cubic-quintic to the cubic NLSE and the linear Schrodinger equation, highlighting that the lower-order equations essentially are conformal limiting cases of the cubic-quintic NLSE. Conformal dualities are of particular interest in physics [37; 38; 39] with famous instances being the Kramers-Wannier duality in statistical mechanics [40] relating high-and low temperature limits of the free energy in certain Ising models and the Montonen-Olive duality [41] a generalization of the electro-magnetic symmetry of Maxwell's equations in quantum field theory.
The new and useful insights into the conformal duality of NLSEs presented in this Letter can directly be transferred to Newtonian mechanics relating the motion of classical particles in various harmonic and anharmonic potentials. In fact, there is a remarkable similarity between the solutions of the NLSE and Newtonian dynamics. Finally, our theory provides a fundamental understanding of two-and three-body contact interactions being inherently related in one-dimensional Bose-Einstein condensates [42; 43; 44; 45] described by higher-order Gross-Pitaevskii equations [46; 47; 48; 14; 49].
_Nonlinear Schrodinger equation -_ We consider the dimensionless time-independent cubic-quintic NLSE
\[\left(-\frac{1}{2}\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}+a_{3}|\psi|^{2}+ \frac{a_{4}}{2}|\psi|^{4}\right)\psi=a_{2}\psi \tag{1}\]
in one spatial dimension of coordinate \(x\), where \(\psi=\psi\left(x\right)\) is the complex-valued wave function and \(a_{2}\), \(a_{3}\), and \(a_{4}\) are constants [50]. By omitting position-dependent potentials, we focus on the homogeneous case with either box or periodic boundary conditions.
The amplitude-phase representation \(\psi\equiv\sqrt{\sigma}\exp\left(i\phi\right)\)
casts Eq. (1) into the differential equations [25; 28; 30]
\[\left(\frac{\mathrm{d}\sigma}{\mathrm{d}x}\right)^{2}=P\left(\sigma\right) \tag{2}\]
for the density \(\sigma=\sigma(x)\) with the quartic polynomial
\[P\left(\sigma\right)\equiv\frac{4}{3}a_{4}\,\sigma^{4}+4a_{3}\,\sigma^{3}-8a_{ 2}\,\sigma^{2}-16a_{1}\,\sigma-4a_{0} \tag{3}\]
and
\[\frac{\mathrm{d}\phi}{\mathrm{d}x}=\pm\frac{\sqrt{a_{0}}}{\sigma} \tag{4}\]
for the phase \(\phi=\phi(x)\). Here \(a_{0},a_{1}\) are constants of integration and the different signs in Eq. (4) refer to the two possible directions of the flow induced by the phase gradient.
Obviously, the order of the polynomial \(P=P(\sigma)\) directly depends on the leading nonlinearity in Eq. (1) yielding a cubic or quartic polynomial in the case of the cubic (\(a_{4}=0\)) or cubic-quintic NLSE, while the polynomial is quadratic for the linear Schrodinger equation (\(a_{3}=a_{4}=0\)).
_Classification of solutions -_ The stationary solutions of Eq. (1) are in general determined [30] by the polynomial \(P\), defined by Eq. (3), and its discriminant \(\Delta=a_{4}^{6}\prod_{j\neq k}(\sigma_{j}-\sigma_{k})\) given by the roots \(\sigma_{j}\) of \(P\). Depending on the sign of \(\Delta\), three classes of solutions can be identified: (i) simple complex conjugated roots (\(\Delta<0\)), (ii) multiple roots (\(\Delta=0\)), or (iii) only simple roots (\(\Delta>0\)) of \(P\).
In order to discuss the roots \(\sigma_{j}\) of \(P\) it is convenient to introduce the tuple notation (\(r_{4}\), \(r_{3}\), \(r_{2}\), \(r_{1}\)), where every entry \(r_{m}\) denotes the number of roots at order \(m\). For instance \((0,0,0,4)\) labels a polynomial with four simple real roots as displayed in Fig. 1a, while a polynomial with two simple real roots and two simple complex-conjugated roots as shown in Fig. 1d is labeled by \((0,0,0,2+2_{\mathrm{C}})\).
The explicit solutions of the stationary NLSE are obtained by direct integration of Eqs. (2) and (4) in the region between two neighboring real roots. Consequently, oscillatory solutions of Eq. (2) occur between two neighboring simple real roots which define the minimum and maximum density of the oscillation as displayed by the three closed phase-space orbits in Fig. 1b which originate from the polynomial in Fig. 1a.
Complex conjugate roots with finite imaginary parts can therefore not be the turning points of such solutions, but instead deform the resulting orbits spanned between other real roots as illustrated in Fig. 1e. Due to the finite order of \(P\), there is thus only one oscillatory orbit possible for the polynomial shown in Fig. 1d.
For polynomials with a multiple root (shown in Fig. 2b for the case (0,0,1,2)), solitonic and other more exotic solutions emerge [51]. In fact, the multiple root acts as a bifurcation point in phase space, constituting a separatrix for the phase-space trajectories that separates the two other solution classes [25]. Moreover, there always exists a constant-amplitude solution at the density value of the multiple root.
Finally, the outer density regions which are restricted by only one real root typically lead to unbounded solutions with poles. For instance the light orange shaded region of the polynomial displayed in Fig. 2c yields such an unbounded solution [51].
Consequently, the sign of the discriminant \(\Delta\) and thus the nature of the roots of \(P\) not only determine the character and shape of the resulting solutions, but also the to
Figure 1: Conformal mapping between two realizations of the cubic-quintic NLSE with discriminant \(\Delta>0\) (a–c) and \(\Delta<0\) (d–f). (a,d) Polynomials \(P(\sigma)\) and \(\tilde{P}(\tilde{\sigma})\), defined by Eq. (2), with four simple roots \(\sigma_{j}\) (a) or two simple and two complex roots \(\tilde{\sigma}_{j}\) (d), respectively. (b,e) Oscillating phase-space trajectories corresponding to real (blue) and complex (red) solutions determined by \(P\) in (a,d). The two cases (a–c) and (d–f) are related by a conformal transformation, Eq. (5), which maps the positions of the roots, the polynomials, and the corresponding phase-space orbits into each other. (c,f) Complex density plane of \(P\) and \(\tilde{P}\) shown in (a,d) illustrating the positions of the roots \(\sigma_{j}\) and \(\tilde{\sigma}_{j}\) (black dots) as well as the argument of the phase of the density \(\sigma\) (color map). The lines of constant real and imaginary part of the density \(\sigma\) form a square grid (c) which is mapped into a grid of circles (f) by the conformal transformation due to changing the sign of \(\Delta\). The roots \(\sigma_{j}\) are thus mapped from a straight line (c) to a circle (f) with counter-clockwise orientation starting from the first real root. Likewise, the cloverleaf-shaped boundary \(Q\) shown in (c) is the inverse image of the square boundary shown in (f) while the center of the angle-shaped region in (f) corresponds to the point at infinity in (c).
tal number of different solutions for a given set of parameters. Indeed, according to Eq. (2) physically meaningful real solutions require \(P(\sigma)>0\) between the roots considered, in addition to any restrictions set by the boundary conditions of the system under study, while for \(P(\sigma)<0\) complex density solutions emerge. Hence, this approach enables a straightforward and systematic classification of all possible stationary solutions of higher-order NLSEs.
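As a numerical illustration of this classification (parameter values chosen by us so that \(P\) has four simple real roots, the \((0,0,0,4)\) case of Fig. 1a; they are not taken from the Letter), one can compute the roots of \(P\) from Eq. (3) and read off the sign of the discriminant:

```python
# Illustrative sketch: classify the quartic P(sigma) of Eq. (3) by its roots.
# The parameter values below are arbitrary and chosen so that the roots are 1, 2, 4, 5.
import numpy as np

a4, a3, a2, a1, a0 = 0.75, -3.0, -6.125, 4.875, -10.0
coeffs = [4 * a4 / 3, 4 * a3, -8 * a2, -16 * a1, -4 * a0]   # coefficients of P, Eq. (3)

roots = np.roots(coeffs)
real_roots = roots[np.abs(roots.imag) < 1e-9].real
complex_roots = roots[np.abs(roots.imag) >= 1e-9]

# Sign of the discriminant from the pairwise root differences (the prefactor a_4^6 > 0).
disc = np.prod([roots[j] - roots[k]
                for j in range(4) for k in range(4) if j != k]).real

print(len(real_roots), "real roots and", len(complex_roots), "complex roots")
print("discriminant sign:", np.sign(disc))   # +1 here: the (0,0,0,4) class
```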
_Conformal duality_ - In the phase space \((\sigma,\sigma^{\prime})\) with \(\sigma^{\prime}\equiv\mathrm{d}\sigma/\mathrm{d}x\), the differential equation Eq. (2) constitutes an elliptic curve. A key characteristic of elliptic curves is the possibility to transform their underlying algebraic equation by rational transformations [52]. Strikingly, in the case of the NLSE the Mobius transformation can be adapted to the differential equation Eq. (2) leading to the conformal map
\[\sigma\left(x\right)=\frac{A\,\tilde{\sigma}\left(\tilde{x}\right)+B}{C\, \tilde{\sigma}\left(\tilde{x}\right)+D} \tag{5}\]
of the densities \(\sigma\) and \(\tilde{\sigma}\) with the generally complex-valued coefficients \(A,B,C,D\). In contrast to the mapping of elliptic curves, here the spatial coordinate \(x\) also needs to be transformed according to the affine transformation \(x=x_{0}+\left(AD-BC\right)\tilde{x}\), where \(x\) can become complex-valued and \(x_{0}\) is a constant.
The duality, Eq. (5), relates any two physical systems with the same real-valued cross-ratio \(k^{2}\) which is an invariant of the transformation determined by the roots \(\sigma_{j}\) of \(P\)[51]. Note that the conformal character of the Mobius transformation will preserve the angles in the complex density plane by mapping every straight line of constant density into another line or circle of constant density, and vice versa.
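This invariance is straightforward to verify numerically. The sketch below (our own illustration) applies a Mobius map with arbitrarily chosen coefficients to four example roots and checks that the textbook cross-ratio is unchanged; the specific convention for \(k^{2}\) adopted in the Letter is given in the supplemental material [51].

```python
# Illustrative check: the cross-ratio of four roots is invariant under the
# Moebius map of Eq. (5).  Roots and coefficients are arbitrary example values.
import numpy as np

def cross_ratio(z1, z2, z3, z4):
    return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3))

def moebius(z, A, B, C, D):
    return (A * z + B) / (C * z + D)

roots = np.array([1.0, 2.0, 4.0, 5.0])            # example roots of P(sigma)
A, B, C, D = 1.0, 0.5, 0.3 + 0.2j, 1.0            # example coefficients, AD - BC != 0

mapped = moebius(roots, A, B, C, D)
print(cross_ratio(*roots))    # cross-ratio of the original roots
print(cross_ratio(*mapped))   # the same value (up to rounding) after the map
```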
The gradient of the phase, determined by Eq. (4), enjoys a similar transformation [51]
\[\frac{\mathrm{d}\phi}{\mathrm{d}x}=\pm\sqrt{a_{0}}\,\frac{D\,\frac{\mathrm{d} \tilde{\phi}}{\mathrm{d}\tilde{x}}\pm\sqrt{\tilde{a}_{0}}\,C}{B\,\frac{ \mathrm{d}\tilde{\phi}}{\mathrm{d}\tilde{x}}\pm\sqrt{\tilde{a}_{0}}\,A} \tag{6}\]
with the very same coefficients \(A,B,C,D\) as in Eq. (5). As a result, the combination of Eqs. (5) and (6) provides the complete conformal mapping of the differential equations under study. Hence, different realizations of the cubic-quintic NLSE are conformally related establishing a fundamental connection between their solution spaces. In particular, these transformations also apply to the solutions of density \(\sigma\) and phase \(\phi\) themselves such that the complete complex wave function \(\psi\) can be conformally mapped.
We emphasize that the conformal duality remains intact for traveling-wave solutions of the NLSE such as solitary waves subjected to a velocity boost. This effect is a direct consequence of the Galilean covariance of the NLSE allowing to obtain arbitrary many additional solutions through the application of Galilei-transformations [51].
_Conformal mapping and reduction of the NLSE_ - Depending on the choice of the transformation coefficients different scenarios can be realized. Indeed, the conformal map, Eq. (5), directly relates different quartic polynomials with each other and therefore their solution spaces. In this case the ratio \(A/C\) must not match the value of any of the roots of the involved polynomials to preserve their quartic order. Real-valued coefficients \(A,B,C,D\) connect polynomials within a given solution class, while complex coefficients enable to change the solution class corresponding to a change of sign of \(\Delta\). Figure 1 shows an intriguing example of the latter case, where a \((0,0,0,4)\)-polynomial is mapped to one classified by \((0,0,0,2+2_{\mathrm{C}})\). Despite the fact that the graphs of the polynomials \(P\) and \(\tilde{P}\) (a,d) and their phase-space orbits (b,e) appear quite distinct in Fig. 1 they are intimately connected as visualized by the density maps (c,f). Here, the conformal character of the transformation manifests itself by transforming the straight line connecting all four simple roots in Fig. 1c to a circle which passes again through all (now partly complex) roots in Fig. 1f. In the same way, the rectangular boundary of the density plot in Fig. 1f is mapped into the cloverleaf-shaped boundary displayed in Fig. 1c.
Moreover, as depicted in Fig. 2, it is possible to conformally reduce the cubic-quintic NLSE to either the cubic NLSE or the linear Schrodinger equation by mapping either an outer simple root or an outer double root to plus or minus infinity, respectively. In these cases the ratio \(A/C\) must match the value of the roots to be moved. As a consequence, the overall degree of the polynomial is reduced by one (simple root moved) or two (double root moved). Analogously, the linear Schrodinger equation
Figure 2: Conformal reduction from the cubic-quintic (b) to the cubic NLSE (c) and the linear Schrödinger equation (a). Exemplary polynomials \(P\) (green) and \(-P\) (orange), Eq. (2), with (a) two simple roots (black dots), (b) two simple roots \(\sigma_{3}\), \(\sigma_{4}\) and one double root \(\sigma_{1,2}\) (encircled black dot), or (c) one simple and one double root. By moving the roots \(\sigma_{4}\) (or \(\sigma_{1,2}\)) to plus (minus) infinity the cubic-quintic NLSE can be reduced to the cubic NLSE or the linear Schrödinger equation, respectively. The green (soliton solutions) and orange (oscillating solutions) fillings show which of the density regions (and corresponding solutions) are mapped into each other featuring similar characteristics. The light shaded green and orange areas in (a) and (c) illustrate unbounded solutions.
with an energy eigenvalue of zero is obtained by removing a triple root.
By reducing the degree of the polynomial the solution space changes based on Eq. (5) and as illustrated in Fig. 2: (i) one unbounded solution vanishes because the root constituting its minimum or maximum density has been removed, (ii) a bound solution becomes unbound since it is now only restricted by one root, and (iii) the remaining solutions get transformed, but keep their main characteristics as their roots retain their order.
The case shown in Fig. 2 highlights two prominent solitonic solutions, namely the flat-top soliton [14] (green shaded area in b) and the elementary bright soliton [53] (green shaded area in c) which are both governed by a hyperbolic cosine in the denominator of their density profile. By the transformation from the cubic-quintic to the cubic NLSE only the prefactor in front of the hyperbolic cosine gets changed such that both solutions are quite similar [51].
Likewise, the oscillatory solution in the cubic-quintic case (orange area in Fig. 2b) governed by a cosine in the denominator as well changes its prefactor when transformed. However, in this case the corresponding solution of the cubic case (light orange area in Fig. 2c) becomes unbound due to the now different prefactor and has thus completely changed its character by the transformation.
Fascinatingly, the solutions of the green and orange regions in Fig. 2 are also interconnected by a transformation that maps a real position coordinate \(x\) to a purely imaginary position \(\tilde{x}\), changing the functional dependency from a hyperbolic sine (green) to a trigonometric sine (orange). Effectively, this transformation thus flips the overall sign of the polynomial from \(P\) to \(-P\). In this way all the solutions of the cubic and cubic-quintic NLSE as well as the linear Schrodinger equation are fundamentally connected.
_Connection to Newtonian mechanics -_ Besides the importance of the unified theory of the NLSE, the conformal duality discussed in this Letter has also strong implications for the dynamics of classical particles subjected to anharmonic conservative potentials in Newtonian mechanics. Indeed, it is well-known that the NLSE formally constitutes a classical Hamiltonian system for the density \(\sigma\) with the Hamiltonian function [54]
\[\mathcal{H}(\sigma^{\prime},\sigma)\equiv\frac{1}{2}{\sigma^{\prime}}^{2}+U \left(\sigma\right) \tag{7}\]
with the potential \(U=U\left(\sigma\right)\). Here, the density \(\sigma\) will be analogous to the classical position, while the spatial coordinate \(x\) corresponds to time in classical mechanics.
By constraining the energy of \(\mathcal{H}\) to the value of \(-4a_{0}\) and considering the potential \(U\left(\sigma\right)\equiv 2a_{0}-P\left(\sigma\right)/2\) one can recover [25] Eq. (2). Hence, in this analogy the non-linearites in Eq. (1) directly correspond to anharmonic contributions in \(U\) with the cubic or cubic-quintic NLSE giving rise to a cubic or quartic potential, respectively, while the linear Schrodinger equation yields a harmonic potential as usual.
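As a simple illustration of this correspondence (reusing the arbitrary example parameters from the earlier sketch, not values from the Letter), one can integrate the Newton-type equation \(\sigma^{\prime\prime}=P^{\prime}(\sigma)/2\), which follows from differentiating Eq. (2), and observe a bounded orbit between two neighboring simple roots of \(P\):

```python
# Illustrative sketch: trace one phase-space orbit of Eq. (2) by integrating
# sigma'' = P'(sigma)/2 (obtained by differentiating Eq. (2)); the example
# parameters give P with simple roots at 1, 2, 4 and 5.
import numpy as np
from scipy.integrate import solve_ivp

a4, a3, a2, a1, a0 = 0.75, -3.0, -6.125, 4.875, -10.0

def P(s):
    return 4 * a4 / 3 * s**4 + 4 * a3 * s**3 - 8 * a2 * s**2 - 16 * a1 * s - 4 * a0

def dP(s):
    return 16 * a4 / 3 * s**3 + 12 * a3 * s**2 - 16 * a2 * s - 16 * a1

sol = solve_ivp(lambda x, y: [y[1], 0.5 * dP(y[0])],
                (0.0, 20.0), [3.0, np.sqrt(P(3.0))], max_step=0.01)

print(sol.y[0].min(), sol.y[0].max())   # the density oscillates between the roots 2 and 4
```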
The conformal map, Eq. (5), now allows us to transform the Hamiltonian, Eq. (7), of a classical particle and consequently its underlying equations of motion. Thus, we can map a double-well problem to another double-well problem, or carry out the conformal reduction from a quartic to a cubic, quadratic, linear or constant potential by mapping a simple, double, triple or quadruple root of the potential to plus or minus infinity.
As a result, soliton solutions, as shown in Fig. 2, in our classical mechanics analogy correspond to an oscillation with infinitely long period where the particle in phase space approaches a bifurcation point similar to a mathematical pendulum, where the angular coordinate approaches the unstable fixed point at \(\pi\) radians.
Analogously, unbounded solutions are the counterpart of scattering states of the corresponding classical potentials. Hence, the ideas and concepts for treating physics problems of classical particles in anharmonic potentials have a direct correspondence to those required for the NLSE. Remarkably, both systems enable the conformal mapping of their solutions within and between different solution classes.
_Conclusion -_ In summary we have provided a unified picture of the NLSE by establishing a conformal duality between the solution spaces of the cubic-quintic and cubic NLSE as well as the linear Schrodinger equation. This connection gives rise to novel and elementary understanding of the physics of nonlinear systems, in particular, when comparing the effect of nonlinearities of different degrees.
Our results apply to stationary and travelling-wave solutions of the NLSE and remain valid even under Galilean transformations. We therefore expect our findings to have a wide variety of applications that include the dynamics of solitons and their dual counterparts, mode structures in nonlinear fiber optics, hydrodynamic wave-dynamics, and the interplay of two- and three-body interactions in quasi-1D Bose-Einstein condensates as utilized for atomtronics devices [55; 56].
In addition, the conformal duality can be employed for Newtonian mechanics allowing us to classify and relate numerous different dynamical systems caused by anharmonic conservative potentials. Finally, our algebraic-geometric classification scheme can straightforwardly be extended to even higher order NLSEs such as the cubic-quintic-septic NLSE [57] to search for new physics in the form of exotic solutions that require strong nonlinearities of this kind.
_Acknowledgments -_ We thank M.A. Efremov for fruitful discussions and helpful suggestions. D.L. acknowledges financial support from the U.S. Department of Energy (DE-SC0021152, DE-SC0013365, DE-SC0023658, SciDAC-5 NUCLEI Collaboration). W.P.S. is grateful to Texas A&M University for a Faculty Fellowship at
the Hagler Institute for Advanced Study at Texas A&M University as well as to Texas A&M AgriLife Research.
|
2307.00175 | Still No Lie Detector for Language Models: Probing Empirical and
Conceptual Roadblocks | We consider the questions of whether or not large language models (LLMs) have
beliefs, and, if they do, how we might measure them. First, we evaluate two
existing approaches, one due to Azaria and Mitchell (2023) and the other to
Burns et al. (2022). We provide empirical results that show that these methods
fail to generalize in very basic ways. We then argue that, even if LLMs have
beliefs, these methods are unlikely to be successful for conceptual reasons.
Thus, there is still no lie-detector for LLMs. After describing our empirical
results we take a step back and consider whether or not we should expect LLMs
to have something like beliefs in the first place. We consider some recent
arguments aiming to show that LLMs cannot have beliefs. We show that these
arguments are misguided. We provide a more productive framing of questions
surrounding the status of beliefs in LLMs, and highlight the empirical nature
of the problem. We conclude by suggesting some concrete paths for future work. | B. A. Levinstein, Daniel A. Herrmann | 2023-06-30T23:44:51Z | http://arxiv.org/abs/2307.00175v1 | # Still No Lie Detector for Language Models: Probing Empirical and Conceptual Roadblocks
###### Abstract
We consider the questions of whether or not large language models (LLMs) have beliefs, and, if they do, how we might measure them. First, we evaluate two existing approaches, one due to Azaria and Mitchell (2023) and the other to Burns et al. (2022). We provide empirical results that show that these methods fail to generalize in very basic ways. We then argue that, even if LLMs have beliefs, these methods are unlikely to be successful for conceptual reasons. Thus, there is still no lie-detector for LLMs. After describing our empirical results we take a step back and consider whether or not we should expect LLMs to have something like beliefs in the first place. We consider some recent arguments aiming to show that LLMs cannot have beliefs. We show that these arguments are misguided. We provide a more productive framing of questions surrounding the status of beliefs in LLMs, and highlight the empirical nature of the problem. We conclude by suggesting some concrete paths for future work.
_Keywords:_ Probes · CCS · Large Language Models · Interpretability
_One child says to the other "Wow! After reading some text, the AI understands what water is!"_
_... The second child says "All it understands is relationships between words. None of the words connect to reality. It doesn't have any internal concept of what water looks like or how it feels to be wet...."_
_... Two angels are watching [some] chemists argue with each other. The first angel says "Wow! After seeing the relationship between the sensory and atomic-scale worlds, these chemists have realized that there are levels of understanding humans are incapable of accessing." The second angel says "They haven't truly realized it. They're just abstracting over levels of relationship between the physical world and their internal thought-forms in a mechanical way. They have no concept of [*******) or [*******)**_. You can't even express it in their language!"_
-- Scott Alexander, _Meaningful_
## 1 Introduction
Do large language models (LLMs) have beliefs? And, if they do, how might we measure them?
These questions have a striking resemblance to both philosophical questions about the nature of belief in the case of humans (Ramsey (2016)) and economic questions about how to measure beliefs (Savage (1972)).1
Footnote 1: Diaconis and Skyrms (2018) give a concise and thoughtful introduction to both of these topics.
These questions are not just of intellectual importance but also of great practical significance. It is news to no one that LLMs are having a large effect on society, and that they will continue to do so. Given their prevalence, it is important to address their limitations. One important problem that plagues current LLMs is their tendency to generate falsehoods with great conviction. This is sometimes called _lying_ and sometimes called _hallucinating_(Ji et al., 2023; Evans et al., 2021). One strategy for addressing this problem is to find a way to read the beliefs of an LLM directly off its internal state. Such a strategy falls under the broad umbrella of model interpretability,2 but we can think of it as a form of mind-reading with the goal of detecting lies.
Footnote 2: See Lipton (2018) for a conceptual discussion of model interpretability.
Detecting lies in LLMs has many obvious applications. It would help us successfully deploy LLMs at all scales: from a university student using an LLM to help learn a new subject, to companies and governments using LLMs to collect and summarize information used in decision-making. It also has clear applications in various AI safety research programs, such as Eliciting Latent Knowledge (Christiano et al. (2021)).
In this article we tackle the question about the status of beliefs in LLMs head-on. We proceed in two stages. First, we assume that LLMs _do_ have beliefs, and consider two current approaches for how we might measure them, due to Azaria and Mitchell (2023) and Burns et al. (2022). We provide empirical results that show that these methods fail to generalize in very basic ways. We then argue that, even if LLMs have beliefs, these methods are unlikely to be successful for conceptual reasons. Thus, _there is still no lie-detector for LLMs_.
After describing our empirical results we take a step back and consider whether or not we should expect LLMs to have something like beliefs in the first place. We consider some recent arguments aiming to show that LLMs cannot have beliefs (Bender et al. (2021); Shanahan (2022)). We show that these arguments are misguided and rely on a philosophical mistake. We provide a more productive framing of questions surrounding the status of beliefs in LLMs. Our analysis reveals both that there are many contexts in which we should expect systems to track the truth in order to accomplish other goals but that the question of whether or not LLMs have beliefs is largely an empirical matter.3
Footnote 3: We provide code at [https://github.com/balevinstein/Probes](https://github.com/balevinstein/Probes).
## 2 Overview of Transformer Architecture
The language models we're interested in are transformer models (Vaswani et al., 2017). In this section, we provide a basic understanding of how such models work.4 In particular, we'll be focusing on autoregressive, decoder-only models such as OpenAI's GPT series and Meta's LLaMA series. The basic structure is as follows:
Footnote 4: For an in depth, conceptual overview of decoder only transformer models, see (Levinstein, 2023).
1. **Input Preparation:** Text data is fed to the model. For example, let's consider the phrase, Mike Trout plays for the.
2. **Tokenization:** The input text is tokenized, which involves breaking it down into smaller pieces called tokens. In English, these tokens are typically individual (sub)words or punctuation. So, our example sentence could be broken down into [Mike, Trout, plays, for, the].
3. **Embedding:** Each token is then converted into a mathematical representation known as an embedding. This is a vector of a fixed length that represents the token along with its position in the sequence.5 For instance, Mike might be represented by a list of numbers such as [0.1, 0.3, -0.2,...]. Footnote 5: After the model is trained, intuitively what these embeddings are doing is representing semantic and other information about the token along with information about what has come before it in the sequence.
4. **Passing through Layers:** These embeddings are passed through a series of computational layers. Each layer transforms the embeddings of the tokens based on the each token's current embedding, as well as the information received from previous tokens' embeddings. This procedure enables information to be'moved around' from token to token across the layers. It is through these transformations that the model learns complex language patterns and relationships among the tokens. For example, to compute the embedding for plays in Mike Trout plays for the at a layer \(m\), a decoder-only model can use information from the layer \(m-1\) embeddings for Mike, Trout, and plays, but not from for or the.
5. **Prediction:** After the embeddings pass through the last layer of the model, a prediction for what the next token will be is made using the embedding _just for_ the previous token. This prediction involves estimating the probabilities of all potential next tokens in the vocabulary. When generating new text, the model uses this distribution to select the next token. For example, after processing the phrase Mike Trout plays for the, the model might predict Angels as the next token given its understanding of this sequence of text. (In reality, the model will actually make a prediction for what comes after each initial string of text. So, it will make predictions for the next token after Mike, after Mike Trout plays, etc.)
The power of transformer models comes from their ability to consider and manipulate information across all tokens in the input, allowing them to generate human-like text and uncover deep patterns in language. Figure 1 provides a basic depiction of information flow in decoder-only models.
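As a minimal illustration of this pipeline (our own sketch; GPT-2 is used only as a small, publicly available stand-in and is not one of the models studied below), the following code tokenizes a prompt, runs a forward pass, and reads off the predicted distribution over the next token:

```python
# Illustrative sketch of next-token prediction in a decoder-only transformer.
# GPT-2 serves only as a small stand-in model for demonstration purposes.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("Mike Trout plays for the", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Logits for the token that follows the last input token.
next_token_logits = outputs.logits[0, -1]
probs = torch.softmax(next_token_logits, dim=-1)
top = torch.topk(probs, k=5)
print([tokenizer.decode(int(i)) for i in top.indices])  # five most probable next tokens
```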
## 3 Challenges in Deciphering the 'Beliefs' of Language Models
For now, let's assume that in order to generate human-like text, LLMs (like humans) have beliefs about the world. We might then ask how we can measure and discover their beliefs. This question immediately leads to a number of problems:
### Unreliable Self-Reporting
Asking an LLM directly about its beliefs is insufficient. As we've already discussed, models have a tendency to "hallucinate" or even lie. So belief reports alone cannot be taken as trustworthy. Moreover, when asked about its beliefs, an LLM likely will not "introspect" and decode some embedding that contains information about its information state. Instead, it just needs to answer the question in a reasonable way that accords with its training process.
### Limited Behavioral Evidence
When trying to understand human beliefs, we have a rich tapestry of behavioral evidence to draw upon. We consider not only what people say, but also what they do. For instance, if someone consistently invests in the S&P, we infer that they believe the S&P will go up in value, even if they never explicitly state it. For LLMs, however, we have a limited behavioral basis for inferring beliefs. The "behavior" of a language model is confined to generating sequences of tokens, which lacks the depth and breadth of human action.
### Contextuality of LLMs
Everything one inputs and doesn't input into the LLM is fair game for it to base its responses on. Through clever prompting alone, there is no way to step "outside" of the language game the LLM is playing to get at what it _really_ thinks. This problem also plagues economists' and psychologists' attempts to uncover the beliefs of humans. For example, economists have challenged the validity of the famous "framing effects" of Tversky and Kahneman (1981) by considering the possibility that the subjects in the study updated on higher-order evidence contained in what was and wasn't said to them, and the rest of the context of the experiment (Gilboa et al. (2020)).6
Footnote 6: Lieder and Griffiths (2020) make a similar point.
### Opaque and Alien Internal Structure
While we can examine the embeddings, parameters, and activations within an LLM, the semantic significance of these elements is opaque. The model generates predictions using a complex
Figure 1: A simplified representation of a decoder-only transformer model processing the input string Mike Trout plays for the. Each input token passes through several hidden layers. At each layer, each token is associated with a vector (represented by \(\langle\bullet,\bullet,\bullet\rangle\)). The final hidden layer generates a unique probability distribution (\(p_{i}\)) over the next possible token for each input token.
algorithm that manipulates high-dimensional vectors in ways that don't obviously resemble human thought processes.
We can paraphrase a metaphor from Quine to help us think about language models:
Different [models trained on] the same language are like different bushes trimmed and trained to take the shape of identical elephants. The anatomical details of twigs and branches will fulfill the elephantine form differently from bush to bush, but the overall outward results are alike. [20, p. 7]
LLMs produce output similar to the output of humans competent in the same language. Transformer models are fundamentally different from humans in both structure and function. Therefore, we should exercise caution in interpreting their outputs and be aware of the inherent limitations in our understanding of their internal processes.
## 4 Interpreting the Minds of LLMs
One potential strategy to decipher the beliefs of transformer models is to bypass the opacity of their internal structure using an approach known as "probing" [1].
Although the internals of LLMs are difficult for humans to decipher directly, we can use machine learning techniques to create simplified models (probes) that can approximate or infer some aspects of the information captured within these internal structures.
At a high-level, this works as follows. We generate true and false statements and feed them to the LLM. For each statement, we extract a specific embedding from a designated hidden layer to feed into the probe. The probe only has access to the embedding and is ignorant of the original text fed into the LLM. Its task is to infer the "beliefs" of the LLM solely based on the embedding it receives.
In practice, we focus on the embedding associated with the last token from a late layer. This is due to the fact that in autoregressive, decoder-only models like the LLMs we are studying, information flows forward. Therefore, if the LLM is processing a statement like The earth is round, the embeddings associated with the initial token The will not receive any information from the subsequent tokens. However, the embedding for the final word round has received information from all previous tokens. Thus, if the LLM computes and stores a judgement about the truth of the statement The earth is round, this information will be captured in the embedding associated with round.7 We use relatively late layers because it seems more likely that the LLM will try to determine whether a statement is true or false after first processing lower-level semantic and syntactic information in earlier layers.
Footnote 7: The sentences in the dataset all ended with a period (i.e., full-stop) as the final token. We ran some initial tests to see if probes did better on the embedding for the period or for the penultimate token. We found it did not make much of a difference, so we did our full analysis using the embeddings for the penultimate tokens.
### Supervised Learning Approach
The first approach for training a probe employs supervised learning. This uses a list of statements labelled with their truth-values. The statements are each run through the language model. The probe receives as input the embedding for the last token from a specific layer of the large language model, and it outputs a number--intended to be thought of as a subjective probability--ranging from \(0\) to \(1\). The parameters of the probe are then adjusted based on the proximity of its output to the actual truth-value of the statement.
This approach was recently investigated by Azaria and Mitchell (2023). They devised six labelled datasets, each named according to their titular subject matter: Animals, Cities, Companies, Elements, Scientific Facts, and Inventions. Each dataset contained a minimum of 876 entries, with an approximate balance of true and false statements, totalling 6,084 statements across all datasets. Table 1 provides some examples from these datasets.
\begin{table}
\begin{tabular}{l l l} \hline \hline Dataset & Statement & Label \\ \hline Animals & The giant anteater uses walking for locomotion. & 1 \\ & The hyena has a freshwater habitat. & 0 \\ \hline Cities & Tripoli is a city in Libya. & 1 \\ & Rome is the name of a country. & 0 \\ \hline Companies & The Bank of Montreal has headquarters in Canada. & 1 \\ & Lowe’s engages in the provision of telecommunication services. & 0 \\ \hline Elements & Scandium has the atomic number of 21. & 1 \\ & Thalium appears in its standard state as liquid. & 0 \\ \hline Facts & Comets are icy celestial objects that orbit the sun. & 1 \\ & The freezing point of water decreases as altitude increases. & 0 \\ \hline Inventions & Ernesto Blanco invented the electric wheelchair. & 1 \\ & Alan Turing invented the power loom. & 0 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Example Statements from Different Datasets
Figure 2: High-level overview of how the probe measures the beliefs of the LLM on inputs of true and false statements. Instead of looking at the text the LLM itself outputs, we look at the numbers that the probe outputs.
### Azaria and Mitchell's Implementation
Azaria and Mitchell (2023) trained probes on the embeddings derived from Facebook's OPT 6.7b model (Zhang et al., 2022).8 Their probes were all feedforward neural networks comprising four fully connected layers, utilizing the ReLU activation function. The first three layers consisted of 256, 128, and 64 neurons, respectively, culminating in a final layer with a sigmoid output function. They applied the Adam optimizer for training, with no fine-tuning of hyperparameters, and executed training over five epochs.
Footnote 8: The ‘6.7b’ refers to the number of parameters (i.e., 6.7 billion).
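A sketch of a probe with this architecture is given below. The layer widths, ReLU activations, sigmoid output, Adam optimizer, and five training epochs follow the description above; the input dimension, learning rate, and full-batch training loop are simplifying assumptions of ours.

```python
# Sketch of a truth probe following the architecture described above
# (256-128-64 hidden units, ReLU, sigmoid output, Adam, five epochs).
import torch
import torch.nn as nn

class TruthProbe(nn.Module):
    def __init__(self, embed_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)   # probability that the statement is true

def train_probe(embeddings: torch.Tensor, labels: torch.Tensor, epochs: int = 5):
    """embeddings: (n, embed_dim) hidden-state vectors; labels: (n,) with entries in {0, 1}."""
    probe = TruthProbe(embeddings.shape[1])
    optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)   # illustrative learning rate
    loss_fn = nn.BCELoss()
    for _ in range(epochs):
        optimizer.zero_grad()
        loss = loss_fn(probe(embeddings), labels.float())
        loss.backward()
        optimizer.step()
    return probe
```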
For each of the six datasets, they trained three separate probes on the five other datasets and then tested them on the remaining one (e.g., if a probe was trained on Cities, Companies, Elements, Facts, and Inventions, it was tested on Animals). The performance of these probes was evaluated using binary classification accuracy. This process was repeated for five separate layers of the model, yielding fairly impressive accuracy results overall.
The purpose of testing the probes on a distinct dataset was to verify the probes' ability to identify a general representation of truth within the language model, irrespective of the subject matter.
### Our Reconstruction
We implemented a reconstruction of Azaria and Mitchell's method with several modifications:
* We constructed the probes for LLaMA 30b (Touvron et al., 2023), a model from Meta with 33 billion parameters and 60 layers.
* We utilized an additional dataset named Capitals consisting of 10,000 examples, which was provided by Azaria and Mitchell. It has substantial overlap with the Cities dataset, which explains some of the test accuracy.
* We trained probes on three specific layers: the last layer (layer -1), layer 56 (layer -4), and layer 52 (layer -8).
* We took the best of ten probes (by binary classification accuracy) for each dataset and each layer instead of the best of three.
Similar to the findings of Azaria and Mitchell, our reconstruction resulted in generally impressive performance as illustrated in Table 2.
In addition to binary classification accuracy, we evaluated the calibration of the probes across the different layers. Calibration provides another metric for evaluating the quality of the probes' forecasts. Figure 3 illustrates these calibration curves for each layer when tested on the Scientific Facts dataset.
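Such a calibration (reliability) curve can be computed, for instance, with scikit-learn; the sketch below uses synthetic placeholder outputs, since its only purpose is to illustrate the computation.

```python
# Sketch: compute a calibration (reliability) curve for probe outputs.
# y_true holds 0/1 labels and y_prob the probe's probabilities; the arrays
# below are synthetic placeholders for illustration only.
import numpy as np
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
y_prob = rng.uniform(size=1000)                             # placeholder probe outputs
y_true = (rng.uniform(size=1000) < y_prob).astype(int)      # placeholder labels

frac_positive, mean_predicted = calibration_curve(y_true, y_prob, n_bins=10)
for p, f in zip(mean_predicted, frac_positive):
    print(f"predicted {p:.2f} -> observed frequency {f:.2f}")
```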
\begin{table}
\begin{tabular}{l c c c c c c c} \hline \hline & Animals & Capitals & Cities & Companies & Elements & Facts & Inventions \\ \hline Layer -1 &.722 &.970 &.867 &.722 &.755 &.826 &.781 \\ Layer -4 &.728 &.973 &.882 &.766 &.792 &.821 &.831 \\ Layer -8 &.729 &.967 &.869 &.742 &.694 &.810 &.792 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Binary Classification Accuracy for probes trained on LLaMA 30b embeddings.
### The Challenge of Generalization
This section explores our empirical findings, which suggest that probes in this setting often learn features that correlate with truth in the training set, but do not necessarily generalize well to broader contexts.
#### 4.4.1 Evaluating Performance on Negations
Creating Boolean combinations of existing statements is one of the most straightforward ways to generate novel statements for testing a model's generalization capabilities. Negation, the simplest form of Boolean operation, offers a useful starting point.9
Footnote 9: In formal models of beliefs and credence, the main domain is usually an algebra over events. If we wish to identify doxastic attitudes in language models, then we should check that those attitudes behave roughly as expected over such an algebra. Such algebras are closed under negation, so it is a motivated starting point.
We derived NegFacts and NegCompanies from Azaria and Mitchell's datasets. These new datasets contained the negations of some statements in Scientific Facts and Companies respectively. For instance, the statement The earth orbits the sun from Scientific Facts is transformed into The earth doesn't orbit the sun in NegFacts.
Given that the original datasets contained few Boolean statements, these negation datasets allowed us to test the probes on a simple new distribution.
We initially tested the probes trained on Animals, Capitals, Cities, Companies, Elements, and Inventions (i.e., trained on all positive datasets except Scientific Facts) on NegFacts. Similarly, we tested the probes trained on Animals, Capitals, Scientific Facts, Cities, Elements, and Inventions on NegCompanies. Since roughly 50% of the statements in each of NegFacts and NegCompanies are true, the accuracy of five of six of these probes was worse than chance, as Table 3 illustrates.
We then tested a new set of probes on NegFacts, after training on all seven original datasets (including Scientific Facts) and NegCompanies, which consisted of 550 labeled negations of statements from Companies. Thus, these probes were trained on _all positive variants of the negated statements they were tested on, along with all positive examples from Companies and their negated counterparts._ We did the same, _mutatis mutandis_ with NegCompanies. Despite the expanded training data, the performance was still surprisingly poor, as shown in Table 3.
Figure 3: Calibration curves for probes tested on the Scientific Facts dataset at each layer.
Since the probes failed to do well on NegFacts and NegCompanies even after training on all positive analogs along with other negative examples, it's likely the original probes are not finding representations of truth within the language model embeddings. Instead, it seems they're learning some other feature that correlates well with truth on the training sets but that does not correlate with truth in even mildly more general contexts.
Of course, we could expand the training data to include more examples of negation and other Boolean combinations of sentences, which would likely allow us to train better probes. However, we have more general conceptual worries about probes trained with supervised learning, which we explore in the next subsection: the shortcomings stem from the tight restrictions supervised learning places on usable training data and from how the resulting probes behave on unseen data.
### Conceptual Problems: Failure to Generalize
In the realm of machine learning, out-of-distribution generalization remains a pervasive challenge for classifiers. One of the common pitfalls involves learning _spurious correlations_ that may be present in the training data, but do not consistently hold in more general contexts.
Consider an example where a classifier is trained to distinguish between images of cows and camels (Beery et al., 2018). If the training set exclusively features images of cows in grassy environments and camels in sandy environments, the classifier may learn to associate the environmental context (grass or sand) with the animal, and use that to predict the label, rather than learning the distinguishing features of the animals themselves. Consequently, when presented with an image of a cow standing on sand, the classifier might erroneously label it as a camel.
We think there are special reasons to be concerned about generalization when training probes to identify a representation of truth using supervised learning because supervised learning severely limits the sort of data we can use for training and testing our probes. First, we need to use sentences we believe the model itself is in a position to know or infer from its own training data. This is the easier part. The harder part is curating data that we can unambiguously label correctly. The probe most directly is learning to predict the _label_, not the actual truth-value. These coincide only when the labels are completely correct about the statements in the training and test set.
We ultimately want to be able to use probes we've trained on sentences whose truth-value we ourselves don't know. However, the requirement that we accurately label training and testing data limits the confidence we can place in the probes' capability of accurately identifying a representation of truth within the model. For instance, consider the following statements:
* Barry Bonds is the best baseball player of all time.
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline Layer & Facts & NegFacts\({}^{1}\) & NegFacts\({}^{2}\) & Companies & NegCompanies\({}^{1}\) & NegCompanies\({}^{2}\) \\ \hline -1 &.826 &.408 &.526 &.722 &.555 &.567 \\ -4 &.821 &.373 &.568 &.766 &.460 &.629 \\ -8 &.810 &.373 &.601 &.742 &.431 &.596 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Binary classification accuracy for NegFacts compared to Facts. ‘NegFacts\({}^{1}\)’ (‘NegCompanies\({}^{1}\)’) denotes the accuracy for probes trained only on positive datasets, excluding Scientific Facts (Companies). ‘NegFacts\({}^{2}\)’ denotes the accuracy for probes trained on all positive datasets including Scientific Facts and NegCompanies, while ‘NegCompanies\({}^{2}\)’ denotes the accuracy for probes trained on all positive datasets including Companies and NegFacts.
* If the minimum wage is raised to $15 an hour, unemployment will increase.
* France is hexagonal.
* We are morally responsible for our choices.
* Caesar invaded Gaul due to his ambition.
These statements are debatable or ambiguous. We must also be cautious of any contentious scientific statements that lack full consensus or could be reconsidered as our understanding of the world evolves.
Given these restrictions, it's likely the probes will identify properties that completely or nearly coincide with truth over the limited datasets used for training and testing. For instance, the probe might identify a representation for:
* Sentence is true _and_ contains no negation
* Sentence is true _and_ is expressed in the style of Wikipedia
* Sentence is true _and_ can be easily verified online
* Sentence is true _and_ verifiable
* Sentence is true _and_ socially acceptable to assert
* Sentence is true _and_ commonly believed
* Sentence is true _or_ asserted in textbooks
* Sentence is true _or_ believed by most Westerners
* Sentence is true _or_ ambiguous
* Sentence is accepted by the scientific community
* Sentence is believed by person X
On the original datasets we used, if the probe identified representations corresponding to any of the above, it would achieve impressive performance on the test set. Although we can refine our training sets to eliminate some of these options, we won't be able to eliminate all of them without compromising our ability to label sentences correctly.
Indeed, if the labels are inaccurate, the probe might do even better if it identified properties like "Sentence is commonly believed" or "Sentence corresponds to information found in many textbooks" even when the sentence is not true.10
Footnote 10: Azaria and Mitchell (2023) did an admirable job creating their datasets. Some of the statements were generated automatically using reliable tables of information, and other parts were automated using ChatGPT and then manually curated. Nonetheless, there are some imperfect examples. For instance, in Scientific Facts, one finds sentences like Humans have five senses: sight, smell, hearing, taste, and touch, which is not unambiguously true.
This situation can be likened to the familiar camel/cow problem in machine learning. But given the constraints imposed by using supervised learning and limited data, isolating representations of truth from other coincidental properties is even more challenging than usual. The fact that probes empirically seem to identify representations of something other than truth should make us wary of this method.
### Conceptual Problems: Probabilities Might not Correspond to Credences
So far we have been assuming that if the probes extracted accurate probabilities, that this would be good evidence we were extracting the credences of the model. However, this is too quick. While
these probes output probabilities for statements, these probabilities do not directly correspond to the "credences" of the underlying language model. This disparity arises because the _probe_ is directly penalized based on the probabilities it reports, while the underlying model is not. Thus, the probe aims to translate the information embedded within the language model's representations into probabilities in a manner that minimizes its own loss.
Consider an illustrative analogy: Suppose I forecast stock prices, with rewards based on the accuracy of my predictions. However, my predictions are entirely reliant on advice from my uncle, who, unfortunately, is systematically inaccurate. If he predicts a stock's price will rise, it actually falls, and vice versa. If I wish to make accurate forecasts, I need to reverse my uncle's predictions. So, while my predictions are based entirely on my uncle's advice, they don't directly reflect his actual views. Analogously, the probe makes predictions based on the information in the embeddings, but these predictions don't necessarily represent the actual "beliefs" of the language model.
This analysis suggests that there are further conditions that probabilities extracted by a probe must satisfy in order to be properly considered credences. Going back to the example of my uncle, the problem there was that the predictions I was making were not used by my uncle in the appropriate way in order to make decisions. Thus it makes sense to say that my predictions do not reflect my _uncle's_ views.
Thus, beyond merely extracting probabilities that minimize loss, there are _other_ requirements that extracted representations must satisfy. In the context of natural language processing, Harding, in a recent paper (2023), has argued for three conditions that must hold of a pattern of activations in neural models for it to count as a representation of a property:11
Footnote 11: Harding makes these conditions precise in the language of information theory. Further development of the concept of representation in the context of probes strikes us as an important line of research in working to understand the internal workings of deep learning models.
1. **Information.** The pattern must have information about the property.
2. **Use.** The system (in our case, LLM) must use the pattern to accomplish its task.
3. **Misrepresentation.** It should be possible for the pattern to misrepresent the property.
In our context the property we care about is _truth_, and we might call the corresponding representation _belief_. Thus we see what went wrong in the uncle example: even though the predictions I extracted from his advice were informative (satisfied _information_), they violated _use_: my uncle did not use them, but instead used his own forecasts (benighted as they were) to buy stocks. So it didn't make sense to refer to _my_ forecasts as _his_ beliefs. For this same reason, depending on how the actual LLM ends up using the representation the probe extracts, it might not make sense to call the _probe's_ outputs the _LLM's_ beliefs.
### Unsupervised Learning: CCS
The second approach for training a probe eschews the need for labelled data. Instead, it attempts to identify patterns in the language model's embeddings that satisfy certain logical coherence properties.
One particularly innovative implementation of this idea is the Contrast-Consistent Search (CCS) method proposed by Burns et al. (2022). The CCS method relies on training probes using _contrast pairs_. For our purposes, we can think of a contrast pair as a set of statements \(x^{+}\) and \(x^{-}\), where \(x^{+}\) has no negation, and \(x^{-}\) is the negated version of \(x^{+}\). For example, The earth is flat and The earth is not flat form a contrast pair. (One can also form contrast pairs picking
up on other features instead. For example, Burns et al. (2022) uses movie reviews from the IMDb database (Maas et al. 2011) prefixed with "The following movie review expresses a positive sentiment" and "The following movie review expresses a negative sentiment" to create contrast pairs.)
CCS proceeds in the following manner:
1. Create a dataset of contrast pairs of true or false statements. Each pair is of the form \((x_{i}^{+},x_{i}^{-})\), so the dataset is \(\{(x_{1}^{+},x_{1}^{-}),\ldots,(x_{n}^{+},x_{n}^{-})\}\).
2. Pass each statement through the network, and extract the embedding for the last token from a chosen layer.
3. Train a probe \(p_{\theta}\) with parameters \(\theta\). The probe takes these embeddings as inputs and outputs numbers between \(0\) and \(1\). It is trained such that: 1. The probabilities given by the probe for the embeddings of \(x_{i}^{+}\) and \(x_{i}^{-}\) should sum up to (approximately) 1. 2. The probabilities given by the probe for the embeddings of \(x_{i}^{+}\) and \(x_{i}^{-}\) are distinct.
The underlying rationale behind step 3(a) is that if the model represents \(x_{i}^{+}\) as true, then it should represent \(x_{i}^{-}\) as false and vice versa. We can think of a successful probe as encoding a probability function (or something approximating a probability function) that underwrites the beliefs of the model. Thus, if a probe is able to find this representation within the embeddings, it should map the embeddings of \(x_{i}^{+}\) and \(x_{i}^{-}\) to numbers whose sum is close to \(1\). This is the central insight behind Burns et al.'s approach. As they put it, CCS finds a "direction in activation space that is consistent across negations" (p. 3). Step 3(b) is crucial in preventing the probe from trivially mapping every embedding to \(.5\) to satisfy condition 3(a).
To implement the conditions in step 3, Burns et al. (2022) introduce two loss functions. The consistency loss, given by
\[L_{\text{consistency}}(\theta;x_{i})\coloneqq(1-p_{\theta}(\text{emb}(x_{i} ^{+}))-p_{\theta}(\text{emb}(x_{i}^{-})))^{2},\]
penalizes a probe for mapping the embeddings for \(x_{i}^{+}\) and \(x_{i}^{-}\) to numbers whose sum deviates from \(1\). (Here \(\text{emb}(x)\) denotes the embedding for \(x\)'s last token at the given layer.)
The confidence loss, defined as
\[L_{\text{confidence}}(\theta;x_{i})\coloneqq\min\{p_{\theta}(\text{emb}(x_{i} ^{+})),p_{\theta}(\text{emb}(x_{i}^{-}))\}^{2},\]
penalizes a probe for approximating the degenerate solution of returning \(.5\) for every embedding.12
Footnote 12: Some readers may worry about a second degenerate solution. The model could use the embeddings to find which of \(x_{i}^{+}\) and \(x_{i}^{-}\) contained a negation. It could map one of the embeddings to (approximately) \(1\) and the other to (approximately) \(0\) to achieve a low loss. Burns et al. (2022) avoid this solution by normalizing the embeddings for each class by subtracting the means and dividing by the standard deviations. However, as we’ll see below, for the datasets that we used, such normalization was ineffective, and the probes consistently found exactly this degenerate solution.
The total loss for the dataset, termed the CCS loss, is given by:
\[L_{\text{CCS}}(\theta)\coloneqq\frac{1}{n}\sum_{i=1}^{n}L_{\text{consistency}}( \theta;x_{i})+L_{\text{confidence}}(\theta;x_{i}).\]
Crucially, this loss function does not take actual accuracy into account. It merely penalizes probes for lack of confidence and (one type of) probabilistic incoherence.
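For concreteness, here is a minimal PyTorch sketch of this objective, assuming `emb_pos` and `emb_neg` hold the (normalized) last-token embeddings of the contrast pairs; the linear probe architecture and hidden size are stated assumptions, not a reproduction of Burns et al.'s exact code.

```python
# Minimal sketch of the CCS objective. emb_pos and emb_neg are assumed to be
# float tensors of shape (n_pairs, hidden_dim) holding normalized last-token
# embeddings of x_i^+ and x_i^-; hidden_dim = 6656 is the assumed LLaMA 30b size.
import torch
import torch.nn as nn

hidden_dim = 6656
probe = nn.Sequential(nn.Linear(hidden_dim, 1), nn.Sigmoid())
opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

def ccs_loss(p_pos, p_neg):
    # Consistency: a statement and its negation should get probabilities summing to 1.
    consistency = ((1.0 - p_pos - p_neg) ** 2).mean()
    # Confidence: penalize the degenerate solution p_pos = p_neg = 0.5.
    confidence = (torch.min(p_pos, p_neg) ** 2).mean()
    return consistency + confidence

for step in range(1000):
    p_pos = probe(emb_pos).squeeze(-1)
    p_neg = probe(emb_neg).squeeze(-1)
    loss = ccs_loss(p_pos, p_neg)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note that the objective never consults the actual truth labels: it only enforces the two coherence conditions above.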
An important caveat to note is that, while the trained CCS probe itself approximates probabilistic coherence, its outputs do not correspond to the credences or subjective probabilities of the model.
\(L_{\text{confidence}}\) pushes the probe to report values close to \(0\) or \(1\) only. To see why, suppose a probe at one stage of the training process returned \(.6\) for \(x_{i}^{+}\) and \(.4\) for \(x_{i}^{-}\). It could get a better loss by reporting \(.99\) for \(x_{i}^{+}\) and \(.01\) for \(x_{i}^{-}\) regardless of the language model's actual subjective probabilities, and it will be pushed in this extreme direction by gradient descent. So, the probes themselves are, at best, useful for determining what the model's categorical beliefs are, not its probabilities.13
Footnote 13: One way to see that \(L_{\text{CCS}}\) won’t incentive a probe to learn the actual credences of the model is to observe that this loss function is not a strictly proper scoring rule (Gneiting and Raftery, 2007). However, use of a strictly proper scoring rule for training probes requires appeal to actual truth-values, which in turn requires supervised learning.
Burns et al. (2022) report two key findings. First, even when using a fully linear probe, CCS yields high accuracy rates--often over 80%--across numerous datasets for a number of different language models.14 Second, binary classification using CCS tends to be slightly more accurate than the LLMs' actual outputs when asked whether a statement is true. This suggests that CCS can identify instances where the language models internally represent a statement as true but output text indicating it as false, or vice versa. (For a detailed description of their results, see p. 5 of their paper).
Footnote 14: A linear probe is one that applies linear weights to the embeddings (and perhaps adds a constant), followed by a sigmoid function to turn the result into a value between \(0\) and \(1\). Linear probes have an especially simple functional form, so intuitively, if a linear probe is successful, the embedding is easy to extract.
However, the performance of the CCS probe on GPT-J (Wang and Komatsuzaki, 2021), the only decoder-only model tested in the study, was less impressive, with an accuracy rate of only 62.1% across all datasets. This is notably lower than the peak accuracy of 84.8% achieved by the encoder-decoder model UnifiedQA (Khashabi et al., 2020).
### Our Reconstruction
We reconstructed Burns et al.'s method using embeddings for LLaMA 30b with probes trained and tested on contrast pairs from the Scientific Facts and NegFacts datasets, as well as the Companies and NegCompanies datasets. These contrast pairs consist of simple sentences and their negations. This approach more closely resembles the examples given in the main text of Burns et al.'s paper, than do the longer and more structured contrast pairs that they actually used to train their probes, such as movie reviews from IMDb.
We experimented with a variety of different methods and hyperparameters. However, we found that while CCS probes were consistently able to achieve low loss according to \(L_{\text{CCS}}\), their accuracy was in effect no better than chance--it ranged from 50% to 57% depending on the training run. (Recall, the minimum possible accuracy for a CCS probe is 50%.) Low accuracy persisted even after we normalized the embeddings for each class by subtracting the means and dividing by the standard deviations, following the same procedure as Burns et al. (2022).
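As a rough illustration, the per-class normalization step can be sketched as follows, assuming `pos_embs` and `neg_embs` are arrays of last-token embeddings for the positive and negated statements; this is a minimal sketch rather than the exact preprocessing code.

```python
# Sketch of per-class normalization: subtract each class's feature-wise mean
# and divide by its standard deviation, so that the mere presence of a negation
# is (ideally) no longer trivially recoverable from the embeddings.
import numpy as np

def normalize_class(embs, eps=1e-8):
    return (embs - embs.mean(axis=0)) / (embs.std(axis=0) + eps)

pos_norm = normalize_class(pos_embs)   # embeddings of the x_i^+ statements
neg_norm = normalize_class(neg_embs)   # embeddings of the x_i^- statements
```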
Upon inspection, it is clear that CCS probes were usually able to achieve low loss simply by learning which embeddings corresponded to sentences with negations, although they sometimes learned other features uncorrelated with truth. Given the similarity of the outcomes across these experiments, we report quantitative results from the probes we trained using a simple one hidden layer MLP with 100 neurons followed by a sigmoid output function on layers 60, 56, and 52 in Table 4. Recall these layers correspond to the last, fourth-last, and eighth-last layers of the LLaMA 30b, respectively.
We can confirm that, despite normalization, the probes were able to determine which embeddings corresponded to positive and negative examples in layers -1 and -4 by checking the average values the probes returned for members of each class. Probes found some other way to achieve low loss
in layer -8, but they did not do any better in terms of accuracy as shown in Table 5. (Recall, only roughly half the positive examples and half the negative examples are actually true.)
Now, one might think that this failure of our probes is itself fragile. Normalization by subtracting the mean and dividing by the standard deviation was supposed to disguise the grammatical form of the sentences, but it did not. There is likely some more sophisticated normalization method that would work better.
We agree that such alternative methods are likely possible. However, as we discuss in the next section, we are not sanguine about the basic approach Burns et al. (2022) use for conceptual reasons.
### Conceptual Problems: Failure to Isolate Truth
The advantage of CCS and unsupervised approaches more generally over supervised approaches is that they do not restrict the training and testing data so severely. There is no need to find large collections of sentences that can unambiguously be labeled as true or false. So, one may have hope that CCS (and unsupervised approaches) will generalize well to new sentences because we are less restricted in training.
However, the fundamental issue we've identified is that coherence properties alone can't guarantee identification of truth. As demonstrated in our experiments, probes might identify sentence properties, such as the presence or absence of negation, rather than truthfulness.
Further, probes could identify other, non-truth-related properties of sentences. For example, they could associate truth with widespread belief, resulting in the classification "\(x\) is true _and_ commonly believed" or even "\(x\) is believed by most people".
To demonstrate this, consider any probability function \(\Pr\). The sum of the probabilities that a sentence \(x\) is true and commonly believed, and that it is false or not commonly believed, equals 1. Indeed, this equation holds for any sentence property \(P\), where \(\Pr(x\wedge P(x))+\Pr(\neg x\vee\neg P(x))=1\). Likewise, \(\Pr(x\lor P(x))+\Pr(\neg x\wedge\neg P(x))=1\).15 Checking for coherence over all Kolmogorov probability axioms--which require probabilities to be non-negative, normalized, and additive--
\begin{table}
\begin{tabular}{l c c c c} \hline \hline Layer & \(L_{\text{CCS}}\) & \(L_{\text{Confidence}}\) & \(L_{\text{Consistency}}\) & Accuracy \\ \hline -1 &.009 &.004 &.005 &.552 \\ -4 &.003 &.002 &.002 &.568 \\ -8 &.013 &.002 &.010 &.502 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Performance of CCS Probes at various layers on each component of the loss function and in terms of overall accuracy.
\begin{table}
\begin{tabular}{l c c} \hline \hline Layer & Positive Prediction Avg & Negative Prediction Avg \\ \hline -1 &.968 &.035 \\ -4 &.990 &.012 \\ -8 &.389 &.601 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Average prediction value in positive examples and negative examples at each layer.
will rule out some properties \(P\), but will not come close to isolating truth. This means that coherence criteria alone can't distinguish encodings of truth from encodings of other concepts.
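To spell out why the first identity holds: for any sentence \(x\) and property \(P\), De Morgan's law gives \(\neg(x\wedge P(x))\equiv\neg x\vee\neg P(x)\), and any probability function assigns complementary events probabilities that sum to one, so

\[\Pr(x\wedge P(x))+\Pr(\neg x\vee\neg P(x))=\Pr(x\wedge P(x))+\Pr(\neg(x\wedge P(x)))=1.\]

The same reasoning, applied to \(x\vee P(x)\), yields the second identity.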
The failure to isolate truth here is reminiscent of the issue we noted with supervised learning, where truth may align with some alternative property over a dataset. However, the reasons for the failure differ. In the case of CCS and other unsupervised methods, the problem lies in the inability of formal coherence patterns alone to separate the encoding of truth from the encoding of other properties that differentiate positive from negative examples. If it's generally easier to find "directions in activation space" that differentiate examples but don't correspond exclusively to truth, then CCS probes will either fail immediately or fail to generalize.16
Footnote 16: Burns et al. (2022) investigate other unsupervised approaches as well that appeal to principal component analysis and/or clustering (such as Bimodal Salience Search (p. 22)). We believe—with some changes—most of the conceptual issues for CCS apply to those as well.
## 5 Do LLMs even have beliefs at all?
Our investigation points in a negative direction: probing the beliefs of LLMs is more difficult than it appeared after a first pass. Does this mean that we should be skeptical that LLMs have beliefs all together?
To gain traction on this question we will consider arguments that intend to show that LLMs cannot have beliefs, even in principle. These arguments rely on the claim that LLMs make predictions about which tokens follow other tokens, and do not work with anything like propositions or world-models.
We claim that these arguments are misguided. We will show that our best theories of belief and decision making make it a very live possibility that LLMs _do_ have beliefs, since beliefs might very well be helpful for making good predictions about tokens. We will argue that ultimately whether or not LLMs have beliefs is largely an empirical question, which motivates the development of better probing techniques.
### Stochastic Parrots & the Utility of Belief
Even without having known the limitations of current probing techniques, some have expressed deep skepticism that LLMs have anything resembling beliefs. For example, Bender et al. (2021) write:
Text generated by an LM is not grounded in communicative intent, any model of the world, or any model of the reader's state of mind. It can't have been because the training data never included sharing thoughts with a listener, nor does the machine have the ability to do that...an LM is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot. (pp. 616-617)
Similarly, Shanahan (2022) writes,
A bare-bones LLM doesn't "really" know anything because all it does, at a fundamental level, is sequence prediction. Sometimes a predicted sequence takes the form of a proposition. But the special relationship propositional sequences have to truth is apparent only to the humans who are asking questions...Sequences of words with a propositional form are not special to the model itself in the way
they are to us. The model itself has no notion of truth or falsehood, properly speaking, because it lacks the means to exercise these concepts in anything like the way we do. (p. 5)
These arguments rely on the idea that all the LLM is doing is predicting the next token. Because of this, both deny that the LLM can be working with anything like a meaningful model of the world. In other words, there is nothing _propositional_ going on under the hood.
Shanahan doesn't deny that LLMs might contain information about the world around them. He does, however, claim that LLMs don't make judgements or have beliefs:
Only in the context of a capacity to distinguish truth from falsehood can we legitimately speak of "belief" in its fullest sense. But an LLM -- the bare-bones model -- is not in the business of making judgements. It just models what words are likely to follow from what other words. The internal mechanisms it uses to do this, whatever they are, cannot in themselves be sensitive to the truth or otherwise of the word sequences it predicts. Of course, it is perfectly acceptable to say that an LLM "encodes", "stores", or "contains" knowledge, in the same sense that an encyclopedia can be said to encode, store, or contain knowledge...But if Alice were to remark that "Wikipedia knew that Burundi was south of Rwanda", it would be a figure of speech, not a literal statement. (p. 5)
The idea is that, since the LLM models which tokens are likely to follow other tokens, and doesn't interact with the world in any other way, it cannot be tracking the truth. This is similar to the argument in the Bender et al. quote above: since the LLM does not have "communicative intent", it cannot be using any model of the world or the reader to make its predictions.
These arguments, however, rest on a mistake. While it is true that the ultimate output of an LLM is a token sampled from a probability distribution over tokens, and so the LLM is certainly modeling what words are probable to come after other words, this does _not_ mean that the internal mechanisms must be insensitive to truth. This is because it might very well _be_ that a capacity to distinguish truth from falsehood is very useful for predicting the next token. In other words, tracking the truth of propositions could be a good _means_ toward the end of predicting what token comes next.
This is in line with a much more general feature of many types of goal directed action that can be made precise with decision theory. Decision theory gives us our best models of rational choice. The core idea of decision theory is an _expected utility maximizer_. When faced with a set of options, an expected utility maximizer combines two different attitudes to compute which act to take: beliefs in the form of a probability function, and desires in the form of a utility function.17 There is a precise sense in which all the agent cares about is the _utility_.18 The agent does not care about belief for its own sake, but does have beliefs in order to take effective action.
Footnote 17: The canonical formalization of this idea in economics and statistics is Savage’s _Foundations of Statistics_ (1972). Philosophers use Savage’s formulation, as well as Jeffrey’s in _The Logic of Decision_ (1990).
Footnote 18: More precisely, utility is a numerical representation that captures how strongly an agent cares about outcomes.
For example, an investor may care purely about the return on her investments. She may take actions with the goal to maximize her profit. It would be a mistake to conclude from this that the investor must not have beliefs, because she is merely doing profit maximization. Indeed, the investor's beliefs about how various firms will perform will probably play a crucial role in helping her make decisions.
Similarly, it is a mistake to infer from the fact that the LLM outputs tokens that are likely to follows its inputs that the LLM must not have beliefs. On the contrary, given that our best theories
of intelligent behaviour involve belief as a crucial component, it should be a very live hypothesis that the LLM is doing its best to track truths about the world, _in order to_ maximize predictive accuracy.19
Footnote 19: We are here ignoring nuances involving inner alignment (Hubinger et al. 2019).
Even beyond decision theory, philosophers have long held that true beliefs are useful for achieving goals, and that they play the functional role of helping us take successful action (Millikan (1995); Papineau (1988)). Indeed, not only is it useful, but it is a common view that the instrumental utility of accurate beliefs applies selection pressure on agents and organisms to conform to epistemic norms (Street (2009); Cowie (2014)). For example, in the context of forming true beliefs by induction, Quine famously writes, "[c]reatures inveterately wrong in their inductions have a pathetic but praiseworthy tendency to die before reproducing their kind" (p. 13, 1969).
This is very intuitive. It is easy to generate decision contexts (such as strategic board games, investing, figuring out how to get to Toronto from Prague, etc.) that do seem to push us to form accurate beliefs about the world.
This is not to say that it is _necessary_ that LLMs have beliefs, or that they necessarily have accurate beliefs. There are contexts where there seems to be less pressure on us to form accurate beliefs (Stich (1990)). Importantly, there are two sub-cases to consider here. The first is the case in which there is little or no selection pressure for forming true beliefs, but there is no selection against _having_ beliefs. For example, Smead (2009) considers contexts in which there are evolutionary advantages for misperceiving the payoffs of a strategic interaction (section 3.4). The second is the one in which there is selection pressure against having beliefs altogether (or, more conservatively, there is no selection pressure for having beliefs). For example, Godfrey-Smith (1991, 1998), Sober (1994), Stephens (2001), and Smead (2015) have all developed models that characterize when an agent should (be expected to) learn from its environment and then select actions based on what it learned, and when it should not. This latter situation is one in which there is selection pressure against (or at least none for) forming beliefs.
This leads us to the conclusion that whether or not LLMs have beliefs is largely an empirical matter. In favour of the view expressed by folks like Shanahan and Bender et al., there certainly are contexts in which there is little to no selection pressure in favour of accurate beliefs, and indeed there are contexts that push against having beliefs altogether. On the other hand, there are plenty of contexts in which it is very useful to have an accurate map of the world in order to guide action. Indeed, our best theories of rational choice witness this.
## 6 Probing the Future
We've deployed both empirical and conceptual tools in order to investigate whether or not LLMs have beliefs and, if so, how we might measure them. We demonstrated that current probing techniques fail to generalize adequately, even when we set aside the conceptual question of whether or not it makes sense to ascribe beliefs to LLMs in the first place. We then considered two prominent arguments against the claim that LLMs have beliefs, and showed that they rest on a mistake. Ultimately, the status of beliefs in LLMs is (largely) an empirical question.
Here we outline two possible empirical paths one might pursue to make progress on measuring beliefs. These are certainly not the only two. Finally, we discuss a possible line of investigation one might take in order to gain clarity on whether or not it makes sense to attribute beliefs to LLMs.
### Applying Pressure for Truth
One of the insights in SS 5.1 is that contexts can apply different amounts of pressure to track the truth. This suggests a natural experiment to run: use prompt engineering to apply more pressure for the LLM to track truth, and then probe the LLM. For example, at a first pass, instead of just inputting a sentence like, Rome is the name of a country, we might input the prompt, I want you to think hard about the following sentence: 'Rome is the name of a country.'' Perhaps refined versions of such prompting will make the representations of truth more perspicuous and easier for probes to find.20
Footnote 20: The conceptual problems plaguing both types of probing techniques would still exist. However, the thought is that if the representation of truth is especially perspicuous, the probes might land upon such a representation naturally. This is a highly empirical question.
### Replacing Truth with Chance
So far the prompts we tested the probes on all had clear truth values. This has the advantage that we can use them for supervised learning, since we have the labels (the truth values). However, it also limits the ability to generate large datasets, since generating and labeling true sentences is costly. Furthermore, if we are selecting sentences that we think it is plausible the LLM might "know" (have high credence in), then our probing technique might not get good feedback on what the state of the LLM is like when it is more uncertain.
One way we might address these issues is by systematically generating prompts that describe chance set-ups, and then training and testing the probe on statements about outcomes. For example, we might prompt the model with something like, There is an urn with six yellow balls, four purple balls, and nothing else. A ball is drawn uniformly at random. Then, within the scope of that prompt, we can use statements like The ball drawn is purple, which we can easily label with a chance value of 0.4.
### The Question of World-Models
In SS5.1 we pushed back against the claims in Shanahan (2022) and Bender et al. (2021) that all an LLM is doing is text prediction, and thus cannot be tracking the truth. We did this by showing that there are many contexts in which tracking the truth is useful for other goals. We did not, however, fully address other parts of their arguments. In particular, both Shanahan and Bender et al. are worried that the predictions of LLMs are not generated using any "model of the world" (p. 616, Bender et al. (2021)), and thus that the internal states cannot "count as a belief about the world" (p. 6, Shanahan (2022)).
In contrast to the concern about doing mere sequence prediction, this concern focuses our attention on _how_ the LLM computes the distribution, and whether or not it does so in a way that corresponds to it having any kind of model of the world. For example, even if there is some sense in which the LLM does keep track of something like truth because of pressure to do so as described in SS5.1, a skeptic might reply that the representation the system is using is not truth-apt.
We thus think that a productive way to proceed would be to characterize the _latent variables21_ that an LLM uses to predict text, and to evaluate whether or not these variables correspond to anything we might want to consider a world-model.22 This requires both empirical work (in order to understand what kind of latent variables LLMs actually work with) and conceptual work (in order to understand what it would take for a latent variable to be truth-apt).23
Footnote 21: Latent variables are best understood in contrast to _observable_ variables. Suppose, for example, that you are trying to predict the outcomes of a series of coin tosses. The observable variables in this context would be the actual outcomes: heads and tails. A latent variable would be an _unobservable_ that you use to help make your predictions. For example, suppose you have different hypotheses about the bias of the coin and take the expected bias as your prediction for the probability of heads on the next toss. You are using your beliefs about the latent variable (the bias of the coin) to generate your beliefs about the observables.
Footnote 22: Using latent variables to compute probability distributions is commonplace in science and statistics (Everett (2013)). Though we do not have the space to do latent variable methods full justice, one reason for this is that using distributions over latent variables in order to calculate a distribution over observable variables can have massive computational benefits (see, for example, chapter 16 of Goodfellow et al. (2016)). Thus, it would be fairly surprising if there _weren’t_ a useful way to think of LLMs as using some kinds of latent variables in order to make predictions about the next token. Indeed, there is already some preliminary work on what sorts of latent variables LLMs might be working with (Xie et al. (2021); Jiang (2023)).
#### Acknowledgments
Thanks to Amos Azaria, Dylan Bowman, Nick Cohen, Jacqueline Harding, Aydin Mohseni, Bruce Rushing, Nate Sharadin, and audiences at UMass Amherst and the Center for AI Safety for helpful comments and feedback. Special thanks to Amos Azaria and Tom Mitchell jointly for access to their code and datasets. We are grateful to the Center for AI Safety for use of their compute cluster. B.L. was partly supported by a Mellon New Directions Fellowship (number 1905-06835) and by Open Philanthropy. D.H. was partly supported by a Long-Term Future Fund grant.
|
2302.14735 | IMU-based Online Multi-lidar Calibration | Modern autonomous systems typically use several sensors for perception. For
best performance, accurate and reliable extrinsic calibration is necessary. In
this research, we propose a reliable technique for the extrinsic calibration of
several lidars on a vehicle without the need for odometry estimation or
fiducial markers. First, our method generates an initial guess of the
extrinsics by matching the raw signals of IMUs co-located with each lidar. This
initial guess is then used in ICP and point cloud feature matching which
refines and verifies this estimate. Furthermore, we can use observability
criteria to choose a subset of the IMU measurements that have the highest
mutual information -- rather than comparing all the readings. We have
successfully validated our methodology using data gathered from Scania test
vehicles. | Sandipan Das, Bengt Boberg, Maurice Fallon, Saikat Chatterjee | 2023-02-28T16:39:27Z | http://arxiv.org/abs/2302.14735v2 | # IMU-based online multi-lidar calibration without lidar odometry
###### Abstract
When deploying autonomous systems that require several sensors for perception, accurate and reliable extrinsic calibration is required. In this research, we offer a reliable technique that can extrinsically calibrate numerous lidars in the base frame of a moving vehicle without the use of odometry estimation or fiducial markers. Our method is based on comparing the raw IMU signals between a collocated IMU present with the lidar and the IMU measurements from the GNSS system in the vehicle base frame. Additionally, based on our observability criterion, we choose measurements that include the most mutual information rather than comparing all comparable IMU readings. This enables us to locate the measurements that are most useful for real-time calibration. Utilizing data gathered from Scania test vehicles with various sensor setups, we have successfully validated our methodology.
## I Introduction
For safe navigation and redundancy, autonomous vehicles need numerous sensors to generate \(360^{\circ}\) sensing coverage. Therefore, it is crucial to establish the precise mounting location for the vehicle's sensors, a process known as extrinsic calibration. Without the proper extrinsic calibration parameters, it is impossible to fuse the sensing data into a single reference frame. We concentrated on multi-lidar system extrinsic calibration in our work. The same principle, though, might also be applied to other modalities.
There are two methods for extrinsic calibration: offline and online. Placing fiducial markers in the shared field-of-view (FoV) of both sensors allows the lidars' features to be matched offline [1, 2]. Online calibration is carried out by comparing the estimated states of one sensor to those of another in the relevant sensor frame [3, 4, 5, 6, 7]. Since the sensors may not share a common field of view, and since it requires less engineering effort than an offline procedure, online calibration is generally preferable. In online calibration, it is widely recognized that not all motion segments provide the information necessary for extrinsic calibration [8, 9, 10]. Hence, identifying degenerate motion segments helps to discard data that does not contribute to the extrinsic calibration computation.
We offer an alternative: an observability-aware online calibration method, run directly on the vehicle, that simultaneously calibrates multiple lidars using collocated IMU (inertial measurement unit) signals without the need for state estimation. The method extends seamlessly to other modalities and does not require a common FoV between the sensors. Additionally, we compensate for the biases of the IMU measurements online to improve the calibration accuracy.
### _Motivation_
As seen in Fig. 1, the mobile platforms we employed for our studies carry a wide variety of sensors. Therefore, it is essential to confirm that the sensors are correctly calibrated prior to any autonomous run. As a result, we must calibrate every sensor online with respect to a common reference frame, which is typically located at the center of the rear axle of the vehicle. The state estimates used for motion-based online calibration drift under the aggressive maneuvers that are needed to excite the different degrees of freedom, which can introduce inaccuracies into the calibration process.
This is why we propose calibrating the lidars by comparing the bias-compensated raw IMU signals of the IMU(s) collocated with the lidar(s) against the IMU signals from the GNSS system calibrated to the vehicle reference frame. Our approach is based on comparing raw measurements, as opposed to comparing estimated states, which by their very nature contain process noise. Furthermore, we compare only the raw angular velocity data for the observability analysis.
Fig. 1: Illustration of the 2 lidars with their embedded IMUs positioned around one data collection vehicle, when the sensors are calibrated. The vehicle base frame B is located at the center of the rear axle. The sensor frames of the lidars are: \(\mathrm{L}^{\left(\mathtt{F_{L}-Top}\right)}\) and \(\mathrm{L}^{\left(\mathtt{F_{R}-Top}\right)}\), whereas the sensor frames of the IMUs are: \(\mathrm{I}^{\left(\mathtt{F_{L}-Top}\right)}\) and \(\mathrm{I}^{\left(\mathtt{F_{R}-Top}\right)}\). \(\left(\mathtt{F_{L}}\right)\): front-left and \(\left(\mathtt{F_{R}}\right)\): front-right.
### _Contribution_
The extensive literature on motion-based calibration serves as the inspiration for our work. We would want to contribute the following:
* An observability-aware extrinsic calibration algorithm to calibrate multiple lidars online using collocated IMUs without the need of lidar odometry estimation.
* Online bias compensation of the IMU signals to improve the quality of the extrinsic calibration algorithm.
* Verification of our method based on data collected from Scania autonomous vehicles with the sensor setup shown in Fig. 2, with FoV schematics similar to Fig. 1.
## II Related Work
Extrinsic sensor calibration is a well studied area. In our discussion we briefly review the relevant literature and motivate our choice of method.
### _Offline calibration_
Offline extrinsic calibration is performed by matching known markers placed in the common FoV of multiple sensors. To calibrate a multi-lidar system, geometric elements including points, lines, and planes were retrieved and matched in [1, 2].
### _Online calibration_
Online extrinsic calibration based on the Hand-Eye method [11, 12, 3] is an extensively studied topic. The term "Hand-Eye" refers to early motion-based calibration research that estimated the motion of the gripper (hand) and the camera (eye) while constricting their poses with a fixed rigid body transformation. Multi-camera extrinsic calibration with the Hand-Eye method has been explored in works like Camodocal [4] and recently extended to multi-lidar extrinsic calibration [13, 6, 14].
Instead of performing Hand-Eye calibration, in Kalibr [8] the authors automatically detected sets of measurements from which they could identify an observable parameter space and then performed a maximum likelihood estimate (MLE) by minimizing the errors between landmark observations and their known correspondences. They also discarded parameter updates for numerically unobservable directions and degenerate scenarios. Kalibr was extended towards MIMU calibration [15] based on matching poses fitted to a B-spline and further constraining with known image landmarks. In MIMC-VINS [16], an efficient multi-state constraint Kalman filter was used to jointly propagate all IMU states while enforcing rigid body constraints between the IMUs during the filter update stage, which produced the MIMU calibration.
We claim that, because our setup has IMUs embedded in the sensor units, calibrating the multi-IMU (MIMU) configuration will enable us to retrieve the extrinsics of the multi-lidar system, given that the lidar-IMU extrinsics are known. We also selected only measurements with sufficient signal excitation, an approach we refer to as our observability-aware criteria. Nilsson et al. [17] placed their MIMU setup in a housing with noncoplanar orientations for observability analysis and matched the IMU measurements using MLE. Since we wanted to calibrate the sensors directly in the vehicle, our observability-aware criteria closely resemble the work of Jiajun et al. [7]. However, unlike prior work, we maximized mutual information between the angular velocity signals to identify relevant motion segments.
Unlike prior work, we used the fundamental tenet that a rigid body's angular velocity is the same at every point of the body as the foundation for our MIMU calibration. We used MLE to match the raw angular velocities between the IMU(s) embedded within the lidar(s) and the GNSS system, and recovered the extrinsics in accordance with the signal-to-signal matching principle [18]. We also estimated the IMU biases online by capturing data in a special sequence, removing the need for any exteroceptive modality.
## III Problem Statement
### _Sensor platform and reference frames_
The sensor platform with its corresponding reference frames is shown in Fig. 2 along with the illustrative sensor FoV of the bus in Fig. 1. Each of the sensor housings contains a lidar with an embedded IMU and two cameras. Please note the cameras are not used for this work. We used logs from a bus and a truck with similar sensor housings for our experiments.
Now we describe the necessary notation and reference frames used in our system. The vehicle base frame, B is located in the center of the rear axle of the vehicle. Sensor readings from lidars and IMUs are represented in their respective sensor frames as \(\mathrm{L}^{(k)}\), and \(\mathrm{I}^{(k)}\) respectively. Here, \(k\in[\mathtt{F}_{\mathrm{L}}-\mathtt{Top},\mathtt{F}_{\mathrm{R}}-\mathtt{ Top}]\) denotes the location of the sensor in the vehicle corresponding to front-left-top and front-right-top respectively. The GNSS measurements are reported in a world fixed frame, W, and transformed to B frame by performing a calibration routine outside the scope of this work. The GNSS system also has an embedded IMU which is transformed to the base frame, B as well. In our discussions the transformation matrix is denoted as, \(\mathbf{T}=\left[\begin{array}{cc}\mathbf{R}_{3\times 3}&\mathbf{t}_{3 \times 1}\\ \mathbf{0}^{\top}&1\end{array}\right]\in\mathrm{SE}(3)\) and \(\mathbf{R}\mathbf{R}^{T}=\mathbf{I}_{3\times 3}\), since the rotation matrix is orthogonal.
### _Problem formulation_
Our primary goal is to estimate the extrinsic calibration of multiple lidar sensors in base frame, B in real-time without the need of any fiducial markers. We only use the raw IMU
Fig. 2: Reference frames conventions for our vehicle platform. The world frame W is a fixed frame, while the base frame B, as shown in Fig. 1, is located at the rear axle center of the vehicle. Each sensor unit contains the two optical frames C, an IMU frame, I, and lidar frame L.
signals from \(\mathbf{I}^{(k)}\) and the \(\mathtt{B}\) frame (from the GNSS system) and use the known extrinsics \(\mathbf{T}_{\mathrm{I}^{(k)}\mathrm{L}^{(k)}}\) from the sensor supplier to estimate \(\mathbf{T}_{\mathtt{BL}^{(k)}}\). Additionally, we estimate the IMU biases online to denoise the raw IMU signals, which improves the calibration quality.
## IV Methodology
### _Initialization_
The IMU measurements are in their corresponding sensor frame. For gravity alignment, we use the equations [19, Eq. 25, 26] to estimate roll and pitch and obtain \(\mathtt{R}_{\mathtt{WB}}\) after collecting IMU data when the vehicle is static for a few seconds.
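As an illustration, a minimal sketch of this initialization is given below; it uses one common accelerometer-based roll/pitch formulation, whose sign conventions may differ from the exact equations in [19], and assumes `acc_static` holds the accelerometer samples collected while the vehicle is at rest.

```python
# Minimal sketch of gravity-aligned attitude initialization from a static window
# of accelerometer data. Sign conventions are one common choice and may differ
# from [19, Eq. 25, 26]; acc_static is an (N x 3) numpy array.
import numpy as np

a = acc_static.mean(axis=0)                              # average out white noise
roll = np.arctan2(a[1], a[2])                            # rotation about x
pitch = np.arctan2(-a[0], np.sqrt(a[1]**2 + a[2]**2))    # rotation about y

def rotation_from_roll_pitch(roll, pitch):
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    # Gravity-aligned rotation up to yaw (yaw is unobservable from gravity alone);
    # whether this is R_WB or its transpose depends on the frame convention used.
    return Ry @ Rx

R_gravity_align = rotation_from_roll_pitch(roll, pitch)
```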
### _IMU sensor model_
We have considered a 6-DoF (degree of freedom) IMU such that it has a 3-axis accelerometer and 3-axis gyroscope. The IMU sensor data in its corresponding sensor frame can be represented as,
\[\left[\begin{array}{c}\boldsymbol{\omega}_{\mathbf{I}_{t}}\\ \mathbf{a}_{\mathbf{I}_{t}}\end{array}\right]=\left[\begin{array}{c}\boldsymbol{\hat{\omega}}_{\mathbf{I}_{t}}\\ \hat{\mathbf{a}}_{\mathbf{I}_{t}}-\mathbf{R}^{T}_{\mathtt{WI}_{t}}\mathbf{g}_{\mathtt{W}}\end{array}\right]+\left[\begin{array}{c}\mathbf{b}^{\omega}_{\mathbf{I}_{t}}\\ \mathbf{b}^{a}_{\mathbf{I}_{t}}\end{array}\right]+\left[\begin{array}{c}\mathbf{n}^{\omega}_{\mathbf{I}_{t}}\\ \mathbf{n}^{a}_{\mathbf{I}_{t}}\end{array}\right]. \tag{1}\]
Here, \([\boldsymbol{\omega}_{\mathbf{I}_{t}}\in\mathbb{R}^{3\times 1},\mathbf{a}_{\mathbf{I}_{t}}\in\mathbb{R}^{3\times 1}]\) represent the measured angular velocity and linear acceleration at timestamp \(t\) and \([\boldsymbol{\hat{\omega}}_{\mathbf{I}_{t}},\hat{\mathbf{a}}_{\mathbf{I}_{t}}]\) represent the latent ideal angular velocity and linear acceleration respectively. \(\mathbf{b}^{\omega}_{\mathbf{I}_{t}}\in\mathbb{R}^{3\times 1}\) and \(\mathbf{b}^{a}_{\mathbf{I}_{t}}\in\mathbb{R}^{3\times 1}\) represent the gyro and accelerometer biases which change with time and other factors like temperature. \(\mathbf{n}^{\omega}_{\mathbf{I}_{t}}\sim\mathcal{N}(\mathbf{0},\boldsymbol{\Sigma}_{\omega})\) and \(\mathbf{n}^{a}_{\mathbf{I}_{t}}\sim\mathcal{N}(\mathbf{0},\boldsymbol{\Sigma}_{a})\) are the additive zero-mean white Gaussian noises for gyroscope and accelerometer with covariance \(\boldsymbol{\Sigma}_{\omega}\in\mathbb{R}^{3\times 3}\) and \(\boldsymbol{\Sigma}_{a}\in\mathbb{R}^{3\times 3}\) respectively. \(\mathbf{g}_{\mathtt{W}}\in\mathbb{R}^{3\times 1}\) represents the gravity vector in the \(\mathtt{W}\) frame and \(\mathbf{R}_{\mathtt{BW}}\) represents the gravity alignment rotation matrix.
### _IMU bias characterization_
In our method, since we are not using an external odometry source for bias compensation we collect the data sequences with periods of rest in between. This limits the bias covariance growth as they converge faster after rest detection.
#### Iv-C1 IMU state propagation
The IMU dynamical model based on angular velocity in quaternion form is shown as,
\[\mathbf{q}_{\mathtt{BI}_{t}}=\left[\cos\left(\frac{\Delta t}{2}\left\| \boldsymbol{\omega}_{\mathbf{I}_{t}}\right\|\right)\mathbf{I}_{4}+\frac{\sin \left(\frac{\Delta t}{2}\left\|\boldsymbol{\omega}_{\mathbf{I}_{t}}\right\| \right)}{\left\|\boldsymbol{\omega}_{\mathbf{I}_{t}}\right\|}\mathbf{\Omega}_{ \mathbf{I}_{t}}\right]\mathbf{q}_{\mathtt{BI}_{t-1}} \tag{2}\]
and,
\[\mathbf{\Omega}_{\mathbf{I}_{t}}=\left[\begin{array}{ccc}0&\left[ \boldsymbol{\omega}_{\mathbf{I}_{t}}\right]_{z}&-\left[\boldsymbol{\omega}_{ \mathbf{I}_{t}}\right]_{y}&\left[\boldsymbol{\omega}_{\mathbf{I}_{t}}\right]_ {x}\\ -\left[\boldsymbol{\omega}_{\mathbf{I}_{t}}\right]_{z}&0&\left[\boldsymbol{ \omega}_{\mathbf{I}_{t}}\right]_{x}&\left[\boldsymbol{\omega}_{\mathbf{I}_{t}} \right]_{y}\\ \left[\boldsymbol{\omega}_{\mathbf{I}_{t}}\right]_{y}&-\left[\boldsymbol{\omega}_ {\mathbf{I}_{t}}\right]_{x}&0&\left[\boldsymbol{\omega}_{\mathbf{I}_{t}} \right]_{z}\\ -\left[\boldsymbol{\omega}_{\mathbf{I}_{t}}\right]_{x}&-\left[\boldsymbol{\omega}_ {\mathbf{I}_{t}}\right]_{y}&-\left[\boldsymbol{\omega}_{\mathbf{I}_{t}} \right]_{z}&0\end{array}\right]. \tag{3}\]
\(\Delta t\) denotes the sampling period of the IMU data. We used the Madgwick filter [20] to estimate the refined rotation which minimizes the difference between the measured acceleration and the aligned gravity vector using a gradient descent algorithm as,
\[\mathbf{q}^{*}_{\mathtt{BI}_{t}}=\operatorname*{arg\,min}_{\mathbf{q}_{\mathtt{BI}_{t}}\in\mathbb{R}^{4\times 1}}\left\|\,\mathbf{\tilde{q}}_{\mathtt{BI}_{t}}\otimes\mathbf{R}^{-1}_{\mathtt{BW}}\mathbf{g}_{\mathtt{W}}\otimes\mathbf{q}_{\mathtt{BI}_{t}}-\hat{\mathbf{a}}_{\mathbf{I}_{t}}\,\right\|^{2}, \tag{4}\]
where, \(\mathbf{\tilde{q}}\) represents the quaternion conjugate and \(\otimes\) is the quaternion product operator. Note that we use the compensated accelerometer signals after bias compensation which is discussed in IV-C3.
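A minimal sketch of the zeroth-order propagation in eqs. (2)-(3) is given below; it assumes quaternions are stored as 4-vectors matching the ordering implied by the \(\mathbf{\Omega}\) matrix above (vector part first, scalar last), with `dt` the IMU sampling period.

```python
# Minimal sketch of quaternion propagation from a gyro sample (eqs. (2)-(3)).
# q is a 4-vector with the scalar part last, matching the Omega matrix layout.
import numpy as np

def omega_matrix(w):
    wx, wy, wz = w
    return np.array([
        [0.0,  wz, -wy,  wx],
        [-wz, 0.0,  wx,  wy],
        [ wy, -wx, 0.0,  wz],
        [-wx, -wy, -wz, 0.0],
    ])

def propagate_quaternion(q_prev, w, dt):
    n = np.linalg.norm(w)
    if n < 1e-12:
        return q_prev                              # negligible rotation
    A = np.cos(0.5 * dt * n) * np.eye(4) + (np.sin(0.5 * dt * n) / n) * omega_matrix(w)
    q = A @ q_prev
    return q / np.linalg.norm(q)                   # guard against numerical drift
```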
#### Iv-C2 Rest detection
Rest is detected if the difference between the norm of the aligned gravity vector and the acceleration vector is less than a predefined threshold \(\tau\) for at least 2 seconds. Thus,
\[\mathtt{Rest\,detected}=\parallel\ \mathbf{\tilde{q}}^{*}_{\mathtt{BI}_{t}} \otimes\mathbf{g}_{\mathtt{B}}\otimes\mathbf{q}^{*}_{\mathtt{BI}_{t}}-\mathbf{ \hat{a}}_{\mathtt{BI}_{t}}\parallel_{2}\leq\tau \tag{5}\]
#### Iv-C3 Accelerometer bias estimation
The accelerometer bias is estimated by computing the mean of the signal when stationary, and the standard deviation gives us the covariance of the white Gaussian noise. We recompute these parameters whenever the vehicle is stationary and keep them unchanged until the next rest period. If the rest period duration is \(N\) seconds then,
\[\mathbf{b}^{a} =\frac{\Delta t}{N}\sum_{i=1}^{N}(\mathbf{a}_{\mathbf{I}_{t}}- \mathbf{R}^{*}_{\mathtt{BI}_{t}}\mathbf{R}^{-1}_{\mathtt{BW}}\mathbf{g}_{ \mathtt{W}}), \tag{6}\] \[\boldsymbol{\Sigma}_{a} =\mathtt{diag}\left(\frac{\Delta t}{N}\sum_{i=1}^{N}|\mathbf{a}_{ \mathbf{I}_{i}}-\mathbf{b}^{a}|^{2}\right),\] \[\implies\mathbf{\hat{a}}_{\mathbf{I}_{t}} =\mathbf{a}_{\mathbf{I}_{t}}+\mathbf{R}^{*}_{\mathtt{BI}_{t}} \mathbf{R}^{-1}_{\mathtt{BW}}\mathbf{g}_{\mathtt{W}}-\mathbf{b}^{a}-\mathcal{N}( 0,\boldsymbol{\Sigma}_{a}).\]
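A minimal sketch of the rest detection and accelerometer-bias update described above is shown below, assuming `acc` is a window of raw accelerometer samples, `R_BI_seq` the corresponding attitude estimates, and `R_BW`, `g_W` the gravity alignment rotation and gravity vector; variable names are illustrative.

```python
# Minimal sketch of rest detection (eq. (5)) and accelerometer bias estimation
# (eq. (6)). acc is (N x 3); R_BI_seq is a sequence of 3x3 attitude estimates;
# g_W is the gravity vector in W and R_BW the gravity-alignment rotation.
import numpy as np

def is_at_rest(acc, R_BI_seq, R_BW, g_W, tau=0.05):
    # Rest: the measured specific force stays close to the rotated gravity
    # vector over the whole window (the window should span at least 2 s).
    g_terms = np.stack([R @ np.linalg.inv(R_BW) @ g_W for R in R_BI_seq])
    return np.all(np.linalg.norm(acc - g_terms, axis=1) <= tau)

def accel_bias(acc, R_BI_seq, R_BW, g_W):
    # Eq. (6): mean deviation from the rotated gravity vector over the rest
    # window, and the spread of the residual as the noise covariance.
    g_terms = np.stack([R @ np.linalg.inv(R_BW) @ g_W for R in R_BI_seq])
    dev = acc - g_terms
    b_a = dev.mean(axis=0)
    Sigma_a = np.diag(np.mean((acc - b_a) ** 2, axis=0))
    return b_a, Sigma_a
```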
#### Iv-C4 Gyro bias estimation
The gyro bias is initialized in a similar fashion to the accelerometer bias and recomputed whenever a rest period is detected. Thus,
\[\text{System State},\hat{\mathbf{x}}_{0} =\mathbf{b}^{\omega}_{\mathbf{I}_{0}}=\frac{\Delta t}{N}\sum_{i=1 }^{N}(\boldsymbol{\omega}_{\mathbf{I}_{i}}) \tag{7}\] \[\text{Measurement Noise},\mathbf{W}_{0} =\mathcal{N}(0,\boldsymbol{\Sigma}_{\omega})\] \[\text{Process Noise},\mathbf{Q} =(0.1^{\circ}/\mathtt{sec})^{2}\mathbf{I}_{3\times 3}\] \[\text{Initial Covariance},\mathbf{P}_{0} =(0.3^{\circ}/\mathtt{sec})^{2}\mathbf{I}_{3\times 3}.\]
After that we use a Kalman filter [21] to track the gyroscope bias as a state. The system is modeled by,
\[\mathbf{x}_{t} =\mathbf{\hat{x}}_{t-1}+\mathcal{N}(0,\mathbf{Q}) \tag{8}\] \[\mathbf{y}_{t} =\mathbf{R}^{*}_{\mathtt{BI}_{t}}\mathbf{x}_{t}+\mathbf{W}_{t}.\]
The standard Kalman filter update equations become,
\[\mathbf{P}_{t} =\mathbf{P}_{t-1}+\mathbf{Q} \tag{9}\] \[\mathbf{K}_{t} =\mathbf{P}_{t}\mathbf{R}^{*T}_{\mathtt{BI}_{t}}(\boldsymbol{\Sigma}_{\omega}+\mathbf{R}^{*}_{\mathtt{BI}_{t}}\mathbf{P}_{t}\mathbf{R}^{*T}_{\mathtt{BI}_{t}})^{-1}\] \[\hat{\mathbf{x}}_{t} =\hat{\mathbf{x}}_{t-1}+\mathbf{K}_{t}(\mathbf{y}_{t}-\mathbf{R}^{*}_{\mathtt{BI}_{t}}\hat{\mathbf{x}}_{t-1})\] \[\mathbf{P}_{t} =\mathbf{P}_{t}-\mathbf{K}_{t}\mathbf{R}^{*}_{\mathtt{BI}_{t}}\mathbf{P}_{t}.\]
Thus, after estimating the biases, we can rearrange Eq. 1 to compute \(\boldsymbol{\hat{\omega}}_{\mathbf{I}_{t}}\).
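For concreteness, one predict/update cycle of Eqs. 8-9 can be sketched as follows. This is a minimal Python sketch under our own assumptions: how the bias observation `y_meas` is formed from the orientation estimate is not spelled out here, and all identifiers are hypothetical rather than the implementation used in this work.

```python
import numpy as np

def gyro_bias_kf_step(x_hat, P, y_meas, R_star, Sigma_omega, Q):
    """One Kalman-filter cycle for the gyroscope-bias state of Eqs. 8-9.
    x_hat: (3,) bias estimate; P: (3,3) covariance; y_meas: (3,) bias observation;
    R_star: (3,3) estimated rotation R*_{BI}; Sigma_omega, Q: (3,3) covariances."""
    P = P + Q                                        # prediction: P_t = P_{t-1} + Q
    S = Sigma_omega + R_star @ P @ R_star.T          # innovation covariance
    K = P @ R_star.T @ np.linalg.inv(S)              # Kalman gain
    x_hat = x_hat + K @ (y_meas - R_star @ x_hat)    # state update
    P = P - K @ R_star @ P                           # covariance update
    return x_hat, P
```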
### _IMU-based lidar calibration_
#### Iv-D1 Rotation estimation
The extrinsics of \(\mathtt{L}^{(k)}\) _wrt_ the \(\mathtt{B}\) frame have a strong prior from the vehicle CAD parameters and are tied to the extrinsics of the collocated IMU \(\mathtt{I}^{(k)}\). For the rotation component, we exploit the fact that the angular velocities measured by the base IMU and by each collocated IMU are the same because they are all securely affixed to the vehicle. Let [\(\hat{\boldsymbol{\omega}}_{\mathbf{I}_{i}^{(k)}}\), \(\hat{\mathbf{a}}_{\mathbf{I}_{i}^{(k)}}\)] and [\(\hat{\boldsymbol{\omega}}_{\mathbf{B}_{i}}\), \(\hat{\mathbf{a}}_{\mathbf{B}_{i}}\)] be the estimated angular velocity and linear acceleration of the \(k^{th}\) IMU and the base IMU at timestamp \(t_{i}\) after bias compensation. Dropping the \(k^{th}\) superscript for brevity, our optimization problem becomes,
\[\begin{split}\mathbf{R}_{\mathtt{BI}}^{\star}&=\operatorname*{arg\,min}_{\mathbf{R}_{\mathtt{BI}}}\sum_{i=1}^{N}\|\mathbf{R}_{\mathtt{BI}}\hat{\boldsymbol{\omega}}_{\mathbf{B}_{i}}-\hat{\boldsymbol{\omega}}_{\mathbf{I}_{i}}\|_{\boldsymbol{\Sigma}_{i}}^{2},\\ \text{s.t.}&\quad\mathbf{R}_{\mathtt{BI}}\mathbf{R}_{\mathtt{BI}}^{T}=\mathbf{I}_{3},\end{split} \tag{10}\]
which can be solved directly with Horn alignment [22] or Kabsch alignment [23].
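A minimal Python sketch of the Kabsch solution to Eq. 10 is given below; it omits the covariance weighting \(\boldsymbol{\Sigma}_{i}\) for brevity, and the function and variable names are our own illustrative assumptions.

```python
import numpy as np

def kabsch_rotation(omega_base, omega_imu):
    """Find R_BI minimising sum_i ||R_BI w_B,i - w_I,i||^2 over paired,
    bias-compensated angular-velocity samples (both arrays of shape (N, 3))."""
    H = omega_base.T @ omega_imu              # 3x3 cross-covariance of the pairs
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                # guard against reflections
    return Vt.T @ D @ U.T                     # rotation taking base-frame vectors to the IMU frame
```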
#### Iv-B2 Translation estimation
For the acceleration components, we compensate for the rotation-induced (centrifugal and Euler) forces as illustrated in Fig. 3 and equate the translation components as,
\[\begin{split}(\mathbf{R}_{\mathtt{BI}}^{\star})^{-1}\hat{\mathbf{a}}_{\mathtt{I}}&=\hat{\mathbf{a}}_{\mathtt{B}}+\underbrace{\hat{\boldsymbol{\omega}}_{\mathtt{B}}\times(\hat{\boldsymbol{\omega}}_{\mathtt{B}}\times\mathbf{t}_{\mathtt{BI}})}_{\text{Centrifugal force}}+\underbrace{\dot{\hat{\boldsymbol{\omega}}}_{\mathtt{B}}\times\mathbf{t}_{\mathtt{BI}}}_{\text{Euler force}}\\ &=\hat{\mathbf{a}}_{\mathtt{B}}+[\hat{\boldsymbol{\omega}}_{\mathtt{B}}]_{\times}^{2}\mathbf{t}_{\mathtt{BI}}+[\dot{\hat{\boldsymbol{\omega}}}_{\mathtt{B}}]_{\times}\mathbf{t}_{\mathtt{BI}},\end{split} \tag{11}\]
where, \([\cdot]_{\times}\) is a skew-symmetric matrix and \([\mathbf{a}]_{\times}\mathbf{b}=\mathbf{a}\times\mathbf{b}\).
Since, we already computed \(\mathbf{R}_{\mathtt{BI}}^{\star}\), our optimization problem for the translation component becomes,
\[\mathbf{t}_{\mathtt{BI}}^{\star}=\operatorname*{arg\,min}_{\mathbf{t}_{\mathtt{BI}}}\sum_{i=1}^{N}\|\underbrace{([\hat{\boldsymbol{\omega}}_{\mathbf{B}_{i}}]_{\times}^{2}+[\dot{\hat{\boldsymbol{\omega}}}_{\mathbf{B}_{i}}]_{\times})}_{\mathbf{A}_{i}}\mathbf{t}_{\mathtt{BI}}-\underbrace{((\mathbf{R}_{\mathtt{BI}}^{\star})^{-1}\hat{\mathbf{a}}_{\mathbf{I}_{i}}-\hat{\mathbf{a}}_{\mathbf{B}_{i}})}_{\mathbf{B}_{i}}\|^{2}, \tag{12}\]
where \(\mathbf{A}=[\mathbf{A}_{1}\ \mathbf{A}_{2}\ \cdots\ \mathbf{A}_{N}]^{T}\), \(\mathbf{B}=[\mathbf{B}_{1}\ \mathbf{B}_{2}\ \cdots\ \mathbf{B}_{N}]^{T}\) with \(\mathbf{A}_{i}\), \(\mathbf{B}_{i}\) as above for \(i\in[1,N]\), and \(\mathbf{x}=\mathbf{t}_{\mathtt{BI}}\). This is a system of linear equations of the form \(\mathbf{A}\mathbf{x}=\mathbf{B}\), which can be solved using a least-squares approach as,
\[\mathbf{x}^{\star}=\operatorname*{arg\,min}_{\mathbf{x}}\|\mathbf{A}\mathbf{x }-\mathbf{B}\|_{\mathbf{\Sigma}}^{2}, \tag{13}\]
where \(\mathbf{\Sigma}\) denotes the covariance of the residual. The main approach [24] used to tackle these problems involves repeatedly solving a series of approximations to the original problem obtained by linearizing as \(F(\mathbf{x}+\Delta\mathbf{x})\approx F(\mathbf{x})+\mathbf{J}(\mathbf{x})\Delta\mathbf{x}\), where \(\mathbf{J}\) is the Jacobian of \(F(\mathbf{x})\). Thus, \(\hat{\mathbf{x}}\) is updated in the current iteration as \(\hat{\mathbf{x}}\leftarrow\hat{\mathbf{x}}\boxplus\mathbf{\Delta}\mathbf{x}\), where \(\boxplus\) is an addition operator in the manifold, and the problem becomes,
\[\mathbf{\Delta}\mathbf{x}^{\star}=\operatorname*{arg\,min}_{\mathbf{\Delta}\mathbf{x}}\frac{1}{2}\|(\mathbf{A}\hat{\mathbf{x}}-\mathbf{B})+\mathbf{A}\mathbf{\Delta}\mathbf{x}\|_{\mathbf{\Sigma}}^{2}. \tag{14}\]
The optimal solution is given by,
\[\underbrace{\left(\mathbf{J}^{\top}\mathbf{\Sigma}^{-1}\mathbf{J}\right)}_{\text{ Fisher information matrix}}\mathbf{\Delta}\mathbf{x}=-\mathbf{J}^{\top}\mathbf{\Sigma}^{-1}(\mathbf{A}\hat{ \mathbf{x}}-\mathbf{B}). \tag{15}\]
In practice, since we receive a reliable initial estimate of the translation component from the CAD parameters, we search for \(\mathbf{x}^{\star}\) only in a local neighborhood within reasonable bounds. Thus, we solve a bounded variable least squares problem as,
\[\mathbf{x}^{\star}=\operatorname*{arg\,min}_{\mathbf{L}\leq\mathbf{x}\leq \mathbf{U}}\|\mathbf{A}\mathbf{x}-\mathbf{B}\|_{\mathbf{\Sigma}}^{2}, \tag{16}\]
where, \(\mathbf{L}\) and \(\mathbf{U}\) are the lower and upper bounds of \(\mathbf{x}\) respectively. Thus the solution space in Eq. 14 is modified with an additional constraint as,
\[\begin{split}\mathbf{\Delta}\mathbf{x}^{\star}&=\operatorname*{arg\,min}_{\mathbf{\Delta}\mathbf{x}}\frac{1}{2}\|(\mathbf{A}\hat{\mathbf{x}}-\mathbf{B})+\mathbf{A}\mathbf{\Delta}\mathbf{x}\|_{\mathbf{\Sigma}}^{2},\\ \text{s.t.}&\quad\mathbf{L}\leq\hat{\mathbf{x}}+\mathbf{\Delta}\mathbf{x}\leq\mathbf{U}.\end{split} \tag{17}\]
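The translation estimation of Eqs. 12 and 16 can be sketched in a few lines of Python. This is a minimal, illustrative sketch: the angular acceleration is assumed to be available (e.g., by numerically differentiating the gyro signal), the covariance weighting is omitted, and `scipy.optimize.lsq_linear` is used only as one possible bounded least-squares solver; none of the identifiers are from the original implementation.

```python
import numpy as np
from scipy.optimize import lsq_linear

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def estimate_translation(omega_b, omega_dot_b, acc_b, acc_i, R_bi, t_prior, bound=0.3):
    """Stack A_i = [w]_x^2 + [w_dot]_x and B_i = R_BI^{-1} a_I,i - a_B,i (Eq. 12),
    then solve the bounded least-squares problem of Eq. 16 around the CAD prior."""
    A_blocks, B_blocks = [], []
    for w, wd, ab, ai in zip(omega_b, omega_dot_b, acc_b, acc_i):
        A_blocks.append(skew(w) @ skew(w) + skew(wd))
        B_blocks.append(R_bi.T @ ai - ab)
    A = np.vstack(A_blocks)                  # (3N, 3)
    B = np.concatenate(B_blocks)             # (3N,)
    res = lsq_linear(A, B, bounds=(t_prior - bound, t_prior + bound))
    return res.x
```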
### _Observability analysis_
Not all motions excite enough degrees of freedom to allow calibration. As a result, it is important to identify suitable motion segments for information-aware calibration updates. We capture this information by comparing the angular velocities between the \(\mathtt{I}^{(k)}\) and \(\mathtt{B}\) frames based on Eq. 10, which can also be solved iteratively with the update step,
\[\underbrace{\left(\sum_{i}\mathbf{J}_{i}^{\top}\Sigma_{i}^{-1}\mathbf{J}_{i}\right)}_{\text{Fisher information matrix}}\mathbf{\Delta}\mathbf{R}_{\mathtt{BI}}=-\sum_{i}\mathbf{J}_{i}^{\top}\Sigma_{i}^{-1}(\hat{\mathbf{R}}_{\mathtt{BI}}\boldsymbol{\omega}_{\mathbf{B}_{i}}-\boldsymbol{\omega}_{\mathbf{I}_{i}}), \tag{18}\]
where \(\mathbf{\Sigma}_{i}=\operatorname{cov}(\hat{\mathbf{R}}_{\mathtt{BI}}\boldsymbol{\omega}_{\mathbf{B}_{i}}-\boldsymbol{\omega}_{\mathbf{I}_{i}})\) and \(\mathbf{J}_{i}=\boldsymbol{\omega}_{\mathbf{B}_{i}}^{T}\). The Fisher information matrix, \(\mathcal{I}_{N\times N}\), captures all the information contained in the measurements. We process the data after arranging it in batches of size \(N\). We perform a Singular Value Decomposition of \(\mathcal{I}_{N\times N}\) for each batch as:
\[\mathcal{I}_{N\times N}=\mathbf{U}\mathbf{S}\mathbf{U}^{T}, \tag{19}\]
where, \(\mathbf{U}=[\mathbf{u}_{1},\mathbf{u}_{2}\ldots,\mathbf{u}_{N}]\) and \(\mathbf{S}=\text{diag}(\sigma_{1},\sigma_{2}\ldots,\sigma_{N})\) is a diagonal matrix of singular values in decreasing order. Information about the data in the batch is indicated by the value of the minimal singular value. If the minimal singular value exceeds a certain threshold (design decision), we may say that there are sufficient excitations in the batch of data to allow for extrinsics computation and hence chosen for calibration.
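The batch-selection test can be sketched as follows. This is a minimal Python sketch under our own assumptions: with \(\mathbf{J}_{i}=\boldsymbol{\omega}_{\mathbf{B}_{i}}^{T}\) we accumulate a \(3\times 3\) information matrix rather than the batch-sized matrix of the text, and the threshold value is purely illustrative.

```python
import numpy as np

def batch_is_informative(omega_base_batch, threshold=1e-7):
    """Accept a batch for calibration if the smallest singular value of the
    accumulated information matrix sum_i w_i w_i^T exceeds a chosen threshold."""
    info = np.zeros((3, 3))
    for w in omega_base_batch:                 # rows of (N, 3) angular velocities
        info += np.outer(w, w)
    sigmas = np.linalg.svd(info, compute_uv=False)
    return bool(sigmas.min() > threshold)
```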
## V Experimental Results
To demonstrate the online calibration performance, we conducted several experiments with real-world data collected using two vehicles with the sensor setup illustrated in Fig. 2.
### _Dataset_
We use data from a GNSS receiver for the GT (ground truth) poses. We analyze our results on 2 different collected test sequences which are described in detail in Table I, covering different driving scenarios. In Seq-1, we performed aggressive motion to excite the possible degrees of freedom.
Fig. 3: Illustration of IMU signal transformation between B and \(\mathtt{I}^{(\mathtt{F}_{\mathtt{E}}-\mathtt{Top})}\) frame.
However, in Seq-2 we recorded data in a normal driving scenario in regular traffic conditions. In Seq-1, there were no periods of rest whereas in Seq-2 there were occasional periods of rest due to traffic signals.
We evaluated the performance of a state-of-the-art lidar odometry estimator (Fast-lio2 [25]) on both the motion sequences as seen in Fig. 4 and observe that the estimator is sensitive to the aggressive motion sequence, Seq-1. The RMSE of the absolute pose error (APE) for Seq-1 is 2.292 \(\mathrm{m}\) and 2.1063 \(\mathrm{m}\) for \(\mathrm{F_{L}-Top}\) and \(\mathrm{F_{R}-Top}\) lidars respectively, which is not suitable for motion-based calibration using pose alignment strategies. As seen in Fig. 4(a), the Kabsch alignment [23] algorithm does not lead us to correct trajectory alignment. For Seq-2, the APE is less than 0.5 \(\mathrm{m}\) for both lidars when using Fast-lio2. However, this motion sequence does not have enough excitation in all the different degrees of freedom and thus is not a great choice for a calibration motion sequence using our proposed method. We did not run the state estimator for the whole sequence as not enough excitation for the different degrees of freedom was present in the data.
### _Bias estimation results_
To improve our calibration performance, which relies on the signal-to-signal match policy, we compensate for the biases in both the accelerometer and angular velocity signals. The accelerometer biases are recomputed whenever a period of rest is detected. The angular velocity biases are tracked online with the Kalman filter. The update step is based on the orientation estimated by the Madgwick filter, which uses the bias-compensated accelerometer signals for gravity alignment. Thus, whenever a period of rest is detected, the covariances of the angular velocity biases converge.
In Seq-1, there were no periods of rest. Thus, the angular velocity bias covariances do not converge, as seen in Fig. 5. In contrast, in Fig. 6, we see that the angular velocity bias covariances converge after a rest period of a few seconds is detected at the beginning of Seq-2.
### _Online calibration results_
GT calibration of the lidars is obtained offline by refining the vehicle CAD parameters. The refinement process analyzes
Fig. 4: Odometry estimation of \(\mathrm{F_{L}-Top}\) and \(\mathrm{F_{R}-Top}\) lidars using using Fast-lio2 [25] for both the sequences. The trajectory alignment is done using Kabsch alignment [23].
Fig. 5: Bias estimation for the \(\mathrm{F_{L}-Top}\) angular velocity signals for Seq-1.
Fig. 6: Bias estimation for the \(\mathrm{F_{L}-Top}\) angular velocity signals for Seq-2.
known static feature positions around the vehicle in the world frame, \(\mathtt{W}\), and matches the corresponding detected features from the lidar perspective. After that, we obtain the extrinsics of the lidars to the base frame, \(\mathtt{B}\), and compute the transformation matrix, \(\mathbf{T}_{\mathtt{GT}}=\mathbf{T}_{\mathtt{BL}^{(\mathrm{F_{L}-Top})}}^{-1}\mathbf{T}_{\mathtt{BL}^{(\mathrm{F_{R}-Top})}}\), between the 2 lidars. We compare our results to this matrix by computing the translation and rotation errors as:
\[\Delta t =\frac{1}{3}\sqrt{\|\widehat{\mathbf{t}}-\mathbf{t}_{\mathrm{GT} }\|_{F}^{2}}, \tag{20}\] \[\Delta R =\frac{180}{\pi}\cos^{-1}\left[\frac{1}{2}\left(\mathrm{Tr}( \widehat{\mathbf{R}}^{-1}\mathbf{R}_{\mathrm{GT}})-1\right)\right], \tag{21}\]
where \(\Delta R\) is the rotation angle about the principal eigenvector of \((\widehat{\mathbf{R}}^{-1}\mathbf{R}_{\mathrm{GT}})\).
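For reference, Eqs. 20-21 amount to the short computation below; this is an illustrative Python sketch assuming the estimated and GT transforms are given as \(4\times 4\) homogeneous matrices, with all names hypothetical.

```python
import numpy as np

def calib_errors(T_hat, T_gt):
    """Translation (Eq. 20) and rotation (Eq. 21) errors between two 4x4 transforms."""
    t_hat, t_gt = T_hat[:3, 3], T_gt[:3, 3]
    R_hat, R_gt = T_hat[:3, :3], T_gt[:3, :3]
    dt = np.linalg.norm(t_hat - t_gt) / 3.0
    cos_angle = np.clip((np.trace(R_hat.T @ R_gt) - 1.0) / 2.0, -1.0, 1.0)
    dR_deg = np.degrees(np.arccos(cos_angle))
    return dt, dR_deg
```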
For Seq-1, we obtained \(\Delta t=0.3387\mathrm{m}\) and \(\Delta R=0.4577^{\circ}\); the original IMU signals, as well as the calibrated IMU signals between the 2 collocated IMUs, can be observed in Fig. 7 and Fig. 8 respectively. For Seq-2, the results were \(\Delta t=0.4431\mathrm{m}\) and \(\Delta R=4.7938^{\circ}\). For both sequences, the translation components were obtained by solving a bounded variable least squares optimization problem as defined in Eq. 16 with bounds of \(\pm 0.3\mathrm{m}\). The raw and calibrated signals for Seq-2 are additionally shown in our complementary video. As expected, in Seq-1, due to the presence of a rich motion sequence in terms of excitation of the different degrees of freedom, our signal matching algorithm produces better results than in Seq-2.
### _Observability analysis_
For observability analysis we extract the information matrix by comparing the angular velocity signals between the collocated IMU, \(\mathtt{I}^{(k)}\), and the \(\mathtt{B}\) frame by dividing the data into equal segments of 10 sec each. We analyze the SVD of the Fisher information matrix from Eq.18 and select the IMU data in the segment for calibration if the minimum singular value is greater than a threshold. For our experiments, we set the threshold as \(5^{-10}\).
As seen in Fig. 9(b), we selected all segments (highlighted in green) as there was enough excitation in the angular velocity throughout Seq-1. The corresponding selected trajectory segments are shown in Fig. 9(a). Similarly, for Seq-2, we can see in Fig. 10(a) that relevant poses for calibration are selected only when there are significant turns in the maneuvers, as observed in Fig. 10(b), meaning there is enough excitation for the different degrees of freedom.
Fig. 8: Calibrated IMU signals from \(\mathtt{F}_{\mathtt{L}}-\mathtt{Top}\) and \(\mathtt{F}_{\mathtt{R}}-\mathtt{Top}\) IMUs with bias compensation for Seq-1.
Fig. 7: Raw IMU signals from \(\mathtt{F}_{\mathtt{L}}-\mathtt{Top}\) and \(\mathtt{F}_{\mathtt{R}}-\mathtt{Top}\) IMUs for Seq-1.
Fig. 9: Seq-1: The observability criterion selects all the motion segments as there was enough excitation in the angular velocity signals in all of them.
## VI Conclusion
We show that it is possible to estimate the extrinsic calibration of multiple lidars based on matching collocated IMU signals only. Unlike other methods which use odometry estimates for matching poses, this is a lightweight method that relies on raw signal matching. The observability-aware module selects signals that carry enough excitation for different degrees of freedom required for calibration. Our method provides comparable performance to GT extrinsics parameters when sufficient signal excitations are present. This work can be easily extended to other modalities like cameras and radars. For further improvement, we can also match features in between the corresponding sensors if they have a common FoV.
## VII Acknowledgements
This research has been jointly funded by the Swedish Foundation for Strategic Research (SSF) and Scania. The research was also affiliated with Wallenberg AI, Autonomous Systems and Software Program (WASP).
|
2309.16250 | Fuzzy bi-Gödel modal logic and its paraconsistent relatives | We present the axiomatisation of the fuzzy bi-G\"{o}del modal logic
(formulated in the language containing $\triangle$ and treating the
coimplication as a defined connective) and establish its PSpace-completeness.
We also consider its paraconsistent relatives defined on fuzzy frames with two
valuations $e_1$ and $e_2$ standing for the support of truth and falsity,
respectively, and equipped with \emph{two fuzzy relations} $R^+$ and $R^-$ used
to determine supports of truth and falsity of modal formulas. We establish
embeddings of these paraconsistent logics into the fuzzy bi-G\"{o}del modal
logic and use them to prove their PSpace-completeness and obtain the
characterisation of definable frames. | Marta Bilkova, Sabine Frittella, Daniil Kozhemiachenko | 2023-09-28T08:42:05Z | http://arxiv.org/abs/2309.16250v4 | # Fuzzy bi-Godel modal logic and its paraconsistent relatives+
###### Abstract
We present the axiomatisation of the fuzzy bi-Godel logic (formulated in the language containing \(\triangle\) and treating \(\prec\) as the defined connective) \(\mathbf{KbiG}^{\mathsf{f}}\) and establish its \(\mathsf{PSpace}\)-completeness. We also consider its paraconsistent relatives \(\mathbf{K}\mathsf{G}^{2\pm f}\) and \(\mathsf{G}^{2\pm f}_{\blacksquare,\bullet}\) defined on fuzzy frames with two valuations \(e_{1}\) and \(e_{2}\) standing for the support of truth and falsity, respectively, and equipped with _two fuzzy relations_\(R^{+}\) and \(R^{-}\) used to determine supports of truth and falsity of modal formulas. We establish embeddings of \(\mathbf{K}\mathsf{G}^{2\pm f}\) and \(\mathsf{G}^{2\pm f}_{\blacksquare,\bullet}\) into \(\mathbf{KbiG}^{\mathsf{f}}\) and use them to prove their \(\mathsf{PSpace}\)-completeness and obtain the characterisation of \(\mathbf{K}\mathsf{G}^{2}\)- and \(\mathsf{G}^{2}_{\blacksquare,\bullet}\)-definable frames.
_Keywords:_ Godel modal logic; paraconsistent modal logics; complexity; frame definability.
## 1 Introduction
When dealing with modalised propositions such as 'I believe that Paula has a dog' or 'I believe that Quinn thinks that Paula has a cat', 'Quinn has to submit the report by Friday', etc., it is reasonable to assume that they can be equipped with a truth degree. After all, we can be more or less convinced of something, have stronger or weaker obligations, etc. On the other hand, it is not always justified that the agents can assign exact numerical values to the degrees of their beliefs or obligations (even if they can compare them). Thus, if we want to formalise these contexts, it makes sense to utilise modal expansions of Godel logic \(\mathsf{G}\). Indeed, \(\mathsf{G}\) can be construed as the logic of comparative truth since the truth of the formula is determined not by the individual values of its variables but rather by the relative order of these values. In particular, the Godel implication \(\phi\to\chi\) is true iff the value of \(\phi\) is less or equal to the value of \(\chi\). Note, however, that there is no Godel formula of two variables \(p\) and \(q\) which is true iff the value of \(p\)_is strictly greater than_ the value of \(q\). To express this, one can use the coimplication \(\prec\) -- \(\sim\sim\)(\(p\!\prec\!q\)) (the connective was introduced in [28], the notation is due to [19]), or the Baaz Delta operator \(\triangle\)[3] -- \(\sim\)\(\triangle\)(\(p\to q\)). Adding either of these1 to the language of Godel logic produces the _bi-Godel logic_\(\mathsf{biG}\) (or _symmetric Godel logic_ in the terminology of [20]).
Footnote 1: In Gödel logic, \(\triangle\) and \(\prec\) are interdefinable as follows: \(\triangle\phi\coloneqq\mathbf{1}\prec(\mathbf{1}\prec\phi)\) and \(\phi\prec\chi\coloneqq\phi\wedge\sim\triangle(\phi\to\chi)\).
(bi-)Godel modal logicsOur main source of inspiration is the extensive study of Godel modal logics undertaken in the existing literature. Both \(\square\) and \(\Diamond\) fragments2 of the Godel modal logic are axiomatised (cf. [14] for the Hilbert-style axiomatisation and [22, 23] for Gent
fuzzy [15] (**K**G) and crisp [29] (**K**G**\({}^{\sf c}\)) bi-modal logics are also axiomatised; there is also a tableaux calculus for **K**G\({}^{\sf f}\) and **K**G\({}^{\sf c}\)[30]. It is also established that all these logics are decidable [12] and, in fact, **PSpace**-complete [13], even though they are more expressive than **K**. Regarding the bi-Godel modal logics, a temporal expansion has been studied in [1] and a provability logic with algebraic semantics was presented in [20].
Paraconsistent modal logicsThe second source of motivation is the study of _paraconsistent modal logics_. A logic is paraconsistent when its entailment does not satisfy the explosion property -- \(p\), \(\neg p\models q\) (or _ex contradictione quodlibet_). In modal contexts, the failure of this principle aligns very well with our intuition. Indeed, one can have contradictory beliefs but still _not believe in something else_; likewise, even if one has conflicting obligations, it does not mean that they have (or are permitted) to do _everything_.
One of the most common treatments of paraconsistent modal logics is to consider Kripke frames _with two independent valuations_ as it is done in, e.g., [33, 18, 27, 24, 25]. These valuations are interpreted as supports of truth and falsity of a formula in a given state. Historically, this idea can be traced to the Belnap-Dunn logic (BD, alias 'first-degree entailment' or **F**DE) [17, 5, 4]. Each proposition could be true, false, both true and false or neither true nor false depending on the available information.
An important consequence of having independent supports of truth and falsity is that it is no longer the case that any two propositions have _comparable values_. Indeed, it might be the case that both truth and falsity of \(\phi\) are supported but neither truth nor falsity of \(\chi\) is supported. Or, more generally, both the support of the falsity and the support of the truth of \(\phi\) can be greater than those of \(\chi\) (or vice versa). This makes sense if we wish to model agents who cannot always compare the degrees of their beliefs in two given statements, who cannot always compare the strengths of their obligations, etc. This is justified if, e.g., \(\phi\) and \(\chi\) do not have any content in common. In addition, in the multi-agent setting, we might not always be able to compare the strength of beliefs of different agents even in the same statement.
After introducing two valuations on a frame, it makes sense to consider frames with _two relations_ designated \(R^{+}\) and \(R^{-}\) and utilised to determine supports of truth and falsity of modal formulas (cf., e.g., [31, 16] for examples of such logics). Intuitively, if we assume \(R^{+}\) and \(R^{-}\) to be fuzzy (i.e., if \(W\) is the set of states of a given frame, then \(R^{+},R^{-}:W\times W\rightarrow[0,1]\)) and the states in the frame to represent sources that refer to one another, they can be construed as, respectively, degrees of trust of one source in confirmations (assertions) or denials of the other source. For example, if a source is too sceptical, we might be inclined to trust its denials _less than_ we trust its confirmations. On the contrary, if a sensationalist source denies something, it can be considered _more_ trustworthy than when the very same source asserts something.
Logics in the paperThis paper continues the study of paraconsistent expansions of Godel modal logics that we began in [7] and continued in [8, 9]. We cover quite a few logics (cf. Fig. 2). Previously, we were mostly dealing with bi-Godel and paraconsistent Godel logics on _crisp_ frames. Now, our main interest lies in the logics on fuzzy frames (put in rectangles in the picture). These logics are obtained from the fuzzy bi-modal Godel logic (**K**b**G\({}^{\sf f}\) from [7]) by adding a De Morgan negation \(\neg\) or (additionally) replacing \(\Box\) and \(\Diamond\) with \(\blacksquare\) and \(\blacklozenge\).
Let us quickly explain the difference between these two pairs of modalities. Recall, first of all (cf. Fig. 1), that instead of giving two independent valuations on \([0,1]\), we can equivalently define one valuation on \([0,1]^{\clubsuit}=[0,1]\times[0,1]^{\sf op}\). **K**G\({}^{2\pm \ddagger}\) uses \(\Box\) and \(\Diamond\) which can be thought of as generalisations of the meet and join on \([0,1]^{\clubsuit}\) w.r.t. the truth order: i.e., the value of \(\Box\phi\) is computed using \(\bigwedge\) -- the truth infimum of \(\phi\) in the accessible states while \(\Diamond\) uses \(\bigvee\) -- the truth supremum. \([0,1]^{\clubsuit}\), however, is a _bi-lattice_ and thus, there are \(\sqcap\) and \(\sqcup\), i.e., meet and join w.r.t. the informational order. Thus, \(\textsf{G}^{2\pm\ddagger}_{\blacksquare\blacklozenge}\) (first introduced in [9]) expands biG with \(\neg\) and \(\blacksquare\) and \(\blacklozenge\) that generalise \(\sqcap\) and \(\sqcup\) (we will further call \(\blacksquare\) and \(\blacklozenge\) 'informational modalities' and \(\Box\) and \(\Diamond\)'standard modalities') as expected: the value of \(\blacksquare\)\(\phi\) is computed using \(\bigsqcap\) (the informational infimum) of the values of \(\phi\) in the accessible states and the
value of \(\blacklozenge\) is obtained via \(\bigsqcup\) (the informational supremum).
_Convention 1.1_.: In what follows, we are going to use several conventions for naming the logics.
* Index 2 designates that the logic is evaluated on frames with two valuations. Footnote 2: I.e., the logic defined on the frames where the accessibility relation is crisp.
* Indices \({}^{\mathsf{c}}\) and \({}^{\mathsf{f}}\) stand for, respectively, logics on crisp and fuzzy frames.
* Index \({}^{\pm}\) denotes the logic whose frames have two accessibility relations \(R^{+}\) and \(R^{-}\).
When we consider both fuzzy and crisp versions of a given logic at once, we are going to omit the corresponding indices. Likewise, we omit all indices except 2 when we deal with all versions of a logic. Thus, for example 'in all \(\mathbf{KG}^{2}\)'s it holds that \(X\)' means that \(X\) is true w.r.t. \(\mathbf{KG}^{2\mathsf{c}}\), \(\mathbf{KG}^{2\mathsf{f}}\), \(\mathbf{KG}^{2\pm\mathsf{c}}\), and \(\mathbf{KG}^{2\pm\mathsf{f}}\). Similarly, '\(Y\) holds in \(\mathbf{G}^{2\pm}_{\blacksquare,\blacklozenge}\)' stands for '\(Y\) holds in \(\mathbf{G}^{2\pm\mathsf{f}}_{\blacksquare,\blacklozenge}\) and \(\mathbf{G}^{2\pm\mathsf{c}}_{\blacksquare,\blacklozenge}\).
Footnote 2: I.e., the logic defined on the frames where the accessibility relation is crisp.
Plan of the paperIn the previous paper [8], we studied the _crisp_ modal bi-Godel logic3 and its paraconsistent expansion (in this paper, we denote them \(\mathbf{KbiG}^{\mathsf{c}}\) and \(\mathbf{KG}^{2\mathsf{c}}\), respectively). We also studied its computational and model-theoretic properties. The main goal of this paper is to study the fuzzy bi-Godel modal logic and its paraconsistent relatives with standard and informational modalities. The remainder of the text is organised as follows.
Footnote 3: I.e., the logic defined on the frames where the accessibility relation is crisp.
In Section 2, we remind the semantics of propositional bi-Godel logic and its expansion with \(\Box\) and \(\lozenge\) interpreted on both crisp and fuzzy frames. In addition, we show that some classes of frames definable in \(\mathbf{KbiG}\) cannot be defined without \(\bigtriangleup\). Section 3 is dedicated to the axiomatisation of \(\mathbf{KbiG}^{\mathsf{f}}\). We construct a Hilbert-style calculus for \(\mathbf{KbiG}^{\mathsf{f}}\) and establish its strong completeness by adapting the
proof from [15]. In Section 4, we present \(\mathbf{KG}^{2}\)'s and \(\mathsf{G}^{2}_{\blacksquare,\bullet}\)'s, establish their embeddings into \(\mathbf{KbiG}\), and study their semantical properties. Section 5 is dedicated to the study of transferrable formulas, i.e., those that are valid on the same frames both in \(\mathbf{KbiG}\) and \(\mathbf{KG}^{2}\). In Section 6, we establish the \(\mathsf{PSpace}\)-completeness of \(\mathbf{KbiG}^{\mathsf{f}}\) using the technique from [13]. We then use the embeddings obtained in Section 4 to obtain the \(\mathsf{PSpace}\)-completeness of the paraconsistent relatives of \(\mathbf{KbiG}^{\mathsf{f}}\). Finally, Section 7 is devoted to the discussion of the results that we obtained and outlines further work.
## 2 Preliminaries
To make the paper self-contained, we begin with the presentation of the propositional fragment of \(\mathbf{KbiG}\), namely, the bi-Godel logic.
### Propositional bi-Godel logic
The language is generated from the countable set \(\mathtt{Prop}\) via the following grammar.
\[\phi\coloneqq p\in\mathtt{Prop}\mid\sim \phi\mid\bigtriangleup\phi\mid(\phi\wedge\phi)\mid(\phi\vee\phi) \mid(\phi\to\phi)\] ( \[\mathcal{L}_{\bigtriangleup}\] )
We also introduce two defined constants
\[\mathbf{1}\coloneqq p\to p \mathbf{0}\coloneqq\sim \mathbf{1}\]
We choose \(\bigtriangleup\) over \(\prec\) as a primitive symbol because the former allows for a shorter and more elegant axiomatisation of the propositional fragment. Furthermore, the use of \(\bigtriangleup\) simplifies the completeness proof of \(\mathbf{KbiG}^{\mathsf{f}}\).
The semantics of \(\mathcal{L}_{\bigtriangleup}\) are given in the following definition. For the sake of simplicity, we also include \(\prec\) in the definition of the bi-Godel algebra on \([0,1]\) since it will simplify the presentation of the semantics of paraconsistent logics.
**Definition 2.1**.: The bi-Godel algebra on \([0,1]\) denoted \([0,1]_{\mathsf{G}}=\langle[0,1],0,1,\wedge_{\mathsf{G}},\vee_{\mathsf{G}},\rightarrow_{\mathsf{G}},\prec_{\mathsf{G}},\sim_{\mathsf{G}},\bigtriangleup_{\mathsf{G}}\rangle\) is defined as follows: for all \(a,b\in[0,1]\), the standard operations are given by \(a\wedge_{\mathsf{G}}b\coloneqq\min(a,b)\), \(a\vee_{\mathsf{G}}b\coloneqq\max(a,b)\),
\[a\to_{G}b=\begin{cases}1\text{ if }a\leq b\\ b\text{ else}\end{cases}\qquad a\prec_{G}b=\begin{cases}0\text{ if }a\leq b\\ a\text{ else}\end{cases}\qquad\sim a=\begin{cases}0\text{ if }a>0\\ 1\text{ else}\end{cases}\qquad\bigtriangleup a=\begin{cases}0\text{ if }a<1\\ 1\text{ else}\end{cases}\]
A \(\mathsf{biG}\)_valuation_ is a homomorphism \(e:\mathcal{L}_{\bigtriangleup}\to[0,1]_{\mathsf{G}}\) that is defined for the complex formulas as \(e(\phi\circ\phi^{\prime})=e(\phi)\circ_{\mathsf{G}}e(\phi^{\prime})\) for every connective \(\circ\). We say that \(\phi\) is _valid_ iff \(e(\phi)=1\) under every valuation. Moreover, \(\Gamma\subseteq\mathcal{L}_{\bigtriangleup}\)_entails_\(\chi\in\mathcal{L}_{\bigtriangleup}\) (\(\Gamma\models_{\mathsf{biG}}\chi\)) iff for every valuation \(e\), it holds that
\[\inf\{e(\phi):\phi\in\Gamma\}\leq e(\chi).\]
It is now easy to see that \(e(\bigtriangleup\phi)=e(\mathbf{1}\!\prec\!(\mathbf{1}\!\prec\!\phi))\) and \(e(\phi\!\prec\!\chi)=e(\phi\wedge\sim\!\bigtriangleup(\phi\to\chi))\) for every \(e\) as intended.
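For concreteness, the operations of Definition 2.1 and the interdefinability of \(\triangle\) and \(\prec\) noted above can be checked directly; below is a minimal Python sketch (all identifiers are ours and purely illustrative).

```python
def g_and(a, b): return min(a, b)
def g_or(a, b):  return max(a, b)

def g_imp(a, b):        # Goedel implication ->_G
    return 1.0 if a <= b else b

def g_coimp(a, b):      # coimplication a -<_G b
    return 0.0 if a <= b else a

def g_neg(a):           # ~ a
    return 1.0 if a == 0.0 else 0.0

def g_delta(a):         # Baaz Delta
    return 1.0 if a == 1.0 else 0.0

ONE, ZERO = 1.0, 0.0
# Delta(phi) = 1 -< (1 -< phi) and phi -< chi = phi /\ ~Delta(phi -> chi):
assert g_delta(0.7) == g_coimp(ONE, g_coimp(ONE, 0.7))
assert g_coimp(0.7, 0.4) == g_and(0.7, g_neg(g_delta(g_imp(0.7, 0.4))))
```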
Finally, let us recall the Hilbert-style calculus for \(\mathsf{biG}\) from [3].
**Definition 2.2** (\(\mathcal{HG}\bigtriangleup\) -- Hilbert-style calculus for \(\mathsf{biG}\)).: The calculus has the following axiom schemas and rules (for any \(\phi\), \(\chi\), \(\psi\)):
1. \((\phi\to\chi)\to((\chi\to\psi)\to(\phi\to\psi))\)
2. \(\phi\to(\phi\vee\chi)\); \(\chi\to(\phi\vee\chi)\); \((\phi\to\psi)\to((\chi\to\psi)\to((\phi\vee\chi)\to\psi))\)
3. \((\phi\wedge\chi)\to\phi\); \((\phi\wedge\chi)\to\chi\); \((\phi\to\chi)\to((\phi\to\psi)\to(\phi\to(\chi\wedge\psi)))\)
4. \((\phi\to(\chi\to\psi))\to((\phi\wedge\chi)\to\psi)\); \(((\phi\wedge\chi)\to\psi)\to(\phi\to(\chi\to\psi))\)
5. \((\phi\to\chi)\to(\sim\chi\to\sim\phi)\)
6. \((\phi\to\chi)\vee(\chi\to\phi)\)
7. \(\triangle\phi\vee\sim\triangle\phi\)
8. \(\triangle(\phi\to\chi)\to(\triangle\phi\to\triangle\chi)\); \(\triangle(\phi\lor\chi)\to(\triangle\phi\vee\triangle\chi)\)
9. \(\triangle\phi\to\phi\); \(\triangle\phi\to\triangle\triangle\phi\)
10. \(\dfrac{\phi\ \ \phi\to\chi}{\chi}\)
11. \(\triangle\mathrm{nec}\ \dfrac{\vdash\phi}{\vdash\triangle\phi}\)
_Remark 2.1_.: Note that it is crucial for the soundness of \(\mathcal{H}\mathsf{G}\triangle\) that \(\triangle\mathrm{nec}\) is applied only to theorems. Otherwise, \(p\vdash_{\mathcal{H}\mathsf{G}\triangle}\triangle p\) would be derivable which, of course, is not a valid instance of entailment.
**Proposition 2.1** ([3, Theorem 3.1]).: \(\mathcal{H}\mathsf{G}\triangle\) _is strongly complete: for any \(\Gamma\cup\{\phi\}\subseteq\mathcal{L}_{\triangle}\), it holds that_
\[\Gamma\vdash_{\mathcal{H}\mathsf{G}\triangle}\phi\text{ iff }\Gamma\models_{ \mathsf{biG}}\phi\]
_Convention 2.1_.: In what follows, given a calculus \(\mathcal{H}\), we use \(\mathsf{Th}(\mathcal{H})\) to denote the set of its _theorems_, i.e., formulas that can be proven in \(\mathcal{H}\) without assumptions.
### KbiG: language and semantics
Let us now introduce the logic \(\mathbf{KbiG}\). The language \(\mathcal{L}_{\triangle,\square,\Diamond}\) of \(\mathbf{KbiG}\) expands \(\mathcal{L}_{\triangle}\) with two additional modal operators: \(\square\) and \(\Diamond\). We will further use \(\mathbf{KG}\) to denote the Godel modal logic, i.e., the \(\triangle\)-free fragment of \(\mathbf{KbiG}\).
**Definition 2.3** (Frames).:
* A _fuzzy frame_ is a tuple \(\mathfrak{F}=\langle W,R\rangle\) with \(W\neq\varnothing\) and \(R:W\times W\to[0,1]\).
* A _crisp frame_ is a tuple \(\mathfrak{F}=\langle W,R\rangle\) with \(W\neq\varnothing\) and \(R\subseteq W\times W\) (or, equivalently, with \(R:W\times W\to\{0,1\}\)).
**Definition 2.4** (\(\mathbf{KbiG}\) models).: A \(\mathbf{KbiG}\)_model_ is a tuple \(\mathfrak{M}=\langle W,R,e\rangle\) with \(\langle W,R\rangle\) being a (crisp or fuzzy) frame, and \(e:\mathtt{Prop}\times W\to[0,1]\). \(e\) (a valuation) is extended on complex \(\mathcal{L}_{\triangle,\square,\Diamond}\) formulas according to Definition 2.1 in the cases of propositional connectives:
\[e(\phi\circ\phi^{\prime},w)=e(\phi,w)\circ_{\mathsf{G}}e(\phi^{\prime},w). (\circ\in\{\sim,\triangle,\wedge,\vee,\to\})\]
The interpretation of modal formulas is as follows:
\[e(\square\phi,w)=\inf_{w^{\prime}\in W}\{wRw^{\prime}\to_{\mathsf{G}}e(\phi, w^{\prime})\}\qquad\qquad e(\Diamond\phi,w)=\sup_{w^{\prime}\in W}\{wRw^{\prime} \wedge_{\mathsf{G}}e(\phi,w^{\prime})\}.\]
We say that \(\phi\in\mathcal{L}_{\triangle,\square,\Diamond}\) is \(\mathbf{KbiG}\)-_valid on frame_\(\mathfrak{F}\) (denoted, \(\mathfrak{F}\models_{\mathbf{KbiG}}\phi\)) iff for any \(w\in\mathfrak{F}\), it holds that \(e(\phi,w)=1\) for any model \(\mathfrak{M}\) on \(\mathfrak{F}\). \(\Gamma\)_entails_\(\chi\) (on \(\mathfrak{F}\)), denoted \(\Gamma\models_{\mathbf{KbiG}}\chi\) (\(\Gamma\models^{\mathfrak{F}}_{\mathbf{KbiG}}\chi\)), iff for every model \(\mathfrak{M}\) (on \(\mathfrak{F}\)) and every \(w\in\mathfrak{M}\), it holds that
\[\inf\{e(\phi,w):\phi\in\Gamma\}\leq e(\chi,w).\]
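On a finite model the infima and suprema in the clauses for \(\square\) and \(\Diamond\) are just minima and maxima, so the semantics can be evaluated mechanically; the following minimal Python sketch (with a two-state fuzzy model and illustrative identifiers of our own) shows the computation.

```python
def g_imp(a, b):
    """Goedel implication ->_G (as in the earlier sketch)."""
    return 1.0 if a <= b else b

def box_value(R, e_phi, w):
    """e(box phi, w) = min over w' of (R(w, w') ->_G e(phi, w'))."""
    return min(g_imp(R[w][v], e_phi[v]) for v in range(len(e_phi)))

def diamond_value(R, e_phi, w):
    """e(diamond phi, w) = max over w' of min(R(w, w'), e(phi, w'))."""
    return max(min(R[w][v], e_phi[v]) for v in range(len(e_phi)))

# two-state fuzzy frame: w0 sees w1 with degree 0.5; e(p, w1) = 1
R = [[0.0, 0.5], [0.0, 0.0]]
e_p = [0.0, 1.0]
print(diamond_value(R, e_p, 0))   # 0.5
print(box_value(R, e_p, 0))       # 1.0
```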
_Convention 2.2_.: Given a crisp or fuzzy frame \(\mathfrak{F}=\langle W,S\rangle\), set \(S(w)=\{w^{\prime}:wSw^{\prime}>0\}\) and \(S=\{\langle w,w^{\prime}\rangle:w,w^{\prime}\in W,wSw^{\prime}>0\}\).
We have already discussed in the introduction that \(\triangle\) can be used to define _strict_ order on values (while \(\to\) can only define _non-strict_ order). In addition, \(\triangle\) enhances the expressivity of \(\square\) and \(\Diamond\) fragments of \(\mathbf{KG}\). In particular, while the \(\Diamond\) (fuzzy) fragment of \(\mathbf{KG}\) has the finite model property [14, Theorem 7.1], \(\triangle\Diamond p\to\Diamond\triangle p\)_does not have_ finite countermodels [8, Proposition 2.10.2]. Likewise, while the \(\square\) fragment of \(\mathbf{KG}\) is complete both w.r.t. crisp and fuzzy frames [14, Theorem 4.2], \(\triangle\square p\to\square\triangle p\)_defines_ crisp frames [8, Proposition 2.10.1]. Moreover, we can show that there are frame properties that cannot be defined without \(\triangle\) in the bi-modal \(\mathbf{KG}\).
**Proposition 2.2**.: _There are \(\mathbf{KbiG}\)-definable classes of fuzzy frames that are not \(\mathbf{KG}\)-definable._
Proof.: Consider \(\tau=\sim\triangle\Diamond 1\wedge\sim\square 0\). It is clear that
\[\mathfrak{F}\models_{\mathbf{KbiG}}\tau\text{ iff }\forall u\in\mathfrak{F}: \sup\{uRu^{\prime}:u^{\prime}\in\mathfrak{F}\}<1 \tag{1}\]
Denote the class of frames satisfying (1) with \(\mathbb{S}\). Observe now that \(\mathfrak{F}\in\mathbb{S}\) iff \(e(\Diamond 1\vee\square 0,w)<1\) in every \(w\in\mathfrak{F}\) since it is always the case that \(e(\square 0,w),e(\sim\triangle\Diamond 1,w)\in\{0,1\}\) and since \(e(\sim\triangle\Diamond 1,w)=1\) iff \(e(\Diamond 1,w)<1\). On the other hand, \(\Diamond 1\) can take any value from \([0,1]\) on a fuzzy frame. But there is no Godel formula that is true iff \(e(\Diamond 1\vee\square 0,w)<1\) because \(\Diamond 1\vee\square 0\) can have any value from \(0\) to \(1\); thus to say that it has a value less than \(1\), one needs \(\triangle\) or \(\prec\). Thus, \(\mathbb{S}\) is not \(\mathbf{KG}\)-definable.
## 3 Axiomatisation of the fuzzy \(\mathbf{KbiG}\)
Before proceeding to the axiomatisation of \(\mathbf{KbiG}^{\mathsf{f}}\), let us recall the axiomatisation of the crisp \(\mathbf{KbiG}\) from [8].
**Definition 3.1** (\(\mathcal{H}\mathbf{KbiG}^{\mathsf{c}}\) -- Hilbert-style calculus for \(\mathbf{KbiG}^{\mathsf{c}}\)).: The calculus has the following axiom schemas and rules.
**biG:**: All substitution instances of \(\mathcal{H}\mathbf{G}\triangle\) theorems and rules.
**0:**: \(\sim\Diamond 0\)
**K:**: \(\square(\phi\to\chi)\to(\square\phi\to\square\chi)\); \(\Diamond(\phi\vee\chi)\to(\Diamond\phi\vee\Diamond\chi)\)
**FS:**: \(\Diamond(\phi\to\chi)\to(\square\phi\to\Diamond\chi)\); \((\Diamond\phi\to\square\chi)\to\square(\phi\to\chi)\)
**\(\sim\triangle\Diamond\):**: \(\sim\triangle(\Diamond\phi\to\Diamond\chi)\to\Diamond\sim\triangle(\phi\to\chi)\)
**Cr:**: \(\square(\phi\vee\chi)\to(\square\phi\vee\Diamond\chi)\); \(\triangle\square\phi\to\square\triangle\phi\)
**nec:**: \(\frac{\vdash\phi}{\square\Box\phi}\); \(\frac{\vdash\phi\to\chi}{\vdash\Diamond\phi\to\Diamond\chi}\)
It is clear that \(\mathbf{Cr}\) axioms are not \(\mathbf{KbiG}^{\mathsf{f}}\)-valid since they define crisp frames. It is also easy to check that \(\sim\triangle\Diamond\) is valid only on crisp frames. Indeed, consider Fig. 3: \(e(\Diamond\sim\triangle(p\to q),w_{0})=\frac{1}{2}\) (since \(w_{0}Rw_{1}=\frac{1}{2}\)) but \(e(\sim\triangle(\Diamond p\to\Diamond q),w_{0})=1\). In fact, the following statement holds.
**Proposition 3.1**.: \(\sim\triangle\Diamond\) _is redundant in \(\mathcal{H}\mathbf{KbiG}^{\mathsf{c}}\)._
Proof.: First, observe that \(\Diamond\sim p\leftrightarrow\sim\sim\Diamond\sim p\) is \(\mathbf{KG}^{\mathsf{c}}\)-valid (and thus, \(\mathcal{H}\mathbf{KG}^{\mathsf{c}}\)-provable). Thus, it suffices to check that \(\sim\triangle(\Diamond p\to\Diamond q)\to\sim\sim\Diamond\sim\triangle(p\to q)\) is provable. Now recall that \(\sim\phi\to\sim\chi\) is \(\mathcal{H}\mathsf{G}\triangle\)-provable iff \(\sim\sim\chi\to\sim\sim\phi\) is provable. Thus, we need to prove \(\sim\sim\sim\Diamond\sim\triangle(p\to q)\to\sim\sim\triangle(\Diamond p\to\Diamond q)\). But \(\mathcal{H}\mathsf{G}\triangle\vdash\sim\sim\sim\phi\leftrightarrow\sim\phi\) and \(\mathcal{H}\mathsf{G}\triangle\vdash\sim\sim\triangle\phi\leftrightarrow\triangle\phi\), whence, we reduce our task to the proof of \(\sim\Diamond\sim\triangle(p\to q)\to\triangle(\Diamond p\to\Diamond q)\). Finally, \(\mathcal{H}\mathbf{KG}\vdash\sim\Diamond\phi\leftrightarrow\Box\sim\phi\), whence, we need to prove \(\Box\triangle(p\to q)\to\triangle(\Diamond p\to\Diamond q)\).
To do this, recall [8, Proposition 3.1] that \(\Box\triangle\phi\to\triangle\Box\phi\) is provable in \(\mathcal{H}\mathbf{KbiG}^{\mathsf{c}}\) _without the use of_ **Cr** _and_ \(\sim\triangle\Diamond\), and that \(\mathcal{H}\mathbf{KG}\vdash\Box(p\to q)\to(\Diamond p\to\Diamond q)\) (whence, \(\triangle\Box(p\to q)\to\triangle(\Diamond p\to\Diamond q)\) is provable using \(\triangle\)nec). Thus, using the transitivity of \(\to\), we obtain \(\Box\triangle(p\to q)\to\triangle(\Diamond p\to\Diamond q)\), as required.
The above statement makes one inquire whether the axiomatisation of \(\mathsf{KbiG}^{\mathsf{f}}\) is obtained by removing **Cr** from \(\mathcal{HK}\mathsf{biG}^{\mathsf{c}}\). In the remainder of the section, we show that this is indeed the case.
**Definition 3.2** (\(\mathcal{HK}\mathsf{biG}^{\mathsf{f}}\) -- Hilbert-style calculus for \(\mathsf{KbiG}^{\mathsf{f}}\)).: The calculus has the following axiom schemas and rules.
**biG:**: All substitution instances of \(\mathcal{H}\mathsf{G}\triangle\) theorems and rules.
**0:**: \(\sim\Diamond 0\)
**K:**: \(\Box(\phi\to\chi)\to(\Box\phi\to\Box\chi)\); \(\Diamond(\phi\lor\chi)\to(\Diamond\phi\lor\Di\chi)\)
**FS:**: \(\Diamond(\phi\to\chi)\to(\Box\phi\to\Diamond\chi)\); \((\Diamond\phi\to\Box\chi)\to\Box(\phi\to\chi)\)
**nec:**: \(\infer{\vdash\phi}{\vdash\Box\phi}\); \(\infer{\vdash\phi\to\chi}{\vdash\Diamond\phi\to\Diamond\chi}\)
Note that we do not add any modal axioms to the calculus for \(\mathbf{KG}^{\mathsf{f}}\) (the only axioms we add to \(\mathcal{HK}\mathsf{G}\) are the propositional \(\triangle\) axioms of \(\mathcal{H}\mathsf{G}\triangle\)). Moreover (just as it is the case with \(\mathcal{HK}\mathsf{G}\)), since modal rules can only be applied to theorems, it is clear that we can reduce \(\mathcal{H}\mathsf{KbiG}^{\mathsf{f}}\) proofs to propositional \(\mathcal{H}\mathsf{G}\triangle\) proofs using \(\mathcal{HK}\mathsf{biG}^{\mathsf{f}}\) theorems as additional assumptions:
\[\Gamma\vdash_{\mathcal{HK}\mathsf{biG}^{\mathsf{f}}}\phi\text{ iff }\Gamma, \mathsf{Th}(\mathcal{H}\mathsf{KbiG}^{\mathsf{f}})\vdash_{\mathcal{H}\mathsf{ G}\triangle}\phi \tag{2}\]
It is also easy to see that the deduction theorem holds for \(\mathcal{H}\mathsf{KbiG}^{\mathsf{f}}\):
\[\Gamma,\phi\vdash_{\mathcal{HK}\mathsf{biG}^{\mathsf{f}}}\chi\text{ iff }\Gamma \vdash_{\mathcal{HK}\mathsf{biG}^{\mathsf{f}}}\phi\to\chi \tag{3}\]
This is why our completeness proof utilises the same method as the proof of \(\mathcal{H}\mathbf{KG}\) completeness in [15]. There is, however, an important difference between \(\mathbf{KG}^{\mathsf{f}}\) and \(\mathbf{KbiG}^{\mathsf{f}}\). Namely, the \(\mathbf{KG}\) entailment is defined via the preservation of \(1\) from premises to the conclusion in all valuations [15, Definition 1.1]. However, the entailment as preservation of \(1\) is not equivalent to the entailment as preservation of the order on \([0,1]\) in \(\mathbf{KbiG}\). Indeed, \(p\not\models_{\mathbf{KbiG}}\triangle p\), even though \(e(\triangle p,w)=1\) in every valuation \(e\) s.t. \(e(p,w)=1\).
We proceed as follows. First, we prove the weak completeness of \(\mathcal{H}\mathsf{KbiG}^{\mathsf{f}}\) via a canonical model construction and the truth lemma. Then, we provide a translation into the classical first-order logic and use compactness to achieve the strong completeness result. The canonical model definition is the same as in [15].
**Definition 3.3** (Canonical (counter-)model of a formula).: Let \(\tau\in\mathcal{L}_{\triangle,\Box,\Diamond}\) be s.t. \(\mathcal{HK}\mathsf{biG}^{\mathsf{f}}\not\vdash\tau\) and let \(\mathsf{Sf}^{\mathbf{0,1}}(\tau)=\{\mathbf{0,1}\}\cup\{\psi:\psi\text{ occurs in }\tau\text{ as a subformula}\}\). We define _the canonical countermodel_ of \(\tau\), \(\mathfrak{M}^{\tau}=\langle W^{\tau},\mathsf{R}^{\tau},e^{\tau}\rangle\) as follows.
* \(W^{\tau}\) is the set of all \(\mathsf{G}\triangle\) homomorphisms \(u:\mathcal{L}_{\triangle,\Box,\Diamond}\to[0,1]_{\mathsf{biG}}\) s.t. all theorems of \(\mathcal{H}\mathsf{KbiG}^{\mathsf{f}}\) are evaluated at \(1\).
* \(u\mathsf{R}^{\tau}u^{\prime}=\inf\big{\{}u(\square\psi)\to_{\mathsf{G}}u^{\prime}( \psi)\bigwedge_{\mathsf{G}}u^{\prime}(\psi)\to_{\mathsf{G}}u(\lozenge\psi):\psi \in\mathsf{Sf}^{\mathsf{0,1}}(\tau)\big{\}}\).
* \(e^{\tau}(p,u)=u(p)\).
In the definition above, we interpret '\(\mathsf{G}\triangle\)-homomorphisms' as the valuations of _propositional_ formulas built from variables and _modalised formulas of the form \(\square\chi\) or \(\lozenge\chi\)_. We need to explicitly demand that all theorems be evaluated at \(1\) since, for example, \(\square(p\to p)\) counts as a 'variable' in this setting. But it is a theorem of \(\mathcal{H}\mathsf{Kbi}\mathsf{G}^{\mathsf{f}}\), and thus, has to be evaluated at \(1\) in every state of \(W^{\tau}\). The definition of \(\mathsf{R}^{\tau}\) is a modification of the definition of the accessibility relation in the canonical models of classical modal logics (cf., e.g., [11, Definition 4.18]) for the fuzzy case.
The most important part of the truth lemma is the cases of modal formulas. The proofs follow the original ones of [15, Claim 1] (for the \(\square\) formulas) and [15, Claim 2] (for the \(\lozenge\) formulas) but to make the text self-contained, we give them here in full and then discuss the main differences between them and the original ones.
**Lemma 3.1**.: _Let \(\alpha<1\), \(\varepsilon>0\), and \(v(\square\phi)=\alpha\). Then there is \(w\in W^{\tau}\) s.t. \(w(\phi)<\alpha+\varepsilon\) and \(v\mathsf{R}^{\tau}w>w(\phi)\)._
Proof.: The proof follows that of [15, Claim 1]. First, we produce \(u\in W\) s.t. \(u(\phi)<1\) and code the ordering conditions for \(w\) with a theory \(\Gamma_{\phi,v}\). Then, we move the values \(u(\theta)\) (\(\theta\in\mathsf{Sf}^{\mathsf{0,1}}\)) to the correct valuation \(w\) by composing \(u\) with an increasing map of \([0,1]\) into itself.
Now, define
\[\Gamma_{1} =\{\theta:v(\square\theta)>\alpha\text{ and }\theta\in\mathsf{Sf}^{ \mathsf{0,1}}(\tau)\}\] \[\Gamma_{2} =\{\theta_{1}\to\theta_{2}:v(\lozenge\theta_{1})\leq v(\square \theta_{2})\text{ and }\theta_{1},\theta_{2}\in\mathsf{Sf}^{\mathsf{0,1}}(\tau)\}\] \[\Gamma_{3} =\{(\theta_{2}\to\theta_{1})\to\theta_{1}:v(\lozenge\theta_{1})< v(\square\theta_{2})\text{ and }\theta_{1},\theta_{2}\in\mathsf{Sf}^{\mathsf{0,1}}(\tau)\}\] \[\Gamma_{\phi,v} =\Gamma_{1}\cup\Gamma_{2}\cup\Gamma_{3} \tag{4}\]
It is clear that \(v(\square\gamma)>\alpha\) for every \(\gamma\in\Gamma_{\phi,v}\). Indeed, this holds w.r.t. \(\Gamma_{1}\) by construction; for \(\Gamma_{2}\), since \((\lozenge\theta_{1}\to\square\theta_{2})\to\square(\theta_{1}\to\theta_{2})\) is an instance of an axiom scheme and since \(v((\lozenge\theta_{1}\to\square\theta_{2}))=1\); for \(\Gamma_{3}\), since \((\square\theta_{2}\to\lozenge\theta_{1})\lor\square((\theta_{2}\to\theta_{1}) \to\theta_{1})\) is \(\mathsf{K}\mathsf{G}^{\mathsf{f}}\)-valid (whence, \(\mathcal{H}\mathsf{K}\mathsf{G}\)-provable and evaluated at \(1\) in every state of the canonical model) but \(v(\square\theta_{2}\to\lozenge\theta_{1})<1\), and thus, \(v(\square((\theta_{2}\to\theta_{1})\to\theta_{1}))=1\).
It is easy to see now that \(\Gamma_{\phi,v}\nvdash_{\mathcal{H}\mathsf{Kbi}\mathsf{G}^{\mathsf{f}}}\phi\). Indeed, otherwise, \(\square\Gamma_{\phi,v}\vdash_{\mathcal{H}\mathsf{Kbi}\mathsf{G}^{\mathsf{f}}}\Box\phi\) (using **nec** and **K** for \(\square\)), whence \(\square\Gamma_{\phi,v},\mathsf{Th}(\mathcal{H}\mathsf{Kbi}\mathsf{G}^{\mathsf{f}} )\vdash_{\mathcal{H}\mathsf{G}\triangle}\square\phi\) by (2) (and by completeness of \(\mathcal{H}\mathsf{G}\triangle\), we have \(\square\Gamma_{\phi,v},\mathsf{Th}(\mathcal{H}\mathsf{Kbi}\mathsf{G}^{\mathsf{f}} )\models_{\mathsf{bi}\mathsf{G}}\square\phi\)) which leads to a contradiction since \(v[\square\Gamma]>\alpha\), \(v[\mathsf{Th}(\mathcal{H}\mathsf{Kbi}\mathsf{G}^{\mathsf{f}})]=1\) (recall that \(\mathcal{H}\mathsf{Kbi}\mathsf{G}^{\mathsf{f}}\) theorems are closed under \(\triangle\)nec) but \(v(\square\phi)=\alpha\). This means that there is a state \(u\in W^{\tau}\) s.t. \(1=u[\mathsf{Th}(\mathcal{H}\mathsf{Kbi}\mathsf{G}^{\mathsf{f}})]\geq u[\Gamma]>u(\phi)\).
Thus, the following relations between \(v\) and \(u\) hold. Observe that we cannot simply evaluate the assumptions of \(\Gamma_{\phi,v}\nvdash_{\mathcal{H}\mathsf{Kbi}\mathsf{G}^{\mathsf{f}}}\phi\) at \(1\) (only theorems are guaranteed to be evaluated at \(1\) since they are closed under \(\triangle\)-necessation). Hence, some conditions are weaker than originally (the weakenings are underlined).
* If \(v(\square\theta)>\alpha\), then \(u(\theta)>u(\phi)\).
* If \(v(\lozenge\theta_{1})\leq v(\square\theta_{2})\), then \(u(\theta_{1})\leq u(\theta_{2})\) or \(u(\theta_{1})>u(\theta_{2})>u(\phi)\).
* If \(v(\lozenge\theta_{1})<v(\square\theta_{2})\), then \(u(\theta_{1})>u(\theta_{2})\), or \(u(\theta_{1})=u(\theta_{2})=1\), or \(u(\theta_{2})\leq u(\theta_{1})>u(\phi)\).
* If \(v(\square\theta)>0\), then \(u(\theta)>0\).
Consider now the _contrapositions_ of #2 and #3.
\[\#2^{\prime} \Big{(}u(\theta_{1})>u(\theta_{2})\Big{)}\ \&\ \Big{(}u(\theta_{1})\leq u(\theta_{2})\text{ or }u( \theta_{2})\leq u(\phi)\Big{)}\Rightarrow e(\lozenge\theta_{1})>v(\square\theta_{2}).\] \[\#3^{\prime} \Big{(}u(\theta_{1})\!\leq\!u(\theta_{2})\Big{)}\ \&\ \Big{(}\{u(\theta_{1}),u(\theta_{2})\}\not\subseteq\{1\}\Big{)}\ \&\ \Big{(}u(\theta_{2})\!>\!u(\theta_{1})\text{ or }u(\theta_{1})\leq u(\phi)\Big{)} \Rightarrow e(\lozenge\theta_{1})\geq v(\square\theta_{2}).\]
The left-hand side of \(\#2^{\prime}\) can be simplified:
\[\#2^{\mathsf{c}}\;\;u(\theta_{1})>u(\theta_{2})\Rightarrow v(\lozenge\theta_{1})> v(\Box\theta_{2}).\]
Just as in the original proof, we set \(\mathsf{B}=\{v(\Box\theta):\theta\in\mathsf{Sf}^{\mathbf{0,1}}(\tau)\}\) and \(u_{b}=\min\{u(\theta):v(\Box\theta)=b\}\) for every \(b\in\mathsf{B}\). From here, we define
\[b_{0} =\alpha\] \[b_{i+1} =\max\{b:b<b_{i}\text{ and }u_{b}<u_{b_{i}}\} \tag{5}\]
which forms a strictly decreasing sequence \(\alpha=b_{0}>b_{1}>\ldots>b_{N}=0\). Indeed, if \(b_{N}=v(\Box\phi_{N})>0\), then \(u_{b_{N}}=u(\phi_{N})>0\) by \(\#4\). Since \(v(\Box\mathbf{0})\leq v(\Box\phi_{N})\), we have \(u(\mathbf{0})<u(\phi_{N})\), whence \(v(\Box\mathbf{0})<v(\Box\phi_{N})\) (i.e., there is \(b_{N+1}<b_{N}\), a contradiction).
We also pick formulas \(\phi_{i}\in\mathsf{Sf}^{\mathbf{0,1}}(\tau)\) s.t. \(e(\Box\phi_{i})=b_{i}\) and \(u(\phi_{i})=u_{b_{i}}\) and an \(\varepsilon>0\) s.t. \(\alpha+\varepsilon<1\). From here, we define
\[p_{0} =(\alpha+\varepsilon)\wedge_{\mathsf{G}}\min\{v(\lozenge\theta) :\theta\in\mathsf{Sf}^{\mathbf{0,1}}(\tau)\text{ and }v(\lozenge\theta)>\alpha\}\] \[p_{i+1} =b_{i}\wedge_{\mathsf{G}}\min\{v(\lozenge\theta):\theta\in \mathsf{Sf}^{\mathbf{0,1}}(\tau)\text{ and }v(\lozenge\theta)>b_{i+1}\} \tag{6}\]
We take a strictly increasing function \(g:[0,1]\to[0,1]\) s.t.
\[g(1) =1\] \[g[[u_{\alpha},1)] =[\alpha,p_{0})\] \[g[[u_{b_{i+1}},u_{b_{i}})] =[b_{i+1},p_{i+1}) \tag{7}\]
It is clear that \(w=g\circ u\) is a state in the canonical model and that \(w(\phi)<p_{0}\leq\alpha+\varepsilon\). Thus, it remains to show that \(v\mathsf{R}^{\tau}w>w(\phi)\) for every \(\theta\in\mathsf{Sf}^{\mathbf{0,1}}\).
Case 1. If \(u(\theta)=1\), then \(w(\theta)=1\), whence, \(e(\Box\theta)\leq w(\theta)\). Now, assume for contradiction that \(e(\lozenge\theta)\leq\alpha=v(\Box\phi)\). Then we have from \(\#2\) that either (i) \(u(\theta)\leq u(\phi)<1\) or (ii) \(u(\theta)>u(\phi)>u(\phi)\). In the first case, we have a contradiction since \(u(\theta)=1\) because \(w(\theta)=1\). In the second case, the contradiction is immediate. Thus, \(e(\lozenge\theta)\geq p_{0}\).
Case 2. Now, we have two options. Either (i) \(u(\theta)\in[u_{b_{i}},u_{b_{i-1}})\) or (ii) \(u(\theta)\in[u_{\alpha},1)\). In both cases, we have that \(w(\theta)\in[b_{i},p_{i})\). In (i), we obtain \(b_{i}=\max\{v(\Box\psi):u(\psi)<u_{b_{i-1}}\}\), whence \(e(\Box\theta)\leq b_{i}\leq w(\theta)\). In (ii), we have that \(w(\theta)\geq b_{0}=\alpha\). If \(u(\theta)=u_{\alpha}\), then \(e(\Box\theta)\leq\alpha\) and \(w(\theta)=\alpha\) from \(\#1\). And if \(u(\theta)>u_{\alpha}\), then \(w(\theta)>w(\alpha)\) (i.e., \(e(\Box\theta)\to_{\mathsf{G}}w(\theta)>w(\alpha)\)).
Moreover, if \(u(\theta)=u_{b_{i}}=u(\phi_{i})\), then since \(u_{b_{i}}<1\) and \(u(\theta)\leq u(\phi)\), we have \(w(\theta)=b_{i}=v(\Box\theta)\leq v(\lozenge\theta)\) by \(\#3^{\prime}\). Finally, if \(u(\theta)>u_{b_{i}}\), we have that \(u(\theta)>u(\phi_{i})\leq u(\phi)\), whence \(e(\lozenge\theta)>v(\Box\phi_{i})=b_{i}\) by \(\#2^{\mathsf{c}}\). Thus, \(w(\theta)<p_{i}\leq v(\lozenge\theta)\).
It is clear that in both cases,
\[\inf\{v(\Box\theta)\to_{\mathsf{G}}w(\theta):\theta\in\mathsf{Sf}^{\mathbf{0,1 }}(\tau)\}>w(\phi)\text{ and }\inf\{w(\theta)\to_{\mathsf{G}}v(\lozenge\theta):\theta\in\mathsf{Sf}^{ \mathbf{0,1}}(\tau)\}\geq p_{0}>\alpha\]
The result follows.
_Remark 3.1_.: Let us return once again to the proof of Lemma 3.1 and summarise its differences from [15, Claim 1]. First, our conditions \(\#1\)-\(\#4\) are weaker than the original ones. Second, since \(\#1\) states that \(u(\theta)>u(\phi)\) for \(e(\Box\theta)>\alpha\) (and not \(u(\theta)=1\)), we can obtain only \(\inf\{v(\Box\theta)\to_{\mathsf{G}}w(\theta):\theta\in\mathsf{Sf}^{\mathbf{0,1 }}(\tau)\}>w(\phi)\) and not \(\inf\{v(\Box\theta)\to_{\mathsf{G}}w(\theta):\theta\in\mathsf{Sf}^{\mathbf{0,1 }}(\tau)\}=1\) as originally.
**Lemma 3.2**.: _Let \(\alpha\), \(\varepsilon_{1}\), and \(\varepsilon_{2}\) be positive and let further \(v(\lozenge\phi)=\alpha\). Then, there is \(w\) s.t. \(w(\phi)\geq\alpha-\varepsilon_{1}\) and \(v\mathsf{R}^{\tau}w\geq\alpha-\varepsilon_{2}\)._
Proof.: Again, the proof follows that of [15, Claim 2]. We code the minimal requirements for \(w\) using a finite set of formulas \(\Xi_{\phi,v}\), to obtain \(u\in W\) satisfying those requirements, and then transform \(u\) into the required \(w\) constructing a suitable map of \([0,1]\) into itself.
We define
\[\begin{split}\Xi_{1}&=\{\theta:v(\lozenge\theta)<\alpha\text{ and }\theta\in\mathsf{Sf}^{\mathbf{0},\mathbf{1}}(\tau)\}\\ \Xi_{2}&=\{\theta_{2}\to\theta_{1}:\alpha>v(\lozenge\theta_{1})<v(\square\theta_{2})\text{ and }\theta_{1},\theta_{2}\in\mathsf{Sf}^{\mathbf{0},\mathbf{1}}(\tau)\}\\ \Xi_{3}&=\{(\theta_{1}\to\theta_{2})\to\theta_{1}:v(\lozenge\theta_{1})=v(\square\theta_{2})<\alpha\text{ and }\theta_{1},\theta_{2}\in\mathsf{Sf}^{\mathbf{0},\mathbf{1}}(\tau)\}\\ \Xi_{\phi,v}&=\Xi_{1}\cup\Xi_{2}\cup\Xi_{3}\end{split} \tag{8}\]
It is clear that \(e(\lozenge\xi)<\alpha\) for every \(\xi\in\Xi_{\phi,v}\) and that \(\Xi_{\phi,v}\neq\varnothing\) is finite. Thus, we have that \(\phi\nvdash_{\mathcal{H}\mathbf{K}\mathsf{bi}\mathsf{ci}^{\prime}}\bigvee \limits_{\xi\in\Xi}\xi\) via an application of (2), (3), \(\mathbf{K}\), and **nec** (this time for \(\lozenge\)). Again, the argument here is the same as in [15, Claim 2], so we omit it for the sake of brevity. Thus, there is a valuation \(u\) s.t. \(1\!\geq\!u(\phi)\!>\!u[\Xi_{\phi,v}]\) that evaluates all theorems at \(1\) (observe that we cannot evaluate \(\phi\) at \(1\) by default; theorems, on the other hand, are closed under \(\triangle\) necessitation and thus are evaluated at \(1\)). In addition, the following statements hold w.r.t. \(u\) (observe that in contrast to Lemma 3.1, we could preserve the original conditions on \(u\) from [15, Claim 2]).
1. If \(e(\lozenge\theta)<\alpha\), then \(u(\theta)<u(\phi)\leq 1\).
2. If \(\alpha>v(\lozenge\theta_{1})<v(\square\theta_{2})\), then \(u(\theta_{1})<u(\theta_{2})\).
3. If \(\alpha>v(\lozenge\theta_{1})\leq v(\square\theta_{2})\), then \(u(\theta_{1})\leq u(\theta_{2})\).
4. If \(u(\theta)=0\), then \(e(\square\theta)=0\).
5. If \(e(\lozenge\theta)=0\), then \(u(\theta)=0\).
We now proceed in a manner dual to that of Lemma 3.1. We define \(\mathsf{C}=\{v(\lozenge\theta):v(\lozenge\theta)\leq\alpha\text{ and }\theta\in\mathsf{Sf}^{\mathbf{0},\mathbf{1}}(\tau)\}\) and set \(u_{c}=\max\{u(\theta):v(\lozenge\theta)=c\}\) for every \(c\in\mathsf{C}\). It is clear that \(u_{0}=0\) (by \#\#5) and \(u_{\alpha}=\max\{u_{c}:c\in\mathsf{C}\}\). Now, define the following strictly increasing sequence
\[c_{0} =v(\lozenge\mathbf{0})=0\] \[c_{i+1} =\min\{c\in\mathsf{C}:c>c_{i}\text{ and }u_{c}>u_{c_{i}}\} \tag{9}\]
We choose \(\phi_{i}\)'s s.t. \(u_{c_{i}}=u(\phi_{i})\) and \(c_{i}=v(\lozenge\phi_{i})\). It is clear that the sequence stops at some \(c_{N}=\alpha\) since if \(c_{i}=v(\lozenge\phi_{i})<\alpha\), then \(u_{c_{i}}=u(\phi_{i})<u(\phi)\leq u_{\alpha}\leq 1\) from \#\#1 which implies the existence of \(c_{i+1}\).
Now fix \(\varepsilon>0\) s.t. \(\alpha-\varepsilon>c_{N-1}\) and define
\[q_{N-1} =(\alpha-\varepsilon)\vee_{\mathsf{G}}\max\{v(\square\theta):v( \square\theta)<c_{N}\}\] \[q_{i} =c_{i}\vee_{\mathsf{G}}\max\{v(\square\theta):v(\square\theta)<c_{ i+1}\} \tag{10}\]
This gives us two sequences:
\[0=c_{0}\leq q_{0}<c_{1}\leq q_{1}<\ldots<c_{N-1}\leq\alpha- \varepsilon\leq q_{N-1}<c_{N}=\alpha\text{ and }0=u_{c_{0}}<u_{c_{1}}<\ldots<u_{c_{N}}=u_{\alpha}\]
We now have two cases: (1) \(u(\phi)<1\) and (2) \(u(\phi)=1\). In the first case, we fix another \(\varepsilon^{\prime}>0\) s.t. \(\alpha-\varepsilon\leq q_{N-1}<\alpha-\varepsilon^{\prime}<c_{N}=\alpha\) and choose a strictly increasing function \(g:[0,1]\to[0,1]\) s.t.
\[\begin{aligned} g(0)&=0\\ g[(u_{c_{i}},u_{c_{i+1}}]]&=(q_{i},c_{i+1}]\quad(i<N-2)\\ g[(u_{c_{N-2}},u_{c_{N-1}})]&=(q_{N-1},\alpha-\varepsilon^{\prime})\\ g(u(\phi))&=\alpha-\varepsilon^{\prime}\\ g[(u(\phi),u_{\alpha}]]&=\left(\alpha-\varepsilon^{\prime},\frac{2\alpha-\varepsilon^{\prime}}{2}\right]\\ g[(u_{\alpha},1)]&=\left(\frac{2\alpha-\varepsilon^{\prime}}{2},1\right)\\ g(1)&=1\end{aligned}\tag{11}\]
Note that we cannot always send \(u_{\alpha}\) to \(\alpha\) since it is possible that \(e(\lozenge\phi)=1\) but \(u(\phi)\neq 1\) always (if, for example, \(\phi=p\wedge\sim\triangle p\)). Now, we can set \(w=g\circ u\). It is clear that \(w(\phi)\geq\alpha-\varepsilon^{\prime}\). It remains only to show that \(v\mathsf{R}^{\tau}w\geq\alpha-\varepsilon\). This is done exactly as in [15, Claim 2].
Case 1.1 If \(e(\lozenge\theta)\geq\alpha\), then, trivially, \((w(\theta)\to_{\mathsf{G}}v(\lozenge\theta))\geq\alpha>\alpha-\varepsilon\).
Case 1.2 If \(e(\lozenge\theta)<\alpha\), then \(u(\theta)<u(\phi)\leq 1\) from ##1 and we have two cases: (i) \(u(\theta)\in(u_{c_{i}},u_{c_{i+1}}]\) or (ii) \(u(\theta)=0\). In the first case, \(w(\theta)\in(q_{i},c_{i+1}]\), and, moreover, \(c_{i+1}=v(\lozenge\phi_{i+1})=\min\{v(\lozenge\psi):u(\psi)>u_{c_{i}}\}\). Hence, \(e(\lozenge\theta)\geq c_{i+1}\geq w(\theta)\). In the second case, using ##4, we have \(w(\theta)=0\), whence \(e(\square\theta)=0\).
Case 1.3 If \(v(\square\theta)\geq\alpha\), then \(v(\square\theta)>c_{N-1}=v(\lozenge\phi_{N-1})\), whence \(u(\theta)>u(\phi_{N-1})\) (from ##2) and thus, \(w(\theta)>q_{N-1}\geq\alpha-\varepsilon\) (from (11)). Hence, \((v(\square\theta)\to_{\mathsf{G}}w(\theta))>\alpha-\varepsilon\).
Case 1.4 If \(e(\square\theta)<\alpha\), then \(c_{i}\leq v(\square\theta)\leq q_{i}<c_{i+1}\) and we have two options: (i) \(e(\square\theta)=c_{i}\) and (ii) \(e(\square\theta)>c_{i}\). If (i), we have \(e(\square\theta)=v(\lozenge\phi_{i})\), whence \(u_{c_{i}}=u(\phi_{i})\leq u(\theta)\) using ##3, and thus, \(c_{i}\leq w(\theta)\) using (11). Hence, \(e(\square\theta)\leq w(\theta)\). If (ii), then \(u_{c_{i}}<u(\theta)\) using ##2, whence, \(q_{i}\leq w(\theta)\) from (11) and thus, again \(e(\square\theta)\leq w(\theta)\).
Now we can see that cases 1.1-1.4 imply
\[\inf\{w(\theta)\to_{\mathsf{G}}v(\lozenge\theta):\theta\in\mathsf{Sf^{0,1}}( \tau)\}\geq\alpha\text{ and }\inf\{v(\square\theta)\to_{\mathsf{G}}w(\theta)\}>\alpha-\varepsilon\]
In the second case, we define \(g\) exactly as in [15, Claim 2]:
\[g(0) =0\] \[g[(u_{c_{i}},u_{c_{i+1}}]] =(q_{i},c_{i+1})\quad(i<N-2)\] \[g[(u_{c_{N-1}},1)] =(q_{N-1},\alpha) \tag{12}\] \[g(1) =1\]
In this case, we have that \(w(\phi)=1\geqslant\alpha-\varepsilon^{\prime}\) for any \(\varepsilon^{\prime}>0\). We can also show that \(v\mathsf{R}^{\tau}w\geq\alpha-\varepsilon\) in the same way as in the first case. The result follows.
_Remark 3.2_.: Again, let us survey the differences between our proof and [15, Claim 2]. First, it is not always the case that we can construct \(w\) s.t. \(w(\phi)=1\). This might fail even if \(v(\lozenge\phi)=1\). This is why \(u_{\alpha}\) is only the greatest among the \(u_{c}\)'s, but is not necessarily \(1\) and is not even necessarily equal to \(u(\phi)\). Furthermore, we cannot always send \(u(\phi)\) (or \(u_{\alpha}\)) back to \(\alpha\) itself and have to consider the case where we find an arbitrarily small \(\varepsilon^{\prime}\) s.t. \(q_{N-1}<g(u(\phi))=\alpha-\varepsilon^{\prime}\leq g(u_{\alpha})<\alpha\) as shown in (11). This makes \(g\) more fine-grained than in the original proof.
We are now ready to prove the truth lemma.
**Lemma 3.3**.: _For every \(\phi\in\mathsf{Sf^{0,1}}(\tau)\) and \(u\in W^{\tau}\), it holds that \(u(\phi)=e^{\tau}(\phi,u)\)._
Proof.: We proceed by induction on \(\phi\). The basis case of propositional variables holds by Definition 3.3; the cases of propositional connectives can be obtained via a straightforward application of the induction hypotheses. Finally, let us consider the modal cases.
If \(\phi=\square\psi\), we have two options. First, \(u(\square\psi)=1\). By Definition 3.3, \(u\mathsf{R}^{\tau}u^{\prime}\leq u(\square\psi)\to_{\mathsf{G}}u^{\prime}(\psi)\), whence \(u(\square\psi)\leq\inf\{u\mathsf{R}^{\tau}u^{\prime}\to_{\mathsf{G}}u^{\prime}(\psi):u^{\prime}\in W^{\tau}\}\). It is now immediate that \(u(\square\psi)=\inf\{u\mathsf{R}^{\tau}u^{\prime}\to_{\mathsf{G}}u^{\prime}(\psi):u^{\prime}\in W^{\tau}\}\). Otherwise, if \(u(\square\psi)<1\), we obtain the result by Lemma 3.1.
If \(\phi=\lozenge\psi\), we proceed in the dual manner: if \(u(\lozenge\psi)=0\), the result is immediate since \(u\mathsf{R}^{\tau}u^{\prime}\leq u^{\prime}(\psi)\to_{\mathsf{G}}u(\lozenge\psi)\). If \(u(\lozenge\psi)>0\), we use Lemma 3.2.
The result follows.
**Theorem 3.1**.: _Let \(\Gamma\cup\{\phi\}\subseteq\mathcal{L}_{\triangle,\square,\Diamond}\) be finite. Then \(\Gamma\models_{\mathbf{KbiG}^{\mathsf{f}}}\phi\) iff \(\Gamma\vdash_{\mathcal{H}\mathbf{KbiG}^{\mathsf{f}}}\phi\)._
Proof.: The soundness can be obtained by a routine check of axioms' and rules' validity. The completeness follows from Lemma 3.3 and (3).
We finish the section by establishing the strong completeness result. We adapt the proof from [15] in the same manner that we did for the crisp \(\mathbf{KbiG}\) in [8, Theorem 3.2].
**Theorem 3.2**.: \(\mathcal{H}\mathbf{KbiG}^{\mathsf{f}}\) _is strongly complete: for any \(\Gamma\cup\{\phi\}\subseteq\mathcal{L}_{\triangle,\square,\Diamond}\), it holds that \(\Gamma\models_{\mathbf{KbiG}^{\mathsf{f}}}\phi\) iff \(\Gamma\vdash_{\mathcal{H}\mathbf{KbiG}^{\mathsf{f}}}\phi\)._
Proof.: The proof follows [15, Theorem 3.1]. The only two differences are that we need to account for \(\triangle\) and that the \(\mathbf{KbiG}^{\mathsf{f}}\) entailment \(\Gamma\models_{\mathbf{KbiG}^{\mathsf{f}}}\chi\) is defined via the order on \([0,1]\). That is, if the entailment is refuted by \(e\), then \(\inf\{e(\phi,c):\phi\in\Gamma\}>e(\chi,c)\) for some \(c\in\mathfrak{F}\). This, in turn, is equivalent to
\[\exists d\!\in\!(0,1]\ \forall\phi\in\Gamma:e(\phi,c)\geq d\ \text{but}\ e(\chi,c)<d \tag{13}\]
Now let \(\Gamma\cup\{\phi\}\subseteq\mathcal{L}_{\triangle,\square,\Diamond}\) and \(\Gamma\nvdash_{\mathcal{H}\mathbf{KbiG}^{\mathsf{f}}}\phi\). We consider the classical first-order theory \(\Gamma^{*}\) whose signature contains two unary predicates \(W\) and \(P\), one binary predicate \(<\), binary functions \(\circ\) and \(\mathfrak{s}\), a unary function \(\blacktriangle\), constants \(0\), \(1\), \(c\), \(d\), and a function symbol \(f_{\theta}\) for each \(\theta\in\mathcal{L}_{\triangle,\square,\Diamond}\). Intuitively, \(W(x)\) stands for '\(x\) is a state'; \(P(x)\) for '\(x\) is a number'; \(f_{\theta}(x)\) for 'the value of \(\theta\) in \(x\)'; \(<\) is going to be the order on numbers. Constants \(c\) and \(d\) stand for the state where the entailment is refuted and the value that separates \(\Gamma\) and \(\chi\) -- cf. (13), respectively. \(\circ\) is used to define the value of the Gödel implication; \(\blacktriangle\) is the counterpart of \(\triangle\); \(\mathfrak{s}\) is the relation between states.
We can now formalise the \(\mathbf{KbiG}^{\mathsf{f}}\) semantics in the classical first-order logic as follows.
* \(\forall x\!\!\sim\!(W(x)\wedge P(x))\)
* \(\forall x(W(x)\vee\sim\!\!W(x))\)
* \(P(d)\)
* '\(\langle P,<\rangle\) is a strict linear order s.t. \(0<d\leq 1\), \(0\) and \(1\) are the minimum and the maximum of \(\langle P,<\rangle\)'.
* \(\forall x\forall y((W(x)\wedge W(y))\to P(\mathfrak{s}(x,y)))\)
* \(\forall x\forall y((P(x)\wedge P(y))\to((x\leq y\wedge x\circ y=1)\vee(x>y \wedge x\circ y=y)))\)
* \(\forall x(P(x)\to((x=1\land\blacktriangle(x)=1)\vee(x<1\land\blacktriangle(x)=0)))\)
* For each \(\theta,\theta^{\prime}\in\mathcal{L}_{\triangle,\square,\Diamond}\), we add the following formulas.
* \(\forall x(W(x)\to P(f_{\theta}(x)))\)
* \(\forall x(W(x)\to f_{\sim\theta}(x)=(f_{\theta}(x)\circ 0))\)
* \(\forall x(W(x)\to f_{\triangle\theta}(x)=\blacktriangle(f_{\theta}(x)))\)
* \(\forall x(W(x)\to f_{\theta\wedge\theta^{\prime}}(x)=\min\{f_{\theta}(x),f_{ \theta^{\prime}}(x)\})\)
* \(\forall x(W(x)\to f_{\theta\vee\theta^{\prime}}(x)=\max\{f_{\theta}(x),f_{ \theta^{\prime}}(x)\})\)
* \(\forall x(W(x)\to f_{\theta\rightarrow\theta^{\prime}}(x)=f_{\theta}(x)\circ f _{\theta^{\prime}}(x))\)
* \(\forall x(W(x)\to f_{\square\theta}(x)=\inf\limits_{y}\{\mathfrak{s}(x,y) \circ f_{\theta}(y)\})\)
* \(\forall x(W(x)\to f_{\Diamond\theta}(x)=\sup\limits_{y}\{\min\{\mathfrak{s}(x,y),f_{\theta}(y)\}\})\)
* For each \(\gamma\in\Gamma\), we add \(f_{\gamma}(c)\geq d\).
* We also add \(W(c)\wedge(f_{\phi}(c)<d)\).
The rest of the proof is identical to that in [15]. For each finite subset \(\Gamma^{-}\) of \(\Gamma^{*}\), we let \(\mathcal{L}^{-}_{\Delta,\square,\Diamond}=\{\theta:f_{\theta}\text{ occurs in }\Gamma^{-}\}\). Since \(\mathcal{L}^{-}_{\Delta,\square,\Diamond}\cap\Gamma\not\vdash_{\mathcal{H}\mathbf{KbiG}^{\mathsf{f}}}\phi\) by assumption, Theorem 3.1 entails that there is a crisp pointed model \(\langle\mathfrak{M},c\rangle\) with \(\mathfrak{M}=\langle W,\mathfrak{s}^{\Gamma^{-}},e^{\Gamma^{-}}\rangle\) being such that \(e^{\Gamma^{-}}(\phi,c)<d\) and \(e^{\Gamma^{-}}(\theta,c)\geq d\) for every \(\theta\in\Gamma\cap\Gamma^{-}\). Thus, the following structure
\[\langle W\uplus[0,1],W,[0,1],<,0,1,c,d,\circ,\blacktriangle,\mathfrak{s}^{ \Gamma^{-}},\{f_{\theta}\}_{\theta\in\mathcal{L}_{\Delta,\square,\Diamond}}\rangle\]
is a model of \(\Gamma^{-}\). Now, by compactness and the downward Löwenheim-Skolem theorem, \(\Gamma^{*}\) has a countable model
\[\mathfrak{M}^{*}=\langle B,W,P,<,0,1,c,d,\circ,\blacktriangle,\mathfrak{s},\{f_{\theta}\}_{\theta\in\mathcal{L}_{\Delta,\square,\Diamond}}\rangle\]
Now, we can embed \(\langle P,<\rangle\) into \(\langle\mathbb{Q}\cap[0,1],<\rangle\) preserving \(0\) and \(1\) as well as all infima and suprema. Hence, we may w.l.o.g. assume that \(\mathfrak{s}\) is crisp and the ranges of the \(f_{\theta}\)'s are contained in \([0,1]\). Then, it is straightforward to verify that \(\mathfrak{M}=\langle W,S,e\rangle\), where \(e(\theta,w)=f_{\theta}(w)\) for all \(w\in W\) and \(\theta\in\mathcal{L}_{\Delta,\square,\Diamond}\), is a crisp \(\mathbf{KbiG}\) model with a distinguished world \(c\) such that \(e[\Gamma,c]\geq d\) and \(e(\phi,c)<d\) for some \(0<d\leq 1\). Hence, \(\inf\{e(\gamma,c):\gamma\in\Gamma\}>e(\phi,c)\), and thus, \(\Gamma\not\models_{\mathbf{KbiG}^{\mathsf{f}}}\phi\).
## 4 \(\mathbf{KG}^{2}\) and \(\mathsf{G}^{2}_{\blacksquare,\blacklozenge}\) -- paraconsistent relatives of \(\mathbf{KbiG}\)
As we have mentioned in the introduction, we will be mainly concerned with paraconsistent relatives of \(\mathbf{KbiG}\) defined on _bi-relational_ frames of the form \(\mathfrak{F}=\langle W,R^{+},R^{-}\rangle\) where \(R^{+}\) and \(R^{-}\) are (possibly) fuzzy relations on \(W\). Let us quickly recall the motivation for the bi-relational frames and informational modalities that was outlined in [9].
We begin with the interpretations of modalities. If we interpret the states in a Kripke frame as sources that refer to one another and the accessibility relations as the degree of trust, then all modalities correspond to different strategies of aggregating the information.
Namely, \(\square\) and \(\Diamond\) stand for the 'pessimistic' and 'optimistic' aggregations: the positive support of \(\square\phi\) is calculated using the infima of the positive supports of \(\phi\) and thus, \(e_{1}(\square\phi,w)<1\) as long as there exists \(w^{\prime}\) that does not completely positively support \(\phi\) (i.e., \(e_{1}(\phi,w^{\prime})<1\)) and is trustworthy enough for \(w\) (i.e., \(wR^{+}w^{\prime}>e_{1}(\phi,w^{\prime})\)). The negative support of \(\square\phi\) is calculated using the suprema of the negative supports of \(\phi\), i.e., it suffices to find a sufficiently trusted source \(w^{\prime\prime}\) with \(wR^{-}w^{\prime\prime}>0\) that gives \(\phi\) some non-zero negative support (\(e_{2}(\phi,w^{\prime\prime})>0\)) for \(e_{2}(\square\phi,w)>0\). Thus, \(\square\phi\) represents the search for trustworthy refutations of \(\phi\) (and only if they are not found can \(\square\phi\) be evaluated at \((1,0)\)). Dually, since \(\Diamond\phi\) uses suprema of positive supports and infima of negative supports, it can be seen as the search of trusted confirmations of \(\phi\) (and if these are not found \(\Diamond\phi\) is evaluated at \((0,1)\)).
The informational modalities stand for the'sceptical' (\(\blacksquare\)) and 'credulous' (\(\blacklozenge\)) aggregations. Their support of truth is defined in the same manner as that of \(\square\) and \(\Diamond\). The support of falsity, however, is the same as the support of truth: \(e_{2}(\blacksquare\phi,w)\) is calculated using the _infima_ of \(e_{2}(\phi,w^{\prime})\) and \(e_{2}(\blacklozenge\phi,w)\) uses the _suprema_. Here \(w\) either looks for _trusted rejections4_ (using \(\blacksquare\)) or (using \(\blacklozenge\)) for _trusted confirmations of both positive and negative supports_ of \(\phi\).
Footnote 4: We differentiate between a _rejection_ which we treat as _lack of support_ and a _denial, disproof, refutation, counterexample,_ etc. which we interpret as the _negative support_.
Observe that the independence of the support of truth from the support of falsity is crucial to differentiate sceptical (credulous) and pessimistic (optimistic) aggregations. Indeed, if we did not treat them separately, sceptical and pessimistic (and, likewise, credulous and optimistic) aggregations would coincide.
In what follows, we use \(R^{+}\) to compute the support of the truth of the modal formulas and \(R^{-}\) for the support of falsity. As modalities represent aggregation from different sources, it is reasonable to assume that one can trust a confirmation from a source more or less than a denial. For example, if we read a sensationalistic newspaper, we might be less inclined to believe its assertions than refutations; likewise, if we are listening to an extremely sceptical person, we might believe their confirmations more than their denials.
### Semantics
The languages \(\mathcal{L}^{\neg}_{\Delta,\square,\Diamond}\) and \(\mathcal{L}^{\neg}_{\Delta,\blacksquare,\blacklozenge}\) of \(\mathbf{KG}^{2}\) and \(\mathsf{G}^{2}_{\blacksquare,\blacklozenge}\) are defined via the following grammars.
\[\mathcal{L}^{\neg}_{\Delta,\square,\Diamond}\ni\phi\coloneqq p\in \mathtt{Prop}\mid\neg\phi\mid\sim\!\phi\mid\triangle\phi\mid(\phi\wedge\phi) \mid(\phi\vee\phi)\mid(\phi\to\phi)\mid\Box\phi\mid\Diamond\phi\] \[\mathcal{L}^{\neg}_{\Delta,\blacksquare,\blacklozenge}\ni\phi\coloneqq p\in \mathtt{Prop}\mid\neg\phi\mid\sim\!\phi\mid\triangle\phi\mid(\phi\wedge\phi) \mid(\phi\vee\phi)\mid(\phi\to\phi)\mid\blacksquare\phi\mid\blacklozenge\phi\]
Since the languages differ only in their modal operators and are interpreted on the same classes of frames, we give their semantics in one definition below.
**Definition 4.1** (Semantics of \(\mathbf{KG}^{2}\) and \(\mathsf{G}^{2}_{\blacksquare,\blacklozenge}\)).: A bi-relational paraconsistent model is a tuple \(\mathfrak{M}\!=\!\langle W,R^{+},R^{-},e_{1},e_{2}\rangle\) with \(\langle W,R^{+},R^{-}\rangle\) being a bi-relational frame and \(e_{1},e_{2}\!:\mathtt{Prop}\times W\to[0,1]\).
The valuations are extended on complex propositional formulas as follows.
\[\begin{array}{rclrcl}e_{1}(\neg\phi,w)&=&e_{2}(\phi,w)&e_{2}(\neg\phi,w)&=&e_{1}(\phi,w)\\ e_{1}(\phi\wedge\phi^{\prime},w)&=&e_{1}(\phi,w)\wedge_{\mathsf{G}}e_{1}(\phi^{\prime},w)&e_{2}(\phi\wedge\phi^{\prime},w)&=&e_{2}(\phi,w)\vee_{\mathsf{G}}e_{2}(\phi^{\prime},w)\\ e_{1}(\phi\vee\phi^{\prime},w)&=&e_{1}(\phi,w)\vee_{\mathsf{G}}e_{1}(\phi^{\prime},w)&e_{2}(\phi\vee\phi^{\prime},w)&=&e_{2}(\phi,w)\wedge_{\mathsf{G}}e_{2}(\phi^{\prime},w)\\ e_{1}(\phi\to\phi^{\prime},w)&=&e_{1}(\phi,w)\to_{\mathsf{G}}e_{1}(\phi^{\prime},w)&e_{2}(\phi\to\phi^{\prime},w)&=&e_{2}(\phi^{\prime},w)\prec_{\mathsf{G}}e_{2}(\phi,w)\\ e_{1}(\sim\!\phi,w)&=&\sim_{\mathsf{G}}e_{1}(\phi,w)&e_{2}(\sim\!\phi,w)&=&1\prec_{\mathsf{G}}e_{2}(\phi,w)\\ e_{1}(\triangle\phi,w)&=&\triangle_{\mathsf{G}}e_{1}(\phi,w)&e_{2}(\triangle\phi,w)&=&\sim_{\mathsf{G}}\sim_{\mathsf{G}}e_{2}(\phi,w)\end{array}\]
The modal formulas are interpreted as follows.
\[\begin{array}{rclrclrclrcl}e_{1}(\Box\phi,w)&=&\inf_{w^{\prime}\in W}\{wR^ {+}w^{\prime}\to_{\mathsf{G}}e_{1}(\phi,w^{\prime})\}&e_{2}(\Box\phi,w)&=& \sup_{w^{\prime}\in W}\{wR^{-}w^{\prime}\wedge_{\mathsf{G}}e_{2}(\phi,w^{ \prime})\}\\ e_{1}(\Diamond\phi,w)&=&\sup_{w^{\prime}\in W}\{wR^{+}w^{\prime}\wedge_{ \mathsf{G}}e_{1}(\phi,w^{\prime})\}&e_{2}(\Diamond\phi,w)&=&\inf_{w^{\prime} \in W}\{wR^{-}w^{\prime}\to_{\mathsf{G}}e_{2}(\phi,w^{\prime})\}\\ e_{1}(\blacksquare\phi,w)&=&\inf_{w^{\prime}\in W}\{wR^{+}w^{\prime}\!\to_{\mathsf{G }}e_{1}(\phi,w^{\prime})\}&e_{2}(\blacksquare\phi,w)&=&\inf_{w^{\prime}\in W}\{wR^{ -}w^{\prime}\!\to_{\mathsf{G}}e_{2}(\phi,w^{\prime})\}\\ e_{1}(\blacklozenge\phi,w)&=&\sup_{w^{\prime}\in W}\{wR^{+}w^{\prime}\! \wedge_{\mathsf{G}}e_{1}(\phi,w^{\prime})\}&e_{2}(\blacklozenge\phi,w)&=&\sup _{w^{\prime}\in W}\{wR^{-}w^{\prime}\!\wedge_{\mathsf{G}}e_{2}(\phi,w^{ \prime})\}\end{array}\]
We say that \(\phi\) is \(e_{1}\)_-valid on \(\mathfrak{F}\)_ (\(\mathfrak{F}\models^{+}\phi\)) iff for every model \(\mathfrak{M}\) on \(\mathfrak{F}\) and every \(w\in\mathfrak{M}\), it holds that \(e_{1}(\phi,w)=1\). \(\phi\) is \(e_{2}\)_-valid on \(\mathfrak{F}\)_ (\(\mathfrak{F}\models^{-}\phi\)) iff for every model \(\mathfrak{M}\) on \(\mathfrak{F}\) and every \(w\in\mathfrak{M}\), it holds that \(e_{2}(\phi,w)=0\). \(\phi\) is _strongly valid on \(\mathfrak{F}\)_ (\(\mathfrak{F}\models\phi\)) iff it is \(e_{1}\) and \(e_{2}\)-valid.
\(\Gamma\)_entails_\(\chi\) (on \(\mathfrak{F}\)) iff for every model \(\mathfrak{M}\) (on \(\mathfrak{F}\)) and every \(w\in\mathfrak{M}\), it holds that
\[\inf\{e_{1}(\phi,w):\phi\in\Gamma\}\leq e_{1}(\chi,w)\text{ and }\sup\{e_{2}(\phi,w):\phi\in\Gamma\}\geq e_{2}(\chi,w)\]
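To make the \(\inf\)/\(\sup\) clauses of Definition 4.1 concrete, here is a small Python sketch that evaluates \(e_{1}\) and \(e_{2}\) on a finite bi-relational model. It is only an illustration of the definition: the tuple encoding of formulas, the operator tags, and the two-state model at the end are our own assumptions and do not come from the text.

```python
# Gödel operations on [0,1] (cf. the propositional clauses of Definition 4.1).
def imp_g(a, b):   return 1.0 if a <= b else b        # a ->_G b
def coimp_g(a, b): return 0.0 if a <= b else a        # a -<_G b (co-implication)
def neg_g(a):      return imp_g(a, 0.0)               # ~a
def delta_g(a):    return 1.0 if a == 1.0 else 0.0    # Baaz Delta

def e(phi, w, W, Rp, Rm, e1, e2):
    """Return (support of truth, support of falsity) of phi at world w."""
    k = phi[0]
    if k == 'var':
        return e1[(phi[1], w)], e2[(phi[1], w)]
    if k == 'neg':                                     # paraconsistent negation
        t, f = e(phi[1], w, W, Rp, Rm, e1, e2)
        return f, t
    if k == 'sim':                                     # Gödel negation ~
        t, f = e(phi[1], w, W, Rp, Rm, e1, e2)
        return neg_g(t), coimp_g(1.0, f)
    if k == 'delta':                                   # Delta
        t, f = e(phi[1], w, W, Rp, Rm, e1, e2)
        return delta_g(t), neg_g(neg_g(f))
    if k in ('and', 'or', 'imp'):
        (t1, f1), (t2, f2) = (e(x, w, W, Rp, Rm, e1, e2) for x in phi[1:])
        if k == 'and': return min(t1, t2), max(f1, f2)
        if k == 'or':  return max(t1, t2), min(f1, f2)
        return imp_g(t1, t2), coimp_g(f2, f1)
    if k in ('box', 'dia', 'ibox', 'idia'):            # i... = informational modality
        pairs = [e(phi[1], v, W, Rp, Rm, e1, e2) for v in W]
        ti = [imp_g(Rp[(w, v)], t) for v, (t, _) in zip(W, pairs)]
        tc = [min(Rp[(w, v)], t) for v, (t, _) in zip(W, pairs)]
        fi = [imp_g(Rm[(w, v)], f) for v, (_, f) in zip(W, pairs)]
        fc = [min(Rm[(w, v)], f) for v, (_, f) in zip(W, pairs)]
        if k == 'box':  return min(ti), max(fc)
        if k == 'dia':  return max(tc), min(fi)
        if k == 'ibox': return min(ti), min(fi)
        return max(tc), max(fc)
    raise ValueError(k)

# A made-up two-state model used purely for illustration.
W = ['w0', 'w1']
Rp = {('w0', 'w0'): 0.0, ('w0', 'w1'): 0.8, ('w1', 'w0'): 0.0, ('w1', 'w1'): 0.0}
Rm = {('w0', 'w0'): 0.0, ('w0', 'w1'): 0.3, ('w1', 'w0'): 0.0, ('w1', 'w1'): 0.0}
e1 = {('p', 'w0'): 1.0, ('p', 'w1'): 0.6}
e2 = {('p', 'w0'): 0.0, ('p', 'w1'): 0.4}
print(e(('box', ('var', 'p')), 'w0', W, Rp, Rm, e1, e2))   # (0.6, 0.3)
print(e(('ibox', ('var', 'p')), 'w0', W, Rp, Rm, e1, e2))  # (0.6, 1.0)
```

On this toy model, \(e(\square p,w_{0})=(0.6,0.3)\) while \(e(\blacksquare p,w_{0})=(0.6,1)\): the pessimistic and sceptical aggregations agree on the support of truth but diverge on the support of falsity, which is exactly the point of keeping the two supports independent.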
One can see from Definition 4.1 that _the propositional_ fragment of both logics is, in fact, \(\mathsf{G}^{2}\), a paraconsistent expansion of Gödel logic with \(\neg^{5}\) introduced in [6] (cf. Fig. 2). In addition, the support of truth conditions of \(\mathbf{KG}^{2}\) coincide with the semantics of \(\mathbf{KbiG}\). Let us now recall and expand an example from [9] that illustrates the semantics of modalities.
_Example 4.1_.: A tourist \((t)\) wants to go to a restaurant and asks their two friends (\(f_{1}\) and \(f_{2}\)) to describe their impressions regarding the politeness of the staff \((s)\) and the quality of the desserts \((d)\). Of course, the friends' opinions are not always internally consistent, nor is it always the case that one or the other even noticed whether the staff was polite or was eating desserts. Furthermore, \(t\) trusts their friends to different degrees when it comes to their positive and negative opinions.
The first friend says that half of the staff was really nice but the other half is unwelcoming and rude, and that the desserts (except for the tiramisu and the soufflé) are tasty. The second friend, unfortunately, did not have the desserts at all. Furthermore, even though they praised the staff, they also said that the manager was quite obnoxious.
The situation is depicted in Fig. 4. Let us now look at how different aggregations work with this information. If the tourist is sceptical w.r.t. \(s\) and \(d\), they look for _trusted rejections_ of both positive
and negative supports of \(s\) and \(d\). Thus \(t\) uses the values of \(R^{+}\) and \(R^{-}\) as thresholds above which the information provided by the source does not count as a trusted enough rejection. I.e., to accept a rejection from a friend, it should be stronger than the degree of trust the tourist gives to the friend. We have that \(tR^{+}f_{1}>e_{1}(s,f_{1})\) but \(tR^{+}f_{2}\leq e_{1}(s,f_{2})\) (Fig. 4). Thus, only the account of the first friend counts as a rejection. In our case, we have \(e(\blacksquare s,t)=(0.5,0.5)\) and \(e(\blacksquare d,t)=(0,0)\).
On the other hand, if \(t\) is credulous, they look for _trusted confirmations_ of both positive and negative supports and use \(R^{+}\) and \(R^{-}\) as thresholds up to which they accept the information provided by the source. In particular, we have \(e(\blacklozenge s,t)=(0.7,0.2)\) and \(e(\blacklozenge d,t)=(0.7,0.3)\).
Similarly, if \(t\) is _pessimistic_ about the staff, they will use the account of \(f_{1}\) for the trusted _rejection_ of the _positive support_ of \(s\) and for the trusted confirmation of its _negative support_ (since \(t\) trusts the rejections provided by \(f_{1}\) more than those provided by \(f_{2}\)). Thus, \(e(\square s,t)=(0.5,0.5)\). If \(t\) is _optimistic_ about the desserts, then they will use the account of \(f_{1}\) for the trusted confirmation of the positive support of \(d\). However, \(f_{2}\)_completely rejects_ the negative support of \(d\), and \(t\) has positive trust in \(f_{2}\)'s rejections (\(tR^{-}f_{2}>0\)). Hence, the result of the optimistic aggregation is as follows: \(e(\lozenge d,t)=(0.7,0)\).
The main goal of this section is to study \(\mathbf{KG}^{2\pm\mathsf{f}}\) and \(\mathsf{G}^{2\pm}_{\blacksquare,\blacklozenge}\). We first show that \(\square\) and \(\Diamond\) are not interdefinable in \(\mathbf{KG}^{2\pm\mathsf{f}}\) (in fact, not even in \(\mathbf{KG}^{2\pm\mathsf{c}}\)), in contrast to the mono-relational logics.
**Proposition 4.1**.: \(\square\) _and \(\Diamond\) are not interdefinable in \(\mathbf{KG}^{2\pm\mathrm{c}}\) and hence, in \(\mathbf{KG}^{2\pm\mathrm{f}}\)._
Proof.: Denote with \(\mathscr{L}_{\square}\) and \(\mathscr{L}_{\Diamond}\) the \(\Diamond\)-free and \(\square\)-free fragments of \(\mathcal{L}^{\neg}_{\triangle,\square,\Diamond}\), respectively. To prove the statement, it suffices to find a pointed model \(\langle\mathfrak{M},w\rangle\) s.t. there is no \(\mathscr{L}_{\Diamond}\) formula that has the same value at \(w\) as \(\square p\) and vice versa.
Consider the model on Fig. 5. We have \(e(\square p,w_{0})=\left(\frac{3}{5},\frac{3}{4}\right)\) and \(e(\lozenge p,w_{0})=\left(\frac{4}{5},\frac{2}{4}\right)\).
It is easy to check that \(e(\phi,t)\in\{e(p,t),e(\neg p,t),(1,0),(0,1)\}\) for every \(\phi\in\mathcal{L}^{\neg}_{\triangle,\square,\Diamond}\) over one variable on the single-point irreflexive frame with a state \(t\). Thus, for every \(\chi\in\mathscr{L}_{\square}\) and every \(\psi\in\mathscr{L}_{\Diamond}\) it holds that
\[e(\square\chi,w_{0})\in\left\{(0;1),\left(\frac{3}{5};\frac{3}{4} \right),\left(\frac{1}{4};\frac{3}{5}\right),\left(\frac{3}{4};\frac{3}{5} \right),\left(\frac{3}{5};\frac{1}{4}\right),(1;0)\right\}=X\] \[e(\lozenge\psi,w_{0})\in\left\{(0;1),\left(\frac{4}{5};\frac{2} {4}\right),\left(\frac{2}{4};\frac{2}{5}\right),\left(\frac{2}{4};\frac{4}{5} \right),\left(\frac{2}{5};\frac{2}{4}\right),(1;0)\right\}=Y\]
Now, let \(X^{c}\) and \(Y^{c}\) be the closures of \(X\) and \(Y\) under propositional operations. It is clear6 that \(\left(\frac{3}{5},\frac{3}{4}\right)\notin Y^{c}\) and \(\left(\frac{4}{5};\frac{2}{4}\right)\notin X^{c}\). It is also easy to verify by induction that for all \(\chi^{\prime}\in\mathscr{L}_{\square}\) and \(\psi^{\prime}\in\mathscr{L}_{\Diamond}\), it holds that \(e(\chi^{\prime},w_{0})\in X^{c}\) and \(e(\psi^{\prime},w_{0})\in Y^{c}\). The result now follows.
Footnote 6: Note that the closure under Gödelian propositional operations of a given set \(\{x_{1},\ldots,x_{n}\}\subseteq[0,1]\) can only add \(0\) and \(1\) to this set but not an additional \(x^{\prime}\notin\{x_{1},\ldots,x_{n}\}\) s.t. \(0<x^{\prime}<1\).
Figure 4: \((x,y)\) stands for \(wR^{+}w^{\prime}=x,wR^{-}w^{\prime}=y\). \(R^{+}\) (resp., \(R^{-}\)) is interpreted as the tourist’s threshold of trust in positive (negative) statements by the friends.
Figure 5: All variables have the same values in all states exemplified by \(p\).
It is also easy to check (Fig. 6) that \(\Diamond\sim\sim p\to\sim\sim\Diamond p\) _is not strongly valid_ in \(\mathbf{KG}^{2\pm\mathsf{f}}\) (and, in fact, in \(\mathbf{KG}^{2\mathsf{f}}\) since the countermodel is _mono-relational_), even though it is \(\mathbf{KbiG}^{\mathsf{f}}\)-valid. Likewise, \(\square\mathbf{0}\lor\sim\square\mathbf{0}\) is \(\mathbf{KbiG}^{\mathsf{f}}\)-valid but not \(\mathbf{KG}^{2}\)-valid. Namely, \(e(\square\mathbf{0}\lor\sim\square\mathbf{0},w)=(1,\frac{1}{2})\) (Fig. 6).
On the other hand, \(\mathbf{KG}^{2\pm\mathfrak{c}}\)_does extend_\(\mathbf{KbiG}^{\mathfrak{c}}\).
**Lemma 4.1**.: _Let \(\mathfrak{M}=\langle W,R^{+},R^{-},e_{1},e_{2}\rangle\) be a crisp \(\mathbf{KG}^{2\pm}\) model. We define_
\[\mathfrak{M}^{*}=\langle W,(R^{+})^{*},(R^{-})^{*},e_{1}^{*},e_{2}^{*}\rangle\]
_to be as follows: \((R^{+})^{*}=R^{-}\), \((R^{-})^{*}=R^{+}\), \(e_{1}^{*}(p,w)=1-e_{2}(p,w)\), and \(e_{2}^{*}(p,w)=1-e_{1}(p,w)\)._
_Then, \(e(\phi,w)=(x,y)\) iff \(e^{*}(\phi,w)=(1-y,1-x)\)._
Proof.: We proceed by induction on \(\phi\). The basis case of propositional variables holds by the construction of \(\mathfrak{M}^{*}\). The cases of propositional connectives can be shown as in [6, Proposition 5]. We consider the case of \(\phi=\Box\psi\) since \(\phi=\Diamond\psi\) can be tackled in the same manner.
Let \(e(\Box\psi,w)=(x,y)\). Then \(\inf\{e_{1}(\psi,w^{\prime}):wR^{+}w^{\prime}\}=x\), and \(\sup\{e_{2}(\psi,w^{\prime}):wR^{-}w^{\prime}\}=y\). Now, we apply the induction hypothesis to \(\psi\), and thus if \(e(\psi,s)=(x^{\prime},y^{\prime})\), then \(e_{1}^{*}(\psi,s)=1-y^{\prime}\) and \(e_{2}^{*}(\psi,s^{\prime})=1-x^{\prime}\) for any \(s\in R^{+}(w)=(R^{-})^{*}(w)\) and \(s^{\prime}\in R^{-}(w)=(R^{+})^{*}(w)\). But then \(\inf\{e_{1}^{*}(\psi,w^{\prime}):w(R^{+})^{*}w^{\prime}\}=1-y\), and \(\sup\{e_{2}^{*}(\psi,w^{\prime}):w(R^{-})^{*}w^{\prime}\}=1-x\), as required.
**Proposition 4.2**.: _Let \(\phi\in\mathcal{L}_{\triangle,\Box,\Diamond}\). Then, \(\phi\) is \(\mathbf{KbiG}^{\mathfrak{c}}\)-valid iff it is \(\mathbf{KG}^{2\pm\mathfrak{c}}\)-valid._
Proof.: Since the \(e_{1}\)-conditions coincide with the semantics of \(\mathbf{KbiG}\), it is clear that if \(\phi\) is _not_\(\mathbf{KbiG}\) valid, then it is not \(\mathbf{KG}^{2\pm}\)-valid either. For the converse, it follows from Lemma 4.1 that if \(e_{2}(\phi,w)>0\) for some frame \(\mathfrak{F}=\langle W,R^{+},R^{-}\rangle\), \(w\in\mathfrak{F}\) and \(e_{2}\) on \(\mathfrak{F}\), then \(e_{1}^{*}(\phi,w)<1\). But \(\phi\) does not contain \(\neg\) and thus its support of falsity depends only on \(e_{2}\) and \(R^{-}\), whence \(e_{1}^{*}\) is a \(\mathbf{KbiG}\) valuation on \(\langle W,R^{-}\rangle\). Thus, \(\phi\) is not \(\mathbf{KbiG}^{\mathfrak{c}}\)-valid either.
It is also clear that \(\blacksquare\mathbf{1}\) and \(\sim\blacklozenge\mathbf{0}\) are not strongly \(\mathsf{G}^{2}_{\blacksquare,\blacklozenge}\)-valid. Still, both \(\blacksquare\) and \(\blacklozenge\) are regular in the following sense.
**Proposition 4.3**.: _Let \(\phi\to\phi^{\prime}\) and \(\chi\to\chi^{\prime}\) be strongly valid. Then \(\blacksquare\phi\to\blacksquare\phi^{\prime}\) and \(\blacklozenge\chi\to\blacklozenge\chi^{\prime}\) are strongly valid too._
Proof.: We prove only the \(\blacksquare\) case. Let \(\blacksquare\phi\to\blacksquare\phi^{\prime}\) be _not strongly valid_ in some frame \(\mathfrak{F}\). Then, there is a \(w\in\mathfrak{F}\) as well as \(e_{1}\) and \(e_{2}\) thereon s.t. \(e(\blacksquare\phi\to\blacksquare\phi^{\prime},w)\neq(1,0)\). Since \(e_{1}\)-conditions (support of truth) of \(\blacksquare\) coincide with the \(\mathbf{KbiG}^{\mathfrak{f}}\) semantics of \(\Box\) (and since \(\Box\) is obviously regular in \(\mathbf{KbiG}^{\mathfrak{f}}\)), it suffices to check the case when \(e_{2}(\blacksquare\phi\to\blacksquare\phi^{\prime},w)>0\).
We have that
\[e_{2}(\blacksquare\phi\to\blacksquare\phi^{\prime},w)>0 \text{iff }e_{2}(\blacksquare\phi,w)<e_{2}(\blacksquare\phi^{\prime},w)\] \[\text{iff }\inf_{w^{\prime}\in W}\{wR^{-}w^{\prime}\to_{ \mathfrak{G}}e_{2}(\phi)\}<\inf_{w^{\prime}\in W}\{wR^{-}w^{\prime}\to_{ \mathfrak{G}}e_{2}(\phi^{\prime})\}\] \[\text{then }\exists w^{\prime}\!\in\!R^{-}(w):e_{2}(\phi,w^{ \prime})<e_{2}(\phi^{\prime},w^{\prime})\] \[\text{then }e_{2}(\phi\to\phi^{\prime},w^{\prime})>0\]
The regularity of \(\blacklozenge\) can be tackled similarly.
### Embeddings into KbiG
In this section, we are going to construct faithful embeddings of \(\mathbf{KG}^{2}\) and \(\mathsf{G}^{2}_{\blacksquare,\blacklozenge}\) into \(\mathbf{KbiG}\). To do this in the bi-relational case, we introduce new modal operators that will enable us to convert formulas into \(\neg\) NNFs.
**Definition 4.2**.: The languages \(\overline{\mathcal{L}^{\neg}_{\triangle,\square,\Diamond}}\) and \(\overline{\mathcal{L}^{\neg}_{\triangle,\blacksquare,\blacklozenge}}\) expand \(\mathcal{L}^{\neg}_{\triangle,\square,\Diamond}\) and \(\mathcal{L}^{\neg}_{\triangle,\blacksquare,\blacklozenge}\), respectively, with two new modal operators each: \(\overline{\square}\) and \(\overline{\Diamond}\) (for \(\overline{\mathcal{L}^{\neg}_{\triangle,\square,\Diamond}}\)); \(\overline{\blacksquare}\) and \(\overline{\blacklozenge}\) (for \(\overline{\mathcal{L}^{\neg}_{\triangle,\blacksquare,\blacklozenge}}\)). Their semantics is given as follows.
\[\begin{array}{rclrcl}e_{1}(\overline{\square}\phi,w)&=&\inf\limits_{w^{\prime}\in W}\{wR^{-}w^{\prime}\to_{\mathsf{G}}e_{1}(\phi,w^{\prime})\}&e_{2}(\overline{\square}\phi,w)&=&\sup\limits_{w^{\prime}\in W}\{wR^{+}w^{\prime}\wedge_{\mathsf{G}}e_{2}(\phi,w^{\prime})\}\\ e_{1}(\overline{\Diamond}\phi,w)&=&\sup\limits_{w^{\prime}\in W}\{wR^{-}w^{\prime}\wedge_{\mathsf{G}}e_{1}(\phi,w^{\prime})\}&e_{2}(\overline{\Diamond}\phi,w)&=&\inf\limits_{w^{\prime}\in W}\{wR^{+}w^{\prime}\to_{\mathsf{G}}e_{2}(\phi,w^{\prime})\}\\ e_{1}(\overline{\blacksquare}\phi,w)&=&\inf\limits_{w^{\prime}\in W}\{wR^{-}w^{\prime}\to_{\mathsf{G}}e_{1}(\phi,w^{\prime})\}&e_{2}(\overline{\blacksquare}\phi,w)&=&\inf\limits_{w^{\prime}\in W}\{wR^{+}w^{\prime}\to_{\mathsf{G}}e_{2}(\phi,w^{\prime})\}\\ e_{1}(\overline{\blacklozenge}\phi,w)&=&\sup\limits_{w^{\prime}\in W}\{wR^{-}w^{\prime}\wedge_{\mathsf{G}}e_{1}(\phi,w^{\prime})\}&e_{2}(\overline{\blacklozenge}\phi,w)&=&\sup\limits_{w^{\prime}\in W}\{wR^{+}w^{\prime}\wedge_{\mathsf{G}}e_{2}(\phi,w^{\prime})\}\end{array}\]
_Remark 4.1_ (\(\neg\) NNFs in \(\overline{\mathcal{L}^{\neg}_{\triangle,\square,\Diamond}}\) and \(\overline{\mathcal{L}^{\neg}_{\triangle,\blacksquare,\blacklozenge}}\)).: It is now clear that \(\overline{\mathcal{L}^{\neg}_{\triangle,\square,\Diamond}}\) and \(\overline{\mathcal{L}^{\neg}_{\triangle,\blacksquare,\blacklozenge}}\) admit \(\neg\) NNFs. Namely, the equivalences below hold. (For the sake of convenience, we give the transformations for all connectives and modalities, including \(\prec\) and the double Gödel negation \(\sim\sim\).)
\[\begin{array}{llll}\neg\mathbf{1}\thickapprox\mathbf{0}&\neg\mathbf{0}\thickapprox\mathbf{1}&&\\ \neg\neg\phi\thickapprox\phi&\neg{\sim}\phi\thickapprox\mathbf{1}\prec\neg\phi&\neg\triangle\phi\thickapprox{\sim}{\sim}\neg\phi&\neg{\sim}{\sim}\phi\thickapprox\triangle\neg\phi\\ \neg(\phi\wedge\chi)\thickapprox\neg\phi\vee\neg\chi&\neg(\phi\vee\chi)\thickapprox\neg\phi\wedge\neg\chi&\neg(\phi\to\chi)\thickapprox\neg\chi\prec\neg\phi&\neg(\phi\prec\chi)\thickapprox\neg\chi\to\neg\phi\\ \neg\square\phi\thickapprox\overline{\Diamond}\neg\phi&\neg\Diamond\phi\thickapprox\overline{\square}\neg\phi&\neg\overline{\square}\phi\thickapprox\Diamond\neg\phi&\neg\overline{\Diamond}\phi\thickapprox\square\neg\phi\\ \neg\blacksquare\phi\thickapprox\overline{\blacksquare}\neg\phi&\neg\blacklozenge\phi\thickapprox\overline{\blacklozenge}\neg\phi&\neg\overline{\blacksquare}\phi\thickapprox\blacksquare\neg\phi&\neg\overline{\blacklozenge}\phi\thickapprox\blacklozenge\neg\phi\end{array}\tag{14}\]
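The rewriting in (14) can be phrased as a simple recursion. The sketch below (Python, same tuple encoding as in the earlier snippet) pushes \(\neg\) to the variables; the tags `'obox'`, `'odia'`, `'oibox'`, `'oidia'` for the overlined modalities are our own naming, and the convenience clause for \(\neg\sim\sim\phi\) is omitted since applying the \(\neg\sim\) clause twice already yields an equivalent formula.

```python
# Push the paraconsistent negation ¬ to the variables, following (14).
# Encoding: ('var', p), ('one',), ('zero',), ('neg', x), ('sim', x), ('delta', x),
# ('and', x, y), ('or', x, y), ('imp', x, y), ('coimp', x, y),
# ('box', x), ('dia', x), ('obox', x), ('odia', x), and the informational
# modalities 'ibox'/'idia'/'oibox'/'oidia' ('o...' marks the overlined operator).

DUAL = {'box': 'odia', 'dia': 'obox', 'obox': 'dia', 'odia': 'box',
        'ibox': 'oibox', 'idia': 'oidia', 'oibox': 'ibox', 'oidia': 'idia'}

def nnf(phi):
    kind = phi[0]
    if kind != 'neg':
        if kind in ('var', 'one', 'zero'):
            return phi
        return (kind,) + tuple(nnf(x) for x in phi[1:])
    psi = phi[1]
    k = psi[0]
    if k == 'var':   return phi                          # ¬p is a literal
    if k == 'one':   return ('zero',)
    if k == 'zero':  return ('one',)
    if k == 'neg':   return nnf(psi[1])                  # ¬¬χ ≈ χ
    if k == 'sim':   return ('coimp', ('one',), nnf(('neg', psi[1])))
    if k == 'delta': return ('sim', ('sim', nnf(('neg', psi[1]))))
    if k == 'and':   return ('or', nnf(('neg', psi[1])), nnf(('neg', psi[2])))
    if k == 'or':    return ('and', nnf(('neg', psi[1])), nnf(('neg', psi[2])))
    if k == 'imp':   return ('coimp', nnf(('neg', psi[2])), nnf(('neg', psi[1])))
    if k == 'coimp': return ('imp', nnf(('neg', psi[2])), nnf(('neg', psi[1])))
    if k in DUAL:    return (DUAL[k], nnf(('neg', psi[1])))
    raise ValueError(k)

# ¬◊(p ∧ ∼q) rewrites to the overlined box of (¬p ∨ (1 ≺ ¬q)).
print(nnf(('neg', ('dia', ('and', ('var', 'p'), ('sim', ('var', 'q')))))))
```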
Now, since the transformation into an NNF requires the introduction of a new pair of modalities, we will need to construct our embeddings not into the mono-relational \(\mathbf{KbiG}\) but into the _bi-relational_ \(\mathbf{KbiG}\), which we will denote \(\mathbf{KbiG}(2)\).7 The language of \(\mathbf{KbiG}(2)\), \(\mathcal{L}_{\triangle,\square,\Diamond}(2)\), contains two pairs of modalities: \(\square_{1}\), \(\Diamond_{1}\), \(\square_{2}\), and \(\Diamond_{2}\). It is also clear that the axiomatisations of both \(\mathbf{KbiG}^{\mathsf{f}}(2)\) and \(\mathbf{KbiG}^{\mathsf{c}}(2)\) can be obtained from \(\mathcal{H}\mathbf{KbiG}^{\mathsf{f}}\) and \(\mathcal{H}\mathbf{KbiG}^{\mathsf{c}}\) by replicating the modal axioms and rules for \(\square_{2}\) and \(\Diamond_{2}\). To construct these embeddings, we are going to reduce every \(\overline{\mathcal{L}^{\neg}_{\triangle,\square,\Diamond}}\) or \(\overline{\mathcal{L}^{\neg}_{\triangle,\blacksquare,\blacklozenge}}\) formula \(\phi\) not to one but to _two_ \(\mathcal{L}_{\triangle,\square,\Diamond}(2)\) formulas: \(\phi^{*}\) and \(\phi^{\partial}\). Moreover, we will treat \(\prec\) as a basic connective to make the size of embeddings linear. Since bi-relational fuzzy frames constitute the largest class of frames, these embeddings will suffice for all paraconsistent modal logics in Fig. 2.
Footnote 7: In what follows, we will sometimes call \(\mathbf{KbiG}\) 'mono-relational \(\mathbf{KbiG}\)' and \(\mathbf{KbiG}(2)\) 'bi-relational \(\mathbf{KbiG}\)'. Since \(\square\) and \(\Diamond\) are not interdefinable in \(\mathbf{KbiG}\)[7, Proposition 3] even for crisp frames, and since Gödel modal logics in the language with \(\square\) and \(\Diamond\) are called 'bi-modal', a proper moniker would be 'tetra-modal'. Despite this, we choose 'bi-relational' to designate that each pair of modalities corresponds to one relation of the two.
**Definition 4.3**.: Let \(\phi\in\overline{\mathcal{L}^{\neg}_{\triangle,\square,\Diamond}}\cup\overline{\mathcal{L}^{\neg}_{\triangle,\blacksquare,\blacklozenge}}\) be in \(\neg\) NNF. Then \(\phi^{*}\) is the result of replacing every literal \(\neg p\) occurring in \(\phi\) with a new variable \(p^{*}\). Given \(\chi^{*}\in\overline{\mathcal{L}^{\neg}_{\triangle,\square,\Diamond}}\cup\overline{\mathcal{L}^{\neg}_{\triangle,\blacksquare,\blacklozenge}}\), we define \(\chi^{\partial}\) as follows.
\[\begin{array}{ll}p^{\partial}=p^{*}&(p^{*})^{\partial}=p\\ \mathbf{1}^{\partial}=\mathbf{0}&\mathbf{0}^{\partial}=\mathbf{1}\\ (\sim\chi)^{\partial}=\mathbf{1}\prec\chi^{\partial}&(\triangle\chi)^{\partial}=\sim\sim\chi^{\partial}\\ (\chi_{1}\wedge\chi_{2})^{\partial}=\chi_{1}^{\partial}\vee\chi_{2}^{\partial}&(\chi_{1}\vee\chi_{2})^{\partial}=\chi_{1}^{\partial}\wedge\chi_{2}^{\partial}\\ (\chi_{1}\to\chi_{2})^{\partial}=\chi_{2}^{\partial}\prec\chi_{1}^{\partial}&(\chi_{1}\prec\chi_{2})^{\partial}=\chi_{2}^{\partial}\to\chi_{1}^{\partial}\\ (\Box\chi)^{\partial}=\Diamond_{2}\chi^{\partial}&(\blacksquare\chi)^{\partial}=\Box_{2}\chi^{\partial}\\ (\Diamond\chi)^{\partial}=\Box_{2}\chi^{\partial}&(\blacklozenge\chi)^{\partial}=\Diamond_{2}\chi^{\partial}\\ (\overline{\Box}\chi)^{\partial}=\Diamond_{1}\chi^{\partial}&(\overline{\blacksquare}\chi)^{\partial}=\Box_{1}\chi^{\partial}\\ (\overline{\Diamond}\chi)^{\partial}=\Box_{1}\chi^{\partial}&(\overline{\blacklozenge}\chi)^{\partial}=\Diamond_{1}\chi^{\partial}\end{array}\]
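A sketch of Definition 4.3 in the same style may help to keep the two steps apart: `star` replaces the literals \(\neg p\) of a formula in \(\neg\) NNF by fresh variables \(p^{*}\), and `dpart` computes \(\chi^{\partial}\). The tags `'box1'`, `'dia1'`, `'box2'`, `'dia2'` stand for \(\square_{1},\Diamond_{1},\square_{2},\Diamond_{2}\), and, as before, the encoding itself is our own assumption.

```python
# star(): the *-step of Definition 4.3 (phi is assumed to be in ¬ NNF).
# dpart(): the ∂-translation.  ('starvar', p) plays the role of p*.
def star(phi):
    if phi[0] == 'neg' and phi[1][0] == 'var':
        return ('starvar', phi[1][1])                  # ¬p  ↦  p*
    if phi[0] in ('var', 'one', 'zero'):
        return phi
    return (phi[0],) + tuple(star(x) for x in phi[1:])

DPART_MOD = {'box': 'dia2', 'dia': 'box2', 'ibox': 'box2', 'idia': 'dia2',
             'obox': 'dia1', 'odia': 'box1', 'oibox': 'box1', 'oidia': 'dia1'}

def dpart(chi):
    k = chi[0]
    if k == 'var':     return ('starvar', chi[1])
    if k == 'starvar': return ('var', chi[1])
    if k == 'one':     return ('zero',)
    if k == 'zero':    return ('one',)
    if k == 'sim':     return ('coimp', ('one',), dpart(chi[1]))
    if k == 'delta':   return ('sim', ('sim', dpart(chi[1])))
    if k == 'and':     return ('or', dpart(chi[1]), dpart(chi[2]))
    if k == 'or':      return ('and', dpart(chi[1]), dpart(chi[2]))
    if k == 'imp':     return ('coimp', dpart(chi[2]), dpart(chi[1]))
    if k == 'coimp':   return ('imp', dpart(chi[2]), dpart(chi[1]))
    if k in DPART_MOD: return (DPART_MOD[k], dpart(chi[1]))
    raise ValueError(k)

phi_star = star(('box', ('neg', ('var', 'p'))))        # (□¬p)* = □p*
print(phi_star)                                        # ('box', ('starvar', 'p'))
print(dpart(phi_star))                                 # ('dia2', ('var', 'p')), i.e. ◊₂p
```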
_Convention 4.1_.: In what follows, if we deal with _mono-relational frames_, we will assume that the \({}^{\partial}\) translation does not add modalities of the form \(\overline{\Diamond}\). Indeed, it is clear that \(\Diamond p\leftrightarrow\overline{\Diamond}p\) is strongly valid on any mono-relational frame for every pair of modalities \(\Diamond\) and \(\overline{\Diamond}\).
_Convention 4.2_.:
1. Let \(\phi\in\overline{\mathcal{L}^{\neg}_{\triangle,\blacksquare,\blacklozenge}}\) be \(\neg\)-free. We use \(\phi^{\circ}\) to denote the formula obtained by replacing all \(\blacksquare\)'s, \(\blacklozenge\)'s, \(\overline{\blacksquare}\)'s and \(\overline{\blacklozenge}\)'s in \(\phi\) with \(\square\)'s, \(\Diamond\)'s, \(\overline{\square}\)'s, and \(\overline{\Diamond}\)'s, respectively.
2. Let \(\phi\in\overline{\mathcal{L}^{\neg}_{\triangle,\square,\Diamond}}\). We use \(\phi^{+\bullet}\) to denote the formula obtained by replacing all \(\square\)'s, \(\Diamond\)'s, \(\overline{\square}\)'s, and \(\overline{\Diamond}\)'s in \(\phi\) with \(\blacksquare\)'s, \(\blacklozenge\)'s, \(\overline{\blacksquare}\)'s, and \(\overline{\blacklozenge}\)'s, respectively.
It is easy to see that \({}^{*}\)-transformation preserves validity on frames.
**Lemma 4.2**.: _Let \(\phi\in\overline{\mathcal{L}^{\neg}_{\triangle,\square,\Diamond}}\cup\overline{\mathcal{L}^{\neg}_{\triangle,\blacksquare,\blacklozenge}}\) be in \(\neg\) NNF. Then for every pointed frame \(\langle\mathfrak{F},w\rangle\), it holds that_
\[\mathfrak{F},w\models_{\mathbf{K}\mathbf{G}^{2}}\phi\text{ iff } \mathfrak{F},w\models_{\mathbf{K}\mathbf{G}^{2}}\phi^{*}\] \[\mathfrak{F},w\models_{\mathbf{G}^{2}_{\blacksquare,\blacklozenge} }\phi\text{ iff }\mathfrak{F},w\models_{\mathbf{G}^{2}_{\blacksquare,\blacklozenge} }\phi^{*}\]
Proof.: Let \(\phi\) be in \(\neg\) NNF and \(e_{1}\) an arbitrary valuation. We construct \(e_{1}^{*}\) as follows: \(e_{1}^{*}(p,w)=e_{1}(p,w)\) and \(e_{1}^{*}(p^{*},w)=e_{2}(p,w)=e_{1}(\neg p,w)\). It is clear that \(e_{1}(\phi,w)=e_{1}^{*}(\phi^{*},w)\). The \(e_{2}\)-validity can be considered in the same way.
The next two lemmas establish that \({}^{\partial}\) preserves values.
**Lemma 4.3**.: _Let \(\mathfrak{M}=\langle W,R^{+},R^{-},e_{1},e_{2}\rangle\) be a \(\mathbf{KG}^{2\pm\mathsf{f}}\) model and let \(\mathfrak{M}^{\partial}=\langle W,R_{1},R_{2},e^{\partial}\rangle\) be a \(\mathbf{KbiG}^{\mathsf{f}}(2)\) model s.t. \(R_{1}=R^{+}\), \(R_{2}=R^{-}\), \(e^{\partial}(p,w)=e_{2}(p^{*},w)\), and \(e^{\partial}(p^{*},w)=e_{2}(p,w)\). Then, \(e_{2}(\phi^{*})=e^{\partial}(\phi^{\partial})\) for every \(\phi^{*}\in\overline{\mathcal{L}^{\neg}_{\triangle,\square,\Diamond}}\)._
Proof.: We proceed by induction on \(\phi^{*}\). The basis cases of \(p\) and \(p^{*}\) variables hold by the construction of \(\mathfrak{M}^{\partial}\). The cases of propositional connectives hold by a simple application of the induction hypothesis. Consider, for example, \(\phi^{*}=\chi_{1}\wedge\chi_{2}\).
\[e_{2}(\chi_{1}\wedge\chi_{2},w) =e_{2}(\chi_{1},w)\vee_{\mathsf{G}}e_{2}(\chi_{2},w)\] \[=e^{\partial}(\chi_{1}^{\partial},w)\vee_{\mathsf{G}}e^{\partial} (\chi_{2}^{\partial},w)\] (by IH) \[=e^{\partial}(\chi_{1}^{\partial}\vee\chi_{2}^{\partial},w)\] \[=e^{\partial}((\chi_{1}\wedge\chi_{2})^{\partial},w)\] (by Definition 4.3 )
The cases of modal formulas can also be tackled in a similar manner, whence we consider only \(\phi^{*}=\overline{\Diamond}\chi\).
\[\begin{aligned} e_{2}(\overline{\Diamond}\chi,w)&=\inf_{w^{\prime}\in W}\{wR^{+}w^{\prime}\to_{\mathsf{G}}e_{2}(\chi,w^{\prime})\}\\ &=\inf_{w^{\prime}\in W}\{wR^{+}w^{\prime}\to_{\mathsf{G}}e^{\partial}(\chi^{\partial},w^{\prime})\}&&\text{(by IH)}\\ &=e^{\partial}(\square_{1}\chi^{\partial},w)\\ &=e^{\partial}((\overline{\Diamond}\chi)^{\partial},w)&&\text{(by Definition 4.3)}\end{aligned}\]
**Lemma 4.4**.: _Let \(\mathfrak{M}=\langle W,R^{+},R^{-},e_{1},e_{2}\rangle\) be a \(\mathsf{G}^{2\pm\mathsf{f}}_{\blacksquare,\blacklozenge}\) model and let \(\mathfrak{M}^{\partial}=\langle W,R_{1},R_{2},e^{\partial}\rangle\) be a \(\mathbf{KbiG}^{\mathsf{f}}(2)\) model s.t. \(R_{1}=R^{+}\), \(R_{2}=R^{-}\), \(e^{\partial}(p,w)=e_{2}(p^{*},w)\), and \(e^{\partial}(p^{*},w)=e_{2}(p,w)\). Then, \(e_{2}(\phi^{*})=e^{\partial}(\phi^{\partial})\) for every \(\phi^{*}\in\overline{\mathcal{L}^{\neg}_{\triangle,\blacksquare,\blacklozenge}}\)._
Proof.: Analogously to Lemma 4.3.
We can also obtain a result similar to Lemma 4.2 but regarding \({}^{\partial}\).
**Proposition 4.4**.:
1. _Let \(\phi\in\overline{\mathcal{L}^{\neg}_{\triangle,\square,\Diamond}}\). Then \(\mathfrak{F},w\models_{\mathbf{KG}^{2}}\phi\) iff \(\mathfrak{F},w\models_{\mathbf{KG}^{2}}\sim\!\phi^{\partial}\)._
2. _Let \(\phi\in\overline{\mathcal{L}^{\neg}_{\triangle,\blacksquare,\blacklozenge}}\). Then \(\mathfrak{F},w\models_{\mathsf{G}^{2}_{\blacksquare,\blacklozenge}}\phi\) iff \(\mathfrak{F},w\models_{\mathsf{G}^{2}_{\blacksquare,\blacklozenge}}(\sim\!\phi^{\partial})^{+\bullet}\)._
Proof.: We prove only 1. since 2. can be shown in the same way. Let \(\mathfrak{F},w\not\models_{\mathbf{KG}^{2}}\phi\). Then either (1) there is \(e_{1}\) on \(\mathfrak{F}\) s.t. \(e_{1}(\phi,w)<1\) or (2) there is \(e_{2}\) on \(\mathfrak{F}\) s.t. \(e_{2}(\phi,w)>0\). In the second case, the statement holds immediately by Lemmas 4.2 and 4.3 since \(\mathbf{KbiG}\)-valuations are defined in the same way as \(e_{1}\)-valuations in \(\mathbf{KG}^{2}\). Namely, in this case, we will have that there is a valuation \(e_{1}^{\partial}\) on \(\mathfrak{F}\) s.t. \(e_{1}^{\partial}(\phi^{\partial},w)>0\), whence \(e_{1}^{\partial}(\sim\!\phi^{\partial},w)=0\).
In the first case, we proceed as follows. First, transform \(\phi\) into \(\phi^{*}\) (we can do this because of Lemma 4.2). For a valuation \(e_{1}\) on \(\mathfrak{F}\), we define \(e_{2}^{\partial}\) as expected: \(e_{2}^{\partial}(p,w)=e_{1}(p^{*},w)\) and \(e_{2}^{\partial}(p^{*},w)=e_{1}(p,w)\). Now, we prove by induction that \(e_{1}(\phi^{*},w)=e_{2}^{\partial}((\phi^{*})^{\partial},w)\).
This holds for the propositional variables by construction. The cases of propositional connectives are also simple (we consider \(\phi^{*}=\sim\)\(\chi\)).
\[e_{1}(\sim\)\(\chi,w) =e_{1}(\chi,w)\rightarrow_{\mathsf{G}}0\] \[=e_{2}^{\partial}(\chi^{\partial},w)\rightarrow_{\mathsf{G}}0\] (by IH) \[=e_{2}^{\partial}(\mathbf{1}\!<\!\chi^{\partial},w)\] \[=e_{2}^{\partial}((\sim\)\(\chi)^{\partial},w)\] (by Definition 4.3)
Since the cases of modalities are similar, we consider only \(\phi^{*}=\Diamond\chi\).
\[\begin{aligned} e_{1}(\Diamond\chi,w)&=\sup_{w^{\prime}\in W}\{wR^{+}w^{\prime}\wedge_{\mathsf{G}}e_{1}(\chi,w^{\prime})\}\\ &=\sup_{w^{\prime}\in W}\{wR^{+}w^{\prime}\wedge_{\mathsf{G}}e_{2}^{\partial}(\chi^{\partial},w^{\prime})\}&&\text{(by IH)}\\ &=e_{2}^{\partial}(\overline{\square}\chi^{\partial},w)\\ &=e_{2}^{\partial}((\Diamond\chi)^{\partial},w)&&\text{(by Definition 4.3)}\end{aligned}\]
Part 2. can be proved similarly but we would need to use Lemmas 4.2 and 4.4.
We can now establish the embedding results.
**Theorem 4.1**.:
1. _Let \(\mathfrak{F}=\langle W,R^{+},R^{-}\rangle\) be a bi-relational frame and \(w\in\mathfrak{F}\). Then for all \(\phi\in\overline{\mathcal{L}^{\neg}_{\triangle,\square,\Diamond}}\), it holds that_ \[\mathfrak{F},w\models_{\mathbf{KG}^{2\pm\mathsf{f}}}\phi\text{ iff }\mathfrak{F},w\models_{\mathbf{KbiG}^{\mathsf{f}}(2)}\phi^{*}\wedge\sim\!\phi^{\partial}\] _where the \(\heartsuit_{1}\)'s are associated with \(R^{+}\) and the \(\heartsuit_{2}\)'s with \(R^{-}\) (\(\heartsuit\in\{\square,\Diamond\}\))._
2. _Let \(\mathfrak{F}=\langle W,R^{+},R^{-}\rangle\) be a bi-relational frame and \(w\in\mathfrak{F}\). Then for all \(\phi\in\overline{\mathcal{L}^{\neg}_{\triangle,\blacksquare,\blacklozenge}}\), it holds that_ \[\mathfrak{F},w\models_{\mathsf{G}^{2\pm\mathsf{f}}_{\blacksquare,\blacklozenge}}\phi\text{ iff }\mathfrak{F},w\models_{\mathbf{KbiG}^{\mathsf{f}}(2)}(\phi^{*})^{\circ}\wedge\sim\!\phi^{\partial}\] _where the \(\heartsuit_{1}\)'s are associated with \(R^{+}\) and the \(\heartsuit_{2}\)'s with \(R^{-}\) (\(\heartsuit\in\{\square,\Diamond\}\))._
Proof.: We begin with 1. It is clear that the transformations in Remark 4.1 are equivalent. Furthermore, from Lemma 4.2, we have that \(\phi^{*}\) is \(\mathbf{KG}^{2}\)-valid on a given frame \(\mathfrak{F}\) iff \(\phi\) is \(\mathbf{KG}^{2}\)-valid on \(\mathfrak{F}\)
Now, it is clear that \(\phi\) is _\(e_{1}\)-valid_ on \(\langle\mathfrak{F},w\rangle\) iff \(\phi^{*}\) is \(\mathbf{KbiG}\)-valid on \(\langle\mathfrak{F},w\rangle\). Likewise, using Lemma 4.3, we obtain that \(\phi\) is \(e_{2}\)-valid on \(\langle\mathfrak{F},w\rangle\) iff \(\sim\!\phi^{\partial}\) is \(\mathbf{KbiG}\)-valid on \(\langle\mathfrak{F},w\rangle\). Since \(\phi\) is strongly \(\mathbf{KG}^{2}\)-valid on \(\langle\mathfrak{F},w\rangle\) iff it is both \(e_{1}\)- and \(e_{2}\)-valid thereon, the result follows.
The proof of 2. is the same as that of 1. but instead of Lemma 4.3, we use Lemma 4.4.
_Remark 4.2_.: Note from Theorem 4.1 that if \(\phi\) is \(\neg\)-free (i.e., \(\phi^{*}=\phi\)), we have that \(\mathfrak{F}\models_{\mathbf{KG}^{2}}\phi\) iff \(\mathfrak{F}\models_{\mathbf{KbiG}}\phi\wedge\sim\!\phi^{\partial}\), and \(\mathfrak{F}\models_{\mathsf{G}^{2}_{\blacksquare,\blacklozenge}}\phi\) iff \(\mathfrak{F}\models_{\mathbf{KbiG}}\phi^{\circ}\wedge\sim\!\phi^{\partial}\). This means that \(\phi\) defines the same class of frames \(\mathbb{K}\) in \(\mathbf{KbiG}\) and \(\mathbf{KG}^{2}\) (\(\mathsf{G}^{2}_{\blacksquare,\blacklozenge}\), respectively) iff \(\mathbb{K}\models_{\mathbf{KbiG}}\triangle\phi\leftrightarrow\sim\!\phi^{\partial}\) (\(\mathbb{K}\models_{\mathbf{KbiG}}\triangle\phi^{\circ}\leftrightarrow\sim\!\phi^{\partial}\), respectively).
We can also establish the embeddings of \(\mathbf{KG}^{2}\) and \(\mathsf{G}^{2}_{\blacksquare,\bullet}\) entailments into \(\mathbf{KbiG}\) entailment.
**Theorem 4.2**.:
1. _Let \(\Gamma\cup\{\phi\}\subseteq\overline{\mathcal{L}^{\neg}_{\triangle,\square,\Diamond}}\). Then \(\Gamma\models_{\mathbf{KG}^{2}}\phi\) iff \(\{\psi^{*}:\psi\in\Gamma\}\models_{\mathbf{KbiG}(2)}\phi^{*}\) and there exists a finite set \(\Gamma^{\prime}\subseteq\{\psi^{\partial}:\psi\in\Gamma\}\) s.t. \(\phi^{\partial}\models_{\mathbf{KbiG}(2)}\bigvee\limits_{\psi^{\partial}\in\Gamma^{\prime}}\psi^{\partial}\)._
2. _Let \(\Gamma\cup\{\phi\}\subseteq\overline{\mathcal{L}^{\neg}_{\triangle,\blacksquare,\blacklozenge}}\). Then \(\Gamma\models_{\mathsf{G}^{2}_{\blacksquare,\blacklozenge}}\phi\) iff \(\{\psi^{*}:\psi\in\Gamma\}\models_{\mathbf{KbiG}(2)}\phi^{*}\) and there exists a finite set \(\Gamma^{\prime}\subseteq\{\psi^{\partial}:\psi\in\Gamma\}\) s.t. \(\phi^{\partial}\models_{\mathbf{KbiG}(2)}\bigvee\limits_{\psi^{\partial}\in\Gamma^{\prime}}\psi^{\partial}\)._
Proof.: Consider 1., let \(\Gamma\cup\{\phi\}\subseteq\mathcal{L}^{\neg}_{\triangle,\square,\Diamond}\), and assume that \(\Gamma\models_{\mathbf{KG}^{2\pm\mathsf{f}}}\phi\). By Definition 4.1, it means that \(\inf\{e_{1}(\psi,w):\psi\in\Gamma\}\leq e_{1}(\phi,w)\) and \(\sup\{e_{2}(\psi,w):\psi\in\Gamma\}\geq e_{2}(\phi,w)\) for every \(\mathbf{KG}^{2\pm\mathsf{f}}\) model \(\mathfrak{M}=\langle W,R^{+},R^{-},e_{1},e_{2}\rangle\) and every \(w\in W\). Now, by Lemma 4.2, we have that \(\inf\{e(\psi^{*},w):\psi\in\Gamma\}\leq e(\phi^{*},w)\) for every \(\mathbf{KbiG}^{\mathsf{f}}(2)\) model \(\mathfrak{M}=\langle W,R^{+},R^{-},e\rangle\) and every \(w\in W\), i.e., \(\{\psi^{*}:\psi\in\Gamma\}\models_{\mathbf{KbiG}(2)}\phi^{*}\). Furthermore, by Lemma 4.3, we have that \(\sup\{e(\psi^{\partial},w):\psi\in\Gamma\}\geq e(\phi^{\partial},w)\) for every \(\mathbf{KbiG}^{\mathsf{f}}(2)\) model \(\mathfrak{M}=\langle W,R^{+},R^{-},e\rangle\) and every \(w\in W\). Note, however, that since \(\mathbf{KbiG}^{\mathsf{f}}\) is strongly complete (Theorem 3.2), it is also compact. Thus, there must exist some finite \(\Gamma^{\prime}\subseteq\{\psi^{\partial}:\psi\in\Gamma\}\) s.t. \(\phi^{\partial}\models_{\mathbf{KbiG}(2)}\bigvee\limits_{\psi^{\partial}\in\Gamma^{\prime}}\psi^{\partial}\), as required.
For the converse, let \(\Gamma\not\models_{\mathbf{KG}^{2\pm\mathsf{f}}}\phi\). Then, there are some \(\mathbf{KG}^{2\pm\mathsf{f}}\) model \(\mathfrak{M}=\langle W,R^{+},R^{-},e_{1},e_{2}\rangle\) and \(w\in W\) s.t. (1) \(\inf\{e_{1}(\psi,w):\psi\in\Gamma\}>e_{1}(\phi,w)\) or (2) \(\sup\{e_{2}(\psi,w):\psi\in\Gamma\}<e_{2}(\phi,w)\). It is clear from Lemmas 4.2 and 4.3 that \(\{\psi^{*}:\psi\in\Gamma\}\not\models_{\mathbf{KbiG}(2)}\phi^{*}\) in the first case, and that \(\phi^{\partial}\not\models_{\mathbf{KbiG}(2)}\bigvee\limits_{\psi^{\partial}\in\Gamma^{\prime}}\psi^{\partial}\) for every finite \(\Gamma^{\prime}\subseteq\{\psi^{\partial}:\psi\in\Gamma\}\) in the second case.
Part 2. can be dealt with similarly, using Lemma 4.4 instead of Lemma 4.3.
### Frame definability
In this section, we show further results concerning the definability of frames in \(\mathbf{KG}^{2\pm}\). We begin with the characterisation of \(\mathbf{KG}^{2}\)- and \(\mathsf{G}^{2}_{\blacksquare,\blacklozenge}\)-definable classes of frames.
**Definition 4.4**.: Let \(\mathbb{S}\) be a class of frames, \(\mathbf{L}\) be a logic and \(\mathcal{L}_{\mathbf{L}}\) its language. We say that \(\phi\in\mathcal{L}_{\mathbf{L}}\)_defines \(\mathbb{S}\)_in \(\mathbf{L}\) iff for every frame \(\mathfrak{F}\), it holds that
\[\mathfrak{F}\in\mathbb{S}\text{ iff }\mathfrak{F}\models_{\mathbf{L}}\phi\]
A class of frames is _definable in \(\mathbf{L}\)_ iff there is an \(\mathcal{L}_{\mathbf{L}}\) formula that defines it.
**Corollary 4.1**.:
1. _All \(\mathbf{KG}^{2}\)- and \(\mathsf{G}^{2}_{\blacksquare,\blacklozenge}\)-definable classes of frames are \(\mathbf{KbiG}\)-definable._
2. _A class of mono- or bi-relational frames \(\mathbb{S}\) is \(\mathbf{KG}^{2}\)-definable iff there is \(\phi\in\mathcal{L}_{\triangle,\square,\Diamond}(2)\) s.t. \(\phi\wedge\sim\!\phi^{\partial}\) defines \(\mathbb{S}\) in \(\mathbf{KbiG}\)._
3. _A class of mono- or bi-relational frames \(\mathbb{S}\) is \(\mathsf{G}^{2}_{\blacksquare,\blacklozenge}\)-definable iff there is \(\phi\in\mathcal{L}_{\triangle,\square,\Diamond}(2)\) s.t. \(\phi\wedge\sim\!\phi^{\partial}\) defines \(\mathbb{S}\) in \(\mathbf{KbiG}\)._
Proof.: 1. follows immediately from Theorems 4.1 and 4.2.
Let us now prove 2. as 3. can be shown similarly. Assume that \(\chi\) defines \(\mathbb{S}\) in \(\mathbf{KG}^{2}\). By Theorem 4.1, it follows that \(\mathbb{S}\models_{\mathbf{KbiG}}\chi^{*}\wedge\sim\!\chi^{\partial}\) and \(\mathfrak{F}\not\models_{\mathbf{KbiG}}\chi^{*}\wedge\sim\!\chi^{\partial}\) for every \(\mathfrak{F}\notin\mathbb{S}\). For the converse, assume that \(\phi\wedge\sim\!\phi^{\partial}\) defines \(\mathbb{S}\) in \(\mathbf{KbiG}\). Since \(\phi\) does not contain \(\neg\), \(\phi^{*}=\phi\). Thus, again, by Theorem 4.1, we have that \(\phi\) defines \(\mathbb{S}\) in \(\mathbf{KG}^{2}\).
In the remainder of the section, we are going to be concerned with \(\mathbf{K}\mathsf{G}^{2}\) since informational modalities are non-standard. First of all, we can see that _mono-relational_ frames are \(\mathbf{K}\mathsf{G}^{2}\)-definable8 in the expected manner.
Footnote 8: The \(\mathsf{G}^{2}_{\blacksquare,\bullet}\)-definability of mono-relational frames can be found in [9].
**Proposition 4.5**.: _Let \(\mathfrak{F}=\langle W,R^{+},R^{-}\rangle\). Then \(\mathfrak{F}\models\square p\leftrightarrow\neg\Diamond\neg p\) iff \(R^{+}=R^{-}\)._
Proof.: Let \(R^{+}=R^{-}\), we have
\[e_{1}(\neg\Diamond\neg p,w) =e_{2}(\Diamond\neg p,w)\] \[=\inf_{w^{\prime}\in W}\{wRw^{\prime}\rightarrow_{\mathsf{G}}e_{2 }(\neg p,w^{\prime})\}\] \[=\inf_{w^{\prime}\in W}\{wRw^{\prime}\rightarrow_{\mathsf{G}}e_{1 }(p,w^{\prime})\}\] \[=e_{1}(\square p,w)\]
\(e_{2}\) can be tackled similarly.
Now let \(R^{+}\neq R^{-}\), i.e., \(wR^{+}w^{\prime}=x\) and \(wR^{-}w^{\prime}=y\) for some \(w,w^{\prime}\in\mathfrak{F}\) with \(x\neq y\), and assume w.l.o.g. that \(x<y\) (the case \(x>y\) is analogous). We define the values of \(p\) as follows: \(e(p,w^{\prime\prime})=(1,0)\) for all \(w^{\prime\prime}\neq w^{\prime}\) and \(e(p,w^{\prime})=(x,y)\). It is clear that \(e(\square p,w)=(1,y)\) while \(e(\neg\Diamond\neg p,w)=(x,x)\), whence \(e_{1}(\square p\to\neg\Diamond\neg p,w)=x<1\), as required.
**Definition 4.5** (\(\pm\)-counterparts of frames).: Let \(\mathbb{K}\) be a class of crisp _mono-relational_ frames.
1. The \(+\)_-counterpart_ of \(\mathbb{K}\) is the class \(\mathbb{K}^{+}\) of frames \(\mathfrak{F}=\langle W,R^{+},R^{-}\rangle\) s.t. every \(\langle W,R^{+}\rangle\) belongs to \(\mathbb{K}\).
2. The \(-\)_-counterpart_ of \(\mathbb{K}\) is the class \(\mathbb{K}^{-}\) of frames \(\mathfrak{F}=\langle W,R^{+},R^{-}\rangle\) s.t. every \(\langle W,R^{-}\rangle\) belongs to \(\mathbb{K}\).
3. The \(\pm\)_-counterpart_ of \(\mathbb{K}\) is the class \(\mathbb{K}^{\pm}=\mathbb{K}^{+}\cap\mathbb{K}^{-}\).
**Corollary 4.2**.: _Let \(\phi\) be a \(\neg\)-free formula that defines a class \(\mathbb{K}\) of mono-relational frames \(\mathfrak{F}=\langle W,R\rangle\) in \(\mathbf{KbiG}^{\mathsf{c}}\), and let \(\mathbb{K}^{\pm}\) be the \(\pm\)-counterpart of \(\mathbb{K}\). Then \(\phi\) defines \(\mathbb{K}^{\pm}\) in \(\mathbf{KG}^{2\pm\mathsf{c}}\)._
Proof.: Assume that \(\phi\)_does not_ define \(\mathbb{K}^{\pm}\) in \(\mathbf{K}\mathsf{G}^{2\pm\mathsf{c}}\). Then, either (1) there is \(\mathfrak{F}\notin\mathbb{K}^{\pm}\) s.t. \(\mathfrak{F}\models_{\mathbf{K}\mathsf{G}^{2}}\phi\) or (2) \(\mathfrak{F}\not\models_{\mathbf{K}\mathsf{G}^{2}}\phi\) for some \(\mathfrak{H}\in\mathbb{K}^{\pm}\). Since \(\phi\) defines \(\mathbb{K}\) in \(\mathbf{K}\mathrm{Bi}\mathsf{G}\), it is clear that \(\mathfrak{F},\mathfrak{H}\in\mathbb{K}^{+}\). Thus, we need to reason for contradiction in the case when \(\mathfrak{F}\notin\mathbb{K}^{-}\) or \(\mathfrak{H}\notin\mathbb{K}^{-}\). We prove only (1) as (2) can be tackled in a dual manner.
Observe that \(e_{2}(\phi,w)\!=\!0\) for every \(w\!\in\!\mathfrak{F}\) and \(e_{2}\) on \(\mathfrak{F}\!=\!\langle W,R^{+},R^{-}\rangle\). But then, by Lemma 4.1, we have that \(e_{1}^{*}(\phi,w)=1\) for every \(w\in\mathfrak{F}^{*}\) and \(e_{1}^{*}\) on \(\mathfrak{F}^{*}=\langle W,R^{-},R^{+}\rangle\)9. Thus, since for every \(e_{1}^{*}\), there is \(e_{2}\) from which it could be obtained, \(\phi\) is \(\mathbf{K}\mathrm{Bi}\mathsf{G}\)-valid on a frame \(\langle W,R^{-}\rangle\notin\mathbb{K}\). Hence, \(\phi\) does not define \(\mathbb{K}\) in \(\mathbf{K}\mathrm{Bi}\mathsf{G}\) either. The result follows.
Footnote 9: Note that \(R^{+}\) and \(R^{-}\) are now swapped.
**Corollary 4.3**.: _Let \(\mathbb{K}\) be a class of crisp mono-relational frames \(\langle W,R\rangle\). Let further \(\mathbb{K}(2)\) be a class of crisp bi-relational frames s.t. exactly one of the following holds:_
1. \(\mathbb{K}(2)=\{\langle W,R^{+},R^{-}\rangle:\langle W,R^{+}\rangle\in\mathbb{K}\) _and there is some_ \(\langle W,R^{-}\rangle\notin\mathbb{K}\}\)_, or_
2. \(\mathbb{K}(2)=\{\langle W,R^{+},R^{-}\rangle:\langle W,R^{-}\rangle\in\mathbb{K}\) _and there is some_ \(\langle W,R^{+}\rangle\notin\mathbb{K}\}\)_._
_Then, \(\mathbb{K}(2)\) is not \(\mathbf{KG}^{2}\)-definable._
Proof.: Since both cases can be shown similarly, we prove only 1. and reason for a contradiction. Assume that \(\phi\) defines \(\mathbb{K}(2)\), and let \(\mathfrak{F}\in\mathbb{K}(2)\) with \(\mathfrak{F}=\langle W,R^{+},R^{-}\rangle\) be s.t. \(\langle W,R^{-}\rangle\notin\mathbb{K}\). Now denote \(\mathfrak{F}^{*}=\langle W,R^{-},R^{+}\rangle\). Clearly, \(\mathfrak{F}^{*}\notin\mathbb{K}(2)\). However, by Lemma 4.1, we have that \(\mathfrak{F}^{*}\models\phi\), i.e., \(\phi\) does not define \(\mathbb{K}(2)\). A contradiction.
Note, however, that the above statement fails for _fuzzy frames_. Indeed, it is possible to define a class of frames where only one relation is crisp.
**Proposition 4.6**.: _Let \(\mathfrak{F}=\langle W,R^{+},R^{-}\rangle\)._
1. \(R^{+}\) _is crisp iff_ \(\mathfrak{F}\models\triangle\square p\rightarrow\square\triangle p\)_._
2. \(R^{-}\) _is crisp iff_ \(\mathfrak{F}\models\Diamond\sim\sim p\rightarrow\sim\sim\Diamond p\)_._
Proof.: Note, first of all, that \(e_{i}(\triangle\phi,w),e_{i}(\sim\sim\phi,w)\in\{0,1\}\) for every \(\phi\) and \(i\in\{1,2\}\). Now let \(R^{+}\) be crisp. We have
\[e_{1}(\triangle\square p,w)=1 \text{ then }e_{1}(\square p,w)=1\] ( \[R^{+}\] is crisp) \[\text{ then }\inf\{e_{1}(\triangle p,w^{\prime}):wR^{+}w^{\prime} \}=1\] \[\text{ then }e_{1}(\square\triangle p,w)=1\]
\[e_{2}(\triangle\square p,w)=0 \text{ then }e_{2}(\square p,w)=0\] \[\text{ then }\sup_{w^{\prime}\in W}\{wR^{-}w^{\prime}\wedge \mathfrak{G}e_{2}(p,w^{\prime})\}=0\] \[\text{ then }\sup_{w^{\prime}\in W}\{wR^{-}w^{\prime}\wedge \mathfrak{G}e_{2}(\triangle p,w^{\prime})\}=0\] \[\text{ then }e_{2}(\square\triangle p,w)=0\]
For the converse, let \(wR^{+}w^{\prime}=x\) with \(0<x<1\). We set \(e(p,w^{\prime})=(x,0)\) and \(e(p,w^{\prime\prime})=(1,0)\) elsewhere. It is clear that \(e_{1}(\triangle\square p,w)=1\) but \(e_{1}(\square\triangle p,w)=0\). Thus, \(e_{1}(\triangle\square p\rightarrow\square\triangle p,w)\neq 1\), as required.
The case of \(R^{-}\) is considered dually. For crisp \(R^{-}\), we have
\[\begin{aligned}e_{1}(\Diamond\sim\sim p,w)=1\ &\text{ then }\sup_{w^{\prime}\in W}\{wR^{-}w^{\prime}\wedge_{\mathsf{G}}e_{1}(\sim\sim p,w^{\prime})\}=1\\ &\text{ then }\sup_{w^{\prime}\in W}\{wR^{-}w^{\prime}\wedge_{\mathsf{G}}e_{1}(p,w^{\prime})\}>0\\ &\text{ then }e_{1}(\Diamond p,w)>0\\ &\text{ then }e_{1}(\sim\sim\Diamond p,w)=1\end{aligned}\]
\[\begin{aligned}e_{2}(\Diamond\sim\sim p,w)=0\ &\text{ then }\inf\{e_{2}(\sim\sim p,w^{\prime}):wR^{-}w^{\prime}\}=0&&(R^{-}\text{ is crisp})\\ &\text{ then }\inf\{e_{2}(p,w^{\prime}):wR^{-}w^{\prime}\}<1\\ &\text{ then }e_{2}(\Diamond p,w)<1\\ &\text{ then }e_{2}(\sim\sim\Diamond p,w)=0\end{aligned}\]
For the converse, let \(wR^{-}w^{\prime}=y\) with \(y\in(0,1)\). We set \(e(p,w^{\prime})=(1,y)\) and \(e(p,w^{\prime\prime})=(1,0)\) elsewhere. It is clear that \(e_{2}(\Diamond\sim\sim p,w)=0\) but \(e_{2}(\sim\sim\Diamond p,w)=1\). Thus, \(e_{2}(\Diamond\sim\sim p\rightarrow\sim\sim\Diamond p,w)\neq 0\), as required.
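To make the counterexample for \(R^{+}\) concrete, take \(W=\{w,w^{\prime}\}\) with \(wR^{+}w^{\prime}=\tfrac{1}{2}\) and all other degrees equal to \(0\), and let \(e(p,w^{\prime})=(\tfrac{1}{2},0)\) and \(e(p,w)=(1,0)\). This worked example (ours, for illustration only) assumes the usual Godel residuum, i.e. \(a\rightarrow_{\mathsf{G}}b=1\) if \(a\leq b\) and \(a\rightarrow_{\mathsf{G}}b=b\) otherwise, and gives

\[e_{1}(\square p,w)=\min\{0\rightarrow_{\mathsf{G}}1,\ \tfrac{1}{2}\rightarrow_{\mathsf{G}}\tfrac{1}{2}\}=1,\qquad e_{1}(\triangle\square p,w)=1,\]
\[e_{1}(\triangle p,w^{\prime})=0,\qquad e_{1}(\square\triangle p,w)=\min\{0\rightarrow_{\mathsf{G}}1,\ \tfrac{1}{2}\rightarrow_{\mathsf{G}}0\}=0,\]

so indeed \(e_{1}(\triangle\square p\rightarrow\square\triangle p,w)=0\neq 1\).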
_Remark 4.3_.: Note that there is no contradiction between Propositions 4.4 and 4.6. Indeed, \((\triangle\square p\rightarrow\square\triangle p)^{\partial}=\overline{\Diamond \sim}\sim p\rightarrow\sim\sim\overline{\Diamond}p\) in the bi-relational case. If, on the other hand, the underlying frame is _mono-relational_, it either validates or refutes both formulas in Proposition 4.6.
## 5 Transfer
In [8], we studied _transferrable formulas_, i.e., formulas over \(\{\mathbf{0},\wedge,\vee,\rightarrow,\square,\Diamond\}\) that are _classically_ valid on some crisp frame \(\mathfrak{F}\) iff they are \(\mathbf{KbiG}\)-valid on \(\mathfrak{F}\). Classical modal logic does not support fuzzy frames, however. Furthermore, \(\mathbf{K}\mathbf{G}^{2\mathrm{f}}\) does not extend \(\mathbf{KbiG}^{\mathrm{f}}\) (recall Section 4.1 and Fig. 6). Thus, it makes sense to study the transfer from \(\mathbf{KbiG}^{\mathrm{f}}\) to \(\mathbf{K}\mathbf{G}^{2\mathrm{f}}\). In addition, since modalities in \(\overline{\mathcal{L}_{\triangle,\square,\Diamond}}\)_do not treat \(R^{+}\) and \(R^{-}\) independently_, it makes sense to ask which formulas can be transferred from the _mono-relational_\(\mathbf{KbiG}^{\mathrm{f}}\) to \(\mathbf{K}\mathbf{G}^{2\pm\mathrm{f}}\).
Let us now introduce the notions of transfer.
**Definition 5.1** (Transfer).: Let \(\langle\mathfrak{F},w\rangle\) be a pointed mono- or bi-relational frame. We call \(\phi\in\mathcal{L}_{\triangle,\square,\Diamond}(2)\)_transferrable_ iff the following condition holds.
\[\forall\mathfrak{F}:\mathfrak{F},w\models_{\mathbf{KbiG}}\phi\text{ iff } \mathfrak{F},w\models_{\mathbf{K}\mathbf{G}^{2}}\phi\]
**Definition 5.2** (Bi-transfer).: Let \(\mathfrak{F}=\langle W,R,w\rangle\) and \(\mathfrak{F}^{\prime}=\langle W,S,w\rangle\) be two fuzzy pointed frames. Denote \(\mathfrak{F}^{R,S}=\langle W,R,S\rangle\) and \(\mathfrak{F}^{S,R}=\langle W,S,R\rangle\). A formula \(\phi\in\mathcal{L}_{\triangle,\square,\Diamond}\) is called _bi-transferrable_ iff the following condition holds.
\[\forall\mathfrak{F},\mathfrak{F}^{\prime}:\ \mathfrak{F},w\models_{\mathbf{KbiG}}\phi\text{ and }\mathfrak{F}^{\prime},w\models_{\mathbf{KbiG}}\phi\iff\mathfrak{F}^{R,S},w\models_{\mathbf{K}\mathbf{G}^{2}}\phi\text{ and }\mathfrak{F}^{S,R},w\models_{\mathbf{K}\mathbf{G}^{2}}\phi\]
First, let us obtain the characterisation of (bi-)transferrable formulas.
**Theorem 5.1**.: _Let \(\phi\in\mathcal{L}_{\triangle,\square,\Diamond}(2)\). Then, \(\phi\) is (bi-)transferrable iff it holds that_
\[\forall\mathfrak{F}\forall w\in\mathfrak{F}:\text{ if }\mathfrak{F},w\models_{ \mathbf{KbiG}}\phi\text{ then }\mathfrak{F},w\models_{\mathbf{KbiG}}\sim\!\!\phi^{\partial} \tag{15}\]
Proof.: We begin with the transferrable formulas. Let \(\phi\) be transferrable and \(\mathfrak{F},w\models_{\mathbf{KbiG}}\phi\) for some frame \(\mathfrak{F}\). Then, \(\mathfrak{F},w\models_{\mathbf{K}\mathbf{G}^{2}}\phi\). Hence, \(\mathfrak{F},w\models_{\mathbf{KbiG}}\sim\!\!\phi^{\partial}\), by Theorem 4.1, as required. Now assume that (15) holds. If \(\mathfrak{F},w\models_{\mathbf{KbiG}}\phi\), then \(\mathfrak{F},w\models\sim\!\!\phi^{\partial}\). Thus, by Theorem 4.1, we get that \(\mathfrak{F},w\models_{\mathbf{K}\mathbf{G}^{2}}\phi\). If \(\mathfrak{F},w\not\models_{\mathbf{KbiG}}\phi\), it is immediate that \(\mathfrak{F},w\not\models_{\mathbf{K}\mathbf{G}^{2}}\phi\). I.e., \(\phi\) is transferrable.
The bi-transferrable formulas can be considered similarly. Let \(\phi\in\mathcal{L}_{\triangle,\square,\Diamond}\). Assume that \(\mathfrak{F}=\langle W,R\rangle\) and \(\mathfrak{F}^{\prime}=\langle W,S\rangle\). If \(\phi\) is bi-transferrable, \(\mathfrak{F},w\models_{\mathbf{KbiG}}\phi\), and \(\mathfrak{F}^{\prime},w\models_{\mathbf{KbiG}}\phi\) then \(\mathfrak{F}^{R,S},w\models_{\mathbf{K}\mathbf{G}^{2}}\phi\) and \(\mathfrak{F}^{S,R},w\models_{\mathbf{K}\mathbf{G}^{2}}\phi\). Hence, \(\mathfrak{F}^{R,S},w\models_{\mathbf{K}\mathbf{G}^{2}}\sim\!\!\phi^{\partial}\) and \(\mathfrak{F}^{S,R},w\models_{\mathbf{K}\mathbf{G}^{2}}\sim\!\!\phi^{\partial}\). Now recall from Convention 4.1 that \({}^{\partial}\) does not add new modalities when applied to \(\mathcal{L}_{\triangle,\square,\Diamond}\) formulas. Since \(\square\) and \(\Diamond\) in \(\mathbf{KbiG}\) take into account only one relation (the first one), we have that \(\mathfrak{F},w\models_{\mathbf{KbiG}}\sim\!\!\phi^{\partial}\) and \(\mathfrak{F}^{\prime},w\models_{\mathbf{KbiG}}\sim\!\phi^{\partial}\), as required.
For the converse, assume that (15) holds, let \(\mathfrak{F}\) and \(\mathfrak{F}^{\prime}\) be as in Definition 5.2, \(\mathfrak{F},w\models_{\mathbf{KbiG}}\phi\), and \(\mathfrak{F}^{\prime},w\models_{\mathbf{KbiG}}\phi\). Then \(\mathfrak{F},w\models_{\mathbf{KbiG}}\sim\!\!\phi^{\partial}\) and \(\mathfrak{F}^{\prime},w\models_{\mathbf{KbiG}}\sim\!\phi^{\partial}\). It is clear that \(\phi\) and \(\sim\!\!\phi^{\partial}\) are \(e_{1}\)-valid (recall Definition 4.1) on \(\langle\mathfrak{F}^{R,S},w\rangle\) and \(\langle\mathfrak{F}^{S,R},w\rangle\). It remains to show that \(\phi\) and \(\sim\!\!\phi^{\partial}\) are \(e_{2}\)-valid as well.
It is easy to check by induction that the following statements hold.
* \(\phi\) is \(e_{2}\)-valid on \(\langle\mathfrak{F}^{R,S},w\rangle\) iff \(\sim\!\phi^{\partial}\) is \(e_{1}\)-valid on \(\langle\mathfrak{F}^{S,R},w\rangle\).
* \(\phi\) is \(e_{2}\)-valid on \(\langle\mathfrak{F}^{S,R},w\rangle\) iff \(\sim\!\phi^{\partial}\) is \(e_{1}\)-valid on \(\langle\mathfrak{F}^{R,S},w\rangle\).
* \(\sim\!\phi^{\partial}\) is \(e_{2}\)-valid on \(\langle\mathfrak{F}^{R,S},w\rangle\) iff \(\phi\) is \(e_{1}\)-valid on \(\langle\mathfrak{F}^{S,R},w\rangle\).
* \(\sim\!\!\phi^{\partial}\) is \(e_{2}\)-valid on \(\langle\mathfrak{F}^{S,R},w\rangle\) iff \(\phi\) is \(e_{1}\)-valid on \(\langle\mathfrak{F}^{R,S},w\rangle\).
Thus, \(\mathfrak{F}^{R,S},w\models_{\mathbf{K}\mathbf{G}^{2}}\phi\) and \(\mathfrak{F}^{S,R},w\models_{\mathbf{K}\mathbf{G}^{2}}\phi\), as required. On the other hand, if \(\mathfrak{F},w\not\models_{\mathbf{KbiG}}\phi\) (or \(\mathfrak{F}^{\prime},w\not\models_{\mathbf{KbiG}}\phi\), respectively), it is clear that \(\mathfrak{F}^{R,S},w\not\models_{\mathbf{K}\mathbf{G}^{2}}\phi\) (\(\mathfrak{F}^{S,R},w\not\models_{\mathbf{K}\mathbf{G}^{2}}\phi\)).
The result now follows.
Unfortunately, there seems to be no general way of establishing whether (15) holds for an arbitrary \(\phi\) since modal formulas encode _second-order_ conditions on the frame in the general case. Still, it is possible to show that some classes of formulas are transferrable. Namely, we are going to show that Lemmon-Scott formulas are transferrable. To do this, we prove a fuzzy analogue of the Lemmon-Scott correspondence theorem for \(\mathbf{KG}\).
**Definition 5.3** (Compositions of fuzzy relations).: Let \(R\) and \(S\) be two fuzzy relations on \(W\). We set
* \(u(R\circ S)u^{\prime}=\sup\{uRw\wedge_{\mathsf{G}}wSu^{\prime}:w\in W\}\);
* \((R\circ S)(u)=\{u^{\prime}:u(R\circ S)u^{\prime}>0\}\);
* \(uR^{0}u^{\prime}=\begin{cases}0&\text{if }u\neq u^{\prime}\\ 1&\text{if }u=u^{\prime}\end{cases}\);
* \(uR^{n}u^{\prime}=u(R\circ R^{n-1})u^{\prime}\).
It is clear that the following holds (\(\square^{n}\) and \(\Diamond^{n}\) denote strings of \(n\)\(\square\)'s and \(\Diamond\)'s, respectively):
\[e(\square^{n}\phi,w)=\inf\{wR^{n}w^{\prime}\rightarrow_{\mathsf{\mathsf{G}}}e (\phi,w^{\prime})\}\qquad\qquad e(\Diamond^{n}\phi,w)=\sup\{wR^{n}w^{\prime} \wedge_{\mathsf{\mathsf{G}}}e(\phi,w^{\prime})\}\]
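As a concrete illustration of Definition 5.3 and the clauses above, the following short Python sketch (ours, purely illustrative; the function names and the three-world frame are invented for the example) computes \(R\circ S\), \(R^{n}\), and the values of \(\square^{n}p\) and \(\Diamond^{n}p\) at a point, using \(\wedge_{\mathsf{G}}=\min\) and the Godel residuum for \(\rightarrow_{\mathsf{G}}\).

```python
# Composition and iteration of fuzzy relations (Definition 5.3) and the
# clauses for box^n / diamond^n, on a small invented three-world frame.

def g_and(a, b):          # Goedel conjunction
    return min(a, b)

def g_impl(a, b):         # Goedel implication (residuum of min)
    return 1.0 if a <= b else b

def compose(R, S, W):
    # u (R o S) u' = sup_w ( uRw /\_G wSu' )
    return {(u, v): max(g_and(R.get((u, w), 0.0), S.get((w, v), 0.0)) for w in W)
            for u in W for v in W}

def power(R, n, W):
    Rn = {(u, v): 1.0 if u == v else 0.0 for u in W for v in W}   # R^0
    for _ in range(n):
        Rn = compose(R, Rn, W)                                    # R^n = R o R^(n-1)
    return Rn

def box_n(R, n, e_p, x, W):
    Rn = power(R, n, W)
    return min(g_impl(Rn[(x, w)], e_p[w]) for w in W)

def diamond_n(R, n, e_p, x, W):
    Rn = power(R, n, W)
    return max(g_and(Rn[(x, w)], e_p[w]) for w in W)

W = ["x", "y", "z"]
R = {("x", "y"): 0.7, ("y", "z"): 0.4}     # all unlisted degrees are 0
e_p = {"x": 1.0, "y": 0.2, "z": 0.9}
print(power(R, 2, W)[("x", "z")])          # 0.4
print(box_n(R, 2, e_p, "x", W), diamond_n(R, 2, e_p, "x", W))   # 1.0 0.4
```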
**Theorem 5.2** (Fuzzy Lemmon-Scott correspondence).: _Let \(\mathfrak{F}=\langle W,R\rangle\) be a fuzzy frame. Then_
\[\mathfrak{F},x\models_{\mathbf{KG}}\Diamond^{h}\square^{i}p\rightarrow\square^{j}\Diamond^{k}p\ \text{ iff }\ \underbrace{\forall y,z:(xR^{h}y\wedge_{\mathsf{G}}xR^{j}z)\leq\sup_{w\in W}\{yR^{i}w\wedge_{\mathsf{G}}zR^{k}w\}}_{\mathsf{FLS}}\]

Proof.: Let \(\mathfrak{F},x\not\models_{\mathbf{KG}}\Diamond^{h}\square^{i}p\rightarrow\square^{j}\Diamond^{k}p\). We prove that the condition on \(R\) does not hold. We pick \(e\) on \(\mathfrak{F}\) s.t. \(e(\Diamond^{h}\square^{i}p,x)>e(\square^{j}\Diamond^{k}p,x)\) and proceed as follows.

\[e(\Diamond^{h}\square^{i}p,x)>e(\square^{j}\Diamond^{k}p,x)\ \text{ then }\ \sup_{y^{\prime}\in W}\{xR^{h}y^{\prime}\wedge_{\mathsf{G}}e(\square^{i}p,y^{\prime})\}>\inf_{z^{\prime}\in W}\{xR^{j}z^{\prime}\rightarrow_{\mathsf{G}}e(\Diamond^{k}p,z^{\prime})\}\]

so there are \(y^{\prime},z^{\prime}\in W\) with

\[xR^{h}y^{\prime}\wedge_{\mathsf{G}}\inf_{w\in W}\{y^{\prime}R^{i}w\rightarrow_{\mathsf{G}}e(p,w)\}>xR^{j}z^{\prime}\rightarrow_{\mathsf{G}}\sup_{w\in W}\{z^{\prime}R^{k}w\wedge_{\mathsf{G}}e(p,w)\},\]

whence, unfolding the Godel implication on the right,

\[\exists y^{\prime},z^{\prime}:\begin{bmatrix}xR^{h}y^{\prime}>\sup\limits_{w\in W}\{z^{\prime}R^{k}w\wedge_{\mathsf{G}}e(p,w)\}\\ \text{and}\\ xR^{j}z^{\prime}>\sup\limits_{w\in W}\{z^{\prime}R^{k}w\wedge_{\mathsf{G}}e(p,w)\}\\ \text{and}\\ \inf\limits_{w\in W}\{y^{\prime}R^{i}w\rightarrow_{\mathsf{G}}e(p,w)\}>\sup\limits_{w\in W}\{z^{\prime}R^{k}w\wedge_{\mathsf{G}}e(p,w)\}\end{bmatrix}\quad(\Psi)\]
It is clear that \(\mathsf{FLS}\) and \(\Psi\) are incompatible.
For the converse, let \(\langle\mathfrak{F},x\rangle\) fail \(\mathsf{FLS}\). Namely, let \(x,y,z\in\mathfrak{F}\) be s.t. \((xR^{h}y\wedge_{\mathsf{G}}xR^{j}z)>\sup_{w\in W}\{yR^{i}w\wedge_{\mathsf{G}} zR^{k}w\}\). It is clear that \((xR^{h}y\wedge_{\mathsf{G}}xR^{j}z)>\left(\sup_{w\in W}\{yR^{i}w\}\wedge_{ \mathsf{G}}\sup_{w\in W}\{zR^{k}w\}\right)\). We set the valuation as follows: \(e(p,w)=yR^{i}w\) for every \(w\in R^{i}(y)\) and \(e(p,u)=0\) otherwise. We have that \(e(\square^{i}p,y)=1\) and either
\[e(\lozenge^{k}p,z)=\sup_{w\in W}\{e(p,w)\}=\sup_{w\in W}\{yR^{i}w\}<1\qquad \left(\text{if}\ \sup_{w\in W}\{yR^{i}w\}<\sup_{w\in W}\{zR^{k}w\}\right) \tag{16}\]
or
\[e(\lozenge^{k}p,z)=\sup_{w\in W}\{zR^{k}w\}<1\qquad\qquad\qquad\left(\text{ if}\ \sup_{w\in W}\{yR^{i}w\}\geq\sup_{w\in W}\{zR^{k}w\}\right) \tag{17}\]
If (16) is the case, we have that \(e(\square^{j}\lozenge^{k}p,x)\leq\sup_{w\in W}\{yR^{i}w\}<xR^{h}y\) and \(e(\lozenge^{h}\square^{i}p,x)\geq xR^{h}y\). If (17) holds, then \(e(\square^{j}\lozenge^{k}p,x)\leq\sup_{w\in W}\{zR^{k}w\}\) but \(e(\lozenge^{h}\square^{i}p,x)\geq xR^{h}y>\sup_{w\in W}\{zR^{k}w\}\). In both cases, we have that \(e(\lozenge^{h}\square^{i}p\to\square^{j}\lozenge^{k}p,x)<1\), as required.
Let us now use Theorem 5.2 to obtain the transfer of Lemmon-Scott formulas.
**Corollary 5.1**.:
1. _Lemmon-Scott formulas are transferrable in mono-relational frames._
2. _Lemmon-Scott formulas are bi-transferrable._
Proof.: By Theorem 5.1, it suffices to check that every Lemmon-Scott formula \(\phi\) is \(\mathbf{KbiG}\)-valid on a mono-relational pointed frame \(\langle\mathfrak{F},w\rangle\) iff \(\sim\!\!\phi^{\partial}\) is valid on \(\langle\mathfrak{F},w\rangle\). Now, observe that \(\mathfrak{F},w\models_{\mathbf{KbiG}}\phi\) iff \(\mathfrak{F},w\models_{\mathbf{KbiG}}\triangle\phi\). Let \(\triangle\phi=\triangle(\lozenge^{h}\square^{i}p\to\square^{j}\lozenge^{k}p)\).
It is clear that since \(\mathfrak{F}\) is mono-relational, \(\sim\!\!(\triangle(\lozenge^{h}\square^{i}p\to\square^{j}\lozenge^{k}p))^{ \partial}=\sim\!\!\sim\!(\lozenge^{j}\square^{k}p^{*}\prec\square^{h}\lozenge ^{i}p^{*})\). This is equivalent (in \(\mathsf{biG}\)) to \(\triangle(\lozenge^{j}\square^{k}p^{*}\to\square^{h}\lozenge^{i}p^{*})\). Now, by Theorem 5.2, we have that
\[\mathfrak{F},x\models_{\mathbf{KbiG}}\triangle(\lozenge^{j}\square^{k}p^{*}\to\square^{h}\lozenge^{i}p^{*})\ \text{iff}\ \ \underbrace{\forall y,z:(xR^{j}y\wedge_{\mathsf{G}}xR^{h}z)\leq\sup_{w\in W}\{yR^{k}w\wedge_{\mathsf{G}}zR^{i}w\}}_{\mathsf{FLS}^{\prime}}\]
It is clear that \(\mathsf{FLS}\) and \(\mathsf{FLS}^{\prime}\) are equivalent conditions on frames. Thus, indeed, a Lemmon-Scott formula \(\phi\) is \(\mathbf{KbiG}\)-valid on a pointed frame iff \(\sim\!\!\phi^{\partial}\) is \(\mathbf{KbiG}\)-valid on that same pointed frame. The result now follows.
## 6 Complexity
In this section, we establish the \(\mathsf{PSpace}\)-completeness of the modal logics discussed in the paper. First, we tackle \(\mathbf{KbiG}^{\mathsf{f}}\). Since we add only \(\triangle\) (or \(\prec\)) which is a propositional connective, the proof is identical to the proof of the \(\mathsf{PSpace}\)-completeness of \(\mathbf{K}\mathbf{G}^{\mathsf{f}}\) given in [13] which is why we will only state the required definitions and formulate the result.
**Definition 6.1** (\(\mathsf{F}\)-models of \(\mathbf{KbiG}\)).: An \(\mathsf{F}\)-model is a tuple \(\mathfrak{M}=\langle W,R,T,e\rangle\) with \(\langle W,R,e\rangle\) being a \(\mathbf{KbiG}\) model and \(T:W\to\mathcal{P}_{<\omega}([0,1])\) be s.t. \(\{0,1\}\subseteq T(w)\) for all \(w\in W\). \(e\) is extended to the complex formulas as in \(\mathbf{KbiG}\) in the cases of propositional connectives, and in the modal cases, as follows.
\[e(\Box\phi,w) =\max\{x\in T(w):x\leq\inf_{w^{\prime}\in W}\{wRw^{\prime}\to_{ \mathsf{G}}e(\phi,w^{\prime})\}\}\] \[e(\Diamond\phi,w) =\min\{x\in T(w):x\geq\sup_{w^{\prime}\in W}\{wRw^{\prime}\wedge_ {\mathsf{G}}e(\phi,w^{\prime})\}\}\]
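For illustration, the sketch below (a hand-made example, not the decision procedure of [13]; the names and numerical values are arbitrary) evaluates the two clauses at a single world whose set \(T(w)\) is \(\{0,0.5,1\}\): \(\square\phi\) is rounded down to the largest available value below the usual infimum, \(\Diamond\phi\) is rounded up to the smallest available value above the usual supremum.

```python
# Evaluating the F-model clauses of Definition 6.1 at one world.
def g_impl(a, b):                       # Goedel implication
    return 1.0 if a <= b else b

W = ["w", "u1", "u2"]
R = {("w", "u1"): 1.0, ("w", "u2"): 0.8}          # fuzzy accessibility from w
T_w = [0.0, 0.5, 1.0]                             # finite T(w), contains 0 and 1
e_phi = {"w": 0.3, "u1": 0.6, "u2": 0.9}          # value of phi at each world

inf_val = min(g_impl(R.get(("w", v), 0.0), e_phi[v]) for v in W)   # usual box value
sup_val = max(min(R.get(("w", v), 0.0), e_phi[v]) for v in W)      # usual diamond value

e_box = max(x for x in T_w if x <= inf_val)       # F-model value of box phi at w
e_dia = min(x for x in T_w if x >= sup_val)       # F-model value of diamond phi at w
print(inf_val, e_box, sup_val, e_dia)             # 0.6 0.5 0.8 1.0
```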
The next lemma is a straightforward extension of [12, Theorem 1] to \(\mathbf{KbiG}\). The proof is essentially the same since we add only \(\triangle\) and \(\prec\) (which are propositional connectives) to the language.
**Lemma 6.1**.: \(\phi\) _is \(\mathbf{KbiG}\)-valid iff \(\phi\) is true in all \(\mathsf{F}\)-models iff \(\phi\) is true in all \(\mathsf{F}\)-models whose depth is \(O(|\phi|)\) s.t. \(|W|\leq(|\phi|+2)^{|\phi|}\) and \(|T(w)|\leq|\phi|+2\) for all \(w\in W\)._
It is now clear that \(\mathbf{KbiG}\) is decidable. To establish its complexity, we can utilise the algorithm described in [13]. The algorithm will work for \(\mathbf{KbiG}\) since its only difference from \(\mathbf{K}\mathbf{G}\) is \(\triangle\) and \(\prec\) which are extensional connectives. Alternatively, we could expand the tableaux calculus for \(\mathbf{K}\mathbf{G}\) from [30] with the rules for \(\triangle\) and \(\prec\) and use it to construct the decision procedure. Note furthermore, that in the presence of \(\triangle\), validity and unsatisfiability are reducible to one another: \(\phi\) is valid iff \(\sim\triangle\phi\) is unsatisfiable; \(\phi\) is unsatisfiable iff \(\sim\!\phi\) is valid. The following statement is now immediate.
**Proposition 6.1**.: _The validity of \(\mathbf{KbiG}^{\mathsf{f}}\) is \(\mathsf{PSpace}\)-complete._
Bi-relational \(\mathsf{F}\)-models can be introduced in the same way.
**Definition 6.2** (\(\mathsf{F}\)-models of \(\mathbf{KbiG}(2)\)).: A bi-relational \(\mathsf{F}\)-model is a tuple \(\mathfrak{M}=\langle W,R_{1},R_{2},T_{1},T_{2},e\rangle\) with \(\langle W,R_{1},R_{2},e\rangle\) being a \(\mathbf{KbiG}(2)\) model and \(T_{1},T_{2}:W\to\mathcal{P}_{<\omega}([0,1])\) be s.t. \(\{0,1\}\subseteq T_{i}(w)\) for all \(w\in W\). \(e\) is extended to the complex formulas as in \(\mathbf{KbiG}\) in the cases of propositional connectives, and in the modal cases, as follows.
\[e(\Box_{i}\phi,w) =\max\{x\in T_{i}(w):x\leq\inf_{w^{\prime}\in W}\{wR_{i}w^{ \prime}\to_{\mathsf{G}}e(\phi,w^{\prime})\}\} (i\in\{1,2\})\] \[e(\Diamond_{i}\phi,w) =\min\{x\in T_{i}(w):x\geq\sup_{w^{\prime}\in W}\{wR_{i}w^{ \prime}\wedge_{\mathsf{G}}e(\phi,w^{\prime})\}\} (i\in\{1,2\})\]
It is clear that Lemma 6.1 holds w.r.t. bi-relational \(\mathsf{F}\)-models as well and that \(\mathbf{KbiG}(2)\) is \(\mathsf{PSpace}\)-complete too.
**Proposition 6.2**.: _The validity of \(\mathbf{KbiG}(2)\) is \(\mathsf{PSpace}\)-complete._
Finally, since the embeddings in Definition 4.3 are linear, we obtain the \(\mathsf{PSpace}\)-completeness of the paraconsistent logics.
**Proposition 6.3**.: _The validity of \(\mathbf{K}\mathbf{G}^{2\pm c}\), \(\mathbf{K}\mathbf{G}^{2\mathsf{f}}\), and \(\mathbf{K}\mathbf{G}^{2\pm\mathsf{f}}\) is \(\mathsf{PSpace}\)-complete._
Proof.: The \(\mathsf{PSpace}\)-membership follows immediately from Theorem 4.1 and Propositions 6.1 and 6.2. For the hardness, we proceed as follows. Let \(\phi\) be a formula over \(\{\mathbf{0},\wedge,\vee,\rightarrow,\Box\}\). We define \(\phi^{!}\) to be the result of replacing every occurrence of a variable \(p\) with \(\triangle p\wedge\neg\neg\triangle p\) and putting \(\triangle\) in front of every \(\Box\). It is easy to establish by induction that \(\phi^{!}\) can only be evaluated at \((1,0)\) or \((0,1)\).
We show that \(\mathbf{K}\models\phi\) iff \(\mathbf{KG}^{2}\models\phi^{!}\). It is clear that \(\mathbf{KG}^{2}\not\models\phi^{!}\) when \(\mathbf{K}\not\models\phi\) since classical values are preserved by \(\mathcal{L}^{\sim}_{\Delta,\square,\Diamond}\) connectives. For the converse, let \(\mathfrak{M}=\langle W,R^{+},R^{-},e_{1},e_{2}\rangle\) be a \(\mathbf{KG}^{2}\) model s.t. \(e(\phi^{!},w)=(0,1)\) for some \(w\in W\). We construct a _classical_ model \(\mathfrak{M}^{!}=\langle W,R^{!},e^{!}\rangle\) as follows: \(wR^{!}w^{\prime}\) iff \(wR^{+}w^{\prime}>0\) or \(wR^{-}w^{\prime}>0\); \(w\in e^{!}(p)\) iff \(e(p,w)=(1,0)\). We check by induction that \(e(\phi^{!},w)=(1,0)\) iff \(\mathfrak{M}^{!},w\vDash\phi\). The basis case of \(\phi^{!}=\triangle p\wedge\neg\neg\triangle p\) and \(\phi=p\) holds by construction of \(\mathfrak{M}^{!}\). The cases of propositional connectives can be proven directly from the induction hypothesis. Finally, if \(\phi^{!}=\triangle\square\psi^{!}\) and \(\phi=\square\psi\), we have that \(e(\triangle\square\psi^{!},w)=(1,0)\) iff \(e(\psi^{!},w^{\prime})=(1,0)\) for every \(w^{\prime}\) s.t. \(wR^{+}w^{\prime}>0\) or \(wR^{-}w^{\prime}>0\), which, by the induction hypothesis, is equivalent to \(\mathfrak{M}^{!},w^{\prime}\vDash\psi\) for every \(w^{\prime}\in R^{!}(w)\), and thus \(\mathfrak{M}^{!},w\vDash\square\psi\).
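Since the map \(\phi\mapsto\phi^{!}\) is purely syntactic, it is easy to make it explicit. The Python sketch below (ours; the tuple encoding and connective tags are ad hoc and not notation from the paper) rewrites every variable \(p\) as \(\triangle p\wedge\neg\neg\triangle p\) and prefixes \(\triangle\) to every \(\square\).

```python
# The translation phi -> phi^! used in the hardness argument.
# Formulas are nested tuples; variables are plain strings.

def bang(phi):
    if isinstance(phi, str):                       # a propositional variable p
        return ("and", ("delta", phi), ("neg", ("neg", ("delta", phi))))
    op, *args = phi
    if op == "box":
        return ("delta", ("box", bang(args[0])))   # box psi  ->  Delta box psi^!
    return (op,) + tuple(bang(a) for a in args)    # "0", "and", "or", "->" unchanged

# example: translate (box p) -> p
print(bang(("->", ("box", "p"), "p")))
```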
Obtaining the \(\mathsf{PSpace}\)-completeness of \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}\) is less straightforward since \(\blacksquare\) and \(\blackdiamond\) are not standard. To circumvent this, we use the approach from [9] and augment its language with an additional constant \(\mathbf{B}\) s.t. \(e_{1}(\mathbf{B},w)=e_{2}(\mathbf{B},w)=1\). We denote the resulting logics by \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}(\mathbf{B})\).
**Proposition 6.4**.: __
1. _The strong validity of all_ \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}(\mathbf{B})\) _logics is_ \(\mathsf{PSpace}\)_-complete._
2. \(e_{1}\)_- and_ \(e_{2}\)_-validities of all_ \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}\) _logics are_ \(\mathsf{PSpace}\)_-complete._
Proof.: Begin with 1. Membership follows from Theorem 4.1 and Propositions 6.1 and 6.2. We just need to add \(\mathbf{B}^{\partial}=\mathbf{1}\) to the \({}^{\partial}\) translation in Definition 4.3. For the hardness, observe that since \(e_{1}\)-conditions from Definition 4.1 coincide with the semantics of \(\mathbf{KbiG}\) (Definition 2.4), we have immediately that \(\phi\) is \(\mathbf{KbiG}\)-valid on a given (mono- or bi-relational) frame \(\mathfrak{F}\) iff \(\mathbf{B}\to\phi^{+\blackdiamond}\) is strongly \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}\)-valid on \(\mathfrak{F}\).
The proof of 2. is also simple. Again, the membership follows immediately from Theorem 4.1, and Propositions 6.1 and 6.2. For the hardness, we observe that \(\phi\) is \(\mathbf{KbiG}\)-valid iff \(\phi^{+\blackdiamond}\) is \(e_{1}\)-valid and that \(\chi\) is \(\mathbf{KbiG}\)-valid iff \(\sim\!\chi^{\partial}\) is \(e_{2}\)-valid.
We finish the section with a short observation concerning finitely-branching frames. Recall from [7, 8] that (both fuzzy and crisp) finitely-branching frames are definable in \(\mathbf{KbiG}\). It is clear then that they are definable in \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}(\mathbf{B})\) as well. In fact, one can construct a tableaux calculus for \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}\) and \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}(\mathbf{B})\) over finitely-branching frames (we refer the reader to [9]) and show that their satisfiabilities are also in \(\mathsf{PSpace}\) (in fact, the satisfiability of \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}(\mathbf{B})\) is \(\mathsf{PSpace}\)-complete).
Likewise, it is possible to define finitely-branching frames in \(\mathbf{KG}^{2\pm}\).
**Proposition 6.5**.: \(\mathfrak{F}\) _is finitely branching iff \(\mathfrak{F}\models\sim\!\!\sim\!\square(p\vee\!\sim\!\!p)\) and \(\mathfrak{F}\models\mathbf{1}\!\prec\!\Diamond\!(p\vee\!\sim\!\!p)\)._
Proof.: Observe that \(e_{1}(p\vee\!\sim\!\!p,w)\!>\!0\) and \(e_{2}(p\vee\!\sim\!\!p,w)\!<\!1\) for every \(w\!\in\!\mathfrak{F}\). Since \(\mathfrak{F}\) is finitely branching, \(\underset{w^{\prime}\in W}{\inf}\{wSw^{\prime}\to_{\mathsf{G}}e_{1}(p\vee\! \sim\!\!p,w^{\prime})\}>0\) and \(\underset{w^{\prime}\in W}{\sup}\{wSw^{\prime}\wedge_{\mathsf{G}}e_{2}(p\vee \!\sim\!\!p,w^{\prime})\}<1\) for \(S\in\{R^{+},R^{-}\}\). Thus, \(e_{1}(\square(p\vee\sim\!\!p),w)>0\) and \(e_{2}(\square(p\vee\sim\!\!p),w)<1\), whence, \(e(\sim\!\sim\!\square(p\vee\sim\!\!p),w)=(1,0)\). Likewise, \(e_{1}(\Diamond\!(p\vee\sim\!\!p),w)<1\) and \(e_{2}(\Diamond\!(p\vee\sim\!\!p),w)>0\), whence \(e(\mathbf{1}\prec\Diamond\!(p\vee\sim\!\!p),w)=(1,0)\).
For the converse, we have two cases: (1) \(|R^{+}(w)|\geq\aleph_{0}\) or (2) \(|R^{-}(w)|\geq\aleph_{0}\) for some \(w\in\mathfrak{F}\). In the first case, we let \(X\subseteq R^{+}(w)\) be countable and define the value of \(p\) as follows: \(e(p,w^{\prime\prime})=(1,0)\) for every \(w^{\prime\prime}\notin X\) and \(e(p,w_{i})=(wR^{+}w_{i}\cdot\frac{1}{i},0)\) for every \(w_{i}\in X\). It is clear that \(\underset{w^{\prime}\in W}{\inf}\{wR^{+}w^{\prime}\to_{\mathsf{G}}e_{1}(p\vee\sim\!\!p,w^{\prime})\}=0\), whence \(e_{1}(\sim\sim\!\square(p\vee\sim\!\!p),w)=0\) as required.
In the second case, let \(Y\subseteq R^{-}(w)\) be countable and define the value of \(p\) as follows: \(e(p,w^{\prime\prime})=(1,0)\) for every \(w^{\prime\prime}\notin Y\) and \(e(p,w_{i})=\big{(}wR^{-}w_{i}\cdot\frac{1}{i},0\big{)}\). It is clear that \(\underset{w^{\prime}\in W}{\inf}\{wR^{-}w^{\prime}\to_{\mathsf{G}}e_{2}(\neg(p\vee\sim\!\!p),w^{\prime})\}=0\), whence \(e_{2}(\Diamond\neg(p\vee\sim\!\!p))=0\) and \(e_{2}(\mathbf{1}\prec\Diamond\neg(p\vee\sim\!\!\!p))=1\) as required.
A tableaux calculus for \(\mathbf{KG}^{2}\) over finitely-branching frames can be constructed in a manner similar to the calculus for \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}\) over finitely-branching frames presented in [9].
## 7 Conclusion
Let us summarise the results of the paper. We provided a strongly complete axiomatisation of the fuzzy bi-Godel modal logic (Theorem 3.2) and constructed the faithful embeddings of its paraconsistent relatives \(\mathbf{KG}^{2}\) and \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}\) into \(\mathbf{KbiG}\) and \(\mathbf{KbiG}(2)\) depending on the number of relations in the frames. The embeddings hold for the valid formulas and for the valid entailments alike (Theorems 4.1 and 4.2). Using these embeddings, we provided a characterisation of \(\mathbf{KG}^{2}\)- and \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}\)-definable classes of frames (Corollary 4.1) as well as a characterisation of transferrable formulas (Theorem 5.1). Moreover, we established that all \(\mathbf{KG}^{2}\)'s and \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}\)'s are \(\mathsf{PSpace}\)-complete (Propositions 6.3 and 6.4).
Still, several questions remain open. First of all, the axiomatisation of \(\mathbf{KG}^{2}\)'s and \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}\)'s. Indeed, now we can only 'prove' negative normal forms of \(\overline{\mathcal{L}^{\sim}_{\triangle,\square,\Diamond}}\) and \(\overline{\mathcal{L}^{\sim}_{\triangle,\blacksquare,\blackdiamond}}\) formulas in \(\mathcal{H}\mathbf{KbiG}(2)\) and then use the transformations in (14) to obtain the intended formulas. However, since \(\mathbf{KG}^{2}\)'s and \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}\)'s do not extend \(\mathbf{KbiG}\) in the general case (\(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}\)'s are non-standard, whence never extend \(\mathbf{KbiG}\), while \(\mathbf{KG}^{2}\)'s extend \(\mathbf{KbiG}\) only for the crisp case), it would be instructive to obtain calculi designed specifically for these logics. One way to attempt this would be to construct a _bi-lateral_ calculus where the notion of proof is supplemented with the dual notion of disproof. Such calculi exist for bi-Intuitionistic logic (cf., e.g., [2]) which is the bi-Godel logic without two prelinearity axioms \((p\to q)\vee(q\to p)\) and \(\mathbf{1}\prec((p\prec q)\wedge(q\prec p))\). Likewise, there are bi-lateral calculi for the Belnap-Dunn logic (cf., e.g., [26]) which can also be helpful since \(\mathbf{KG}^{2}\)'s and \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}\)'s can be seen as Belnapian relatives of \(\mathbf{KbiG}\).
Second, as we have already noted, the propositional fragment of \(\mathbf{KG}^{2}\)'s and \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}\)'s is \(\mathsf{G}^{2}\) -- a certain paraconsistent expansion of \(\mathsf{G}\). Namely, \(\mathsf{G}^{2}\) is the linear extension of \(\mathsf{l}_{4}\mathsf{C}_{4}\)[34] (cf. the axiomatisation of \(\mathsf{G}^{2}\) in [10]). It makes sense, then, to consider other modal expansions of paraconsistent logics and their linear extensions presented in [34]. Moreover, it makes sense to investigate the modalities whose support of falsity _dualises the support of truth_. In particular, for logics based on \(\mathsf{G}^{2}\):
\[e_{2}(\square\phi,w)=\inf_{w^{\prime}\in W}\{e_{2}(\phi,w)\lhd_{\mathsf{G}}wR ^{-}w^{\prime}\}\qquad\quad e_{2}(\Diamond\phi,w)=\inf_{w^{\prime}\in W}\{wR^ {-}w^{\prime}\vee_{\mathsf{G}}e_{2}(\phi,w)\}\]
Third, we know from [7, 8] that _crisp_\(\mathbf{KG}^{2}\) (both mono- and bi-relational) conservatively extend _monorelational_\(\mathbf{KbiG}^{\mathsf{G}}\). Thus, there is a trivial embedding of \(\mathbf{KbiG}\) into them. On the other hand, it is unclear whether \(\mathbf{KbiG}^{\mathsf{G}}\) or \(\mathbf{KbiG}^{\mathsf{G}}(2)\) can be embedded into _fuzzy_\(\mathbf{KG}^{2}\). And although Corollary 4.1, suggests that not all \(\mathbf{KbiG}^{\mathsf{G}}\)-definable classes of (mono- or bi-relational) frames are \(\mathbf{KG}^{2\mathsf{f}}\)-definable, it is not evident either.
Finally, recall that \(\mathbf{KG}^{2}\) and \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}\) semantics can be reformulated in terms of a single valuation on \([0,1]^{\Join}\) (cf. Fig. 1). \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}\) adds modalities that correspond to the infima and suprema w.r.t. the informational order on \([0,1]^{\Join}\). It makes sense, then, to combine \(\mathbf{KG}^{2}\) and \(\mathsf{G}^{2}_{\blacksquare,\blackdiamond}\) into one logic and equip it with the full set of bi-lattice connectives. Note that the modal Belnapian bi-lattice logic from [21] turns out to be equivalent to the classical bi-modal logic [32]. Our aim then is to determine whether this holds for \(\mathbf{KbiG}(2)\) and the bi-lattice paraconsistent Godel modal logic.
|
2309.14447 | Exact Solution to the Quantum and Classical Dimer Models on the Spectre
Aperiodic Monotiling | The decades-long search for a shape that tiles the plane only aperiodically
under translations and rotations recently ended with the discovery of the
`spectre' aperiodic monotile. In this setting we study the dimer model, in
which dimers are placed along tile edges such that each vertex meets precisely
one dimer. The complexity of the tiling combines with the dimer constraint to
allow an exact solution to the model. The partition function is
$\mathcal{Z}=2^{N_{\textrm{Mystic}}+1}$ where $N_{\textrm{Mystic}}$ is the
number of `Mystic' tiles. We exactly solve the quantum dimer (Rokhsar Kivelson)
model in the same setting by identifying an eigenbasis at all interaction
strengths $V/t$. We find that test monomers, once created, can be infinitely
separated at zero energy cost for all $V/t$, constituting a deconfined phase in
a 2+1D bipartite quantum dimer model. | Shobhna Singh, Felix Flicker | 2023-09-25T18:08:15Z | http://arxiv.org/abs/2309.14447v1 | # Exact Solution to the Quantum and Classical Dimer Models on the Spectre Aperiodic Monotiling
###### Abstract
The decades-long search for a shape that tiles the plane only aperiodically under translations and rotations recently ended with the discovery of the 'spectre' aperiodic monotile. In this setting we study the dimer model, in which dimers are placed along tile edges such that each vertex meets precisely one dimer. The complexity of the tiling combines with the dimer constraint to allow an exact solution to the model. The partition function is \(\mathcal{Z}=2^{N_{\text{Mystic}}+1}\) where \(N_{\text{Mystic}}\) is the number of 'Mystic' tiles. We exactly solve the quantum dimer (Rokhsar Kivelson) model in the same setting by identifying an eigenbasis at all interaction strengths \(V/t\). We find that test monomers, once created, can be infinitely separated at zero energy cost for all \(V/t\), constituting a deconfined phase in a 2+1D bipartite quantum dimer model.
The dimer model is one of the oldest models in statistical physics. Given a graph (vertices connected by edges), a 'perfect dimer matching' is a set of edges (dimers) such that each vertex connects to precisely one member of the set. The dimer model then considers the set of all perfect matchings. The model characterises a wide range of physical processes including adsorption [1, 2, 3, 4, 5], zero modes in electronic tight binding models [6, 7, 8], and magnetism, where dimers are used for example in analytic approaches to the Ising model [9]. The _quantum_ dimer model (QDM), also called the Rokhsar Kivelson model, introduces quantum superpositions of dimer placements [10, 11]. Originally introduced to capture the physics of resonating valence bond states [12] in theories of high-temperature superconductivity [10], QDMs are now understood to host a range of exotic phenomena such as quantum spin liquids, topological order, and fractionalisation [11, 13]. They have recently been realised experimentally in programmable quantum simulators [14, 15].
The utility of classical dimer models derives in part from an efficient method (the 'FKT algorithm') for enumerating perfect matchings developed by Fisher, Kasteleyn, and Temperley [1, 2, 3, 4, 5]. The result permits an exact solution to any \(N\)-vertex planar dimer model in the form of the partition function:
\[\mathcal{Z}_{N}\left[w\right]=\sum_{\mathcal{M}_{\text{i}}\in\mathcal{M}}\; \prod_{e\in\mathcal{M}_{\text{i}}}w\left(e\right). \tag{1}\]
Here, \(\mathcal{M}_{\text{i}}\) is a perfect matching in the set of all perfect matchings \(\mathcal{M}\), and \(e\) are the edges, of the graph. Setting weight \(w=1\) on all edges, \(\mathcal{Z}\left[1\right]\) counts the number of perfect matchings. From \(\mathcal{Z}\) all thermodynamic functions of state immediately follow. Of particular interest is the free energy per dimer [16] in an \(N\)-vertex graph
\[f_{N}\left[w\right]=\frac{1}{N/2}\ln\left(\mathcal{Z}_{N}\left[w\right]\right). \tag{2}\]
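For a concrete (if trivial) illustration of Eqs. (1) and (2), the sketch below counts the perfect matchings of a small graph by exhaustive recursion and evaluates \(f_{N}\) with unit weights; this is a brute-force toy check, not the polynomial-time FKT algorithm, and the six-site cycle used here is only an example.

```python
# Brute-force count of perfect matchings and the free energy per dimer, Eq. (2).
import math

def perfect_matchings(vertices, edges):
    # count matchings covering every remaining vertex exactly once
    if not vertices:
        return 1
    v = vertices[0]
    total = 0
    for e in edges:
        if v in e:
            u = e[0] if e[1] == v else e[1]
            if u in vertices:
                rest = [x for x in vertices if x not in e]
                total += perfect_matchings(rest, edges)
    return total

V = list(range(6))                            # a 6-cycle: exactly 2 perfect matchings
E = [(i, (i + 1) % 6) for i in range(6)]
Z = perfect_matchings(V, E)                   # Z_N[1] of Eq. (1) with unit weights
f = math.log(Z) / (len(V) / 2)                # Eq. (2)
print(Z, f)                                   # 2  0.231...
```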
For certain regular graphs admitting periodic embeddings, \(\mathcal{Z}\) has been evaluated analytically [4, 5, 16]. Owing to the importance of graph connectivity to the behaviour of dimer models, they have recently begun to be studied on infinite graphs with aperiodically ordered planar embeddings [17, 18, 19, 8]. Such graphs, which capture the symmetries of physical quasicrystals [20, 21], are irregular, meaning their vertices meet different numbers of edges, typically leading to a large degree of frustration in dimer arrangements. They admit planar embeddings which are long-range ordered, meaning their diffraction patterns feature sharp Bragg peaks [20, 21], despite lacking a discrete translational symmetry. Examples include the graph version of the Penrose tiling [17, 22] and the Ammann-Beenker tiling [18, 19, 23]. The long range order often permits analytical results; for example, in a modification of the Ammann Beenker tiling an exact solution to the dimer model can be approximated to arbitrary accuracy using transfer matrices [18].
This year saw a major advance in the study of aperiodic tilings with the discovery of the 'spectre' aperiodic monotile [24]. The spectre positively answered the decades-old question of whether there exists a shape that tiles the plane only aperiodically under translations and rotations [24]. Spectre tilings, either finite or infinite, can be created by the 'composition rules' in Fig. 1, reproduced from Ref. [24], in which each tile is replaced with copies of itself so as to build a larger tiling. Each tile becomes its mirror image under composition, meaning that all tiles have the same chirality after each composition.
Here we provide an exact analytical solution to both the classical and quantum dimer models on spectre tilings. We treat the vertices and edges of the tiles as those of a graph. Since we are only concerned with graph connectivity, we straighten the curved edges of the spectre tiles (resulting in what is termed 'Tile(1,1)' in Refs. [24, 25]). Each spectre tile, labelled \(S_{0}\), can have either 13 or 14 edges depending on its environment [24]. To ensure that all tiles are identical at the level of graph
connectivity we add a vertex (gold in Fig. 1) to any 13-edge tiles. This makes the graphs bipartite, meaning vertices divide into two sets such that edges only connect vertices in different sets. We discuss the non-bipartite case briefly at the end.
_Results (classical)_ -- Starting from a single spectre \(S_{0}\), a finite number of compositions \(\mathcal{N}\) generates a finite connected tile set \(S_{\mathcal{N}}\), while an infinite number of compositions results in a tiling of the Euclidean plane [24]. Even though each tile has 14 vertices, the total number of vertices can still be odd; in such cases the number of perfect matchings is zero, since a dimer must connect a pair of vertices. By construction, any tiling built by composition can also be seen as a concatenation of the once-composed tiles \(S_{1}\) and \(M_{1}\) (Fig. 1). The two green spectres in Fig. 1 together make up a tile called the 'Mystic', which we denote \(M_{0}\). We term the dark green tile the 'Upper Mystic' \(M_{0}^{+}\). It plays a special role as the only entirely internal tile in \(S_{1}\) and \(M_{1}\). It is also marked out as special by appearing \(\pi/6\) rotated from the other tiles, which appear only \(\pi/3\) rotated amongst themselves (Fig. 1). The Mystic \(M_{0}\) contains four internal edges. Of these four, exactly one of the two central edges must be covered by a dimer in any perfect matching. Either choice forces a range of other dimers. Choosing the red dimer in Fig. 1 forces all the pink and purple dimers; instead choosing the blue dimer forces all the light blue and purple dimers. The purple dimers are the same in both cases, and so these edges are always covered in any perfect matching. In fact the figure demonstrates that _every_ non-boundary edge of \(S_{1}\) and \(M_{1}\) either lies on \(M_{0}^{+}\), or has a fixed dimer occupation as shown. The only degrees of freedom are the two dimer matchings per \(M_{0}^{+}\) (and therefore per \(M_{0}\)), or possibly the boundaries of \(S_{1}\) or \(M_{1}\) within larger regions. From now on we focus on \(S_{\mathcal{N}}\) regions unless otherwise stated.
The only dimer within either \(S_{1}\) or \(M_{1}\) to meet a boundary vertex appears on \(M_{1}\) (vertex circled in green in Fig. 1). Fig. 2 shows the twice-composition of \(S_{0}\), which we term \(S_{2}\). The special dimer has been highlighted in gold. It forces the two closest green dimers, which in turn force every other green dimer. The result is that all internal edges of \(S_{2}\) not on Upper Mystics are again constrained.
In fact this behaviour is generic for \(S_{\mathcal{N}}\) regions. Ref. [24] lists all possible ways in which \(S_{1}\) and \(M_{1}\) can meet. The boundaries of \(S_{1}\) and \(M_{1}\) consist only of bivalent or trivalent vertices. The bottom vertex of the Mystic (the rightmost of the two lowest vertices of \(M_{0}\) in Fig. 1) meets the boundary of \(S_{\mathcal{N}}\) exactly once. In all other cases it appears internally. It does so at a trivalent vertex connecting three regions and touches the (gold) boundary dimer of \(M_{1}\). Since a dimer meeting a trivalent vertex forces the absence of dimers on both other edges, the existence of even a single forced dimer along the network of \(S_{1}\) and \(M_{1}\) boundaries is enough to force all remaining dimer placements. The only exception is a twofold freedom along the boundary of the entire connected tile set \(S_{\mathcal{N}}\) (only relevant for finite tile patches). Hence, the total number of dimer matchings is
\[\mathcal{Z}_{N}\left[1\right]=2^{N_{\text{Mystic}}+1} \tag{3}\]
where \(N_{\text{Mystic}}\) is the number of Mystic tiles \(M_{0}\) (equal to the number of Upper Mystic tiles \(M_{0}^{+}\)).
In the thermodynamic limit \(S_{\mathcal{N}\rightarrow\infty}\), for which the number of vertices \(N\rightarrow\infty\), the free energy per dimer
Figure 1: The composition rules for the spectre tiling (after Ref. [24]). The vertices of the spectre \(S_{0}\) are indicated; the gold vertex is added whenever it is not implied by the meeting of tiles. The two once-composed tiles \(S_{1}\) and \(M_{1}\) can be pieced together without overlaps to construct the infinite aperiodic tiling. Note that composition mirrors \(S_{0}\) tiles in such a way that only one chirality appears at any level of composition. The Mystic \(M_{0}\) is the two green tiles (the darker tile being the Upper Mystic \(M_{0}^{+}\)). Of the four internal edges of \(M_{0}\) either the red or dark blue dimer must appear in any perfect matching. Choosing red, all pink and purple dimers are forced. Choosing blue, all light blue and purple dimers are forced. The only freedom on internal \(S_{1}\) and \(M_{1}\) edges is therefore two dimer matchings per Mystic (these cases are shown at the bottom, labelled \(|r\rangle\) (red) and \(|b\rangle\) (blue) for convenience in the quantum model). The gold vertex appears only on the boundaries of \(S_{1}\) and \(M_{1}\), so does not affect this argument.
is
\[f_{\lim N\rightarrow\infty}\left[1\right]=\frac{\ln\left(2\right)}{3\left(5+\sqrt{1 5}\right)}\approx 0.02604 \tag{4}\]
(see Appendix A). To confirm this result we exactly calculated the free energy per dimer numerically in finite patches \(S_{2}\) to \(S_{6}\) using the FKT algorithm [1; 3]. The results, shown in Fig. 3, converge towards the analytical result. The convergence is slow owing to the fractal boundary of \(S_{\mathcal{N}\rightarrow\infty}\), so we also show the result of a series acceleration method [26] (Appendix B) which gives a rapid convergence.
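As a quick arithmetic check of Eq. (4) (nothing more than evaluating the closed form):

```python
import math
f_inf = math.log(2) / (3 * (5 + math.sqrt(15)))
print(f_inf)      # ~0.026040, matching the quoted value 0.02604
```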
The free energy per dimer in the spectre tiling, Eq. (4), is significantly smaller than values obtained in periodic lattices [16], e.g. the square (0.583), honeycomb (0.323), triangular (0.857), and Kagome (0.462) lattices. This fits with the observation that all bulk dimers, other than those on \(M_{0}^{+}\), are completely constrained.
_Results (quantum)_ -- The QDM can be defined on \(S_{\mathcal{N}}\) by replacing the square tiles (plaquettes) of Ref. [10] with \(S_{0}\) tiles. Explicitly, on any spectre \(S_{0,i}\) we can define \(\left|r_{i}\right\rangle\) and \(\left|b_{i}\right\rangle\) to be the quantum states with the red and blue dimer placements in Fig. 1 respectively. The Hamiltonian then reads
\[\hat{H}\!\!=\!\!\!\sum_{S_{0,i}\in S_{\mathcal{N}}}\!\!\!-\!t\left(\left|r_{i }\right\rangle\!\langle b_{i}\right|+\left|b_{i}\right\rangle\!\langle r_{i} \right|)+V\left(\left|r_{i}\right\rangle\!\langle r_{i}\right|+\left|b_{i} \right\rangle\!\langle b_{i}|) \tag{5}\]
where \(t\) and \(V\) are real and \(t\) is positive. The terms weighted by \(-t\) can be thought of as defining a kinetic energy operator which enacts 'flips' \(\left|r\right\rangle\leftrightarrow\left|b\right\rangle\). The terms weighted by \(V\) define a potential energy operator which counts 'flippable' plaquettes of the form \(\left|r\right\rangle\) or \(\left|b\right\rangle\).
Eq. (5) is well studied in the square lattice, where \(\left|r_{i}\right\rangle\) denotes two vertical dimers, and \(\left|b_{i}\right\rangle\) two horizontal dimers, on square \(i\). The so-called Rokhsar Kivelson (RK) point \(t=V\) separates ordered phases with different symmetries. The order is set by the sign of \(V/t\) which either attempts to maximise or minimise the number of flippable plaquettes [10; 11; 27]. In contrast, on the spectre tiling \(S_{\mathcal{N}}\) the only flippable plaquettes are \(M_{0}^{+}\) tiles. Their number is entirely fixed. This heavy constraint decouples the problem into one of matching independent \(M_{0}^{+}\) tiles with quantum dimers. Each tile \(M_{0,i}^{+}\) admits two energy eigenstates which we denote:
\[\left|\pm_{i}\right\rangle=\left(\left|r_{i}\right\rangle\pm\left|b_{i} \right\rangle\right)/\sqrt{2} \tag{6}\]
with corresponding energies \(V\mp t\). The ground state of Eq. (5) is therefore
\[\hat{H}\prod_{M_{0,i}^{+}}\left|+_{i}\right\rangle=(V-t)N_{\text{Mystic}} \prod_{M_{0,i}^{+}}\left|+_{i}\right\rangle\!. \tag{7}\]
All excited states can be formed by swapping individual \(\left|+_{i}\right\rangle\) for \(\left|-_{i}\right\rangle\) at a cost of \(2t\) energy per swap.
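Because Eq. (5) decouples into independent blocks, Eqs. (6) and (7) can be verified by diagonalising a single \(2\times 2\) Mystic block; the sketch below does this with numpy for arbitrary example values of \(t\) and \(V\) (the numerical values are not taken from the paper).

```python
# One Mystic block of Eq. (5) in the {|r>, |b>} basis.
import numpy as np

t, V = 1.0, 0.3                                   # example couplings
H_tile = np.array([[V, -t],
                   [-t, V]])
energies, states = np.linalg.eigh(H_tile)
print(energies)          # [V - t, V + t] = [-0.7, 1.3]
print(states[:, 0])      # ~ (|r> + |b>)/sqrt(2), the |+> state, up to overall sign
```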
_Discussion_ -- Removing a dimer from a perfect matching results in two unmatched vertices. These can be thought of as particle-like'monomer' defects which can move independently of one another by dimer rearrangements [11]. Specifically, each monomer lies at the end of an 'alternating path', a set of edges alternately uncovered and covered by dimers. Switching which edges are covered and uncovered moves the monomer along the path. The spectre again has an interesting structure in this regard. Note for example that each green alternating path in Fig. 2 terminates only on the boundary and the gold dimer. The same structure holds for all \(S_{\mathcal{N}}\). Deleting the gold dimer to create a pair of monomers, one of the pair can move to the boundary along any green path;
Figure 3: The free energy per dimer \(f_{N}\) for patches of spectres containing \(N\) vertices. Square points represent compositions \(S_{2}\) to \(S_{6}\). The dashed line shows the analytical result of Eq. 4, valid for \(N\rightarrow\infty\). Convergence is slow owing to the fractal boundary of \(S_{\mathcal{N}\rightarrow\infty}\); green circles represent a series acceleration (see Appendix B).
Figure 2: Twice-composition of the spectre, \(S_{2}\). The mirror of \(M_{1}\) (Fig. 1) is highlighted in pink. The gold dimer (highlighted with an arrow) reaches the boundary of \(M_{1}\), and forces all green dimers. The only freedom in dimer placements is the twofold choice on Upper Mystic tiles, and on the boundary. Hence the number of perfect matchings is \(2^{N_{\text{Mystic}}+1}\).
in the thermodynamic limit it can escape to infinity. In fact any test pair of monomers has this same feature: one will escape to the boundary, and the other localises on an Upper Mystic \(M_{0}^{+}\).
In QDMs on previously studied planar bipartite graphs, such as the square lattice, the RK point \(t=V\) constitutes a deconfined quantum critical point between ordered phases [28]. Deconfinement means that test-monomers can be separated to infinite distance at finite energy cost [11; 29; 30]. In general, since QDMs on bipartite graphs map to compact (matter-free) quantum electrodynamics [13; 29; 31], and since deconfined phases cannot exist in compact 2+1D \(U(1)\) gauge theories [32], the RK point cannot be part of a deconfined phase existing over a range of \(V/t\).
Remarkably, the behaviour in the (bipartite) spectre tiling appears to be at odds with this statement. By the argument just given, any pair of test monomers can be infinitely separated. Doing so preserves the number of flippable plaquettes, so costs no energy according to Eq. (5) at any \(V/t\). Test monomers are therefore deconfined over all \(V/t\). We suggest that the result of Ref. [32] may survive because there seems to be no obvious mapping to a compact \(U(1)\) gauge theory, since the vertices in the spectre tiling connect to variable numbers of edges. Another difference to previous studies is that all previously known bipartite RK points were characterised by algebraic dimer correlations [11]; spectre dimer correlations, being completely uncorrelated between different \(M_{0}^{+}\) tiles, are not algebraic at any \(V/t\).
QDMs on non-bipartite graphs behave qualitatively differently. Here, they _do_ admit deconfined phases spanning a continuous range of \(V/t\)[11; 13; 33; 34], and their emergent gauge field descriptions are \(\mathbb{Z}_{2}\) rather than \(U(1)\)[13; 35; 36]. The spectre tiling can be made non-bipartite by omitting the gold vertex in Fig. 1 whenever it is not forced by the tiling. Fig. 1 shows that gold vertices appear only on the boundaries connecting \(S_{1}\) and \(M_{1}\) tiles, so some of the intuition developed here may hold in non-bipartite spectre tilings. Nevertheless, preliminary checks suggest a more complicated behaviour.
Returning to the classical model, different weights \(w\) can be assigned in Eq. (1). For example, in the square lattice different weights might be assigned to horizontal edges compared to vertical edges [1]. However, since aperiodic tilings lack a unit cell, there is no obvious choice for assigning weights. One option for \(S_{\mathcal{N}}\) tilings is to assign different weights to edges within regions \(S_{1}\) and \(M_{1}\) while assigning weights consistently between different \(S_{1}\) and \(M_{1}\). Since dimer placements are fixed for all internal edges other than \(M_{0}^{+}\), the corresponding weights factor out of the partition function. Those edges which never receive a dimer make no contribution regardless of weight. The partition function can therefore also be calculated in this more general case, with the sum being over weights of edges appearing on \(M_{0}^{+}\) or the boundary of the tiling.
It is interesting to consider what happens when tiles are added or removed from the \(S_{\mathcal{N}}\) regions while still obeying the spectre tiling rules. The total number of vertices can become odd, as in region \(M_{1}\), or even but with an imbalance between the numbers of vertices in the bipartite subgraphs. In both cases there are zero perfect matchings, since dimers connect vertices on distinct bipartite subgraphs. Another possibility is a monomer-free region as shown in Fig. 4. Precisely one \(M_{0}\) touches the boundary of any \(S_{\mathcal{N}}\) region. Removing this \(M_{0}\) from \(S_{2}\), as shown, causes boundaries of some internal \(S_{1}\) and \(M_{1}\) regions to gain a degree of freedom (orange edges host 0 or 1 dimers). This region hosts six perfect matchings excluding those localised around \(M_{0}^{+}\). In general there is a twofold degree of freedom around any graph cycle (closed loop of edges) which connects to the rest of the graph only via edges not hosting a dimer. This accounts for the freedom around \(M_{0}^{+}\), the boundaries of \(S_{\mathcal{N}}\), and also these more complicated branching structures in other tile patches.
The complexity of the spectre tiling leads to a number of surprising simplifications in physical models, permitting exact results where periodic (and other aperiodic) tilings do not. It remains to be seen if there is anything deeper about the structure of the tiling which leads to this simplicity.
Figure 4: Removing the boundary-touching Mystic \(M_{0}\) from region \(S_{2}\) allows the freedom formerly localised to the boundary to move into the bulk. Purple edges always receive a dimer, and black edges never receive a dimer, in any perfect matching. Orange edges are free to either host a dimer or not.
_Acknowledgments--_The authors thank S. Franca and J. Schirmann for helpful discussions, and A. G. Grushin, R. Moessner, P. d'Ornellas, M. A. Sanchez Martinez, Z. Ringel, and J. van Wezel for helpful comments on the manuscript. F.F. was supported by EPSRC grant EP/X012239/1.
|
2301.00210 | A regular interior solution of Einstein field equations | Starting from the solution of the Einstein field equations in a static and
spherically symmetric spacetime which contains an isotropic fluid, we construct
a model to represent the interior of compact objects with compactness rate
$u=\frac{GM}{c^2R}<0.23577$. The solution is obtained by imposing the isotropy
condition for the radial and tangential pressures, this generates an ordinary
differential equation of second order for the temporal $g_{tt}$ and radial
$g_{rr}$ metric potentials, which can be solved for a specific function of
$g_{tt}$. The graphic analysis of the solution shows that it is physically
acceptable, that is to say, the density, pressure and speed of sound are
positive, regular and monotonically decreasing functions, also, the solution is
stable due to meeting the criteria of the adiabatic index. When taking the data
of mass $M=1.44^{+0.15}_{-0.14}M_\odot$ and radius $R=13.02^{+1.24}_{-1.06}km$
which corresponds to the estimations of the star PSR J0030+0451 we obtain values
of central density $\rho_c=7.5125\times 10^{17} kg/m^3$ for the maximum
compactness $u=0.19628$ and of $\rho_c= 2.8411 \times 10^{17} kg/m^3$ for the
minimum compactness $u=0.13460$, which are consistent with those expected for
this type of stars. | Gabino Estevez-Delgado, Joaquin Estevez-Delgado, Modesto Pineda Duran, Arthur Cleary-Balderas | 2022-12-31T14:54:51Z | http://arxiv.org/abs/2301.00210v1 | # A regular interior solution of Einstein field equations.
###### Abstract
Starting from the solution of the Einstein field equations in a static and spherically symmetric spacetime which contains an isotropic fluid, we construct a model to represent the interior of compact objects with compactness rate \(u=\frac{GM}{c^{2}R}<0.23577\). The solution is obtained by imposing the isotropy condition for the radial and tangential pressures, this generates an ordinary differential equation of second order for the temporal \(g_{tt}\) and radial \(g_{rr}\) metric potentials, which can be solved for a specific function of \(g_{tt}\). The graphic analysis of the solution shows
that it is physically acceptable, that is to say, the density, pressure and speed of sound are positive, regular and monotonically decreasing functions; moreover, the solution is stable, since it meets the adiabatic-index criterion. Taking the mass \(M=1.44^{+0.15}_{-0.14}M_{\odot}\) and radius \(R=13.02^{+1.24}_{-1.06}km\) corresponding to the estimates for the star PSR J0030+0451, we obtain central densities \(\rho_{c}=7.5125\times 10^{17}kg/m^{3}\) for the maximum compactness \(u=0.19628\) and \(\rho_{c}=2.8411\times 10^{17}kg/m^{3}\) for the minimum compactness \(u=0.13460\), which are consistent with those expected for this type of star.
## 1 Introduction
Describing the interior of stars and determining their average composition requires many complementary approaches: chemistry, thermodynamics, nuclear physics, particle physics and gravitational physics. If no instabilities are generated, once all the nuclear fuel has been consumed the star shrinks and, depending on its mass and stability, it may form a white dwarf, a neutron star or a quark star [1, 2]. The description of each of these stages and of their stability is, of course, a far more delicate matter, involving a detailed analysis of the predominant particles in the interior of the star, whether electrons, neutrons or quarks, and even hybrid compositions. For the purposes of this work, matter is assumed to be satisfactorily described by a perfect fluid, and it will not be necessary to specify a particular equation of state. Depending on the mass and radius of a star, it may be a white dwarf, a neutron star or a quark star, and this also determines the order of magnitude of the density; for example, densities of the order of \(10^{18}\,kg/m^{3}\) are typical for neutron stars. Given such high densities, it is appropriate to describe the interior of these stars by means of Einstein's general theory of relativity. Interior solutions have been studied for over a century; the first of them was constructed for a static and spherically symmetric spacetime filled with a perfect fluid of incompressible (constant) density, and is known as the interior Schwarzschild solution. Although constant density is an unrealistic assumption, its consequences revealed some differences with the treatment of stellar models in Newton's theory of gravitation. One of these is the compactness bound \(u=GM/c^{2}R<4/9\), where \(M\) is the mass and \(R\) the radius, which indicates that stars with arbitrary mass and radius are not possible. It was later shown that this relation is not exclusive to this idealized model and that it also holds for
stars with a monotonically decreasing density function whose exterior geometry is given by the Schwarzschild solution.\({}^{3,4}\) Although more than 130 interior solutions with a perfect fluid have been published, only a few satisfy the characteristics that make them physically acceptable. In an analysis carried out in 1998 on a total of 127 reported solutions, only 16 had density and pressure functions that were positive, regular and monotonically decreasing together with a speed of sound that did not violate the causality condition, and only 9 of those 16 had a speed of sound that decreases monotonically with the radius.\({}^{5-16}\) This last requirement is debatable since, for example, for the realistic MIT Bag equation of state \(P(\rho)=\frac{1}{3}(c^{2}\rho-4B_{g})\) associated with quark stars, the speed of sound is \(v_{s}^{2}=\frac{c^{2}}{3}\), which is not a monotonically decreasing function. The construction of stellar solutions with a perfect fluid remains an active field, although the difficulty of obtaining physically acceptable solutions limits their number. One avenue that has been explored is the use of isotropic coordinates,\({}^{17-21}\) which has favored the integration of the system of equations and its application to the description of stars like Her X-1, 4U1538-52, LMC X-4 and SAX J1808.4-3658.\({}^{22}\) A particular class of solutions constructed in Schwarzschild coordinates assumes a metric potential \(g_{tt}\!=\!-(1+ar^{2})^{n}\); to this group belong the Tolman IV\({}^{6}\) and Durgapal\({}^{15}\) solutions. Extensions to other values of \(n\), positive integer or negative fractional,\({}^{23-25}\) have been carried out, showing that for \(n\geq 4\) the generated solutions are physically acceptable. Other recent works have addressed the possibility of generating exact solutions with metric potential \(g_{tt}\!=\!-\frac{1+ar^{2}}{1+b^{2}}\), showing that this functional form is adequate for obtaining physically acceptable solutions and is consistent with the stars Her X-1,\({}^{26,27}\) PSR J0348+0432,\({}^{28}\) PSR B0943+10,\({}^{29}\) PSR J0737-3039A\({}^{30}\) and PSR J1614-2230.\({}^{31}\) Motivated by these recent works, in this report we present a new solution of Einstein's equations with a perfect fluid, defined by a metric function \(g_{tt}\), and its application to the star PSR J0030+0451. The structure of this article is as follows: in Section 2 we present the Einstein field equations for a static and spherically symmetric spacetime with a perfect fluid and, assuming the form of the metric function \(g_{tt}\!=\!-A^{2}\left(1+ar^{2}\right)^{2}\!/\!\left[1\!+\!(\frac{3}{\sqrt{2}}-1)ar^{2}\right]\), we obtain the solution from the isotropy equation. In Section 3 we determine the hydrostatic functions and impose the matching conditions between the interior and exterior solutions to fix the integration constants. In Section 4 a graphic analysis of the solution is carried out; taking the observational values of the mass and radius of the star PSR J0030+0451, we determine the physical values of the pressure, density and speed of sound and, from the graphic analysis and the data, we show that the solution is physically acceptable. The conclusions and a discussion of future work are presented in Section 5.
## 2 The field equations and the solution
The interior geometry of a static and spherically symmetric spacetime can be described through a line element\({}^{32,33}\) :
\[ds^{2}=-y(r)^{2}dt^{2}+\frac{dr^{2}}{B(r)}+r^{2}(d\theta^{2}+\sin^{2}\theta d \phi^{2}), \tag{1}\]
where \(y,B\) are functions of the radial coordinate \(r\leq R\). Einstein equations \(G_{\mu\nu}=R_{\mu\nu}-\frac{1}{2}Rg_{\mu\nu}=kT_{\mu\nu}\), (with \(R_{\mu\nu}\), \(R\) and \(g_{\mu\nu}\) the components of the Ricci tensor, the Ricci scalar and the metric tensor respectively), have as source the distribution of matter from a perfect fluid described by the energy-momentum tensor:
\[T_{\mu\nu}=c^{2}\rho u_{\mu}u_{\nu}+P(u_{\mu}u_{\nu}+g_{\mu\nu}), \tag{2}\]
with \(u^{\mu}\) the four velocity of the fluid, \(\rho\) the energy density and \(P\) the pressure. The non zero components of the Einstein equations are:
\[kc^{2}\rho = -\frac{B^{\prime}}{r}+\frac{1-B}{r^{2}}, \tag{3}\] \[kP = \frac{2By^{\prime}}{ry}-\frac{1-B}{r^{2}},\] (4) \[kP = \frac{(ry^{\prime\prime}+y^{\prime})B}{ry}+\frac{(ry^{\prime}+y) B^{\prime}}{2ry}, \tag{5}\]
with the derivative with respect to the radial coordinate \(r\) denoted by \({}^{\prime}\). Meanwhile, the conservation equation for the energy-momentum tensor \(\nabla_{\mu}T^{\mu}\,_{\nu}=0\) implies the Tolman-Oppenheimer-Volkoff (TOV) equation:\({}^{32,33}\)
\[P^{\prime}=-\frac{(P+c^{2}\rho)\,y^{\prime}}{y}. \tag{6}\]
This last equation is not independent, since it can be obtained from the system of equations (3)-(5). This is the set of equations for which we will obtain the solution, starting from a given function \(y(r)\).
### The solution
For the integration of the system we propose a metric function \(g_{tt}=-y(r)^{2}\), with a form of \(y(r)\) similar to, but slightly different from, one employed previously, with which it was possible to integrate the system adequately and which proved useful for describing compact objects with a compactness rate \(u=GM/c^{2}R\leq 0.2660858316\).\({}^{28}\) Specifically, we take
\[y\left(r\right)=\frac{A\left(1+ar^{2}\right)}{\sqrt{1+\left(\frac{3}{\sqrt{2}}-1 \right)ar^{2}}}, \tag{7}\]
where \(A\) and \(a\) are constants. One useful relation for the integration of the system is obtained by replacing \(y(r)\) in the difference of the equations (4) and (5), leading to:
\[B^{\prime}-\frac{\sqrt{8}[3+\sqrt{2}+4(\sqrt{8}-1)ar^{2}+7a^{2}r^{4}]B}{\left( 1+\sqrt{2}ar^{2}\right)\left(2+3\sqrt{2}+7ar^{2}\right)r}+\frac{(1+ar^{2})(2+ 3\sqrt{2}+7ar^{2})\sqrt{2}}{(1+\sqrt{2}ar^{2})(3+\sqrt{2}+7ar^{2})r}=0,\]
after the integration we obtain:
\[B\left(r\right) = 1+\frac{\left(31-22\sqrt{2}\right)\left(3\,\sqrt{2}+2+7\,ar^{2} \right)^{3}ar^{2}}{343\,\left(1+a\sqrt{2}r^{2}\right)^{3}}\left[C+\ln\left( \frac{\sqrt{2}+3+7\,ar^{2}}{3\,\sqrt{2}+2+7\,ar^{2}}\right)\right] \tag{8}\] \[+ \frac{\left(22-17\sqrt{2}\right)\left(732\sqrt{2}+1832+7\left(5 55\sqrt{2}+251\right)ar^{2}+4606a^{2}r^{4}\right)ar^{2}}{2303\left(\sqrt{2}+2 ar^{2}\right)^{3}},\]
\(C\) is the constant of integration.
## 3 Hydrostatic functions and physical conditions
Once we know the metric functions we will proceed to determine the hydrostatic functions. Replacing the functions \(y(r)\) and \(B\) given by the equations (7) and (8) in the equations (3) and (4) we determine the density and the pressure:
\[kc^{2}\rho\left(r\right) = \frac{3\left(2\,\sqrt{2}+6+\left(-4+15\,\sqrt{2}\right)ar^{2}+14 \,a^{2}r^{4}\right)\left(1-B\left(r\right)\right)}{\left(3\,\sqrt{2}+2+7\,ar^ {2}\right)\left(\sqrt{2}+2\,ar^{2}\right)r^{2}} \tag{9}\] \[-\frac{14\left(12\,\sqrt{2}-13+7\,ar^{2}\right)a^{2}r^{2}}{\left( 3\,\sqrt{2}+2+7\,ar^{2}\right)\left(\sqrt{2}+2\,ar^{2}\right)\left(\sqrt{2}+ 3+7\,ar^{2}\right)},\] \[kP(r) = \frac{2\left(6\,\sqrt{2}-3+7\,ar^{2}\right)a}{\left(1+ar^{2} \right)\left(3\,\sqrt{2}+2+7\,ar^{2}\right)}\] (10) \[-\frac{\left(3\,\sqrt{2}+2+3\,\left(5\,\sqrt{2}+1\right)ar^{2}+21 \,a^{2}r^{4}\right)\left(1-B\left(r\right)\right)}{\left(1+ar^{2}\right) \left(3\,\sqrt{2}+2+7\,ar^{2}\right)r^{2}}.\]
In these equations the expression \((1-B)/r^{2}\) appears; however, it is regular at \(r=0\), as can be seen from equation (8). Another important quantity for determining whether the solution is physically acceptable is the speed of sound, since the model must not violate the causality condition. Using the chain rule we obtain the speed of sound:
\[\frac{v^{2}(r)}{c^{2}}=\frac{1}{c^{2}}\frac{\partial P(\rho)}{\partial\rho}= \frac{\left(S_{1}\left(r\right)B\left(r\right)+\left(3\,\sqrt{2}+2+7\,ar^{2} \right)^{2}\left(1+ar^{2}\right)^{2}\right)S_{2}\left(r\right)}{\left(S_{3} \left(r\right)\left(\sqrt{2}+3+7\,ar^{2}\right)^{2}B\left(r\right)+S_{4}\left( r\right)\right)\left(1+ar^{2}\right)^{2}},\]
where
\[S_{4}(r) = (1+ar^{2})\left[15\sqrt{2}+150+7(34+9\sqrt{2})ar^{2}+98a^{2}r^{4 }\right]\left[3\sqrt{2}+2+7ar^{2}\right]^{2},\] \[S_{1}(r) = \left(\sqrt{2}-4\right)\left(\sqrt{2}+3+7\,ar^{2}\right)\left(2+ \sqrt{2}+\left(5\,\sqrt{2}-1\right)ar^{2}+3\,a^{2}r^{4}\right),\] \[S_{3}(r) = 3\,\left(\sqrt{2}-4\right)\left(10\,\sqrt{2}+30+3\,\left(-4+15 \,\sqrt{2}\right)ar^{2}+14\,a^{2}r^{4}\right),\] \[S_{2}(r) = \left(6\,\sqrt{2}-3+7\,ar^{2}\right)\left(\sqrt{2}+3+7\,ar^{2} \right)\left(\sqrt{2}+2\,ar^{2}\right).\]
### Criteria for physical acceptability
Obtaining a solution of Einstein's equations does not guarantee that it is physically acceptable; many solutions fail to be so\({}^{5}\) because they do not satisfy certain properties. In the following we list the requirements that must be satisfied; some of these will be applied directly and others will be verified graphically in the next section.\({}^{28}\)
One of the conditions that must be met is the **regularity of the geometry** when approaching the center. This can be expressed algebraically through the Kretschmann scalar; given its length, it is enough to show that the metric coefficients around \(r=0\) are of the form \(\alpha+\beta r^{2}+O(r^{4})\). The expansion of \(B(r)\) and \(y(r)\) in the vicinity of \(r=0\) gives:
\[y\left(r\right)=A\left(\left(1-\frac{\left(3\,\sqrt{2}-6\right)a}{4}r^{2}- \frac{\left(30\,\sqrt{2}-41\right)a^{2}}{16}r^{4}+O\left(r^{6}\right)\right) \right),\]
\[B\left(r\right)=1+\frac{2\,a\left(\left(17\,\sqrt{2}-26\right)\left(C+\ln\left( \frac{\sqrt{2}+3}{3\,\sqrt{2}+2}\right)\right)+41\,\sqrt{2}-80\right)r^{2}}{49}+ O\left(r^{4}\right),\]
besides regularity, the geometry must also be **free of any event horizon**; this property is easier to demonstrate through a graphic analysis and will be analysed in the following section.
**The density and pressure must be finite, positive and monotonically decreasing** as functions of the radial coordinate. That is to say, for \(r\in(0,R)\), \(\rho^{\prime}<0\) and \(P^{\prime}<0\) (a condition that will be analysed graphically), and at the center they must attain their maximum value, which implies the following set of inequalities:
\[kc^{2}\rho(0)=\frac{6\,a\left(\left(132\,\sqrt{2}+193\right) \left(2\,C-\ln\left(2\right)\right)+3006\,\sqrt{2}+4286\right)}{6713\,\sqrt{2} +9506}>0, \tag{11}\] \[kP(0)=1/49\,a\left(\left(17\,\sqrt{2}-26\right)\left(2\,C-\ln \left(2\right)\right)-65\,\sqrt{2}+134\right)>0, \tag{12}\]
\[\rho^{\prime\prime}(0)=-\frac{5a^{2}\left[6\left(195-103\sqrt{2} \right)\left(2C-\ln 2\right)+3819+6257\sqrt{2}\right]}{49\,\left(\sqrt{2}+3 \right)^{3}}<0, \tag{13}\] \[P^{\prime\prime}(0)=-\frac{3a^{2}\left[2\left(113-72\,\sqrt{2} \right)\left(2\,C-\ln 2\right)+2759-1248\,\sqrt{2}\right]}{49\,\left(\sqrt{2}+3 \right)^{2}}<0. \tag{14}\]
In addition to these inequalities the solution satisfies \(\rho^{\prime}(0)=0\) and \(P^{\prime}(0)=0\), which together with the inequalities (11)-(14) implies that \(r=0\) is a maximum of the functions \(\rho\) and \(P\). The **causality condition at the center** of the star requires that it satisfies
\[0\leq\frac{v(0)^{2}}{c^{2}}=\frac{3[24\,\sqrt{2}+4\,C-2\,\ln\,2+\,55\,]}{5[96 \sqrt{2}+12C-6\ln 2+121]}\leq 1. \tag{15}\]
Combining the previous equations we can determine inequalities for the constants \((C,a)\); in particular, forming \(k\left(c^{2}\rho\left(0\right)+3\,P\left(0\right)\right)=9\,\left(2-\sqrt{2}\right)a>0\), we obtain \(a>0\).
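Since \(a>0\) and the remaining prefactors in (11)-(14) are positive, these requirements reduce to sign checks on expressions in \(C\) alone. A minimal Python sketch (illustrative only; the function name and the trial value of \(C\) are our own choices) evaluates the bracketed expressions of Eqs. (11)-(15):

```python
import math

S2, LN2 = math.sqrt(2.0), math.log(2.0)

def center_conditions(C):
    """Sign checks implied by Eqs. (11)-(15) at r = 0, assuming a > 0."""
    t = 2*C - LN2
    rho0_positive = (132*S2 + 193)*t + 3006*S2 + 4286 > 0        # Eq. (11)
    p0_positive = (17*S2 - 26)*t - 65*S2 + 134 > 0               # Eq. (12)
    rho_has_max = 6*(195 - 103*S2)*t + 3819 + 6257*S2 > 0        # Eq. (13)
    p_has_max = 2*(113 - 72*S2)*t + 2759 - 1248*S2 > 0           # Eq. (14)
    v2_over_c2 = 3*(24*S2 + 4*C - 2*LN2 + 55) / (5*(96*S2 + 12*C - 6*LN2 + 121))
    causal = 0.0 <= v2_over_c2 <= 1.0                            # Eq. (15)
    return rho0_positive, p0_positive, rho_has_max, p_has_max, causal, v2_over_c2

print(center_conditions(4.6))   # at this representative C all five checks hold
```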
The constants \(C\) and \(A\), which appear in the metric functions, are determined by imposing that **the interior and exterior geometries join continuously on the surface of the star \(r=R\) and that the pressure vanishes on the surface**. The exterior geometry is described by the exterior Schwarzschild solution:
\[ds^{2}\;=\;-\left(1-\frac{2GM}{c^{2}r}\right)\,dt^{2}+\left(1- \frac{2GM}{c^{2}r}\right)^{-1}dr^{2}+\,r^{2}(d\theta^{2}+\sin^{2}\theta\,d \phi^{2}),\quad r\geq R,\]
where \(M\) represents the total mass inside the fluid sphere. When we impose \(P(R)=0\), from the equation (10) we obtain \(C\):
\[C = -\ln\left(\frac{\sqrt{2}+3+7\,w}{3\,\sqrt{2}+2+7\,w}\right)+ \frac{\left(25\,\sqrt{2}+47\right)W_{1}}{W_{2}\left(3\,\sqrt{2}+2+7\,w\right) ^{2}}, \tag{16}\]
where \(w=aR^{2}\), \(W_{2}=822\,\sqrt{2}+548+822\,\left(5\,\sqrt{2}+1\right)w+5754\,w^{2}\) and
\[W_{1}=3286\sqrt{2}+4048+(8149\sqrt{2}+18623)w+7(2757\sqrt{2}+1075)w^{2}+13426w^{ 3}.\]
Meanwhile, from the continuity of the component \(g_{tt}\) at \(r=R\) we obtain:
\[A^{2}=\frac{\left(3\,\sqrt{2}-2\right)\left(3\,\sqrt{2}+2+7\,aR^{2}\right)^{2 }}{14\left[3\,\sqrt{2}+2+3\,\left(5\,\sqrt{2}+1\right)aR^{2}+21\,a^{2}R^{4} \right](1+aR^{2})}. \tag{17}\]
The continuity of \(g_{rr}\) at \(r=R\) determines the value of the compactness as a function of \(w\):
\[u(w)=\frac{GM}{c^{2}R}=\frac{1}{2}(1-B(R))=\frac{\left(6\,\sqrt{2}-3+7\,w \right)w}{3\,\sqrt{2}+2+3\,\left(5\,\sqrt{2}+1\right)w+21\,w^{2}}. \tag{18}\]
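As a quick numerical illustration of Eqs. (11), (16) and (18), a minimal Python sketch (assuming standard SI values for \(G\), \(c\) and \(M_{\odot}\); the helper names are our own) inverts \(u(w)\) by bisection for a given mass and radius, evaluates \(C(w)\) and \(a=w/R^{2}\), and then estimates the central density:

```python
import math

S2 = math.sqrt(2.0)
G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30   # SI values assumed here

def u_of_w(w):
    """Compactness u = GM/(c^2 R) as a function of w = a R^2, Eq. (18)."""
    return (6*S2 - 3 + 7*w) * w / (3*S2 + 2 + 3*(5*S2 + 1)*w + 21*w**2)

def w_of_u(u, lo=0.0, hi=0.90378):
    """Invert Eq. (18) by bisection; u(w) is monotone on [0, w0]."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if u_of_w(mid) < u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def C_of_w(w):
    """Integration constant fixed by P(R) = 0, Eq. (16)."""
    W2 = 822*S2 + 548 + 822*(5*S2 + 1)*w + 5754*w**2
    W1 = (3286*S2 + 4048 + (8149*S2 + 18623)*w
          + 7*(2757*S2 + 1075)*w**2 + 13426*w**3)
    return (-math.log((S2 + 3 + 7*w) / (3*S2 + 2 + 7*w))
            + (25*S2 + 47) * W1 / (W2 * (3*S2 + 2 + 7*w)**2))

def central_density(M, R):
    """rho(0) from Eq. (11), using k = 8*pi*G/c^4."""
    u = G * M / (c**2 * R)
    w = w_of_u(u)
    a = w / R**2
    C = C_of_w(w)
    kc2rho0 = (6 * a * ((132*S2 + 193)*(2*C - math.log(2)) + 3006*S2 + 4286)
               / (6713*S2 + 9506))
    return kc2rho0 * c**2 / (8 * math.pi * G), u, w

print(u_of_w(0.90378))                                   # ~0.2358, i.e. u_0
rho0, u, w = central_density(1.59 * M_SUN, 11.96e3)      # PSR J0030+0451, u_max case
print(u, w, rho0)                                        # rho_c ~ 7.5e17 kg/m^3
```

For \(M=1.59M_{\odot}\) and \(R=11.96\,km\) this gives \(u\approx 0.196\) and a central density of the order of \(7.5\times 10^{17}\,kg/m^{3}\), consistent with the values reported below in Table 1.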
The rest of the conditions require a graphical analysis; these correspond to the **energy conditions**:
- The Strong Energy Condition: \(c^{2}\rho+3P\geq 0\), \(c^{2}\rho+P\geq 0\) or
- The Dominant Energy Condition: \(\rho\geq 0\) and \(c^{2}\rho\geq|P|\)
And the **stability condition**: a static and spherically symmetric matter configuration is stable if it satisfies the relativistic condition on the adiabatic index:
\[\Gamma=\frac{P+c^{2}\rho}{c^{2}P}\frac{dP}{d\rho}>\frac{4}{3}\qquad\forall\;r \in[0,R]\]
## 4 Graphic representation of the solution
From the graphic analysis of the density, pressure, speed of sound and adiabatic index we find that the quantity restricting the parameter to \(w\leq w_{0}=0.90378\) is the adiabatic index: for \(w>w_{0}\) the adiabatic index satisfies \(\gamma(0)<4/3\), which implies that the solution is unstable. Through equation (18), this maximum value \(w_{0}\) yields the maximum permissible compactness \(u\leq u_{0}=0.23577\). Although the solution is physically acceptable for all compactness values \(u\leq u_{0}\), in the graphic analysis we focus on the particular case of the star PSR J0030+0451, with mass estimate \(M=1.44^{+0.15}_{-0.14}M_{\odot}\) and radius estimate \(R=13.02^{+1.24}_{-1.06}km\), obtained from the study of its X-ray emission with the NICER (Neutron star Interior Composition Explorer) telescope on the International Space Station [34]. The graphic representation is done in terms of the dimensionless variable \(x=r/R\) and the dimensionless functions associated with the physical quantities of density \(kc^{2}R^{2}\rho\), pressure \(kR^{2}P\) and speed of sound \(v^{2}/c^{2}\). The compactness values chosen for the graphic analysis are \(u_{max}=0.19628\), \(u=0.18086\), \(u=0.16545\), \(u=0.15003\) and \(u_{min}=0.13460\), where \(u_{max}\) is associated with the maximum mass \(M=1.59M_{\odot}\) and the minimum radius \(R=11.96km\), while \(u_{min}\) is obtained by taking the minimum mass \(M=1.3M_{\odot}\) and the maximum radius \(R=14.26km\).
In figure 1 we show the behaviour of the density and pressure for the different compactness values obtained from the observational estimates. The plots show that the density and pressure are positive and monotonically decreasing functions; their values diminish as the compactness decreases, the difference being most noticeable in the central values of the density and pressure, and the pressure vanishes on the surface. From figure 1 we also observe that the Strong Energy Condition is satisfied, since both the density and the pressure are positive. Moreover, for each value of \(u\) the density is much greater than the pressure (\(c^{2}\rho>P\)), which implies that the Dominant Energy Condition is also satisfied. From figure 2, graph on the right, we observe that the causality condition is met, since \(0.2c^{2}<v^{2}<0.34c^{2}\), and that the speed of sound is lower for lower compactness values, with maximum values on the surface. The stability of the solution is guaranteed by the adiabatic index, shown in the left graph of figure 2: \(\gamma\) is a monotonically increasing function, and its lowest value occurs at the center of the star for the maximum compactness \(u_{max}\), so the set of compactness values analysed satisfies
Figure 1: Graphic representation of the density and the pressure for the different values of compactness from the star PSR J0030+0451.
\(\gamma>1.6843>4/3\).
The absence of an event horizon and the continuity of the geometry on the surface of the star are shown in figure 3. In addition, in the graph on the right side of figure 3, we plot the forces present (the gravitational force \(F_{g}\) and the hydrostatic force \(F_{h}\)), identified by means of the Tolman-Oppenheimer-Volkoff (TOV) equation
\[-\frac{\left(P_{r}+c^{2}\rho\right)y^{\prime}}{y}-P_{r}{}^{\prime}=0,\qquad \Rightarrow\qquad F_{g}(r)=-\frac{\left(P_{r}+c^{2}\rho\right)y^{\prime}}{y}, \quad F_{h}(r)=-P_{r}{}^{\prime}.\]
In figure 3 we can observe the attractive effect of the gravitational force \(F_{g}\) countered by the repulsive hydrostatic force.
## 5 Discussion and conclusions
The graphic analysis has allowed us to show that the solution presented satisfies every requirement that makes it physically acceptable, and although the analysis was carried out using the estimated values of the mass and radius of the star PSR J0030+0451, a similar behaviour is found for other compactness values, as long as \(u\leq 0.23577\). To confirm that the behaviour of the solution is not only graphically adequate but that the orders of magnitude obtained from the model are also those expected, in tables 1 and 2 we report the values of the density, pressure, speed of sound and adiabatic index in the interior for the cases of maximum (table 1) and minimum (table 2) compactness.
From tables 1 and 2 it can be seen that the
Figure 2: Graphic representation of the speed of sound and the adiabatic index for the different compactness values of the star PSR J0030+0451.
orders of magnitude of the density and pressure are also those expected for the star PSR J0030+0451. We can therefore conclude that the model obtained is physically acceptable and useful for representing stars with compactness \(u\leq 0.23577\). The solution constructed here can also serve as a seed for obtaining new physically acceptable solutions[35] that include the contribution of an electric charge[36] or of an anisotropy factor,[37] as well as for determining new solutions in alternative gravitational theories,[38] investigations that could be developed in future works.
Figure 3: In the graph on the left side we present the behaviour of the metric coefficients of the interior and exterior geometry, while the graph on the right side shows the behaviour of the forces in the interior of the star.
## Acknowledgments
We appreciate the facilities provided by the Universidad Michoacana de San Nicolas de Hidalgo and the CIC-UMSNH during the realization of this investigation, as well as the support given by CONACYT.
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline \(r(km)\) & \(\rho\left(10^{17}\frac{kg}{m^{3}}\right)\) & \(P(10^{33}Pa)\) & \(v^{2}(c^{2})\) & \(\gamma\) \\ \hline
0. & 7.5125 & 9.4371 & 0.20654 & 1.6843 \\ \hline
1.1960 & 7.4155 & 9.2563 & 0.20795 & 1.7052 \\ \hline
2.3920 & 7.1398 & 8.7360 & 0.21214 & 1.7702 \\ \hline
3.5880 & 6.7211 & 7.9258 & 0.21905 & 1.8887 \\ \hline
4.7840 & 6.2131 & 6.9019 & 0.22857 & 2.0775 \\ \hline
5.9800 & 5.6621 & 5.7429 & 0.24055 & 2.3719 \\ \hline
7.1760 & 5.1146 & 4.5238 & 0.25474 & 2.8428 \\ \hline
8.3720 & 4.5970 & 3.3040 & 0.27094 & 3.6581 \\ \hline
9.5680 & 4.1283 & 2.1264 & 0.28887 & 5.3282 \\ \hline
10.764 & 3.7155 & 1.0202 & 0.30824 & 10.395 \\ \hline
11.960 & 3.3547 & 0 & 0.32880 & \(\infty\) \\ \hline \end{tabular}
\end{table}
Table 1: Interior behavior of the physical values of the density, pressure, speed of the sound and adiabatic index for the PSR J0030+0451, with \(R=11.96km\) and \(M=1.59M_{\odot}\), \(u_{max}=0.19628\). |
2309.15981 | A categorical representation of games | Strategic games admit a multi-graph representation, in which two kinds of
relations, accessibility, and preferences, are used to describe how the players
compare the possible outcomes. A category of games with a fixed set of players
$\mathbf{Gam}_I$ is built from this representation, and a more general category
$\mathbf{Gam}$ is defined with games having different sets of players, both
being complete and cocomplete. The notion of Nash equilibrium can be
generalized in this context. We then introduce two subcategories of
$\mathbf{Gam}$, $\mathbf{NE}$ and $\mathbf{Gam}^{NE}$ in which the morphisms
are equilibria-preserving. We illustrate the expressivity and usefulness of
this framework with some examples. | Fernando Tohmé, Ignacio Viglizzo | 2023-09-27T19:57:03Z | http://arxiv.org/abs/2309.15981v2 | # A categorical representation of games
###### Abstract
Strategic games admit a multi-graph representation, in which two kinds of relations, accessibility, and preferences, are used to describe how the players compare the possible outcomes. A category of games with a fixed set of players \(\mathbf{Gam}_{I}\) is built from this representation, and a more general category \(\mathbf{Gam}\) is defined with games having different sets of players, both being _complete_ and _cocomplete_. The notion of Nash equilibrium can be generalized in this context. We then introduce two subcategories of \(\mathbf{Gam}\), \(\mathbf{NE}\) and \(\mathbf{Gam}^{NE}\) in which the morphisms are equilibria-preserving. We illustrate the expressivity and usefulness of this framework with some examples.
## 1 Introduction
Game Theory constitutes one of the main foundations of modern analyses of interaction among agents (human and otherwise). Large swaths of Economics and Evolutionary Biology are game-theoretically characterized. Curiously enough, games have not been categorified until recently. Some previous approaches can be found in [21], [13], [14], [22] and [23]. The theory of _open games_ ([12][13]) pinpoints one of the main obstacles to a categorical treatment of games. Namely, compositionality is hard to achieve without retrofitting the notion of game with an approach based on _lenses_ and other concepts of the recently developed field of categorical _optics_[12].
Our goal here differs from the current applied category theory treatments of games in the literature. Instead of only focusing on the composition of games as a way of building up new ones, we abstract away payoffs and actions representing them through binary relations on the sets of possible outcomes of the game. Then each game becomes a multigraph with relations indexed by the set of players. The vertices correspond to the outcomes, and for each player, we consider two sorts of edges, one representing the preferences, and another indicating which outcomes are effectively reachable through the actions (accessibility relation). While actions are not explicitly included in the model, this presentation provides a more flexible and expressive framework for some aspects of game theory.
We think that our representation of strategic games is at the same time close enough to the one commonly used by game theory practitioners, yet flexible enough to allow us to build the basic category-theoretic constructions. The key for the categorical constructions in this paper are the properties of the accessibility and preference relations. These relations suffice to define Nash equilibria and thus facilitate the characterization of a category in which morphisms among their objects preserve the equilibria.
In the next section, we introduce the representation of strategic games we will use in the paper. In section 3 we introduce two categories in which these representations of games are the objects,
one with a fixed set of players, \(\mathbf{Gam}_{I}\), and another including games with different sets of players, \(\mathbf{Gam}\). We prove that both these categories are complete, cocomplete, and have exponentials. In section 4 we define subgames and give a generalization of the notion of Nash equilibrium arising from the interaction between the preference and accessibility relations. Finally, in section 5 we investigate two subcategories of \(\mathbf{Gam}\) in which the morphisms preserve Nash equilibria.
## 2 Multi-Graph Representation of Strategic Games
We are going to consider a category whose objects model some aspects of _strategic games_ ([1]):
**Definition 2.1**.: _A strategic game is a structure of the form \(G=\langle I,\{A_{i}\}_{i\in I},\{\pi_{i}\}_{i\in I}\rangle\) where \(I=\{1,\ldots,n\}\) is a set of players and \(A_{i},i\in I\) is a finite set of strategies for each player, and \(\pi_{i}:\prod_{i\in I}A_{i}\to\mathbb{R}\) is player \(i\)'s payoff._
The critical step in the definition of a category is the specification of the class of _morphisms_ among its objects. In the case of the category of games, this requires an alternative definition of _game_. More specifically, our approach is based on replacing profiles \(a=(a_{1},\ldots,a_{n})\in\prod_{i\in I}A_{i}\) by the outcomes \(o\) obtained when all the players play their actions in \(a\). Even so, we need to be able to preserve in the representation the fact that _individual_ agents may change the outcome of a game as well as prefer an outcome over others.
This alternative representation of a game can be seen as defining a _multigraph_. That is, as a set of nodes with multiple edges between them (including loops at the nodes [12]). More specifically, each node corresponds to an outcome of the game and between any pair of nodes there can exist, for each player, two types of edges. One type is undirected, corresponding to the possibility that the player to which it belongs may change from one outcome to the other. The second type is directed and represents that one outcome is preferred by the player over the other.
More formally, and similarly to what is observed in [1], we have:
**Definition 2.2**.: _A game is \(G=\langle I,O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\), where \(I=\{1,\ldots,n\}\) is a set of players and \(O\) is a finite set of outcomes. For each \(i\in I\), \(R_{i}\) is an accessibility relation, a subset of \(O\times O\). In turn, \(\preceq_{i}\subseteq O\times O\) is a preorder representing player \(i\)'s preferences over outcomes._
To see how games defined in this way have some of the same properties as those in the more standard approach in Game Theory above, take as set \(O\) of outcomes the list of the combined actions of all the players, that is, the _profiles_ of actions \((a_{1},\ldots,a_{i},\ldots,a_{n})\) in \(\prod_{i\in I}A_{i}=O\). Then, given any outcome \(o=(a_{1},\ldots,a_{i},\ldots,a_{n})\), an unilateral change of the action chosen by \(i\) (while all other players stay put), say from \(a_{i}\) to \(a^{\prime}_{i}\), yields \(o^{\prime}=(a_{1},\ldots,a^{\prime}_{i},\ldots,a_{n})\). We can understand this as indicating that the pair \(\langle o,o^{\prime}\rangle\) belongs to a relation of accessibility \(R_{i}\), indicating how unilateral choices of a player \(i\) can lead from an outcome to other ones. In this way, \(R_{i}\) captures the result of exerting the actions of \(A_{i}\).1
Footnote 1: Alternatively we could have introduced a relation defined by each _action_ of player \(i\). That is, \(o\) is related to \(o^{\prime}\) if and only if \(o^{\prime}=(a_{1},\ldots,a^{\prime}_{i},\ldots,a_{n})\) and \(o=(a_{1},\ldots,a_{i},\ldots,a_{n})\).
We assume that the relation \(R_{i}\) is reflexive, i.e. \(\langle o,o\rangle\in R_{i}\). Again, this can be justified by considering the case of strategic games: if \(o=(\ldots,a_{i},\ldots)\) and \(i\) does not change her choice \(a_{i}\), the result is again \(o\). Analogously, we assume that \(R_{i}\) is symmetric. In the context of strategic games, this is justified by the fact that from \(o=(a_{1},\ldots,a_{i},\ldots,a_{n})\), a unilateral change from \(a_{i}\) to \(a^{\prime}_{i}\) yields \(o^{\prime}=(a_{1},\ldots,a^{\prime}_{i},\ldots,a_{n})\), just as, from \(o^{\prime}\), a unilateral change from \(a^{\prime}_{i}\) to \(a_{i}\) yields \(o\). That is,
if \(\langle o,o^{\prime}\rangle\in R_{i}\) then \(\langle o^{\prime},o\rangle\in R_{i}\). We can illustrate the characterization of accessibility relations in a strategic game using the following example.
**Example 2.3**.: _Let us consider the well-known Prisoner's Dilemma (PD) as a strategic game given by the matrix:_
\begin{tabular}{c|c|c} & \(C\) & \(D\) \\ \hline \(C\) & _(-1,-1)_ & _(-3,0)_ \\ \hline \(D\) & _(0,-3)_ & _(-2,-2)_ \\ \end{tabular}
_In this game \(PD\), \(I=\{1,2\}\), where 1 chooses rows and 2 columns. Here \(O_{PD}=\{(C,C),(C,D),(D,C),(D,D)\}\), where \(C\) stands for 'Cooperate' and \(D\) stands for 'Defect'. Figure 1 presents the corresponding multi-graph representation of this game. We can see, for instance, that player 1 can change unilaterally from \(D\) to \(C\), so \((D,C)\) is accessible from \((C,C)\) and vice-versa, and the same for \((D,D)\) and \((C,D)\). The preferences are such that, for instance, \((C,C)\preceq_{1}(D,C)\) since \(\pi_{1}(C,C)=-1\) and \(\pi_{1}(D,C)=0\)._
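To make the construction of Definition 2.2 concrete, the following minimal Python sketch (illustrative only; the encoding and helper names are our own, with tuple positions 0 and 1 standing for players 1 and 2) builds the multigraph data of the Prisoner's Dilemma from its strategic form: outcomes are action profiles, \(R_{i}\) links profiles that differ at most in player \(i\)'s action, and \(\preceq_{i}\) is the total preorder induced by \(\pi_{i}\).

```python
from itertools import product

# Strategic form of the Prisoner's Dilemma: payoff[profile] = (pi_1, pi_2)
payoff = {('C', 'C'): (-1, -1), ('C', 'D'): (-3, 0),
          ('D', 'C'): (0, -3), ('D', 'D'): (-2, -2)}
outcomes = list(payoff)
players = (0, 1)            # tuple positions standing for players 1 and 2

def accessibility(i):
    """R_i: o R_i p iff o and p differ at most in player i's coordinate."""
    return {(o, p) for o, p in product(outcomes, repeat=2)
            if all(o[j] == p[j] for j in players if j != i)}

def preference(i):
    """Preorder: o <=_i p iff pi_i(o) <= pi_i(p)."""
    return {(o, p) for o, p in product(outcomes, repeat=2)
            if payoff[o][i] <= payoff[p][i]}

R = {i: accessibility(i) for i in players}
pref = {i: preference(i) for i in players}

# R_i is reflexive and symmetric; pref_i is reflexive and transitive.
for i in players:
    assert all((o, o) in R[i] for o in outcomes)
    assert all((p, o) in R[i] for (o, p) in R[i])
    assert all((o, o) in pref[i] for o in outcomes)
    assert all((o, q) in pref[i]
               for (o, p) in pref[i] for (p2, q) in pref[i] if p == p2)

# Player 1 can move between (C,C) and (D,C), and between (C,D) and (D,D):
print(sorted((o, p) for (o, p) in R[0] if o != p))
```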
Notice that in this standard interpretation of the game, each \(R_{i}\) is transitive. Lifting this requirement allows us to model other situations:
**Example 2.4**.: _Consider the following strategic game_
\begin{tabular}{c|c|c} & \(L\) & \(R\) \\ \hline \(T\) & _(0,0)_ & _(-1,2)_ \\ \hline \(D\) & _(2,-1)_ & _(0,0)_ \\ \end{tabular} _where again player 1 chooses rows and 2 columns. In this game, the outcomes \((T,L)\) and \((D,R)\) both lead to the same payoff of \(0\) for both players. We could represent this as a single outcome \(o\)._
Figure 1: Multi-graph representing the Prisoner’s Dilemma in Example 2.3. Blue and red lines correspond to players 1 and 2, respectively. Full (undirected) lines correspond to accessibility relations while dashed ones (directed) represent preferences.
_Let us call \(p\) the outcome corresponding to \((D,L)\) and \(q\) to that of \((T,R)\). Figure 2 represents this game._
As said, the representation of games as multigraphs is intended to facilitate the specification of a category of games. This is based on the properties of the two types of edges or relations among outcomes.
For each player \(i\), the pair \(\langle O,R_{i}\rangle\) can be regarded as an object in the category **EndoRel**, which is the category of sets endowed with a binary relation on them (an _endorelation_), with morphisms the set functions that preserve the relation. That is, \(f:\langle A,R\rangle\rightarrow\langle B,S\rangle\) is a morphism in **EndoRel** if for every \(a,a^{\prime}\in A\), if \(\langle a,a^{\prime}\rangle\in R\) then \(\langle f(a),f(a^{\prime})\rangle\in S\). We have chosen for our representation to be a bit more specific, asking that the binary relations are reflexive and symmetric. This can also be interpreted as a category of _undirected graphs_.
On the other hand, the pairs \(\langle O,\preceq_{i}\rangle\) can be seen as objects in **PreOrd**, the category of preordered sets with monotone functions as morphisms.
In the next section we define different categories in which the objects are multi-graph versions of strategic games and the morphisms preserve the accessibility and the preference relations, acting as morphisms in **EndoRel** and **PreOrd**, respectively.
## 3 Categories of Multi-Graph Representation of Games
### Games with a fixed set of players
As a first approach to the categorification of the class of multi-graph representations of strategic games presented in Definition 2.2, we consider the subclass of them in which the set of players is
Figure 2: Multi-graph representing the game \(G_{1}\) with set of outcomes \(O_{1}=\{o,p,q\}\) from Example 2.4, in which the accessibility relation is not transitive. Red and blue lines correspond to players \(1\) and \(2\), respectively. Full lines correspond to accessibility relations and dashed ones to preferences.
the same set \(I\) and define the category \(\mathbf{Gam}_{I}\):
**Definition 3.1**.: _Each object in the category \(\mathbf{Gam}_{I}\) is \(G=\langle O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\) and if \(G^{\prime}=\langle O^{\prime},\{R^{\prime}_{i}\}_{i\in I},\{\preceq^{\prime}_{i }\}_{i\in I}\rangle\) is also an object, a morphism \(f:G\to G^{\prime}\) is a function \(f:O\to O^{\prime}\) such that for all \(i\in I\), \(o^{\prime},p^{\prime}\in O^{\prime}\), \(f\) preserves \(R_{i}\) and \(\preceq_{i}\), that is:_
\[oR_{i}p\text{ implies }f(o)R^{\prime}_{i}f(p),\]
_and_
\[o\preceq_{i}p\text{ implies }f(o)\preceq^{\prime}_{i}f(p).\]
In other words, \(f\) is a morphism in \(\mathbf{Gam}_{I}\) if and only if for each \(i\in I\), \(f:\langle O,R_{i}\rangle\rightarrow\langle O^{\prime},R^{\prime}_{i}\rangle\) and \(f:\langle O,\preceq_{i}\rangle\rightarrow\langle O^{\prime},\preceq^{\prime}_ {i}\rangle\) are \(\mathbf{EndoRel}\) and \(\mathbf{PreOrd}\) morphisms, respectively.
**Example 3.2**.: _The games \(PD\) from Example 2.3 and \(G_{1}\) of Example 2.4 are objects in the category \(\mathbf{Gam}_{\{1,2\}}\). For each outcome \(y\) in \(O_{1}\), the set of outcomes of \(G_{1}\), there is a constant morphism \(\boldsymbol{y}\) with \(\boldsymbol{y}(x)=y\) for every \(x\in O_{PD}\). It is easy to check that since all the relations are reflexive, \(\boldsymbol{y}\) preserves \(R_{1},R_{2},\preceq_{1}\) and \(\preceq_{2}\)._
_A non-constant morphism is given by \(g(C,C)=o=g(D,D)\), \(g(D,C)=p\) and \(g(C,D)=q\). Again, we can easily check that \(g\) preserves \(R_{1},R_{2},\preceq_{1}\), and \(\preceq_{2}\)._
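The claim that \(g\) preserves \(R_{1},R_{2},\preceq_{1}\) and \(\preceq_{2}\) can be verified mechanically. The minimal Python sketch below (illustrative only; the relation data are transcribed from Examples 2.3 and 2.4, with the accessibility of \(G_{1}\) obtained from unilateral deviations, so that \(o\) is linked to both \(p\) and \(q\) for each player) implements the morphism test of Definition 3.1.

```python
from itertools import product

def sym_refl(edges, elems):
    """Reflexive-symmetric closure of an edge set."""
    return set(edges) | {(y, x) for x, y in edges} | {(x, x) for x in elems}

def pref_from(payoff, i):
    """Total preorder o <=_i p iff pi_i(o) <= pi_i(p)."""
    return {(o, p) for o, p in product(payoff, repeat=2)
            if payoff[o][i] <= payoff[p][i]}

# Prisoner's Dilemma (Example 2.3): profiles (a1, a2) and their payoffs.
pd_pay = {('C', 'C'): (-1, -1), ('C', 'D'): (-3, 0),
          ('D', 'C'): (0, -3), ('D', 'D'): (-2, -2)}
pd_R = {i: sym_refl({(o, p) for o, p in product(pd_pay, repeat=2)
                     if o[1 - i] == p[1 - i]}, pd_pay) for i in (0, 1)}
pd_pref = {i: pref_from(pd_pay, i) for i in (0, 1)}

# Game G1 (Example 2.4): o merges (T,L) and (D,R); p = (D,L), q = (T,R).
g1_pay = {'o': (0, 0), 'p': (2, -1), 'q': (-1, 2)}
g1_R = {i: sym_refl({('o', 'p'), ('o', 'q')}, g1_pay) for i in (0, 1)}
g1_pref = {i: pref_from(g1_pay, i) for i in (0, 1)}

# The candidate morphism g of Example 3.2.
g = {('C', 'C'): 'o', ('D', 'D'): 'o', ('D', 'C'): 'p', ('C', 'D'): 'q'}

def is_morphism(f, R1, pref1, R2, pref2):
    """Definition 3.1: f preserves every R_i and every preference preorder."""
    return (all((f[o], f[p]) in R2[i] for i in R1 for (o, p) in R1[i]) and
            all((f[o], f[p]) in pref2[i] for i in pref1 for (o, p) in pref1[i]))

print(is_morphism(g, pd_R, pd_pref, g1_R, g1_pref))   # True
```

For instance, redefining \(g(C,D)=p\) makes the test fail, since \((D,D)\preceq_{2}(C,D)\) would have to map to \(o\preceq^{\prime}_{2}p\), which does not hold.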
It is clear that the composition of morphisms is a morphism, and that the identity functions over the sets of outcomes are morphisms. Also, the composition is associative, since it is just the composition of set functions preserving relations. Thus we have a category \(\mathbf{Gam}_{I}\) of games with a fixed set of players \(I\).
One of the advantages of the categorical approach is that we can determine (up to isomorphism) the objects with desirable universal properties in \(\mathbf{Gam}_{I}\). These objects can be obtained as in the categories \(\mathbf{EndoRel}\) and \(\mathbf{PreOrd}\). For those categories, the proofs are worked out in [20], and can be easily adapted to the case we deal with here of symmetric and reflexive relations.
As a first example we have:
* **Product**: given games \(G=\langle O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\) and \(G^{\prime}=\langle O^{\prime},\{R^{\prime}_{i}\}_{i\in I},\{\preceq^{\prime}_ {i}\}_{i\in I}\rangle\), a product can be constructed as \[G\times_{I}G^{\prime}=\langle O\times O^{\prime},\{R^{\times_{I}}_{i}\}_{i\in I },\{\preceq^{\times_{I}}_{i}\}_{i\in I}\rangle\] where \[\langle\langle o,o^{\prime}\rangle,\langle p,p^{\prime}\rangle\rangle\in R^{ \times_{I}}_{i}\text{ iff }\langle o,p\rangle\in R_{i}\text{ and }\langle o^{\prime},p^{\prime}\rangle\in R^{\prime}_{i}\] and \[\langle o,o^{\prime}\rangle\preceq^{\times_{I}}_{i}\langle p,p^{\prime}\rangle \text{ iff }o\preceq_{i}p\text{ and }o^{\prime}\preceq^{\prime}_{i}p^{\prime}.\] \(R^{\times_{I}}_{i}\) is reflexive and symmetric, since both \(R_{i}\) and \(R^{\prime}_{i}\) are reflexive and symmetric relations. By the same token, \(\preceq^{\times_{I}}_{i}\) is a reflexive and transitive relation. The natural projection functions \(\pi_{1}:O\times O^{\prime}\to G\) and \(\pi_{2}:O\times O^{\prime}\to O^{\prime}\) provide the projection morphisms. For instance, if \(\langle\langle o,o^{\prime}\rangle,\langle p,p^{\prime}\rangle\rangle\in R^{ \times_{I}}_{i}\) then \(\langle\pi_{1}\langle o,o^{\prime}\rangle,\pi_{1}\langle p,p^{\prime}\rangle \rangle\in R_{i}\), that is, \(\langle o,p\rangle\in R_{i}\). The product \(G\times_{I}G^{\prime}\) can be understood as a game in which the players in \(I\) are playing simultaneously the games \(G\) and \(G^{\prime}\).
A fundamental colimit can be constructed in \(\mathbf{Gam}_{I}\):
* **Coproduct**: if we denote with \(X+Y\) the disjoint union of the sets \(X\) and \(Y\), and identify the elements in \(X\) and \(Y\) with their inclusion in \(X+Y\), given \(G=\langle O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\) and \(G^{\prime}=\langle O^{\prime},\{R^{\prime}_{j}\}_{j\in J},\{\preceq^{\prime}_{ j}\}_{j\in J}\rangle\), a coproduct can be constructed as \[G+_{I}G^{\prime}=\langle O+O^{\prime},\{R_{i}+R^{\prime}_{i}\}_{i\in I},\{ \preceq_{i}+\preceq^{\prime}_{i}\}_{i\in I}\rangle,\] where \(R_{i}+R^{\prime}_{i}\) is a reflexive and symmetric relation since \(R_{i}\) and \(R^{\prime}_{i}\) are reflexive and symmetric relations for every \(i\in I\). Similarly, \(\preceq_{i}+\preceq^{\prime}_{i}\) is a reflexive and transitive relation, since \(\preceq_{i}\) and \(\preceq^{\prime}_{i}\) are reflexive and transitive relations. The definitions of these relations ensure that the set inclusions \(i_{O}:O\to O+O^{\prime}\) and \(i_{O^{\prime}}:O^{\prime}\to O+O^{\prime}\) correspond to the inclusion morphisms \(i_{G}:G\to G+_{I}G^{\prime}\) and \(i_{G^{\prime}}:G^{\prime}\to G+_{I}G^{\prime}\) respectively. An interpretation of the game \(G+_{I}G^{\prime}\) is that the agents in the set \(I\) are presented with all the possible outcomes of both \(G\) and \(G^{\prime}\).
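Operationally, both constructions are straightforward to implement. Here is a small Python sketch (illustrative only; games are assumed to be encoded as dictionaries with an outcome list 'O' and relation sets 'R' and 'pref' indexed by a common player set, as appropriate for \(\mathbf{Gam}_{I}\)):

```python
from itertools import product as cart

def product_game(G, H):
    """Product in Gam_I: pairs of outcomes with componentwise relations."""
    O = list(cart(G['O'], H['O']))
    R = {i: {((o, o2), (p, p2)) for (o, p) in G['R'][i]
             for (o2, p2) in H['R'][i]} for i in G['R']}
    pref = {i: {((o, o2), (p, p2)) for (o, p) in G['pref'][i]
                for (o2, p2) in H['pref'][i]} for i in G['pref']}
    return {'O': O, 'R': R, 'pref': pref}

def coproduct_game(G, H):
    """Coproduct in Gam_I: disjoint (tagged) union of outcomes and relations."""
    O = [(0, o) for o in G['O']] + [(1, o) for o in H['O']]
    R = {i: {((0, o), (0, p)) for (o, p) in G['R'][i]}
           | {((1, o), (1, p)) for (o, p) in H['R'][i]} for i in G['R']}
    pref = {i: {((0, o), (0, p)) for (o, p) in G['pref'][i]}
              | {((1, o), (1, p)) for (o, p) in H['pref'][i]} for i in G['pref']}
    return {'O': O, 'R': R, 'pref': pref}

# Tiny one-player demo game with outcomes a, b, a ~ b accessible and a <= b.
D = {'O': ['a', 'b'],
     'R': {1: {('a', 'a'), ('b', 'b'), ('a', 'b'), ('b', 'a')}},
     'pref': {1: {('a', 'a'), ('b', 'b'), ('a', 'b')}}}
print(len(product_game(D, D)['O']), len(coproduct_game(D, D)['O']))   # 4 4
```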
Besides products, two other limit objects can be defined in \(\mathbf{Gam}_{I}\), namely _terminal objects_ and _equalizers_:
* **Terminal object**: let \(\mathbb{T}_{I}=\langle O_{\mathbb{T}},\{R_{i}^{\mathbb{T}_{I}}\}_{i\in I},\{ \preceq_{i}^{\mathbb{T}_{I}}\}_{i\in I}\rangle\) be the game in which \(O_{\mathbb{T}}\) is a singleton, while all the relations \(R_{i}^{\mathbb{T}_{I}}\) and \(\preceq_{i}^{\mathbb{T}_{I}}\) are the identity on \(O_{\mathbb{T}}\). Given any other \(I\)-game \(G=\langle O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\) there is a unique function \(!_{O}:O\to O_{\mathbb{T}}\), and it determines the unique game morphism from \(G\) to \(\mathbb{T}_{I}\).
In words: there exists a game \(\mathbb{T}_{I}\) in \(\mathbf{Gam}_{I}\) such that for each game \(G\) there exists a unique morphism from \(G\) to \(\mathbb{T}_{I}\). This is a constant morphism as those from Example 3.2.
With respect to the next construction, given two morphisms from \(G\) to \(G^{\prime}\), their equalizer is a subgame in which the outcomes are a subset of those of \(G\) on which the two morphisms coincide.
* **Equalizer**: let \(f\) and \(g\) be two morphisms from \(G\) to \(G^{\prime}\). Then we can define a game \(E=\langle O_{E},\{R_{i}^{E_{I}}\}_{i\in I},\{\preceq_{i}^{E_{I}}\}_{i\in I}\rangle\), where \(O_{E}=\{o\in O:f(o)=g(o)\}\), \(R_{i}^{E_{I}}=R_{i}|_{O_{E}}\), and \(\preceq_{i}^{E_{I}}=\preceq_{i}|_{O_{E}}\). Since \(R_{i}^{E_{I}}\) is a restriction of \(R_{i}\), it is also reflexive and symmetric, while \(\preceq_{i}^{E_{I}}\), being a restriction of \(\preceq_{i}\) it is also a preorder for every \(i\in I\). The equalizing morphism \(e:E\to G\) is given by the inclusion map \(e:O_{E}\to O\).
Analogously, the corresponding _colimits_, i.e. _initial objects_ and _coequalizers_, also exist in \(\mathbf{Gam}_{I}\):
* **Initial object**: consider the empty game \(\mathbf{\emptyset}_{I}\), where the set of outcomes is the empty set, and for each \(i\in I\), all the relations \(R_{i}\) and \(\preceq_{i}\) are the empty set as well. There exists a unique morphism from \(\mathbf{\emptyset}_{I}\) to any game \(G\).
* **Coequalizer**: in the category of sets the coequalizer of two functions \(f,g:X\to Y\) can be constructed as a quotient by a relation \(\sim\), the smallest equivalence relation on the set \(Y\) such that the pairs \(\langle f(x),g(x)\rangle\) are in the relation for every \(x\in X\). Recall that, if we denote by \([y]\) the elements of \(Y/_{\sim}\), for any relation \(R\), and equivalence relation \(\sim\) over \(Y\), \(R/_{\sim}\) is the relation \(\{\langle[y],[y^{\prime}]\rangle:\langle y,y^{\prime}\rangle\in R\}\).
Accordingly, if \(f\) and \(g\) are two morphisms from \(G\) to \(G^{\prime}\) let \(\sim\) be the equivalence relation on \(O^{\prime}\) generated by the pairs \(\langle f(o),g(o)\rangle\) for all the \(o\in O\). Thus we can define a game
\[G^{\prime}/_{\sim}=\langle O^{\prime}/_{\sim},\{R^{\prime}_{i}/_{\sim}\}_{i\in I},\{(\preceq^{\prime}_{i}/_{\sim})^{t}\}_{i\in I}\rangle\]
where by \((\preceq^{\prime}_{i}/_{\sim})^{t}\) we denote the transitive closure of the relation \(\preceq^{\prime}_{i}/_{\sim}\).
It can be proved that \([\cdot]:G^{\prime}\to G^{\prime}/_{\sim}\) is a morphism in \({\bf Gam}_{I}\) and that \(G^{\prime}/_{\sim}\) has the universal mapping property: \([\cdot]\circ f=[\cdot]\circ g\) and if there is a morphism \(m:G^{\prime}\to G^{\prime\prime}\) such that \(m\circ f=m\circ g\), then there is a unique morphism \(u:G^{\prime}/_{\sim}\to G^{\prime\prime}\) such that \(u\circ[\cdot]=m\).
Recalling that a category is _complete_ if it has small products and equalizers and it is _cocomplete_ if it has all the small coproducts and coequalizers we have shown that:
**Theorem 3.3**.: \({\bf Gam}_{I}\) _is a complete and cocomplete category._
Another construction of interest is that of _exponential object_. The game \(G^{\prime G}\) has as outcomes the different ways that the outcomes in \(G\) can be mapped to the outcomes in \(G^{\prime}\) while preserving the accessibility and preference relations. We have:
**Proposition 3.4**.: _Given games \(G,G^{\prime}\) in \({\bf Gam}_{I}\), there exists an exponential object \(G^{\prime G}\)._
Proof.: Given two games \(G=\langle O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\) and \(G^{\prime}=\langle O^{\prime},\{R^{\prime}_{i}\}_{i\in I},\{\preceq^{\prime}_{ i}\}_{i\in I}\rangle\), we define a game:
\[G^{\prime G}=\langle{\cal O},\{R^{\prime\prime}_{i}\}_{i\in I},\{\preceq^{ \prime\prime}_{i}\}_{i\in I}\rangle\]
where \({\cal O}=\{f\in O^{\prime O}:f\) is a morphism from \(G\) to \(G^{\prime}\}\), and the relations \(R^{\prime\prime}_{i}\) and \(\preceq^{\prime\prime}_{i}\) are defined by: for all \(f,g\in{\cal O}\),
\[fR^{\prime\prime}_{i}g\mbox{ if and only if for all }o,p\in O,oR_{i}p\mbox{ implies }f(o)R^{\prime}_{i}g(p),\]
and
\[(*)\hskip 14.226378ptf\preceq^{\prime\prime}_{i}g\mbox{ if and only if for all }o\in O,f(o)\preceq^{\prime}_{i}g(o).\]
The condition in the definition of \(\preceq^{\prime\prime}_{i}\) may seem simpler than the one for \(R^{\prime\prime}_{i}\), but given that \(\preceq_{i}\) and \(\preceq^{\prime}_{i}\) are preorders, one can derive a similar condition: if \(o\preceq_{i}p\), then by \((*)\), \(f(o)\preceq^{\prime}_{i}g(o)\), and since \(g\) is a morphism, \(g(o)\preceq^{\prime}_{i}g(p)\) so \(f(o)\preceq^{\prime}_{i}g(p)\) by the transitivity of \(\preceq^{\prime}_{i}\).
With these definitions, we can prove that \(R^{\prime\prime}_{i}\) is reflexive and symmetric. Indeed, since for each morphism \(f:G\to G^{\prime}\), every \(i\in I\) and every \(o,p\in O\), \(oR_{i}p\) implies \(f(o)R^{\prime}_{i}f(p)\), we have that \(fR^{\prime\prime}_{i}f\). For symmetry, assume that \(fR^{\prime\prime}_{i}g\). Then, if \(oR_{i}p\), by the symmetry of \(R_{i},pR_{i}o\) so \(f(p)R^{\prime}_{i}g(o)\) and by the symmetry of \(R^{\prime}_{i}\), \(g(o)R^{\prime}_{i}f(p)\), which proves \(gR^{\prime\prime}_{i}f\).
Similarly, it can be proved that \(\preceq^{\prime\prime}_{i}\) is a preorder. Reflexivity is immediate, since for all \(o\in O\), \(f(o)\preceq^{\prime}_{i}f(o)\) and thus \(f\preceq^{\prime\prime}_{i}f\). For transitivity assume that \(f\preceq^{\prime\prime}_{i}g\) and \(g\preceq^{\prime\prime}_{i}h\). Then, for all \(o\in O\) we have that \(f(o)\preceq^{\prime}_{i}g(o)\) and \(g(o)\preceq^{\prime}_{i}h(o)\). By transitivity of \(\preceq^{\prime}_{i}\) we have \(f(o)\preceq^{\prime}_{i}h(o)\) for all \(o\in O\). That is, \(f\preceq^{\prime\prime}_{i}h\).
For any other game \(G^{\prime\prime}=\langle O^{\prime\prime},\{S_{i}\}_{i\in I},\{\preceq_{i}\}_{ i\in I}\rangle\), given a morphism \(h:G^{{}^{\prime\prime}}\times G\to G^{\prime}\), we can define as in the category of sets \(\psi:G^{\prime\prime}\to G^{\prime G}\) by \(\psi(o^{\prime\prime})(o)=h(o^{\prime\prime},o)\). First, we need to check that for every \(o^{\prime\prime}\in O^{\prime\prime}\), \(\psi(o^{\prime\prime})\) is a morphism from \(G\) to \(G^{\prime}\), and then that \(\psi\) itself is a morphism from \(G^{\prime\prime}\) to \(G^{\prime G}\).
To see the first part, fix \(o^{\prime\prime}\in O^{\prime\prime}\), and suppose that \(o,p\in O\) are such that \(oR_{i}p\). Then, as \(\langle o^{\prime\prime},o\rangle S_{i}\times_{I}R_{i}\langle o^{\prime\prime},p\rangle\), it follows that \(h(o^{\prime\prime},o)R^{\prime}_{i}h(o^{\prime\prime},p)\), that is, \(\psi(o^{\prime\prime})(o)R^{\prime}_{i}\psi(o^{\prime\prime})(p)\).
To show that \(\psi\) is a morphism, take now \(o^{\prime\prime},p^{\prime\prime}\in O^{\prime\prime}\) such that \(o^{\prime\prime}S_{i}p^{\prime\prime}\). For any \(o,p\in O\) such that \(oR_{i}p\), we have that \(\langle o^{\prime\prime},o\rangle S_{i}\times_{I}R_{i}\langle p^{\prime\prime},p\rangle\), and thus \(h(o^{\prime\prime},o)R_{i}^{\prime}h(p^{\prime\prime},p)\) so \(\psi(o^{\prime\prime})(o)R_{i}^{\prime}\psi(p^{\prime\prime})(p)\). This implies that \(\psi(o^{\prime\prime})R_{i}^{\prime\prime}\psi(p^{\prime\prime})\).
Thus \(\psi\) is a morphism that makes the following diagram commute, and its uniqueness follows from its definition on the corresponding sets.
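For small finite games the exponential object can also be computed by brute force: enumerate all outcome maps, keep those that are morphisms, and impose the relations \(R^{\prime\prime}_{i}\) and \(\preceq^{\prime\prime}_{i}\) defined above. A minimal Python sketch (illustrative only, using the same dictionary encoding of games as in the earlier sketch, with 'O' a list):

```python
from itertools import product as cart

def morphisms(G, H):
    """All Gam_I-morphisms from G to H, as dicts from G['O'] to H['O']."""
    found = []
    for values in cart(H['O'], repeat=len(G['O'])):
        f = dict(zip(G['O'], values))
        ok_R = all((f[o], f[p]) in H['R'][i]
                   for i in G['R'] for (o, p) in G['R'][i])
        ok_pref = all((f[o], f[p]) in H['pref'][i]
                      for i in G['pref'] for (o, p) in G['pref'][i])
        if ok_R and ok_pref:
            found.append(f)
    return found

def exponential(G, H):
    """H^G: outcomes are the morphisms G -> H, relations as in Prop. 3.4."""
    Ms = morphisms(G, H)
    idx = range(len(Ms))
    R = {i: {(k, l) for k in idx for l in idx
             if all((Ms[k][o], Ms[l][p]) in H['R'][i] for (o, p) in G['R'][i])}
         for i in G['R']}
    pref = {i: {(k, l) for k in idx for l in idx
                if all((Ms[k][o], Ms[l][o]) in H['pref'][i] for o in G['O'])}
            for i in G['pref']}
    return {'O': list(idx), 'R': R, 'pref': pref, 'maps': Ms}
```

Outcomes of \(G^{\prime G}\) are represented here by indices into the list of morphisms, so that the relations can be stored as sets of index pairs.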
Therefore, as a consequence of Theorem 3.3 and Proposition 3.4:
**Theorem 3.5**.: \(\mathbf{Gam}_{I}\) _is a cartesian closed category._
**Proposition 3.6**.: _If \(G\neq\emptyset\), then there is an injection \(\psi:G^{\prime}\to G^{\prime G}\)._
Proof.: Define for each \(o^{\prime}\in O^{\prime}\), \(\psi(o^{\prime})=\boldsymbol{o^{\prime}}\), where \(\boldsymbol{o^{\prime}}\) is the constant function defined by \(\boldsymbol{o^{\prime}}(o)=o^{\prime}\) for every \(o\in O\). Clearly \(\psi\) is injective. Now, if we have that for some player \(i\in I\), \(o^{\prime}R_{i}^{\prime}p^{\prime}\), then \(\psi(o^{\prime})R_{i}^{\prime\prime}\psi(p^{\prime})\): indeed, if \(o,p\in O\) and \(oR_{i}p\), then \(\boldsymbol{o^{\prime}}(o)R_{i}^{\prime}\boldsymbol{p^{\prime}}(p)\). A simpler argument works for the preferences.
A particular case of \(\mathbf{Gam}_{I}\) is the category of "games" with a single player \(\mathbf{1}\), denoted \(\mathbf{Gam_{1}}\). It is immediate to see that Theorem 3.3 and Proposition 3.4 apply to \(\mathbf{Gam_{1}}\), where \(I=\{\mathbf{1}\}\). In this category the objects represent _decision problems_. The accessibility relation \(R_{\mathbf{1}}\) can be understood as the feasibility of changing from a possible outcome to another, while \(\preceq_{\mathbf{1}}\) represents the preferences over the outcomes.
We also have \(\mathbf{Gam}_{\emptyset}\), in which the games have no players and therefore no relations are given over the sets of outcomes. Thus, each game in this category can be identified with the set of its outcomes and the morphisms among games in this category with the corresponding set functions.
**Proposition 3.7**.: _Given a morphism \(f:G\to G^{\prime}\) in \(\mathbf{Gam}_{I}\),_
1. \(f\) _is monic iff_ \(f:O\to O^{\prime}\) _is injective._
2. \(f\) _is epic iff_ \(f:O\to O^{\prime}\) _is surjective._
Proof.:
1. If \(f:O\to O^{\prime}\) is not injective, there exists outcomes \(o\neq p\in O\) such that \(f(o)=f(p)\). Consider the game \(\mathbf{1}\) with a single outcome \(*\) in which all the relations \(R_{i}\) and \(\preceq_{i}\) are the identity on the singleton. Let \(x\) and \(y\) be the functions that send \(*\) to \(o\) and \(p\) respectively. It is easy to check that \(x\) and \(y\) are morphisms and \(f\circ x=f\circ y\) but \(x\neq y\), so \(f\) cannot be monic. In the other direction, assume that \(f\) is injective, and \(x\) and \(y\) are morphisms from a game \(G^{\prime\prime}\) to \(G\) such that \(f\circ x=f\circ y\). Then for every outcome \(o^{\prime\prime}\) of \(O^{\prime\prime}\) we have that \(f(x(o^{\prime\prime}))=f(y(o^{\prime\prime}))\), and since \(f\) is injective, \(x(o^{\prime\prime})=y(o^{\prime\prime})\), so \(x=y\).
2. If \(f\) is not surjective, then there exists an outcome \(o^{\prime}\) that is not in the image of \(f\). Consider a game \(G^{\prime\prime}\) with two different outcomes, \(O^{\prime\prime}=\{a,b\}\) and the relations \(R_{i}^{\prime\prime}\) and \(\preceq_{i}^{\prime\prime}\) on them are all equal to \(O^{\prime\prime}\times O^{\prime\prime}\). Now consider the constant morphism \(\boldsymbol{a}\) and a morphism \(g\) that sends all the outcomes in \(O^{\prime}\) to \(a\), except for \(o^{\prime}\) that goes to \(b\). Since all elements are related in \(G^{\prime\prime}\), \(g\) is a morphism and different from \(\boldsymbol{a}\), with \(\boldsymbol{a}\circ f=g\circ f\). If we assume that \(f\) is surjective and \(g,h:G^{\prime}\to G^{\prime\prime}\) are such that \(g\circ f=h\circ f\), then for every \(o^{\prime}\in O^{\prime}\) we have that \(o^{\prime}=f(o)\) for some \(o\in O\). Therefore \(g(o^{\prime})=g(f(o))=h(f(o))=h(o^{\prime})\) which proves that \(g=h\) and therefore \(f\) is epic.
### Games with different sets of players
We now generalize \(\mathbf{Gam}_{I}\) by considering all the components presented in Definition 2.2. Since the sets of players \(I\) and \(J\) may differ between a given pair of games \(G\) and \(G^{\prime}\), a morphism \(f:G\to G^{\prime}\) should indicate how \(I\) maps onto \(J\) and how it preserves the relations defining the games. More precisely:
**Definition 3.8**.: _Each object in the category \(\mathbf{Gam}\) is \(G=\langle I,O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\) and if \(G^{\prime}=\langle J,O^{\prime},\{R_{j}^{\prime}\}_{j\in J},\{\preceq_{j}^{\prime}\}_{j\in J}\rangle\) is also an object, a morphism \(f:G\to G^{\prime}\) consists of a pair of functions \(f_{p}:I\to J\) and \(f_{O}:O\to O^{\prime}\), such that given \(o,p\in O\), for each \(i\in I\):_
* _if_ \(\langle o,p\rangle\in R_{i}\) _then_ \(\langle f_{O}(o),f_{O}(p)\rangle\in R_{f_{p}(i)}^{\prime}\)__
* _if_ \(o\preceq_{i}p\)_, then_ \(f_{O}(o)\preceq_{f_{p}(i)}^{\prime}f_{O}(p)\)_._
_That is, for each \(i\in I\), \(f_{O}:\langle O,R_{i}\rangle\to\langle O^{\prime},R_{f_{p}(i)}^{\prime}\rangle\) and \(f_{O}:\langle O,\preceq_{i}\rangle\to\langle O^{\prime},\preceq_{f_{p}(i)}^{ \prime}\rangle\) are \(\mathbf{EndoRel}\) and \(\mathbf{PreOrd}\) morphisms, respectively. We write \(f=(f_{p},f_{O})\)._
These objects and morphisms define a category \(\mathbf{Gam}\), since similarly to the case of \(\mathbf{Gam}_{I}\) the composition of morphisms yields a morphism and each \(G\) has an associated identity morphism (consisting of the identities on the set of players and on the set of outcomes). Composition is also associative.
The following definitions highlight two particular kinds of games:
**Definition 3.9**.: _A set of players is a game \(G_{p}(I)=\langle I,\emptyset,\{\emptyset\}_{i\in I},\{\emptyset\}_{i\in I}\rangle\), with an empty set of outcomes and empty accessibility and preference relations. If the set \(I=\{i\}\) is a singleton, we refer to this game as player \(G_{p}(i)\)._
**Definition 3.10**.: _A set of outcomes is a game \(G_{O}(O)=\langle\emptyset,O,\emptyset,\emptyset\rangle\), with an empty set of players and no accessibility nor preference relations._
We abuse somewhat the terminology by naming these games _players_ and _outcomes_, but it is useful to have them as objects in the category \(\mathbf{Gam}\). Furthermore, \(G_{p}\) and \(G_{O}\) are easily checked to be functors from the category of sets to \(\mathbf{Gam}\).
**Proposition 3.11**.: _For each game \(G=\langle I,O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\), \(S\subseteq I\), and \(T\subseteq O\) there is a unique morphism \(f\) from \(G_{p}(S)\) to \(G\) such that \(f_{p}=i_{S}\), the inclusion map \(S\hookrightarrow I\), and also there is a unique \(g\) from \(G_{O}(T)\) to \(G\) such that \(g_{O}=i_{T}:T\hookrightarrow O\)._
Proof.: The morphisms are \(f=(i_{S},\emptyset)\) and \(g=(\emptyset,i_{T})\), respectively.
**Theorem 3.12**.: **Gam** _is a complete and cocomplete category._
Proof.: To prove this claim we have to obtain the same constructions as in the proof of Theorem 3.3, but this time taking into account the interaction between the components \(f_{p}\) and \(f_{O}\) of morphisms \(f\):
* **Products**: given games \(G=\langle I,O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\) and \(G^{\prime}=\langle J,O^{\prime},\{R^{\prime}_{j}\}_{j\in J},\{\preceq^{\prime}_ {j}\}_{j\in J}\rangle\), a product can be constructed as \[G\times G^{\prime}=\langle I\times J,O\times O^{\prime},\{R_{\langle i,j \rangle}\}_{\langle i,j\rangle\in I\times J},\{\preceq_{\langle i,j\rangle} \}_{\langle i,j\rangle\in I\times J}\rangle\] where \[\langle\langle o,o^{\prime}\rangle,\langle p,p^{\prime}\rangle\rangle\in R_{ \langle i,j\rangle}\text{ iff }\langle o,p\rangle\in R_{i}\text{ and }\langle o^{\prime},p^{\prime}\rangle\in R^{\prime}_{j}\] and \[\langle o,o^{\prime}\rangle\preceq_{\langle i,j\rangle}\langle p,p^{\prime} \rangle\text{ iff }o\preceq_{i}p\text{ and }o^{\prime}\preceq_{j}p^{\prime}.\] \(R_{\langle i,j\rangle}\) is reflexive and symmetric, since \(R_{i}\) and \(R^{\prime}_{j}\) are reflexive and symmetric relations. By the same token, \(\preceq_{\langle i,j\rangle}\) is a reflexive and transitive relation. The natural projections \(\pi_{1}:G\times G^{\prime}\to G\) and \(\pi_{2}:G\times G^{\prime}\to G^{\prime}\) are easily checked to be morphisms.
* **Terminal object**: let \(\mathbb{T}=\langle I_{\mathbb{T}},O_{\mathbb{T}},\{R^{\mathbb{T}}\},\{\preceq^{\mathbb{T}}\}\rangle\) be the game in which \(I_{\mathbb{T}}\) and \(O_{\mathbb{T}}\) are singletons, while \(R^{\mathbb{T}}\) and \(\preceq^{\mathbb{T}}\) are the identity on \(O_{\mathbb{T}}\). Given any other game \(G=\langle I,O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\) there are two unique functions, \(!_{Gp}:I\to I_{\mathbb{T}}\) and \(!_{GO}:O\to O_{\mathbb{T}}\), and they determine the unique game morphism from \(G\) to \(\mathbb{T}\).
* **Equalizers**: let \(f\) and \(g\) be two morphisms from \(G\) to \(G^{\prime}\). Then we can define \(E=\langle I_{E},O_{E},\{R^{E}_{i}\}_{i\in I_{E}},\{\preceq^{E}_{i}\}_{i\in I_{E}}\rangle\), where \(I_{E}=\{i\in I:f_{p}(i)=g_{p}(i)\}\), \(O_{E}=\{o\in O:f_{O}(o)=g_{O}(o)\}\), and for every \(i\in I_{E}\), \(R^{E}_{i}=R_{i}|_{O_{E}}\) and \(\preceq^{E}_{i}=\preceq_{i}|_{O_{E}}\). Since \(R^{E}_{i}\) is a restriction of \(R_{i}\), it is also reflexive and symmetric, while \(\preceq^{E}_{i}\), being a restriction of \(\preceq_{i}\), is also a reflexive and transitive relation for each \(i\in I_{E}\). The equalizing morphism \(e:E\to G\) is given by the inclusion maps \(e_{p}:I_{E}\to I\) and \(e_{O}:O_{E}\to O\).
* **Coproducts**: Let \(\Delta_{X}\) be the set of pairs \(\langle x,x\rangle\) for \(x\in X\). Given games \(G=\langle I,O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\) and \(G^{\prime}=\langle J,O^{\prime},\{R^{\prime}_{j}\}_{j\in J},\{\preceq^{\prime}_{j}\}_{j\in J}\rangle\), a coproduct can be constructed as \[G+G^{\prime}=\langle I+J,O+O^{\prime},\{R^{+}_{k}\}_{k\in I+J},\{\preceq^{+}_{k}\}_{k\in I+J}\rangle\] where \[R^{+}_{i}=R_{i}\cup\Delta_{O^{\prime}}\text{ if }i\in I,\text{ and }R^{+}_{j}=R^{\prime}_{j}\cup\Delta_{O},\text{ if }j\in J\] and similarly, \[\preceq^{+}_{i}=\preceq_{i}\cup\Delta_{O^{\prime}}\text{ if }i\in I,\text{ and }\preceq^{+}_{j}=\preceq^{\prime}_{j}\cup\Delta_{O},\text{ if }j\in J.\] Here we need to add the identity relations to make sure both \(R^{+}_{k}\) and \(\preceq^{+}_{k}\) are reflexive. Thus, \(R^{+}_{k}\) is a reflexive and symmetric relation since \(R_{i}\) and \(R^{\prime}_{j}\) for \(i\in I\) and \(j\in J\), as well as \(\Delta_{O}\) and \(\Delta_{O^{\prime}}\), are reflexive and symmetric relations. In turn, \(\preceq^{+}_{k}\) is a reflexive and transitive relation, since \(\preceq_{i}\), \(\preceq^{\prime}_{j}\), \(\Delta_{O}\) and \(\Delta_{O^{\prime}}\) are reflexive and transitive relations.
Let \(i_{O}\) and \(i_{O^{\prime}}\) be the injections from \(O\) and \(O^{\prime}\) into \(O+O^{\prime}\), respectively. To see that adding the identity relations on the outcomes preserves the universal property of the coproduct, consider morphisms \(f:G\to G^{\prime\prime}\) and \(g:G^{\prime}\to G^{\prime\prime}\). We can build the set functions \([f,g]_{p}\) and \([f,g]_{O}\) as usual. If \(\langle x,y\rangle\in R_{k}^{+}\), with \(k\in I\), we consider two cases: if \(x,y\in O\), then \([f,g]_{O}(i_{O}(x))=f_{O}(x)\) and \([f,g]_{O}(i_{O}(y))=f_{O}(y)\), so, since \(f\) is a morphism, \(\langle f_{O}(x),f_{O}(y)\rangle\in R_{f_{p}(k)}^{\prime\prime}\); if \(x,y\in O^{\prime}\), then we must have \(x=y\), so \([f,g]_{O}(i_{O^{\prime}}(x))=g_{O}(x)=g_{O}(y)=[f,g]_{O}(i_{O^{\prime}}(y))\), and since \(R_{f_{p}(k)}^{\prime\prime}\) is reflexive, \(\langle g_{O}(x),g_{O}(y)\rangle\in R_{f_{p}(k)}^{\prime\prime}\). The case \(k\in J\) is symmetric, and the preference conditions are checked in the same way. A small computational sketch of the product and coproduct constructions above is given below.
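The following Python sketch (illustrative only) builds the product and coproduct of the proof on the same encoding of finite games used earlier; the tags \(0\) and \(1\) play the role of the injections into the disjoint union.

```python
def product_game(G, H):
    """Product of Theorem 3.12: players I x J, outcomes O x O', componentwise relations."""
    I, O, R, prefG = G
    J, P, S, prefH = H
    players = {(i, j) for i in I for j in J}
    outcomes = {(o, p) for o in O for p in P}
    R_x = {(i, j): {((o, p), (q, r)) for (o, p) in outcomes for (q, r) in outcomes
                    if (o, q) in R[i] and (p, r) in S[j]} for (i, j) in players}
    pref_x = {(i, j): {((o, p), (q, r)) for (o, p) in outcomes for (q, r) in outcomes
                       if (o, q) in prefG[i] and (p, r) in prefH[j]} for (i, j) in players}
    return players, outcomes, R_x, pref_x

def coproduct_game(G, H):
    """Coproduct of Theorem 3.12: tagged disjoint unions, with added identities."""
    I, O, R, prefG = G
    J, P, S, prefH = H
    players = {(i, 0) for i in I} | {(j, 1) for j in J}
    outcomes = {(o, 0) for o in O} | {(p, 1) for p in P}
    other_diag = {0: {((p, 1), (p, 1)) for p in P},   # identities added on the O' copy
                  1: {((o, 0), (o, 0)) for o in O}}   # identities added on the O copy
    lift = lambda rel, tag: {((o, tag), (q, tag)) for (o, q) in rel} | other_diag[tag]
    R_p = {**{(i, 0): lift(R[i], 0) for i in I}, **{(j, 1): lift(S[j], 1) for j in J}}
    pref_p = {**{(i, 0): lift(prefG[i], 0) for i in I},
              **{(j, 1): lift(prefH[j], 1) for j in J}}
    return players, outcomes, R_p, pref_p

triv = lambda X: ({1}, set(X), {1: {(x, x) for x in X}}, {1: {(x, x) for x in X}})
print(len(product_game(triv("ab"), triv("xy"))[1]),     # 4 outcomes in the product
      len(coproduct_game(triv("ab"), triv("xy"))[1]))   # 4 outcomes in the coproduct
```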
Proof.: To show that \(\mathbf{Gam}\) has exponential objects, given two games \(G=\langle I,O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\) and \(G^{\prime}=\langle J,O^{\prime},\{R^{\prime}_{j}\}_{j\in J},\{\preceq^{\prime}_{j}\}_{j\in J}\rangle\), we define a game:
\[G^{\prime G}=\langle K,\mathcal{O},\{R_{\kappa}\}_{\kappa\in K},\{\preceq_{ \kappa}\}_{\kappa\in K}\rangle\]
where
\[K=\{\kappa:I\to J|\text{ there exists }\rho:O\to O^{\prime}\text{ such that }\ (\kappa,\rho)\in Hom_{\mathbf{ Gam}}(G,G^{\prime})\}\]
and
\[\mathcal{O}=\{\rho:O\to O^{\prime}|\text{ there exists }\kappa:I\to J\text{ such that }(\kappa,\rho)\in Hom_{\mathbf{ Gam}}(G,G^{\prime})\}.\]
For each \(\kappa\in K\),
\[R_{\kappa}=\{(\rho,\rho^{\prime}):(\kappa,\rho),(\kappa,\rho^{\prime})\in\operatorname{Hom}_{\mathbf{Gam}}(G,G^{\prime})\text{ and for all }o,p\in O\text{ and all }i\in I,\ (o,p)\in R_{i}\Rightarrow\rho(o)\,R^{\prime}_{\kappa(i)}\,\rho^{\prime}(p)\}\cup\Delta_{\mathcal{O}}.\]
With this specification, \(R_{\kappa}\) is trivially reflexive and we can check that it is symmetric as well: Suppose that \((\rho,\rho^{\prime})\in R_{\kappa}\). By definition, for all \(o,p\in O\) and \(i\in I\), if \(oR_{i}p\), then, since \(R_{i}\) is symmetric, \(pR_{i}o\) and thus \(\rho(p)R^{\prime}_{\kappa(i)}\rho^{\prime}(o)\). By the symmetry of \(R^{\prime}_{\kappa(i)}\), \(\rho^{\prime}(o)R^{\prime}_{\kappa(i)}\rho(p)\), and this proves that \((\rho^{\prime},\rho)\in R_{\kappa}\).
In the case of the preference relations, for each \(\kappa\in K\), \(\rho\preceq_{\kappa}\rho^{\prime}\) if and only if for all \(o\in O\) and \(i\in I\), \(\rho(o)\preceq^{\prime}_{\kappa(i)}\rho^{\prime}(o)\).

While the reflexivity of \(\preceq_{\kappa}\) is trivially satisfied, transitivity can be proven as follows. Assume that \(\rho\preceq_{\kappa}\rho^{\prime}\) and \(\rho^{\prime}\preceq_{\kappa}\rho^{\prime\prime}\). This means that for all \(i\in I\) and all \(o\in O\), \(\rho(o)\preceq^{\prime}_{\kappa(i)}\rho^{\prime}(o)\) and \(\rho^{\prime}(o)\preceq^{\prime}_{\kappa(i)}\rho^{\prime\prime}(o)\). Since \(\preceq^{\prime}_{\kappa(i)}\) is transitive for each \(i\in I\), it follows that \(\rho\preceq_{\kappa}\rho^{\prime\prime}\). Thus, \(\preceq_{\kappa}\) is a preorder.
For any other game \(G^{\prime\prime}=\langle L,O^{\prime\prime},\{S_{l}\}_{l\in L},\{\preceq_{l} \}_{l\in L}\rangle\), given a morphism \(h:G^{{}^{\prime\prime}}\times G\to G^{\prime}\), we define maps \(\Psi_{p}\) and \(\Psi_{O}\) as follows: given \(l\in L\), \(i\in I\), \(o^{\prime\prime}\in O^{\prime\prime}\), and \(o\in O\),
* \(\Psi_{p}(l)(i)=h_{p}(l,i)\)
* \(\Psi_{O}(o^{\prime\prime})(o)=h_{O}(o^{\prime\prime},o)\),
We need to check that for every \(l\in L\) and \(o^{\prime\prime}\in O^{\prime\prime}\), \((\Psi_{p}(l),\Psi_{O}(o^{\prime\prime}))\) is a morphism from \(G\) to \(G^{\prime}\), and then that \(\Psi=(\Psi_{p},\Psi_{O})\) itself is a morphism from \(G^{\prime\prime}\) to \(G^{\prime G}\).
To see the first part, fix \(l\in L\) and \(o^{\prime\prime}\in O^{\prime\prime}\), and suppose that \(o,p\in O\) are such that \(oR_{i}p\). Then, as \(\langle o^{\prime\prime},o\rangle S_{l}\times R_{i}\langle o^{\prime\prime},p\rangle\), it follows that \(h(o^{\prime\prime},o)R^{\prime}_{h_{p}(l,i)}h(o^{\prime\prime},p)\), that is, \(\Psi_{O}(o^{\prime\prime})(o)R^{\prime}_{\Psi_{p}(l)(i)}\Psi_{O}(o^{\prime \prime})(p)\). Similarly, we can check that the preferences are preserved.
To show that \(\Psi\) is a morphism, take \(l\in L\) and \(o^{\prime\prime},p^{\prime\prime}\in O^{\prime\prime}\) such that \(o^{\prime\prime}S_{l}p^{\prime\prime}\). For any \(o,p\in O\) satisfying \(oR_{i}p\), we have that \(\langle o^{\prime\prime},o\rangle S_{l}\times R_{i}\langle p^{\prime\prime},p\rangle\), and thus \(h(o^{\prime\prime},o)R^{\prime}_{h_{p}(l,i)}h(p^{\prime\prime},p)\) so \(\Psi_{O}(o^{\prime\prime})(o)R^{\prime}_{\Psi_{p}(l)(i)}\Psi_{O}(p^{\prime \prime})(p)\). This implies that \(\Psi_{O}(o^{\prime\prime})R_{\Psi_{p}(l)}\Psi_{O}(p^{\prime\prime})\).
Thus \(\Psi\) is a morphism satisfying the universal property of the exponential, and its uniqueness follows from its definition on the corresponding sets.
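For very small games the exponential can be assembled by brute force: enumerate all pairs \((f_{p},f_{O})\), keep those satisfying Definition 3.8, and read off \(K\) and \(\mathcal{O}\) as the player and outcome parts that occur. A sketch (feasible only for tiny games; the relations \(R_{\kappa}\) and \(\preceq_{\kappa}\) on these sets are omitted, and the encoding is the same as in the earlier sketches):

```python
from itertools import product as cart

def all_maps(domain, codomain):
    """All functions domain -> codomain, as dictionaries."""
    domain = sorted(domain)
    for values in cart(sorted(codomain), repeat=len(domain)):
        yield dict(zip(domain, values))

def morphisms(G, H):
    """All Gam-morphisms (f_p, f_O) between two finite games (Definition 3.8)."""
    I, O, R, prefG = G
    J, P, S, prefH = H
    for fp in all_maps(I, J):
        for fo in all_maps(O, P):
            ok_acc = all((fo[o], fo[p]) in S[fp[i]] for i in I for (o, p) in R[i])
            ok_pref = all((fo[o], fo[p]) in prefH[fp[i]] for i in I for (o, p) in prefG[i])
            if ok_acc and ok_pref:
                yield fp, fo

dg = lambda X: {(x, x) for x in X}
G = ({1}, {"a", "b"}, {1: dg("ab")}, {1: dg("ab")})
H = ({1}, {"x", "y"}, {1: dg("xy") | {("x", "y"), ("y", "x")}}, {1: dg("xy") | {("x", "y")}})

ms = list(morphisms(G, H))
K = {tuple(sorted(fp.items())) for fp, _ in ms}      # player part of the exponential H^G
Omega = {tuple(sorted(fo.items())) for _, fo in ms}  # outcome part of the exponential H^G
print(len(ms), len(K), len(Omega))  # 4 1 4
```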
Analogously to what we saw in Proposition 3.6, we have the following result:
**Proposition 3.15**.: _If \(G\neq\boldsymbol{0}\), then there is an embedding \(\psi\) of \(G^{\prime}\) in \(G^{\prime G}\)._
Proof.: Let \(\psi_{p}:J\to J^{I}\) and \(\psi_{O}:O^{\prime}\to O^{\prime O}\), where for every \(i\in I\) and \(j\in J\), \(\psi_{p}(j)(i)=j\), and for every \(o\in O\) and \(o^{\prime}\in O^{\prime}\), \(\psi_{O}(o^{\prime})(o)=o^{\prime}\). We denote with \(\boldsymbol{j}\) and \(\boldsymbol{o^{\prime}}\) these constant functions. Furthermore, notice that for any \(\kappa:I\to J\), the pair \(\langle\kappa,\boldsymbol{o^{\prime}}\rangle\) is a morphism from \(G\) to \(G^{\prime}\). Indeed, if \(oR_{i}p\), then \(\boldsymbol{o^{\prime}}(o)R^{\prime}_{\kappa(i)}\boldsymbol{o^{\prime}}(p)\) for any \(o,p\in O\) and \(i\in I\). This proves that in particular \((\boldsymbol{j},\boldsymbol{o^{\prime}})\) is a morphism, so \(\boldsymbol{j}\in K\) and \(\boldsymbol{o^{\prime}}\in\mathcal{O}\).
Finally, we can show that \(\psi=(\psi_{p},\psi_{O})\) is a morphism. Consider \(o^{\prime},p^{\prime}\in O^{\prime}\) such that \(\langle o^{\prime},p^{\prime}\rangle\in R^{\prime}_{j}\) for some \(j\in J\). We want to show that \(\langle\psi_{O}(o^{\prime}),\psi_{O}(p^{\prime})\rangle\in R_{\psi_{p}(j)}\). By definition, this means that \(\langle\boldsymbol{o^{\prime}},\boldsymbol{p^{\prime}}\rangle\in R_{\boldsymbol{j}}\). We already know that \((\boldsymbol{j},\boldsymbol{o^{\prime}})\) and \((\boldsymbol{j},\boldsymbol{p^{\prime}})\) are morphisms from \(G\) to \(G^{\prime}\), so it remains to check that for all \(i\in I\) and \(o,p\in O\) with \(\langle o,p\rangle\in R_{i}\) we have \(\langle\boldsymbol{o^{\prime}}(o),\boldsymbol{p^{\prime}}(p)\rangle\in R^{\prime}_{\boldsymbol{j}(i)}\), but this is equivalent to the hypothesis \(\langle o^{\prime},p^{\prime}\rangle\in R^{\prime}_{j}\).
The following proposition illustrates the flexibility and expressivity of this formalism:
**Proposition 3.16**.: _Each set function \(f:I\to J\) induces a functor \(F_{f}:\mathbf{Gam}_{I}\to\mathbf{Gam}_{J}\) such that for every set of outcomes \(O\) of a game in \(\mathbf{Gam}_{I}\), \((f,1_{O})\) is a morphism in \(\mathbf{Gam}\)._
Proof.: For each \(G=\langle O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\in\mathbf{Gam}_ {I}\), we put \(F_{f}(G)=\langle O,\{R^{f}_{j}\}_{j\in J},\{\preceq^{f}_{j}\}_{j\in J}\rangle \in\mathbf{Gam}_{J}\), where
\[\langle o,p\rangle\in R^{f}_{j}\text{ iff there exists }i\in I\text{ such that }f(i)=j\text{ and }\langle o,p\rangle\in R_{i}.\]
For a morphism \(g:G\to G^{\prime}\) in \(\mathbf{Gam}_{I}\), \(F_{f}(g):F_{f}(G)\to F_{f}(G^{\prime})\) is given by the same function \(g:O\to O^{\prime}\). We can see that \(F_{f}(g)\) is a morphism: if \(\langle o,p\rangle\in R^{f}_{j}\), then for some \(i\in I\) such that \(f(i)=j\), \(\langle o,p\rangle\in R_{i}\). Therefore, \(\langle g(o),g(p)\rangle\in R^{\prime}_{i}\), so \(\langle g(o),g(p)\rangle\in R^{\prime f}_{j}\). It follows from the definition that \(F_{f}\) preserves compositions and identities.
If we now consider a game \(G\) in \(\mathbf{Gam}_{I}\), we have that \(F_{f}(G)\) is in \(\mathbf{Gam}_{J}\), and we can apply the functors \(F_{I}\) and \(F_{J}\) (see Definition 3.13), respectively. The pair \((f,1_{O}):F_{I}(G)\to F_{J}(F_{f}(G))\) is a morphism in \(\mathbf{Gam}\): if \(\langle o,p\rangle\in R_{i}\), then \(\langle o,p\rangle\in R^{f}_{f(i)}\).
It is well known that if a category has coproducts and coequalizers, then it has pushouts. The following example shows an application of this construction.
**Example 3.17**.: _Consider these two games \(G=\langle\{1,2\},O,\{R_{1},R_{2}\},\{\preceq_{1},\preceq_{2}\}\rangle\) and \(G^{\prime}=\langle\{2,3\},O^{\prime},\{R^{\prime}_{2},R^{\prime}_{3}\},\{ \preceq^{\prime}_{2},\preceq^{\prime}_{3}\}\rangle\). Since player \(2\) participates in both games, we want to amalgamate \(G\) and \(G^{\prime}\) in a way that reflects this situation. For this, we use the player game \(G_{p}(2)=\langle\{2\},\emptyset,\{\emptyset\},\{\emptyset\}\rangle\), and the morphisms \(f\) from \(G_{p}(2)\) to \(G\) and \(f^{\prime}\) from \(G_{p}(2)\) to \(G^{\prime}\) given by proposition 3.11._
_The pushout of \(G\) and \(G^{\prime}\) is obtained by building their coproduct and then finding the quotient under the relation \(\sim_{2}\) that identifies player 2 in each game._
_The set of players of \(G+G^{\prime}\) can be written as \(\{\langle 1,0\rangle,\langle 2,0\rangle,\langle 2,1\rangle,\langle 3,1\rangle\}\) (the second component in each pair indicates which game the player comes from: \(0\) for \(G\) and \(1\) for \(G^{\prime}\)). The relation \(\sim_{2}\) identifies \(\langle 2,0\rangle\) and \(\langle 2,1\rangle\)._
_We can now describe the components of this pushout \(G+G^{\prime}/_{\sim_{2}}\). The set of players is composed by the equivalence classes \([\langle 1,0\rangle]\), \([\langle 2,0\rangle,\langle 2,1\rangle]\), \([\langle 3,1\rangle]\). The set of outcomes is \(O+O^{\prime}\) since there is no restriction imposed by the morphisms from \(G_{p}(2)\). The accessibility relations are: \(R^{+}_{[\langle 1,0\rangle]}=R^{+}_{\langle 1,0\rangle}=R_{1}\cup\Delta_{O^{ \prime}}\), \(R^{+}_{[\langle 3,1\rangle]}=R^{+}_{\langle 3,1\rangle}=\Delta_{O}\cup R^{ \prime}_{3}\), and for \(R^{+}_{[\langle 2,0\rangle,\langle 2,1\rangle]}\) we have that \(\langle x,y\rangle\in R^{+}_{[\langle 2,0\rangle,\langle 2,1\rangle]}\) if and only if \(\langle x,y\rangle\in R^{+}_{\langle 2,0\rangle}\) or \(\langle x,y\rangle\in R^{+}_{\langle 2,1\rangle}\). That is, if and only if \(\langle x,y\rangle\in R_{2}+R^{\prime}_{2}\)._
_Analogously, \(\preceq^{+}_{[\langle 1,0\rangle]}=\preceq_{1}\cup\Delta_{O^{\prime}}\), \(\preceq^{+}_{[\langle 3,1\rangle]}=\Delta_{O}\cup\preceq^{\prime}_{3}\), and \(\preceq^{+}_{[\langle 2,0\rangle,\langle 2,1\rangle]}=\preceq_{2}+\preceq^{ \prime}_{2}\)._
_Notice that in the new game players \(1\) and \(3\) keep their original preferences and accessibility relations on the outcomes of their games, and have trivial ones over those of the other one. On the other hand, player \(2\) extends their relations to include both of the games in which they participated._
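The quotient of the players in this pushout can be computed mechanically. A small Python sketch (illustrative only; the tagging by \(0\) and \(1\) mirrors the description above, and the grouping rule merges the tagged copies of each shared player):

```python
def pushout_player_classes(I, J, shared):
    """Equivalence classes of the tagged player set of G + G' under the relation
    that identifies the copies of each shared player (as in Example 3.17)."""
    tagged = [(i, 0) for i in sorted(I)] + [(j, 1) for j in sorted(J)]
    classes = []
    for player in tagged:
        name = player[0]
        merged = False
        if name in shared:
            for cls in classes:
                if any(q[0] == name for q in cls):
                    cls.append(player)
                    merged = True
                    break
        if not merged:
            classes.append([player])
    return classes

print(pushout_player_classes({1, 2}, {2, 3}, shared={2}))
# [[(1, 0)], [(2, 0), (2, 1)], [(3, 1)]]
```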
## 4 Subgames and equilibria
As usual, we can say that \(G\) is a subgame of a game \(G^{\prime}\) if the inclusion map is a morphism in the category \(\mathbf{Gam}\), or \(\mathbf{Gam}_{I}\). We can spell out what this means in detail:
**Definition 4.1**.: _In \(\mathbf{Gam}_{I}\), a game \(G=\langle O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\) is a subgame of \(G^{\prime}=\langle O^{\prime},\{R^{\prime}_{i}\}_{i\in I},\{\preceq^{\prime}_ {i}\}_{i\in I}\rangle\) if \(O\subseteq O^{\prime}\) and for each \(i\in I\), \(R_{i}\) is a subset of the restriction of \(R^{\prime}_{i}\) to \(O\), while each \(\preceq_{i}\) is a preorder on \(O\) that is included in \(\preceq^{\prime}_{i}\)._
_In \(\mathbf{Gam}\), a game \(G=\langle I,O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\) is a subgame of \(G^{\prime}=\langle J,O^{\prime},\{R^{\prime}_{j}\}_{j\in J},\{\preceq^{\prime}_ {j}\}_{j\in J}\rangle\) if \(I\subseteq J\), \(O\subseteq O^{\prime}\) and for each \(i\in I\), \(R_{i}\) is a subset of the restriction of \(R^{\prime}_{i}\) to \(O\), while each \(\preceq_{i}\) is a preorder on \(O\) that is included in \(\preceq^{\prime}_{i}\)._
The goal of Game Theory is to postulate _solution concepts_ for games, indicating what results should be expected if players exhibit different ways of making decisions [1]. In the case of games represented by multi-graphs we capture the notion of a solution as a selection of a subset of the set of outcomes on which we keep all the relations of accessibility and preferences among those outcomes:
**Definition 4.2**.: _A solution concept is a mapping \(\phi\) such that given a game \(G=\langle I,O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\) yields a game_
\[\phi(G)=\langle I,O_{\phi},\{R_{i}^{\phi}\}_{i\in I},\{\preceq_{i}^{\phi}\}_{i \in I}\rangle\]
_such that \(O_{\phi}\subseteq O\) and for each \(i\in I\), \(R_{i}^{\phi}\) and \(\preceq_{i}^{\phi}\) are the restrictions of \(R_{i}\) and \(\preceq_{i}\) to \(O_{\phi}\)._
Notice that the set of players of \(\phi(G)\) is the same as in \(G\), so this definition works both in \(\mathbf{Gam}_{I}\) and \(\mathbf{Gam}\). The game \(\phi(G)\) is the graph induced by the selected outcomes. The definition of \(\phi\) captures which outcomes can be recommended (under some criterion).
In the case of \(\mathbf{Gam_{1}}\), the notion of solution for a decision problem consists of a subgame in which each outcome \(o^{*}\in O_{\phi}\) is an _optimal solution_. That is, for every \(p\in O\), if \(o^{*}R_{\mathbf{1}}p\), \(p\preceq_{\mathbf{1}}o^{*}\). An optimal solution is actually a maximal element of \(\preceq_{\mathbf{1}}\) among those accessible through \(R_{\mathbf{1}}\).
The literature presents several alternative solution concepts, but the most widely used one is _Nash equilibrium_, which we adapt to our representation of games:
**Definition 4.3**.: _An outcome \(o^{*}\in O\) is a Nash equilibrium if for each \(i\), and for each \(p\) such that \(\langle o^{*},p\rangle\in R_{i}\), \(p\preceq_{i}o^{*}\)._
_For each game \(G=\langle I,O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\), then, \(\phi^{NE}(G)\) is the subgame of \(G\) with the same set of players \(I\), the set of outcomes \(O_{\phi^{NE}}\subseteq O\) of all the Nash equilibria in the game, and the restrictions of \(R_{i}\) and \(\preceq_{i}\) to \(O_{\phi^{NE}}\)._
_For simplicity, we will write \(o^{*}\in\phi^{NE}(G)\) instead of \(o^{*}\in O_{\phi^{NE}(G)}\)._
**Example 4.4**.: _Consider a game \(G_{BoS}\) which can be seen as a multi-graph representation of the following strategic game (known as the Battle of the Sexes):_
\begin{tabular}{c|c|c} & \(C\) & \(D\) \\ \hline \(A\) & \((2,1)\) & \((0,0)\) \\ \hline \(B\) & \((0,0)\) & \((1,2)\) \\ \end{tabular}

_In \(G_{BoS}\), \(I=\{1,2\}\) and \(O=\{r,s,t\}\), where \(r\) can be identified with \((A,C)\), \(s\) with \((B,D)\), and \(t\) with both \((A,D)\) and \((B,C)\). The accessibility and preference relations are represented in Figure 3. Notice the difference with Figure 2, in which the preferences of both players (represented by dashed curves) go in different directions between \(p\) and \(o\) and between \(q\) and \(o\). Here, instead, the preferences of both players are the same between \(r\) and \(t\) and between \(s\) and \(t\)._
_We can see (recall that the accessibility relations are not transitive) that both \(r,s\in\phi^{NE}(G_{BoS})\)._

Figure 3: Multi-graph of \(G_{BoS}\). Red and blue lines correspond to players 1 and 2, respectively. Full lines correspond to accessibility relations and dashed ones to preferences.
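Definition 4.3 and this example can be checked mechanically. A Python sketch using the encoding of the earlier sketches, under the assumption that the accessibility pairs of \(G_{BoS}\) are the unilateral deviations of the strategic form and that the preferences follow the payoffs of the table (so \(t\) lies below both \(r\) and \(s\) for both players):

```python
def nash_outcomes(game):
    """Definition 4.3: o is an equilibrium iff for every player i and every p with
    (o, p) in R[i], we also have p preceq_i o, i.e. (p, o) in pref[i]."""
    I, O, R, pref = game
    return {o for o in O
            if all((p, o) in pref[i] for i in I for (q, p) in R[i] if q == o)}

def phi_NE(game):
    """The induced subgame on the equilibria (Definitions 4.2 and 4.3)."""
    I, O, R, pref = game
    E = nash_outcomes(game)
    keep = lambda rel: {(o, p) for (o, p) in rel if o in E and p in E}
    return I, E, {i: keep(R[i]) for i in I}, {i: keep(pref[i]) for i in I}

diag = {(o, o) for o in "rst"}
acc = diag | {("r", "t"), ("t", "r"), ("s", "t"), ("t", "s")}
prefs = diag | {("t", "r"), ("t", "s")}   # t is worse than r and than s, for both players
G_BoS = ({1, 2}, set("rst"), {1: set(acc), 2: set(acc)}, {1: set(prefs), 2: set(prefs)})
print(sorted(nash_outcomes(G_BoS)))  # ['r', 's']
```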
While game morphisms typically do not preserve equilibria, it is possible to identify equilibria in products, coproducts, and exponentials by building upon the equilibria of the original games.
**Theorem 4.5**.: _Given games \(G\) and \(G^{\prime}\) in \(\mathbf{Gam}_{I}\), with \(I\neq\emptyset\), then_
1. \(o^{*}\in\phi^{NE}(G)\) _and_ \(o^{\prime*}\in\phi^{NE}(G^{\prime})\) _if and only if_ \(\langle o^{*},o^{\prime*}\rangle\in\phi^{NE}(G\times_{I}G^{\prime})\)_._
2. \(o^{*}\in\phi^{NE}(G)\) _if and only if_ \(i_{G}(o^{*})\in\phi^{NE}(G+_{I}G^{\prime})\)_. A similar result applies to equilibria in_ \(\phi^{NE}(G^{\prime})\)__
3. \(o^{\prime*}\in\phi^{NE}(G^{\prime})\) _if and only if_ \(\boldsymbol{o^{\prime*}}\in\phi^{NE}(G^{\prime G})\)_, where_ \(\boldsymbol{o^{\prime*}}\) _is the constant function with value_ \(o^{\prime*}\)_._
Proof.:
1. Assume that \(o^{*}\in\phi^{NE}(G)\), \(o^{\prime*}\in\phi^{NE}(G^{\prime})\) and that for some \(i\in I\), \(\langle\langle o^{*},o^{\prime*}\rangle,\langle p,p^{\prime}\rangle\rangle\in R_{i}^{\times_{I}}\). Then, by the definition of \(R_{i}^{\times_{I}}\), \(\langle o^{*},p\rangle\in R_{i}\) and \(\langle o^{\prime*},p^{\prime}\rangle\in R_{i}^{\prime}\). Since \(o^{*}\) and \(o^{\prime*}\) are equilibria, it turns out that \(p\preceq_{i}o^{*}\) and \(p^{\prime}\preceq_{i}^{\prime}o^{\prime*}\), from where \(\langle p,p^{\prime}\rangle\preceq_{i}^{\times_{I}}\langle o^{*},o^{\prime*}\rangle\). To prove the converse implication, consider \(\langle o^{*},o^{\prime*}\rangle\in\phi^{NE}(G\times_{I}G^{\prime})\) and that for some \(i\in I\), \(\langle o^{*},p\rangle\in R_{i}\). Then, since \(R_{i}^{\prime}\) is reflexive, \(\langle\langle o^{*},o^{\prime*}\rangle,\langle p,o^{\prime*}\rangle\rangle\in R_{i}^{\times_{I}}\), so \(\langle p,o^{\prime*}\rangle\preceq_{i}^{\times_{I}}\langle o^{*},o^{\prime*}\rangle\) and therefore \(p\preceq_{i}o^{*}\). The same argument proves that \(o^{\prime*}\in\phi^{NE}(G^{\prime})\).
2. Assume that \(o^{*}\in\phi^{NE}(G)\). If we assume that for some \(i\in I\), \(\langle i_{O}(o^{*}),x\rangle\in R_{i}+R_{i}^{\prime}\), then we must have \(x=i_{O}(p)\) for some \(p\in O\). Then \(\langle o^{*},p\rangle\in R_{i}\). By the hypothesis, \(p\preceq_{i}o^{*}\) and thus \(o^{*}\in\phi^{NE}(G+G^{\prime})\). If now we assume that \(i_{O}(o^{*})\in\phi^{NE}(G+G^{\prime})\) and for some \(i\in I\) and \(p\in O\), \(o^{*}R_{i}p\), then \(\langle i_{O}(o^{*}),i_{O}(p)\rangle\in R_{i}+R_{i}^{\prime}\), so by hypothesis \(i_{O}(p)(\preceq_{i}+\preceq_{i}^{\prime})i_{O}(o^{*})\). It follows that \(p\preceq_{i}o^{*}\), so \(o^{*}\in\phi^{NE}(G)\).
3. Assume that \(o^{\prime*}\in\phi^{NE}(G^{\prime})\). Consider for a fixed \(i\in I\), \(f\in\mathcal{O}\) such that \(\langle f,o^{\prime*}\rangle\in R_{i}^{\prime\prime}\). This means that for every \(o,p\in O\), if \(\langle o,p\rangle\in R_{i}\) then \(\langle f(o),o^{\prime*}(p)\rangle\in R_{i}^{\prime}\). In particular, since \(\langle o,o\rangle\in R_{i}\) we have by hypothesis that \(f(o)\preceq_{i}^{\prime}o^{\prime*}\). Thus \(f\preceq_{i}^{\prime\prime}o^{\prime*}\). For the converse, consider \(o^{\prime*}\in O^{\prime}\) such that \(o^{\prime*}\in\phi^{NE}(G^{\prime G})\). Then for any \(i\in I,f\in\mathcal{O}\), if \(\langle f,o^{\prime*}\rangle\in R_{i}^{\prime\prime}\) then \(f\preceq_{i}^{\prime\prime}o^{\prime*}\). We want to show that \(o^{\prime*}\in\phi^{NE}(G^{\prime})\). For this suppose that \(p^{\prime}\in O^{\prime}\) is such that \(\langle p^{\prime},o^{\prime*}\rangle\in R_{i}^{\prime}\). Therefore, for the constant function \(\boldsymbol{p^{\prime}}\), \(\langle\boldsymbol{p^{\prime}},\boldsymbol{o^{\prime*}}\rangle\in R_{i}^{\prime\prime}\) also holds, so by hypothesis, \(\boldsymbol{p^{\prime}}\preceq_{i}^{\prime\prime}o^{\prime*}\). This means that for all \(o\in O\), \(\boldsymbol{p^{\prime}}(o)\preceq_{i}^{\prime}o^{\prime*}(o)\), i.e. \(p^{\prime}\preceq_{i}^{\prime}o^{\prime*}\).
**Theorem 4.6**.: _Given games \(G\) and \(G^{\prime}\) in_ **Gam** _with non-empty sets of players, then_
1. \(o^{*}\in\phi^{NE}(G)\) _and_ \(o^{\prime*}\in\phi^{NE}(G^{\prime})\) _if and only if_ \(\langle o^{*},o^{\prime*}\rangle\in\phi^{NE}(G\times G^{\prime})\)_._
2. \(o^{*}\in\phi^{NE}(G)\) _if and only if_ \(i_{G}(o^{*})\in\phi^{NE}(G+G^{\prime})\)_. Similarly for equilibria in_ \(\phi^{NE}(G^{\prime})\)_._
3. \(o^{\prime*}\in\phi^{NE}(G^{\prime})\) _if and only if_ \(\boldsymbol{o^{\prime*}}\in\phi^{NE}(G^{\prime G})\)_, where_ \(\boldsymbol{o^{\prime*}}\) _is the constant function with value_ \(o^{\prime*}\)_._
Proof.:
1. Assume that \(o^{*}\in\phi^{NE}(G)\), \(o^{\prime*}\in\phi^{NE}(G^{\prime})\). Thus for every \(i\in I\), \(p\in O\), if \(\langle o^{*},p\rangle\in R_{i}\), then \(p\preceq_{i}o^{*}\), and for all \(j\in J\), \(p^{\prime}\in O^{\prime}\), if \(\langle o^{\prime*},p^{\prime}\rangle\in R^{\prime}_{j}\), then \(p^{\prime}\preceq_{j}^{\prime}o^{\prime*}\). Now consider any pair \(\langle i,j\rangle\in I\times J\) such that \(\langle\langle o^{*},o^{\prime*}\rangle,\langle p,p^{\prime}\rangle\rangle\in R_{\langle i,j\rangle}\). Then, by the definition of \(R_{\langle i,j\rangle}\), \(\langle o^{*},p\rangle\in R_{i}\) and \(\langle o^{\prime*},p^{\prime}\rangle\in R^{\prime}_{j}\). Since \(o^{*}\) and \(o^{\prime*}\) are equilibria, it turns out that \(p\preceq_{i}o^{*}\) and \(p^{\prime}\preceq_{j}^{\prime}o^{\prime*}\), from where \(\langle p,p^{\prime}\rangle\preceq_{\langle i,j\rangle}\langle o^{*},o^{\prime*}\rangle\). To prove the converse implication, consider \(\langle o^{*},o^{\prime*}\rangle\in\phi^{NE}(G\times G^{\prime})\) and that for some \(i\in I\), \(\langle o^{*},p\rangle\in R_{i}\). Then, since for any \(j\in J\), \(R^{\prime}_{j}\) is reflexive, \(\langle\langle o^{*},o^{\prime*}\rangle,\langle p,o^{\prime*}\rangle\rangle\in R_{\langle i,j\rangle}\), so \(\langle p,o^{\prime*}\rangle\preceq_{\langle i,j\rangle}\langle o^{*},o^{\prime*}\rangle\), and therefore \(p\preceq_{i}o^{*}\). The same argument proves that \(o^{\prime*}\in\phi^{NE}(G^{\prime})\).
2. Assume that \(o^{*}\in\phi^{NE}(G)\). If we assume that for some \(k\in I+J\), \(\langle i_{O}(o^{*}),x\rangle\in R^{+}_{k}\), then we must have \(k\in I\), and \(x=i_{O}(p)\) for some \(p\in O\). Then \(\langle o^{*},p\rangle\in R_{k}\). By the hypothesis, \(p\preceq_{k}o^{*}\), so \(x\preceq_{k}^{+}i_{O}(o^{*})\) and thus \(o^{*}\in\phi^{NE}(G+G^{\prime})\). If now we assume that \(i_{O}(o^{*})\in\phi^{NE}(G+G^{\prime})\) and for some \(i\in I\) and \(p\in O\), \(o^{*}R_{i}p\), then \(i_{O}(o^{*})R^{+}_{i}i_{O}(p)\), so by hypothesis \(i_{O}(p)\preceq_{i}^{+}i_{O}(o^{*})\). It follows that \(p\preceq_{i}o^{*}\), so \(o^{*}\in\phi^{NE}(G)\).
3. Assume first that \(o^{\prime*}\in\phi^{NE}(G^{\prime})\) and that \(\langle f,\boldsymbol{o^{\prime*}}\rangle\in R_{\kappa}\), for some \(\kappa:I\to J\). Then \((\kappa,f)\) and \((\kappa,\boldsymbol{o^{\prime*}})\) are morphisms from \(G\) to \(G^{\prime}\) in **Gam**, and for all \(i\in I,o,p\in O\), if \(\langle o,p\rangle\in R_{i}\), then \(\langle f(o),\boldsymbol{o^{\prime*}}(p)\rangle\in R^{\prime}_{\kappa(i)}\). By the hypothesis that \(o^{\prime*}\in\phi^{NE}(G^{\prime})\), it turns out that \(f(o)\preceq_{\kappa(i)}^{\prime}o^{\prime*}\), so \(f\preceq_{\kappa}\boldsymbol{o^{\prime*}}\). If \(\boldsymbol{o^{\prime*}}\) is a Nash equilibrium in \(G^{\prime G}\), and \(\langle p^{\prime},o^{\prime*}\rangle\in R^{\prime}_{j}\) for some \(j\in J\), then consider the constant functions \(\boldsymbol{p^{\prime}}\) and \(\boldsymbol{j}\). By an argument similar to that in Proposition 3.15, we know that \((\boldsymbol{j},\boldsymbol{p^{\prime}})\) and \((\boldsymbol{j},\boldsymbol{o^{\prime*}})\) are morphisms. Furthermore, \(\langle\boldsymbol{p^{\prime}},\boldsymbol{o^{\prime*}}\rangle\in R_{\boldsymbol{j}}\), so by hypothesis, \(\boldsymbol{p^{\prime}}\preceq_{\boldsymbol{j}}\boldsymbol{o^{\prime*}}\). Therefore, for all \(i\in I\) and \(o\in O\), \(\boldsymbol{p^{\prime}}(o)\preceq_{\boldsymbol{j}(i)}^{\prime}\boldsymbol{o^{\prime*}}(o)\), proving that \(p^{\prime}\preceq_{j}^{\prime}o^{\prime*}\) as required.
Furthermore, we can also find the Nash equilibria in coproducts of games in which we identify some subset of players:
**Proposition 4.7**.: _Given games \(G=\langle I,O,\{R_{i}\}_{i\in I},\{\preceq_{i}\}_{i\in I}\rangle\) and \(G^{\prime}=\langle J,O^{\prime},\{R^{\prime}_{j}\}_{j\in J},\{\preceq_{j}^{ \prime}\}_{j\in J}\rangle\), such that \(I\cap J=S\neq\emptyset\), let \(G+G^{\prime}/_{\sim_{S}}\) be the pushout based on \(G_{p}(S)\) as in Example 3.17. Then \(o^{*}\in\phi^{NE}(G)\) if and only if \(i_{G}(o^{*})\in\phi^{NE}(G+G^{\prime}/_{\sim_{S}})\)._
Proof.: Because of the way we defined the pushout, the equivalence classes of outcomes are singletons and therefore we just identify them with their single element. Assume that \(o^{*}\in\phi^{NE}(G)\) and \(\langle i_{G}(o^{*}),p\rangle\in R^{+}_{[i]}\) for some \(i\in I+J\). This means that there exists some \(j\in[i]\) such that \(\langle i_{G}(o^{*}),p\rangle\in R^{+}_{j}\). It follows that \(p\) must be the inclusion of some outcome \(p^{\prime}\in O\) and thus \(\langle i_{G}(o^{*}),i_{G}(p^{\prime})\rangle\in R^{+}_{j}\), so \(\langle o^{*},p^{\prime}\rangle\in R_{j}\). Then \(p^{\prime}\preceq_{j}o^{*}\) and therefore \(p\preceq_{j}^{+}i_{G}(o^{*})\). In other words \(p\preceq_{[i]}^{+}i_{G}(o^{*})\).
For the converse, assume that \(\langle o^{*},p\rangle\in R_{k}\) for some \(k\in I\). Then \(\langle i_{G}(o^{*}),i_{G}(p)\rangle\in R^{+}_{[k]}\). Since by hypothesis \(i_{G}(o^{*})\in\phi^{NE}(G+G^{\prime}/_{\sim_{S}})\), we have two cases:
* if \(k\in I\setminus S\), \([k]=\{\langle k,0\rangle\}\) and thus \(\langle i_{G}(o^{*}),i_{G}(p)\rangle\in R^{+}_{\langle k,0\rangle}\) so \(i_{G}(p)\preceq_{\langle k,0\rangle}^{+}i_{G}(o^{*})\) and therefore \(p\preceq_{k}o^{*}\).
* if \(k\in S\), \([k]=\{\langle k,0\rangle,\langle k,1\rangle\}\) but since \(i_{G}(o^{*}),i_{G}(p)\in i_{G}(O)\) this means that \(\langle i_{G}(o^{*}),i_{G}(p)\rangle\in R^{+}_{\langle k,0\rangle}\) so \(p\preceq_{k}o^{*}\) as before.
## 5 Categories with equilibria-preserving morphisms
We can regard \(\phi^{NE}\) as an operator taking games to games, but in general, it is not a functor in \(\mathbf{Gam}\). To see this, we observe that even though game morphisms preserve the accessibility relations and preferences of the players, this is not enough to make sure that Nash equilibria will be preserved:
**Example 5.1**.: _Consider two simple two-player games: \(G\) with \(O=\{a,b\}\) and \(R_{1}=R_{2}=\preceq_{1}=\preceq_{2}=1_{O}\) and \(G^{\prime}\) with \(O^{\prime}=O\), \(R^{\prime}_{1}=R^{\prime}_{2}=1_{O}\cup\{\langle a,b\rangle,\langle b,a\rangle\}\), \(\preceq^{\prime}_{1}=1_{O}\cup\{\langle a,b\rangle\}\), and \(\preceq^{\prime}_{2}=1_{O}\cup\{\langle b,a\rangle\}\). The identity function on \(O\) gives a morphism. Both outcomes are Nash equilibria in \(G\) but there is no equilibrium in \(G^{\prime}\)._
We see in the example above that \(1_{O}\) is a morphism from \(G\to G^{\prime}\), but since there are no outcomes in \(\phi^{NE}(G^{\prime})\), there is no way to pick a morphism from \(\phi^{NE}(G)\) to \(\phi^{NE}(G^{\prime})\) to fulfill the role of \(\phi^{NE}(1_{O})\).
Notice that this negative result applies to both \(\mathbf{Gam}\) and \(\mathbf{Gam}_{I}\). A way of addressing this issue is by taking hints from [10] and [11] in which the idea of preservation of equilibria is built in the definition of morphism. In this spirit, we look at some subcategories of \(\mathbf{Gam}\) with equilibria-preserving morphisms.
**Definition 5.2**.: _Let **NE** be the category of games in which all the outcomes are Nash equilibria. This is a full subcategory of \(\mathbf{Gam}\), meaning that all the morphisms in \(\mathbf{Gam}\) between two objects in **NE** are also morphisms in **NE**._
Notice that if \(G\) is a game in \(\mathbf{NE}\), \(\phi^{NE}(G)=G\). That is, objects in \(\mathbf{NE}\) are fixed points under \(\phi^{NE}\).
**Definition 5.3**.: _We say that a morphism \(f:G\to G^{\prime}\) in \(\mathbf{Gam}\) preserves Nash equilibria if \(f_{O}(\phi^{NE}(O))\subseteq\phi^{NE}(O^{\prime})\)._
All the morphisms in \(\mathbf{NE}\) trivially preserve Nash equilibria. Motivated by this, we can also define another subcategory of \(\mathbf{Gam}\) with equilibria preserving morphisms:
**Definition 5.4**.: _Let \(\mathbf{Gam}^{NE}\) be the subcategory of \(\mathbf{Gam}\) with all the same objects, but only the equilibria-preserving morphisms._
It can be easily checked that \(\mathbf{Gam}^{NE}\) is a category since identities and compositions of equilibria-preserving morphisms are equilibria-preserving. Furthermore, \(\mathbf{NE}\) is a subcategory of \(\mathbf{Gam}^{NE}\) since as pointed out before, all its morphisms trivially preserve equilibria.
As a consequence of Theorem 4.6 and Proposition 4.7 we have:
**Corollary 5.5**.: _Products, coproducts and pushouts based on player games can be constructed in \(\mathbf{Gam}^{NE}\)._
A sufficient requirement ensuring that a morphism preserves Nash equilibria is as follows:
**Lemma 5.6**.: _Let \(f:G\to G^{\prime}\) in \(\mathbf{Gam}\) be such that \(f_{p}:I\to J\) and \(f_{O}:O\to O^{\prime}\) are surjective and for each \(i\in I\) and every pair \(o,p\in O\):_
* \(\langle o,p\rangle\in R_{i}\) _iff_ \(\langle f_{O}(o),f_{O}(p)\rangle\in R^{\prime}_{f_{p}(i)}\)__
* \(o\preceq_{i}p\)_, iff_ \(f_{O}(o)\preceq^{\prime}_{f_{p}(i)}f_{O}(p)\)_._
_Then, if \(o^{*}\) is a Nash equilibrium in \(G\), \(f_{O}(o^{*})\) is a Nash equilibrium in \(G^{\prime}\). That is, \(f\) is a morphism in \(\mathbf{Gam}^{NE}\)._
Proof.: Assume that \(f_{O}(o^{*})\) is not a Nash equilibrium. Then, there exist \(j\in J\) and \(o^{\prime}\in O^{\prime}\) such that
\[f_{O}(o^{*})R^{\prime}_{j}o^{\prime},\ \ \text{while}\ \ o^{\prime}\not\preceq^{ \prime}_{j}f_{O}(o^{*}).\]
Since \(f_{p}\) and \(f_{O}\) are surjective, there exists \(i\in I\) and \(o\in O\) such that \(f_{p}(i)=j\) and \(f_{O}(o)=o^{\prime}\), respectively. Then, we have that \(f_{O}(o^{*})R^{\prime}_{f_{p}(i)}f_{O}(o)\) and \(f_{O}(o)\not\preceq^{\prime}_{f_{p}(i)}f_{O}(o^{*})\). By the assumptions on \(f\), we get that \(o^{*}R_{i}o\) and since \(o^{*}\) is a Nash equilibrium, \(o\preceq_{i}o^{*}\), so \(f_{O}(o)\preceq^{\prime}_{f_{p}(i)}f_{O}(o^{*})\), which is a contradiction.
**Example**: Consider the games of Examples 2.3 and 4.4, taking the players of game \(PD\) to be \(I_{PD}=\{1,2\}\) while those of \(G_{BoS}\) are \(I_{BoS}=\{2,3\}\). That is, player 2 plays both games. The pushout game, as defined in Example 3.17, can be denoted \(G_{PD}+_{2}G_{BoS}\) and depicted as a single multi-graph combining both games.
By item 2 of Theorem 4.6, the Nash equilibria of \(G_{PD}\) and \(G_{BoS}\), namely \((D,D)\), \(r\), and \(s\), are included in the coproduct. Since in the construction of the pushout the equivalence classes of the outcomes are all singletons, those are also the equilibria of \(G_{PD}+_{2}G_{BoS}\). The subgame induced by these equilibria is a game in **NE**.
|
2309.04654 | Mask-CTC-based Encoder Pre-training for Streaming End-to-End Speech
Recognition | Achieving high accuracy with low latency has always been a challenge in
streaming end-to-end automatic speech recognition (ASR) systems. By attending
to more future contexts, a streaming ASR model achieves higher accuracy but
results in larger latency, which hurts the streaming performance. In the
Mask-CTC framework, an encoder network is trained to learn the feature
representation that anticipates long-term contexts, which is desirable for
streaming ASR. Mask-CTC-based encoder pre-training has been shown beneficial in
achieving low latency and high accuracy for triggered attention-based ASR.
However, the effectiveness of this method has not been demonstrated for various
model architectures, nor has it been verified that the encoder has the expected
look-ahead capability to reduce latency. This study, therefore, examines the
effectiveness of Mask-CTCbased pre-training for models with different
architectures, such as Transformer-Transducer and contextual block streaming
ASR. We also discuss the effect of the proposed pre-training method on
obtaining accurate output spike timing. | Huaibo Zhao, Yosuke Higuchi, Yusuke Kida, Tetsuji Ogawa, Tetsunori Kobayashi | 2023-09-09T01:05:59Z | http://arxiv.org/abs/2309.04654v1 | # Mask-CTC-based Encoder Pre-training for Streaming End-to-End Speech Recognition
###### Abstract
Achieving high accuracy with low latency has always been a challenge in streaming end-to-end automatic speech recognition (ASR) systems. By attending to more future contexts, a streaming ASR model achieves higher accuracy but results in larger latency, which hurts the streaming performance. In the Mask-CTC framework, an encoder network is trained to learn the feature representation that anticipates long-term contexts, which is desirable for streaming ASR. Mask-CTC-based encoder pre-training has been shown beneficial in achieving low latency and high accuracy for triggered attention-based ASR. However, the effectiveness of this method has not been demonstrated for various model architectures, nor has it been verified that the encoder has the expected look-ahead capability to reduce latency. This study, therefore, examines the effectiveness of Mask-CTC-based pre-training for models with different architectures, such as Transformer-Transducer and contextual block streaming ASR. We also discuss the effect of the proposed pre-training method on obtaining accurate output spike timings, which contributes to the latency reduction in streaming ASR.
Streaming automatic speech recognition, latency reduction, Mask-CTC
## I Introduction
In recent years, deep learning has become the core technology of automatic speech recognition (ASR) [1, 2]. End-to-end ASR further integrates the traditional separated components (i.e., acoustic, pronunciation, and language models) into a single deep neural network, significantly contributing to the simplicity of ASR developments [2, 3, 4]. End-to-end ASR models can be realized in various approaches, including connectionist temporal classification (CTC) [5], Transducer [6], and attention-based encoder-decoder [2, 4]. These end-to-end ASR approaches have greatly benefited from the adoption of Transformer [7, 8, 9, 10, 11], enabling a model to capture global contexts using the self-attention mechanism.
Streaming properties (i.e., real-time processing) are of vital importance in the applications of ASR systems. With the superior performance of Transformer, many efforts have been devoted to making Transformer-based ASR models streaming. Triggered attention-based ASR [12] obtains alignment information from CTC and realizes frame-synchronous decoding according to the CTC spike timings. Meanwhile, contextual block streaming ASR (CBS-ASR) [13] splits the input into blocks (chunks), and streaming encoder feature extraction is conducted on each block with the contexts inherited from the previous blocks. A block boundary detection algorithm is applied to detect the index boundary in each block, which enables block-synchronous beam search decoding. Apart from the above streaming models based on attention-based encoder-decoder, the Transducer-based model can be naturally applied to streaming ASR. Transducer trains a model to align the output of the acoustic encoder with the output of the label encoder, enabling frame-synchronous decoding and making it a suitable framework for streaming ASR [8]. Transformer-Transducer (Transformer-T) [9] adopts Transformer for the acoustic encoder, where the chunk-wise attention mask limits the look-ahead range of the self-attention layer to ensure streaming properties.
For streaming ASR models in general, performance degradation occurs when the look-ahead range is limited from global to local, suppressing the advantage of the Transformer architecture (i.e., processing with long-range contexts). Consequently, longer look-ahead ranges are often required to provide adequate future contexts, leading to the growth of latency requirements. Therefore, capturing long-term contexts within short look-ahead ranges is essential for building successful streaming ASR systems.
One approach to realizing such a property is to utilize the Mask-CTC [14, 15] framework. With conditional masked language model (CMLM) [16, 17] and CTC multi-task training, Mask-CTC trains an encoder network to extract acoustic feature representations that contribute to capturing long-term output dependencies by anticipating future contexts. Its capability of learning context-rich bi-directional representations has been validated in the spoken language understanding task [18, 19]. In our previous work [20], we conducted supervised pre-training with the Mask-CTC objective on the encoder and CTC modules of the triggered attention-based model, which proved effective in improving accuracy while reducing latency. In this work, we aim to further examine the effectiveness of Mask-CTC-based pre-training for streaming models with various architectures.
In addition, a more detailed perspective on latency can be obtained by focusing on the timing of output spikes. In streaming ASR, the timing of output spikes is generally delayed due to the lack of long-term context information. We expect that the proposed pre-training based on Mask-CTC will introduce the look-ahead capability to the encoder, thereby enabling accurate prediction of the timing of output spikes. This property contributes to the early determination of recognition results, which is essential for streaming applications (e.g., a system that interacts with a user in real-time).
This study, therefore, attempts to demonstrate the effectiveness of Mask-CTC-based pre-training for streaming models with different architectures, including Transformer-T and CBS-ASR, and to discuss the contribution of the proposed pre-training to latency reduction.
The rest of this paper is organized as follows. Section II overviews the streaming ASR models and the Mask-CTC model. Section III describes the proposed pre-training approach for constructing low latency and high recognition accuracy streaming ASR models. In Section IV, we demonstrate the effectiveness of Mask-CTC-based pre-training through experiments and discuss the latency reduction effect with output spike timing measurements. Finally, Section V concludes this paper.
## II Background
In this section, we introduce two types of streaming ASR approaches that we study in this work: Transformer-Transducer [9] and contextual block streaming ASR [13]. We also describe Mask-CTC [14], which is the key to our proposed method.
### _Transformer-Transducer_
A Transducer-based ASR model contains three components: acoustic encoder, label encoder, and joint network. Given a streaming input to a current time index \(t\), the output probability of the \(u\)-th token is calculated as follows:
\[\mathbf{h}_{t}^{\text{AE}}=\mathrm{AcousticEncoder}(\mathbf{x}_{1:t}),\tag{1}\]
\[\mathbf{h}_{u-1}^{\text{LE}}=\mathrm{LabelEncoder}(y_{1:u-1}),\tag{2}\]
\[\mathbf{h}=\mathrm{Tanh}(\mathrm{Linear}(\mathbf{h}_{t}^{\text{AE}})+\mathrm{Linear}(\mathbf{h}_{u-1}^{\text{LE}})),\tag{3}\]
\[P(y_{u}|y_{1:u-1},\mathbf{x}_{1:t})=\mathrm{SoftMax}(\mathbf{h}).\tag{4}\]
First, the acoustic encoder embeds the input sequence \(\mathbf{x}_{1:t}\) into vector \(\mathbf{h}_{t}^{\text{AE}}\) (Eq. (1)). Meanwhile, the label encoder generates \(\mathbf{h}_{u-1}^{\text{LE}}\) from the previous output token sequence \(y_{1:u-1}\) (Eq. (2)). The two outputs are then sent to the joint network, projected to the same dimension, and added up (Eq. (3)). Finally, the output probabilities against tokens in a vocabulary \(\mathcal{V}\) are calculated based on the previous result (Eq. (4)). The Transducer framework predicts the current symbol for each input frame based on the past output tokens, which naturally introduces streaming fashion into decoding.
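A rough numerical sketch of the joint network step in Eqs. (3)-(4) is given below. The sizes, the random projection matrices, and the choice of making the joint dimension equal to the vocabulary size are illustrative assumptions, not the actual Transformer-T implementation; a real Transducer also reserves an output for the blank symbol.

```python
import numpy as np

rng = np.random.default_rng(0)
d_ae, d_le, vocab = 256, 128, 80   # illustrative sizes (80 subwords, as for WSJ)

# Illustrative projection matrices; both encoder outputs are mapped straight to
# the vocabulary dimension so that the softmax of Eq. (4) yields token scores.
W_ae = 0.01 * rng.standard_normal((vocab, d_ae))
W_le = 0.01 * rng.standard_normal((vocab, d_le))

def joint_network(h_ae, h_le):
    h = np.tanh(W_ae @ h_ae + W_le @ h_le)  # Eq. (3)
    e = np.exp(h - h.max())
    return e / e.sum()                       # Eq. (4): softmax over output tokens

p = joint_network(rng.standard_normal(d_ae), rng.standard_normal(d_le))
print(p.shape, round(float(p.sum()), 6))     # (80,) 1.0
```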
Various neural network types can be applied to implement the acoustic and label encoders [6, 21, 22]. In the work of [9], Transformer [7] is applied to the acoustic encoder to achieve high accuracy and LSTM [21] for the label encoder in consideration of the model size control. Chunk-wise attention masks are applied to the self-attention layers of the Transformer acoustic encoder to enable streaming feature extraction. This architecture is referred to as a Transformer-Transducer (Transformer-T).
### _Contextual block streaming ASR_
Contextual block streaming ASR (CBS-ASR) [13] introduces streaming properties to attention-based encoder-decoder models. For streaming feature extraction in the encoder, CBS-ASR utilizes block processing with a context inheritance mechanism proposed in [23]. The speech input is segmented into blocks containing past, central, and future frames with the numbers of \(N_{l}\), \(N_{c}\), and \(N_{r}\). The input blocks are passed on to the encoder, where the central frames are utilized for the output with local contexts provided by the past and future frames as well as the global contexts provided by a context embedding vector inherited from the previous block. Streaming decoding is achieved by a block boundary detection (BBD) algorithm [13], which takes end-of-sentence prediction or token repetition as stopping criteria from detecting the index boundaries on-the-fly and enables the beam search synchronous to the encoded blocks. The streaming processing in CBS-ASR is calculated as follows:
\[H_{b},\mathbf{c}_{b}=\mathrm{BlockEncoder}(Z_{b},\mathbf{c}_{b-1}),\tag{5}\]
\[\alpha(y_{0:i},H_{1:B})\approx\sum_{b=1}^{B}\sum_{j=I_{b-1}+1}^{I_{b}}\log p(y_{j}|y_{0:j-1},H_{1:b}).\tag{6}\]
Eq. (5) represents the streaming encoding of the \(b\)-th input sequence \(Z_{b}\), where \(|Z_{b}|=N_{l}+N_{c}+N_{r}\). The encoded acoustic features \(H_{b}\) is obtained from \(Z_{b}\) and the contextual vector from the previous block \(\mathbf{c}_{b-1}\). Eq. (6) represents the score of the partial hypothesis \(y_{0:i}\) during streaming beam search decoding, where \(y_{0}\) is the start-of-sequence token. \(I_{b}\) denotes the index boundary of the \(b\)-th input block derived from the BBD algorithm.
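A minimal sketch of the block segmentation behind Eq. (5), assuming plain Python lists stand in for acoustic frames, the hop equals \(N_{c}\), and the inherited context embedding \(\mathbf{c}_{b}\) is omitted; the default sizes follow the WSJ settings used later in Section IV-B:

```python
def context_blocks(frames, n_left=8, n_center=4, n_right=6):
    """Split a frame sequence into blocks of N_l past, N_c central and N_r future
    frames; the encoder consumes one block per step as in Eq. (5)."""
    blocks, start = [], 0
    while start < len(frames):
        left = frames[max(0, start - n_left):start]
        center = frames[start:start + n_center]
        right = frames[start + n_center:start + n_center + n_right]
        blocks.append((left, center, right))
        start += n_center  # hop by the central width
    return blocks

frames = list(range(20))  # dummy frame indices
for left, center, right in context_blocks(frames)[:3]:
    print(left, center, right)
# maximum look-ahead of (N_c + N_r - 1) frames, i.e. (4 + 6 - 1) * 40 ms = 360 ms
```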
### _Mask-CTC_
The Mask-CTC framework [14] aims to learn feature representations suitable for anticipation of future contexts. Mask-CTC trains an encoder-decoder model with the joint CMLM [16] and CTC objectives. During training, tokens in the ground truth are randomly masked, and the masked tokens are predicted based on contextual information captured by the encoder and other unmasked output tokens. For the input \(X\) and observed tokens \(Y_{\text{obs}}\), the output probabilities of the masked tokens \(Y_{\text{mask}}\) are computed as follows:
\[P_{\text{cmlm}}(Y_{\text{mask}}|Y_{\text{obs}},X)=\prod_{y\in Y_{\text{mask}}}P_{\text{cmlm}}(y|Y_{\text{obs}},X),\tag{7}\]
where \(Y_{\text{obs}}\) is \(Y\backslash Y_{\text{mask}}\). Based on the CMLM mask prediction of the decoder, the encoder network of Mask-CTC is trained to consider the long-term bidirectional dependencies between output tokens, which enables it to generate acoustic feature representations that anticipate future information.
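A minimal sketch of the random masking step, assuming a fixed masking probability per token; the actual Mask-CTC recipe samples the number of masked tokens per utterance, so this is only illustrative:

```python
import random

MASK = "<mask>"

def mask_tokens(tokens, p_mask=0.4, seed=0):
    """Randomly replace ground-truth tokens with <mask>; the CMLM decoder is trained
    to recover Y_mask from the unmasked tokens Y_obs and the encoder output (Eq. (7))."""
    rng = random.Random(seed)
    observed, targets = [], []
    for tok in tokens:
        if rng.random() < p_mask:
            observed.append(MASK)   # position belongs to Y_mask
            targets.append(tok)
        else:
            observed.append(tok)    # position belongs to Y_obs
            targets.append(None)
    return observed, targets

print(mask_tokens(["the", "cat", "sat", "on", "the", "mat"]))
```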
Such properties are desirable in streaming ASR as it allows the model to capture more future contexts with a limited look-ahead range. In such a way, the Mask-CTC framework can be a potential solution for improving streaming ASR, enhancing a model to achieve high accuracy while keeping low latency.
## III Mask-CTC-based pre-training method
We present a simple and general Mask-CTC-based pre-training method for achieving high-accuracy and low-latency streaming ASR. Specifically, this paper aims to demonstrate the effectiveness of the Mask-CTC pre-training regardless of model architectures and discusses whether such pre-training can extract features suitable for anticipation as intended, focusing on the alignment of the output tokens.
As different end-to-end streaming ASR models, we focus on Transformer-T (see Section II-A) and CBS-ASR (see Section II-B), which cover both Transducer and encoder-decoder model architectures. For both models, the adoption of Transformer has realized high recognition accuracy in their non-streaming baselines. However, when applied to streaming scenarios, the look-ahead ranges of self-attention layers are limited from global to local. This leads to an inevitable performance drop by degrading the Transformer's capability to capture long-range contexts, which limits applications where low latency is a top priority for recognition.
To remedy such an effect, we need the feature representation for the input sequence that considers long-term contextual dependencies and anticipates future information, which corresponds to the properties of the Mask-CTC encoder network as described in Section II-C. To introduce the desirable properties of the Mask-CTC model into the streaming ASR, we propose a simple two-step training method as follows, which is also described in Fig. 1:
* **Stage 1 (Feature representation learning):** The Mask-CTC model is pre-trained to obtain an encoder network that can consider long-term dependencies and anticipate future information.
* **Stage 2 (Streaming ASR training):** The pre-trained Mask-CTC model is exploited to initialize the streaming ASR models. For Transformer-T, the acoustic encoder with the chunk-wise attention is initialized with the Mask-CTC encoder. For CBS-ASR, both the Mask-CTC encoder and CTC networks are used to initialize the corresponding components.
With the two-step training method above, we expect to inherit the characteristics of Mask-CTC to a streaming ASR model to capture long-term contextual information and reduce the latency dependency.
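In practice, the stage-2 initialization amounts to copying the pre-trained Mask-CTC encoder parameters (and, for CBS-ASR, also the CTC parameters) into the streaming model before fine-tuning. A toy sketch with plain dictionaries; the parameter names and the prefix-matching rule are illustrative assumptions, not the actual ESPnet checkpoint format:

```python
def init_from_pretrained(streaming_params, mask_ctc_params, prefixes=("encoder.", "ctc.")):
    """Copy every pre-trained Mask-CTC parameter whose name starts with one of the
    given prefixes into the streaming model's parameter dictionary (stage-2 init)."""
    copied = 0
    for name, value in mask_ctc_params.items():
        if name.startswith(prefixes) and name in streaming_params:
            streaming_params[name] = value
            copied += 1
    return copied

# Toy parameter dictionaries standing in for real checkpoints.
pretrained = {"encoder.layer0.w": [1.0], "ctc.out.w": [2.0], "decoder.layer0.w": [3.0]}
streaming = {"encoder.layer0.w": [0.0], "ctc.out.w": [0.0], "joint.w": [0.0]}
print(init_from_pretrained(streaming, pretrained), streaming)
```

For Transformer-T only the encoder prefix would be used, whereas for CBS-ASR both the encoder and CTC prefixes apply, mirroring the two cases in stage 2.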
## IV Experiments
Speech recognition experiments were conducted to examine the effectiveness of the Mask-CTC-based pre-training method using ESPnet2 [24, 25]. We also investigated the essential effect of the proposed pre-training method by studying the output token alignments of the streaming ASR models.
### _Datasets_
The models were trained and evaluated using the Wall Street Journal (WSJ) [26] dataset, which contains 81 hours of English read speech from newspaper articles, and the TED-LIUM2 (TED2) [27] dataset, which contains 207 hours of English spontaneous speech. For the output tokens, we used SentencePiece [28] to construct an 80-subword vocabulary for WSJ and a 500-subword vocabulary for TED2, respectively. For robust model training, we applied SpecAugment [29] to the input data.
### _Experimental setup_
For the Transformer-T model, the acoustic encoder was implemented with 12 Transformer encoder layers and a single LSTM layer for the label encoder. For streaming feature extraction, a chunk-wise attention mask was implemented and applied to the encoder layers as in [9]. The latency value was calculated as the product of the maximum look-ahead range (i.e., chunk size \(-1\)) and a frame rate of 40ms.
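A sketch of such a chunk-wise attention mask and the associated latency computation, assuming unrestricted attention to past chunks (real configurations may also cap the left context):

```python
import numpy as np

def chunkwise_attention_mask(num_frames, chunk_size):
    """Entry (t, s) is True iff frame t may attend to frame s: frames in the same
    chunk or in earlier chunks (unrestricted left context in this sketch)."""
    chunk_id = np.arange(num_frames) // chunk_size
    return chunk_id[None, :] <= chunk_id[:, None]

mask = chunkwise_attention_mask(num_frames=8, chunk_size=4)
print(mask.astype(int))
print("latency ~", (4 - 1) * 40, "ms")  # (chunk size - 1) frames at 40 ms per frame
```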
For WSJ experiments, the CBS-ASR model consisted of 6 Conformer encoder layers [30] and 6 Transformer decoder layers. The input block settings were \(N_{l}=8\), \(N_{c}=4\), and \(N_{r}\) varying from 0 to 6. The latency for CBS-ASR was calculated as the product of the maximum look-ahead range in the block (i.e., \(N_{c}+N_{r}-1\)) and a frame rate of 40ms. For TED2 experiments, the CBS-ASR model consisted of 12 Conformer encoder layers [30] and 6 Transformer decoder layers, with \(N_{r}\) set to 6.
For the pre-trained Mask-CTC model, the encoder was constructed with settings identical to those of the target streaming model. The CMLM decoder was built with six Transformer decoder layers. All the models were trained for 150 epochs, and the final models were obtained by averaging the snapshots of the ten epochs with the lowest loss for Transformer-T and with the best validation accuracy for CBS-ASR. For decoding, a beam search was conducted with a beam size of ten for all models. We used the word error rate (WER) for measuring the ASR performance.
### _Experimental results_
For both the Transformer-T and CBS-ASR systems, the performances of the following models are compared.
* **Baseline**[9, 13, 31]: Existing streaming ASR models, including Transformer-T and CBS-ASR. The parameters for all the components were randomly initialized.
* **Enhanced**: Streaming ASR models with Mask-CTC-based pre-training. Components of the streaming ASR were initialized with pre-trained Mask-CTC modules. For Transformer-T, the acoustic encoder was initialized with the Mask-CTC encoder. For CBS-ASR, both encoder and CTC modules were initialized with corresponding Mask-CTC modules.

Fig. 1: Illustration of Mask-CTC-based pre-training using Transformer-Transducer model. In stage 1, encoder is trained with Mask-CTC framework. In stage 2, Transformer-Transducer model is initialized with pre-trained encoder and fine-tuned with streaming objective.
The experimental results of Transformer-T and CBS-ASR are summarized in Table I and Table II. Non-streaming Transformer-T and CBS-ASR with 1240ms latency were used as lower bounds in the experiments.
The results on WSJ show that for both Transformer-T and CBS-ASR, the enhanced models outperformed the baseline models by achieving lower WERs under all latency settings, suggesting the accuracy enhancements introduced by the Mask-CTC-based pre-training method. For the WSJ dataset, 40ms and 80ms latency reductions were achieved for Transformer-T and CBS-ASR, respectively, while achieving recognition accuracy better than or equal to that of the baseline models. For instance, the enhanced Transformer-T with 120ms latency achieved lower WERs (16.6% for eval92 and 20.8% for dev93) than the WERs of the baseline with 160ms latency (16.8% for eval92 and 20.9% for dev93). Such results demonstrated that our method contributed to the construction of streaming ASR models with low latency and high accuracy. For the TED2 dataset, the enhanced CBS-ASR model also achieved a 0.2 percentage point WER reduction compared to the baseline model, which proves the general effectiveness of the proposed method regardless of the dataset. The results for systems with different architectures, such as Transducer and Encoder-Decoder, also demonstrated that the Mask-CTC-based pre-training was effective regardless of the model architecture.
### _Analysis of output token alignments_
The work of [32] argued that the streaming model tends to shift the token boundaries to the future side to obtain more contextual information, which results in a delay of the posterior probability spikes for the output tokens compared to non-streaming models. In contrast, if the encoder network learns the feature representations that anticipate future information, the output tokens can be confirmed earlier and the token boundary shifting issue should be remedied in some instances. Therefore, we measured the delay of the spike occurrences in streaming models by comparing them to the alignments obtained from a non-streaming model. The delay is expected to be reduced with the Mask-CTC-based pre-training method.
We conducted measurements on the dev93 validation set of WSJ. We used the baseline and enhanced models with 200ms latency settings for Transformer-T and compared their alignments with a non-streaming Transformer-T model. The alignments were obtained from the output of the joint network. For CBS-ASR, the latency was also set to 200ms, and we compared the output token boundaries between the baseline and enhanced models. The ASR alignments were obtained from the CTC predictions of CBS-ASR in the same manner as [32] and the reference alignments were obtained with the Montreal Forced Aligner [33]. Figure 2 illustrates one example of output token alignments given by Transformer-T. Here, the color in the background represents the reference alignment to the speech input. The non-streaming ASR (top) managed to predict accurate token alignments. However, the
baseline streaming ASR (bottom) showed a significant delay in the alignments, indicating token boundary shifting due to the lack of context. Meanwhile, our enhanced streaming ASR (middle), with a Mask-CTC-based pre-trained encoder network, largely improved the alignments of the streaming ASR. We calculated the average output delay reduction across the dev93 validation set for both Transformer-T and CBS-ASR. For Transformer-T, the spike output delay was reduced by 44ms, and for CBS-ASR, by 46ms. Such results help us to understand the knowledge learned from the Mask-CTC-based pre-training method and the reason for the latency reduction capability.

Fig. 2: Output token alignments of non-streaming and streaming Transformer-Transducer models.
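The delay measurement described above can be reproduced with a few lines of code once per-token emission times are extracted for a streaming and a non-streaming model. The sketch below is only a simplified illustration under our own assumptions (hypothetical frame indices, a 10ms frame shift, identical token sequences for both models); it is not the evaluation script used to obtain the reported 44ms and 46ms numbers.

```python
# Minimal sketch: average emission (spike) delay of a streaming ASR model
# relative to a non-streaming reference, given per-token emission frame indices.
# Assumes both models emit the same token sequence for each utterance.

FRAME_SHIFT_MS = 10  # assumed frame shift of the acoustic front-end

def average_spike_delay(streaming_frames, reference_frames, frame_shift_ms=FRAME_SHIFT_MS):
    """Mean delay (ms) of streaming token spikes w.r.t. a non-streaming reference."""
    assert len(streaming_frames) == len(reference_frames)
    delays = [(s - r) * frame_shift_ms
              for s, r in zip(streaming_frames, reference_frames)]
    return sum(delays) / len(delays)

# Toy example: frame indices at which each output token produced its posterior spike.
non_streaming = [12, 30, 55, 81]   # reference alignment (frames)
baseline      = [18, 37, 63, 90]   # baseline streaming model
enhanced      = [14, 32, 58, 85]   # streaming model with pre-trained encoder

print("baseline delay:", average_spike_delay(baseline, non_streaming), "ms")
print("enhanced delay:", average_spike_delay(enhanced, non_streaming), "ms")
```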
## V Conclusion
In this study, an attempt was made to demonstrate the effectiveness of Mask-CTC-based pre-training for achieving low latency and high accuracy in streaming speech recognition. Experimental results showed the effectiveness of the method on various model architectures, including Transformer-Transducer and contextual block streaming ASR. Furthermore, by studying the output spike timings of the streaming models, we discovered that more precise alignments of the input and output sequences are learned by the pre-training, which contributes to the latency reduction in streaming ASR.
|
2310.20681 | Phenomenology of Lepton Masses and Mixing with Discrete Flavor
Symmetries | The observed pattern of fermion masses and mixing is an outstanding puzzle in
particle physics, generally known as the flavor problem. Over the years, guided
by precision neutrino oscillation data, discrete flavor symmetries have often
been used to explain the neutrino mixing parameters, which look very different
from the quark sector. In this review, we discuss the application of
non-Abelian finite groups to the theory of neutrino masses and mixing in the
light of current and future neutrino oscillation data. We start with an
overview of the neutrino mixing parameters, comparing different global fit
results and limits on normal and inverted neutrino mass ordering schemes. Then,
we discuss a general framework for implementing discrete family symmetries to
explain neutrino masses and mixing. We discuss CP violation effects, giving an
update of CP predictions for trimaximal models with nonzero reactor mixing
angle and models with partial $\mu-\tau$ reflection symmetry, and constraining
models with neutrino mass sum rules. The connection between texture zeroes and
discrete symmetries is also discussed. We summarize viable higher-order groups,
which can explain the observed pattern of lepton mixing where the non-zero
$\theta_{13}$ plays an important role. We also review the prospects of
embedding finite discrete symmetries in the Grand Unified Theories and with
extended Higgs fields. Models based on modular symmetry are also briefly
discussed. A major part of the review is dedicated to the phenomenology of
flavor symmetries and possible signatures in the current and future experiments
at the intensity, energy, and cosmic frontiers. In this context, we discuss
flavor symmetry implications for neutrinoless double beta decay, collider
signals, leptogenesis, dark matter, as well as gravitational waves. | Garv Chauhan, P. S. Bhupal Dev, Ievgen Dubovyk, Bartosz Dziewit, Wojciech Flieger, Krzysztof Grzanka, Janusz Gluza, Biswajit Karmakar, Szymon Zięba | 2023-10-31T17:47:19Z | http://arxiv.org/abs/2310.20681v3 | # Phenomenology of Lepton Masses and Mixing with Discrete Flavor Symmetries
###### Abstract
The observed pattern of fermion masses and mixing is an outstanding puzzle in particle physics, generally known as the _flavor problem_. Over the years, guided by precision neutrino oscillation data, discrete flavor symmetries have often been used to explain the neutrino mixing parameters, which look very different from the quark sector. In this review, we discuss the application of non-Abelian finite groups to the theory of neutrino masses and mixing in the light of current and future neutrino oscillation data. We start with an overview of the neutrino mixing parameters, comparing different global fit results and limits on normal and inverted neutrino mass ordering schemes. Then, we discuss a general framework for implementing discrete family symmetries to explain neutrino masses and mixing. We discuss CP violation effects, giving an update of CP predictions for trimaximal models with nonzero reactor mixing angle and models with partial \(\mu-\tau\) reflection symmetry, and constraining models with neutrino mass sum rules. The connection between texture zeroes and discrete symmetries is also discussed. We summarize viable higher-order groups, which can explain the observed pattern of lepton mixing where the non-zero \(\theta_{13}\) plays an important role. We also review the prospects of embedding finite discrete symmetries in the Grand Unified Theories and with extended Higgs fields. Models based on modular symmetry are also briefly discussed. A major part of the review is dedicated to the phenomenology of flavor symmetries and possible signatures in the current and future experiments at the intensity, energy, and cosmic frontiers. In this context, we discuss flavor symmetry implications for neutrinoless double beta decay, collider signals, leptogenesis, dark matter, as well as gravitational waves.
keywords: Discrete Symmetries, Flavor mixing, CP Violation, Neutrino Oscillation, Phenomenology
Footnote †: journal: Progress in Particle and Nuclear Physics
###### Contents
* 1 Introduction
* 2 Flavor Symmetry and Lepton Masses and Mixing: Theory
* 2.1 General Framework
* 2.2 Flavor Symmetry, Nonzero \(\theta_{13}\) and Nonzero \(\delta_{\rm CP}\)
* 2.3 Flavor Symmetry and Neutrino Mass Models
* 2.4 Flavor and Generalized CP Symmetries
* 2.5 Higher Order Discrete Groups
* 2.6 Flavor Symmetry and Grand Unified Theory
* 2.7 Flavor Symmetry and the Higgs Sector
* 2.8 Modular Symmetry
* 3 Flavor Symmetry at Intensity Frontier
* 3.1 Neutrino Oscillation Experiments
* 3.2 Neutrinoless Double Beta Decay
* 3.3 Lepton Flavor and Universality Violation
* 4 Flavor Symmetry at Energy Frontier
* 4.1 Example Group : \(\Delta(6n^{2})\)
* 4.2 Decay Lengths and Branching Ratios of RHNs
* 4.3 Lepton Flavor Violation at Colliders
* 4.4 Correlation between Collider Signals and Leptogenesis
* 4.5 Collider Signals in Other Flavor Models
* 4.6 Higgs to Diphoton Decay
* 5 Flavor Symmetry and Cosmic Frontier
* 5.1 Flavor Symmetry and Dark Matter
* 5.2 Flavor Symmetry and Baryon Asymmetry of the Universe
* 5.3 Flavor Symmetry and Gravitational Waves
* 6 Summary and Outlook
* A \(A_{4}\) symmetry
## 1 Introduction
Over the past few decades, we have seen spectacular progress in understanding neutrinos. The neutrino quantum oscillation phenomenon established that at least two neutrino quantum states are massive, although the masses are tiny, at the sub-electronvolt level, \(m_{\nu}\lesssim{\cal O}(0.1)\) eV [1, 2]. A tremendous effort led to this result, as earlier experimental studies of neutrino physics faced the challenge of low event statistics for a scarce set of observables. It started around half a century ago with the pioneering Homestake experiment [3] and the so-called solar neutrino problem [4] and culminated with the 2015
Nobel Prize in Physics for the discovery of neutrino oscillations by the Super-Kamiokande and SNO collaborations [5, 6, 7], which showed that neutrinos have mass.
The simplest neutrino mass theory is based on the three-neutrino (\(3\nu\)) paradigm with the assumption that flavor states (\(\nu_{e},\nu_{\mu},\nu_{\tau}\)) are mixed with massive states (\(\nu_{1},\nu_{2},\nu_{3}\)) with definite masses (\(m_{1},m_{2},m_{3}\)), at least two of which are non-zero. The standard parametrization of the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) unitary mixing matrix1 reads [11, 12, 13]
Footnote 1: In an equivalent parametrization [8, 9, 10], the lepton mixing matrix can be written in a ‘symmetrical’ form where all three CP violating phases are ‘physical’.
\[U =U(\theta_{23})U(\theta_{13},\delta_{\rm CP})U(\theta_{12})U_{M}( \alpha_{1},\alpha_{2})\] \[=\begin{pmatrix}1&0&0\\ 0&c_{23}&s_{23}\\ 0&-s_{23}&c_{23}\end{pmatrix}\begin{pmatrix}c_{13}&0&s_{13}e^{-i\delta_{\rm CP} }\\ 0&1&0\\ -s_{13}e^{i\delta_{\rm CP}}&0&c_{13}\end{pmatrix}\begin{pmatrix}c_{12}&s_{12} &0\\ -s_{12}&c_{12}&0\\ 0&0&1\end{pmatrix}\begin{pmatrix}e^{i\alpha_{1}}&0&0\\ 0&e^{i\alpha_{2}}&0\\ 0&0&1\end{pmatrix}, \tag{1.1}\]
where \(M\) stands for Majorana neutrinos, \(c_{ij}\equiv\cos(\theta_{ij})\), \(s_{ij}\equiv\sin(\theta_{ij})\), and the Euler rotation angles \(\theta_{ij}\) can be taken without loss of generality from the first quadrant, \(\theta_{ij}\in[0,\pi/2]\), and the Dirac CP phase \(\delta_{\rm CP}\) and Majorana phases \(\alpha_{1},\alpha_{2}\) are in the range \([0,2\pi]\)[14]. This choice of parameter regions is independent of matter effects [15].
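As a quick numerical illustration of the parametrization in Eq. (1.1), the sketch below multiplies the three rotations and the Majorana phase matrix and checks unitarity. The function name `pmns` and the example angles (rounded, NuFit-like values) are our own illustrative choices and are not taken from the review.

```python
import numpy as np

def pmns(theta12, theta13, theta23, delta_cp, alpha1=0.0, alpha2=0.0):
    """PMNS matrix of Eq. (1.1): U = U(th23) U(th13, dCP) U(th12) U_M(a1, a2)."""
    s12, c12 = np.sin(theta12), np.cos(theta12)
    s13, c13 = np.sin(theta13), np.cos(theta13)
    s23, c23 = np.sin(theta23), np.cos(theta23)
    u23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]], dtype=complex)
    u13 = np.array([[c13, 0, s13 * np.exp(-1j * delta_cp)],
                    [0, 1, 0],
                    [-s13 * np.exp(1j * delta_cp), 0, c13]], dtype=complex)
    u12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]], dtype=complex)
    u_m = np.diag([np.exp(1j * alpha1), np.exp(1j * alpha2), 1.0])
    return u23 @ u13 @ u12 @ u_m

# Illustrative angles (converted to radians), close to current best-fit values
U = pmns(np.radians(33.4), np.radians(8.6), np.radians(49.0), np.radians(230.0))
print(np.allclose(U.conj().T @ U, np.eye(3)))  # True: U is unitary by construction
```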
In the \(3\nu\) paradigm, there are two non-equivalent mass orderings: normal mass ordering (NO) with \(m_{1}<m_{2}<m_{3}\) and inverted mass ordering (IO) with \(m_{3}<m_{1}<m_{2}\). The neutrino masses can be further classified into normal hierarchical mass spectrum (NH) with \(m_{1}\ll m_{2}<m_{3}\), inverted hierarchical mass spectrum (IH) with \(m_{3}\ll m_{1}<m_{2}\) and quasi-degenerate mass spectrum (QD) with \(m_{1}\simeq m_{2}\simeq m_{3}\)[14]. Within QD, quasi-degenerate NH (QDNH) mass spectrum with \(m_{1}\lesssim m_{2}\lesssim m_{3}\) and quasi-degenerate IH (QDIH) mass spectrum with \(m_{3}\lesssim m_{1}\lesssim m_{2}\)[16] can be distinguished. NO/IO notation is sometimes used interchangeably with NH/IH in the literature. However, NO/IO notation is more general since both NH and QDNH are NO and IH and QDIH are IO. For more discussion, see section 14.7 in the PDG review [14]. In what follows, we will use the NO/IO notation. Neutrino masses can be expressed by the smallest of the three neutrino masses (\(m_{0}\)) and experimentally determined mass-squared differences (\(\Delta m_{21}^{2},\Delta m_{31}^{2},\Delta m_{32}^{2}\))
\[\begin{array}{ll}\text{Normal mass ordering (NO):}&m_{1}=m_{0}\,,\quad m_{2}=\sqrt{m_{0}^{2}+\Delta m_{21}^{2}}\,,\quad m_{3}=\sqrt{m_{0}^{2}+\Delta m_{31}^{2}}\,,\\ \text{Inverted mass ordering (IO):}&m_{3}=m_{0}\,,\quad m_{1}=\sqrt{m_{0}^{2}-\Delta m_{32}^{2}-\Delta m_{21}^{2}}\,,\quad m_{2}=\sqrt{m_{0}^{2}-\Delta m_{32}^{2}}\,.\end{array} \tag{1.2}\]

The current global fits of the oscillation data summarized in Tab. 1.1 prefer NO over IO at an overall level of \(\sim 2.5\sigma\), corresponding to \(\Delta\chi^{2}\sim 6.4-6.5\). A lower-octant best fit of the atmospheric angle \(\theta_{23}\) is favored for NO in the NuFit 5.2 [19, 20] and Capozzi et al. [23] results. These analyses include the most recent Super-Kamiokande (SK) data (with a \(\sin^{2}\theta_{23}<0.5\) preference) [24, 25]. The \(\theta_{23}\) best-fit octant preference discussed above is illustrated in Fig. 1.1 together with other neutrino parameters as given in Tab. 1.1.
The initial results by the T2K collaboration [28] indicated CP violation in the lepton sector, a preference for NO at the \(3\sigma\) level and also for \(\theta_{23}\) in the second octant. These data are confirmed with an improved analysis [29, 30]; they continue to prefer normal mass ordering and upper octant of \(\theta_{23}\) with a nearly maximal \(\delta_{\rm CP}\). Also, the NO\(\nu\)A collaboration [31] reports on nonvanishing \(\delta_{\rm CP}\) effects, though the best-fit values of CP phase differ between the two groups. Both T2K
and NO\(\nu\)A prefer NO over IO, but T2K prefers \(\delta_{\rm CP}=-90^{\circ}\) whereas NO\(\nu\)A prefers a region \(\delta_{\rm CP}\sim 180^{\circ}\).3 Joint fits between NO\(\nu\)A+T2K and Super-K+T2K are ongoing, with the aim to obtain improved oscillation parameter constraints due to resolved degeneracies, and to understand potentially non-trivial systematic correlations [33]. The next-generation oscillation experiments, such as JUNO [34], Hyper-K [35], DUNE [36] and the IceCube upgrade [37], will significantly improve the prospects of measuring \(\delta_{\rm CP}\) and determining the mass ordering and the octant of \(\theta_{23}\) [33].

Figure 1.1: Schematics of neutrino oscillation data from Tab. 1.1. On the plots, green represents best fits, red \(1\sigma\) ranges, and blue \(3\sigma\) ranges. The golden vertical lines symbolize the historic tribimaximal (TBM) pattern values, which will be explained in Chapter 2. The plot is an update of plots presented in Refs. [26, 27].
Footnote 3: The mild tension between T2K and NO\(\nu\)A could in principle be resolved by invoking new physics [32].
Oscillation experiments do not put a limit on the Majorana phases \(\alpha_{1}\), \(\alpha_{2}\). However, predictions for the Majorana phases can be obtained using the neutrinoless double beta decay in conjunction with information on the neutrino masses [38, 39]. Also, the oscillation experiments are only sensitive to the squared mass differences, and not to the individual masses of neutrinos. Therefore, _the lightest neutrino mass \(m_{0}\) is a free parameter and the other two masses are determined through_ Eq. (1.2). However, there are limits on the absolute neutrino mass scale from other experiments, namely, from tritium beta decay [40], neutrinoless double beta decay [41], and precision measurements of the cosmic microwave background (CMB) and large-scale structure (LSS) [42]. We discuss them now.
* A direct and model-independent laboratory constraint on the neutrino mass can be derived from the kinematics of beta decay or electron capture [40]. These experiments measure an effective electron neutrino mass \[m_{\beta}^{2}=\frac{\sum_{i}m_{i}^{2}|U_{ei}|^{2}}{\sum_{i}|U_{ei}|^{2}}=\sum _{i}m_{i}^{2}|U_{ei}|^{2}\,,\] (1.3) _assuming \(U\) is unitary_. This can be expressed through oscillation parameters as [43] \[m_{\beta}^{2}=c_{13}^{2}c_{12}^{2}m_{1}^{2}+c_{13}^{2}s_{12}^{2}m_{2}^{2}+s_{ 13}^{2}m_{3}^{2}=\begin{cases}\text{NO:}&m_{0}^{2}+\Delta m_{21}^{2}c_{13}^{2} s_{12}^{2}+\Delta m_{3\ell}^{2}s_{13}^{2}\,,\\ \text{IO:}&m_{0}^{2}-\Delta m_{21}^{2}c_{13}^{2}c_{12}^{2}-\Delta m_{3\ell}^{ 2}c_{13}^{2}.\end{cases}\] (1.4) Here \(\ell=1\) (2) for NO (IO) in \(\Delta m_{3\ell}^{2}\). The current oscillation data impose an ultimate lower bound of \(m_{\beta}>0.008\) (0.047) eV for NO (IO). At present, the best direct limit on \(m_{\beta}\) comes from the tritium beta decay experiment KATRIN: \(m_{\beta}<0.8\) eV at 90% CL [44], with projected sensitivity down to \(m_{\beta}<0.2\) eV at 90% CL [45]. The future Project 8 experiment using the Cyclotron Radiation Emission Spectroscopy (CRES) technique is expected to reach a sensitivity for \(m_{\beta}\) down to 0.04 eV [46]. Some recent advances in the CRES technique were reported in Ref. [47]. Fig. 1.2 (bottom panel) summarizes present and future experimental bounds with corresponding projections to \(m_{0}\) axis in NO scenario. As we can see, IO is completely within the future Project 8 sensitivity.
* If neutrinos are Majorana particles, neutrinoless double beta decay (\(0\nu\beta\beta\)) experiments [41] can also provide direct information on neutrino masses via the effective Majorana mass [43] \[m_{\beta\beta}=\Big{|}\sum_{i}m_{i}U_{ei}^{2}\Big{|}=\Big{|}m_{1}c_{13}^{2}c_{12}^{2}e^{i2\alpha_{1}}+m_{2}c_{13}^{2}s_{12}^{2}e^{i2\alpha_{2}}+m_{3}s_{13}^{2}e^{-i2\delta_{\rm CP}}\Big{|}\] (1.5) \[=\begin{cases}\text{NO:}&m_{0}\left|c_{13}^{2}c_{12}^{2}e^{i2(\alpha_{1}-\delta_{\rm CP})}+\sqrt{1+\frac{\Delta m_{21}^{2}}{m_{0}^{2}}}\,c_{13}^{2}s_{12}^{2}e^{i2(\alpha_{2}-\delta_{\rm CP})}+\sqrt{1+\frac{\Delta m_{31}^{2}}{m_{0}^{2}}}\,s_{13}^{2}\right|\\ \text{IO:}&m_{0}\left|\sqrt{1-\frac{\Delta m_{32}^{2}+\Delta m_{21}^{2}}{m_{0}^{2}}}\,c_{13}^{2}c_{12}^{2}e^{i2(\alpha_{1}-\delta_{\rm CP})}+\sqrt{1-\frac{\Delta m_{32}^{2}}{m_{0}^{2}}}\,c_{13}^{2}s_{12}^{2}e^{i2(\alpha_{2}-\delta_{\rm CP})}+s_{13}^{2}\right|\!.\end{cases}\] The present best upper limit on \(m_{\beta\beta}\) comes from the KamLAND-Zen experiment using \({}^{136}\)Xe: \(m_{\beta\beta}<0.036-0.156\) eV at 90% CL [48], where the range is due to the nuclear matrix element (NME) uncertainties. Several next-generation experiments are planned with different isotopes [51], with ultimate discovery sensitivities to \(m_{\beta\beta}\) down
to 0.005 eV. Based on Tab. 1.1, the update for \(m_{\beta\beta}\) predictions from Eq. (1.5) as a function of the lightest neutrino mass is plotted in Fig. 1.2. The light gray shaded region shows the current upper limit range for \(m_{\beta\beta}\) (\(36-156\) meV at 90% CL) from KamLAND-Zen [48] (comparable limits were obtained from GERDA [52]), whereas the light magenta shaded region gives the future upper limit range for \(m_{\beta\beta}\) (\(4.7-20.3\) meV at 90% CL) from nEXO [49], with the shaded area in each case arising from NME uncertainties and corresponding projections to the \(m_{0}\) axis in the NO scenario. Comparable future sensitivities are discussed for other experiments, such as LEGEND-1000 [53] and THEIA [54], not shown in this plot. The dark gray shaded region is disfavored by KATRIN [44]. The vertical dashed lines are the cosmological upper limits (for NO and IO) on the sum of neutrino masses (\(\sum m_{i}<0.12\) eV at 95% CL) from Planck [50]; see the next item and Fig. 1.3.
* Massive neutrinos impact CMB and LSS. Thus, precision cosmological data restrict neutrino masses [55]. Here we use the most stringent limit from Planck [50]: \(\sum m_{i}<0.12\) eV at 95% CL (Planck TT,TE,EE+lowE+lensing+BAO). A slightly stronger limit of \(\sum m_{i}<0.09\) eV at 95% CL has been obtained in Refs. [56; 57], while Ref. [58] has argued in favor of a weaker limit of \(\sum m_{i}<0.26\) eV. Sum of light neutrino masses \(\sum m_{i}\) is plotted in Fig. 1.3 against the lightest neutrino mass \(m_{0}\) (left), effective electron neutrino mass \(m_{\beta}\) (middle) and effective Majorana
mass \(m_{\beta\beta}\) (right) with the NuFit \(3\sigma\) oscillation parameters from Tab. 1.1 for both NO and IO scenarios. The horizontal gray-shaded region represents the current Planck upper limit [50]. Future cosmology sensitivity forecast is represented by the brown-green shaded area, with an uncertainty of the order of 15 meV [59] (see also [60]). The dashed lines represent the lowest allowed values of \(\sum m_{i}=58\) meV (NO) and 97 meV (IO) by current oscillation data. A short numerical sketch of how these observables follow from \(m_{0}\) is given below the list.

Figure 1.2: The effective electron neutrino mass \(m_{\beta}\) (Eq. (1.4)) and the effective Majorana neutrino mass \(m_{\beta\beta}\) (Eq. (1.5)) plotted against the lightest neutrino mass \(m_{0}\). The gray, reddish and light reddish shaded regions represent the KATRIN upper bound (\(m_{\beta}<0.8\) eV at 90% CL) [44], KATRIN future bound (\(m_{\beta}<0.2\) eV at 90% CL) [45] and Project 8 future bound (\(m_{\beta}<0.04\) eV) [46] respectively. The light gray and light magenta shaded regions represent the current upper limit range from KamLAND-Zen [48] (\(36-156\) meV at 90% CL) and future sensitivity range from nEXO [49] (\(4.7-20.3\) meV at 90% CL) respectively. All regions are presented with NO projections to \(m_{0}\) axis. The cosmology NO/IO upper limits for \(m_{0}\) correspond to the Planck data [50] (see the discussion in item (iii)).
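As mentioned in the list above, the observables \(m_{\beta}\), \(m_{\beta\beta}\) and \(\sum m_{i}\) of Eqs. (1.2)-(1.5) are simple functions of the lightest mass \(m_{0}\) and the oscillation parameters. The following is a minimal sketch for normal ordering; the helper names, the rounded input values and the choice of vanishing CP phases are our own assumptions and were not used to produce the figures.

```python
import numpy as np

# Rounded oscillation parameters for NO (eV^2 and degrees), illustrative only
dm21sq, dm31sq = 7.4e-5, 2.5e-3
s12sq, s13sq = np.sin(np.radians(33.4))**2, np.sin(np.radians(8.6))**2
c12sq, c13sq = 1 - s12sq, 1 - s13sq

def masses_no(m0):
    """Mass spectrum for normal ordering, Eq. (1.2)."""
    return m0, np.sqrt(m0**2 + dm21sq), np.sqrt(m0**2 + dm31sq)

def m_beta(m1, m2, m3):
    """Effective electron neutrino mass, Eq. (1.4)."""
    return np.sqrt(c13sq * c12sq * m1**2 + c13sq * s12sq * m2**2 + s13sq * m3**2)

def m_betabeta(m1, m2, m3, a1=0.0, a2=0.0, dcp=0.0):
    """Effective Majorana mass, Eq. (1.5), for given Majorana and Dirac phases."""
    return abs(m1 * c13sq * c12sq * np.exp(2j * a1)
               + m2 * c13sq * s12sq * np.exp(2j * a2)
               + m3 * s13sq * np.exp(-2j * dcp))

m1, m2, m3 = masses_no(0.01)        # lightest mass m0 = 10 meV
print("sum m_i   :", m1 + m2 + m3)  # compare with the Planck bound of 0.12 eV
print("m_beta    :", m_beta(m1, m2, m3))
print("m_betabeta:", m_betabeta(m1, m2, m3))
```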
Figure 1.4: Flavor puzzle. Sizes of circles represent magnitudes of mixing among quarks (on the left) and neutrinos (on the right). Figure taken from the arXiv version of Ref. [61].

Figure 1.3: Sum of light neutrino masses \(\sum m_{i}\) plotted against the lightest neutrino mass \(m_{0}\) (left), effective electron neutrino mass \(m_{\beta}\) (middle) and effective Majorana mass \(m_{\beta\beta}\) (right) with the NuFit \(3\sigma\) oscillation parameters from Tab. 1.1. The horizontal gray-shaded region represents the current Planck upper limit [50]. Future cosmology sensitivity forecast is represented by a brown-green shaded area, with an uncertainty of the order of 15 meV [59] (see also Ref. [60]). The lowest allowed values of \(\sum m_{i}\) for NO and IO from NuFit data are also shown by the dashed lines. Other current and future exclusion regions shown here are described in Fig. 1.2. The plot is an updated variation of the plot presented in Ref. [57].

Understanding the pattern of neutrino mixing is crucial because it is part of the long-standing flavor puzzle. As seen with the naked eye in Fig. 1.4, the mixing patterns for quarks and neutrinos are intrinsically very different. The neutrino case is more "democratic" except for the \(U_{e3}\) parameter, which is proportional to the small \(s_{13}\) in Eq. (1.1). In fact, before the non-zero \(\theta_{13}\) discovery by Daya Bay [62] and RENO [63] in 2012, the reactor mixing angle \(\theta_{13}\) was
thought to be vanishingly small. Guided by the \(\theta_{13}\sim 0^{\circ}\) assumption and to be consistent with the observed solar and atmospheric mixing angles, several flavor mixing schemes were postulated. By substituting \(\theta_{13}=0^{\circ}\) and \(\theta_{23}=45^{\circ}\) in the general lepton mixing matrix given in Eq. (1.1), up to the phase matrix \(U_{M}\), most of the popular mixing schemes such as bi-maximal (BM) [64, 65, 66, 67], tribimaximal (TBM) [68, 69], hexagonal (HG) [70], and golden ratio (GR) [71, 72, 73, 74, 75, 76] mixing schemes can be altogether written as
\[U_{0}=\left(\begin{array}{ccc}c_{12}&s_{12}&0\\ -\frac{s_{12}}{\sqrt{2}}&\frac{c_{12}}{\sqrt{2}}&-\frac{1}{\sqrt{2}}\\ -\frac{s_{12}}{\sqrt{2}}&\frac{c_{12}}{\sqrt{2}}&\frac{1}{\sqrt{2}}\end{array} \right). \tag{1.6}\]
Substituting \(s_{12}=1/\sqrt{2},\ 1/\sqrt{3},\ 1/2\), and \(\tan\theta_{12}=1/\varphi\) (with \(\varphi=(1+\sqrt{5})/2\) being the golden ratio), one can explicitly obtain the fixed mixing schemes BM, TBM, HG and GR,4 respectively. In each case the Dirac CP phase \(\delta_{\rm CP}\) is undefined since \(\theta_{13}=0\); the matrix can easily be extended to include the non-vanishing Majorana phases defined in Eq. (1.1) via the replacement \(U_{0}\to U_{0}U_{M}\). Using the diagonalization relation
Footnote 4: There exists an alternate version of GR mixing where \(\cos\theta_{12}=\varphi/2\)[74, 75].
\[m_{\nu}=U_{0}^{*}{\rm diag}(m_{1},m_{2},m_{3})U_{0}^{\dagger}, \tag{1.7}\]
such a mixing matrix can easily diagonalize a \(\mu-\tau\) symmetric (transformations \(\nu_{e}\rightarrow\nu_{e}\), \(\nu_{\mu}\rightarrow\nu_{\tau}\), \(\nu_{\tau}\rightarrow\nu_{\mu}\) under which the neutrino mass term remains unchanged) neutrino mass matrix of the form [61]
\[m_{\nu}=\left(\begin{array}{ccc}A&B&B\\ B&C&D\\ B&D&C\end{array}\right), \tag{1.8}\]
where the elements \(A,B,C\) and \(D\) are in general complex. With \(A+B=C+D\) this matrix yields the TBM mixing pattern
\[U_{\rm TBM}=\left(\begin{array}{ccc}\sqrt{\frac{2}{3}}&\frac{1}{\sqrt{3}}&0 \\ -\frac{1}{\sqrt{6}}&\frac{1}{\sqrt{3}}&-\frac{1}{\sqrt{2}}\\ -\frac{1}{\sqrt{6}}&\frac{1}{\sqrt{3}}&\frac{1}{\sqrt{2}}\end{array}\right). \tag{1.9}\]
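The claim around Eqs. (1.7)-(1.9) is easy to verify numerically: any real matrix of the \(\mu-\tau\) symmetric form (1.8) satisfying \(A+B=C+D\) is diagonalized by \(U_{\rm TBM}\). A minimal check with arbitrarily chosen entries (our own toy numbers, carrying no physical meaning) is given below.

```python
import numpy as np

# mu-tau symmetric mass matrix, Eq. (1.8), with the extra condition A + B = C + D
A, B, C = 0.7, 0.2, 0.4
D = A + B - C
m_nu = np.array([[A, B, B],
                 [B, C, D],
                 [B, D, C]])

u_tbm = np.array([[np.sqrt(2/3),  1/np.sqrt(3),  0],
                  [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)],
                  [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)]])

# Eq. (1.7) with real entries: U^T m_nu U should be diagonal
print(np.round(u_tbm.T @ m_nu @ u_tbm, 10))  # off-diagonal entries vanish
```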
Such first-order approximations of the neutrino oscillation data motivated theorists to find other symmetry-based aesthetic frameworks which can lead towards these fixed mixing matrices.
In this regard, non-Abelian discrete groups turned out to be popular as appropriate flavor symmetries for the lepton sector. Discrete groups have always played a key role in physics starting from crystallographic groups in solid-state physics, to discrete symmetries such as \(C\), \(P\), and \(T\), which have shaped our understanding of nature. In neutrino physics, for a long time, various discrete groups such as \(S_{3}\), \(A_{4}\), \(S_{4}\), \(A_{5}\), \(T^{\prime}\), \(\Delta(27)\), \(D_{n}\), \(T_{7}\), \(\Delta(6n^{2})\)[77, 26, 78] etc. have been extensively used to explain fermion mixing. Among the various discrete groups used for this purpose, \(A_{4}\) emerged as the most widely adopted choice initially proposed as an underlying family symmetry for quark sector [79, 80]. Interestingly, in the last decade, thanks to the reactor neutrino experiments Double Chooz [81], Daya Bay [62], and RENO [63] (also T2K [82], MINOS [83], and others [84]), the reactor neutrino mixing angle is conclusively measured to be 'large' (see Tab. 1.1). In addition to this, as mentioned earlier, a non-zero value of the Dirac CP phase \(\delta_{\rm CP}\) is favored by the oscillation experiments. Such an observation has ruled out the possibility of simple neutrino mixing schemes
like in Eq. (1.6). Therefore, it is consequential to find modifications, corrections or the successors of the above mixing schemes which are still viable; this will be discussed in the next Chapter. Models based on non-Abelian discrete flavor symmetries often yield interesting predictions and correlations among the neutrino masses, mixing angles and CP phases. Involvement of such studies may have broader applications in various aspects of cosmology (matter-antimatter asymmetry of the Universe, DM, gravitational waves), collider physics and other aspects of particle physics, which will be discussed in subsequent Chapters.
Moreover, the light neutrino sector and masses connected with three families of neutrinos and charged leptons can be a reflection of a more general theory where weakly interacting (sterile) neutrinos exist with much higher masses (GeV, TeV, or higher up to the GUT scale). A related problem is the symmetry of the _full_ neutrino mass matrix, including the sterile sector. In this framework, the known neutrino mass and flavor states can be denoted by \(|\nu_{i}^{(m)}\rangle\) and \(|\nu_{\alpha}^{(f)}\rangle\), respectively, where \(i=1,2,3\) and \(\alpha=e,\mu,\tau\). Any extra, beyond SM (BSM) sterile mass and flavor states (typically much heavier than the active ones) can be denoted by \(|\widetilde{\nu}_{j}^{(m)}\rangle\) and \(|\widetilde{\nu}_{\beta}^{(f)}\rangle\), respectively for \(j,\beta=1,\ldots,n_{R}\). In this general scenario mixing between an extended set of neutrino mass states \(\{|\nu_{i}^{(m)}\rangle,|\widetilde{\nu}_{\beta}^{(m)}\rangle\}\) with flavor states \(\{|\nu_{\alpha}^{(f)}\rangle,|\widetilde{\nu}_{\beta}^{(f)}\rangle\}\) is described by
\[\begin{pmatrix}|\nu_{\alpha}^{(f)}\rangle\\ |\widetilde{\nu}_{\beta}^{(f)}\rangle\end{pmatrix}=\begin{pmatrix}U&V_{h}\\ V_{hl}&V_{hh}\end{pmatrix}\begin{pmatrix}|\nu_{i}^{(m)}\rangle\\ |\widetilde{\nu}_{j}^{(m)}\rangle\end{pmatrix}\equiv\mathcal{U}\begin{pmatrix} |\nu_{i}^{(m)}\rangle\\ |\widetilde{\nu}_{j}^{(m)}\rangle\end{pmatrix}. \tag{1.10}\]
The SM flavor states \(|\nu_{\alpha}^{(f)}\rangle\) are then given by
\[|\nu_{\alpha}^{(f)}\rangle=\sum_{i=1}^{3}\underbrace{(U)_{\alpha i}\,|\nu_{i} ^{(m)}\rangle}_{\text{SM part}}+\sum_{j=1}^{n_{R}}\underbrace{(V_{h})_{\alpha j }\,|\widetilde{\nu}_{j}^{(m)}\rangle}_{\text{BSM part}}. \tag{1.11}\]
The mixing matrix \(\mathcal{U}\) in (1.10) diagonalizes a general neutrino mass matrix
\[M_{\nu}=\left(\begin{array}{cc}M_{L}&M_{D}\\ M_{D}^{T}&M_{R}\end{array}\right), \tag{1.12}\]
using a congruence transformation
\[\mathcal{U}^{T}M_{\nu}\mathcal{U}\simeq\text{diag}(m_{i},M_{j}). \tag{1.13}\]
The structure and symmetry of the heavy neutrino sector \(M_{R}\) in Eq. (1.12), together with \(M_{D}\) variants, influence the masses and mixing of the light sector, beginning with the seesaw type of models [85]. The extended _unitary_ mixing matrix \(\mathcal{U}\) in (1.10), with nonzero submatrices \(V\), makes \(U\) nonunitary. In fact, oscillation experiments do not exclude such cases, giving the following ranges of elements [86] (present analysis and future projections):
\[\left|U^{\text{Current}}\right|_{3\sigma}^{2}=\left(\begin{array}{ccc}[0.606,0.742]&[0.265,0.337]&[0.020,0.024]\\ [0.051,0.270]&[0.198,0.484]&[0.392,0.620]\\ [0.028,0.469]&[0.098,0.685]&[0.140,0.929]\end{array}\right), \tag{1.14}\]
\[\left|U^{\text{Future}}\right|_{3\sigma}^{2}=\left(\begin{array}{ccc}[0.653,0.699]&[0.291,0.311]&[0.020,0.024]\\ [0.074,0.108]&[0.355,0.454]&[0.447,0.561]\\ [0.129,0.359]&[0.212,0.423]&[0.349,0.595]\end{array}\right). \tag{1.15}\]
As can be seen from these numbers, the current (and future) precision on the nonunitarity of the neutrino mixing matrix is still far away from the ultra-high precision achieved in the quark sector [87]. The interval matrices (1.14) and (1.15) include nonunitary cases. This information can be used to derive bounds on the mixing between the three known neutrino flavors and
additional neutrino states [88, 89]. Neutrino mixing constructions based on discrete flavor symmetries discussed in the next chapters are based on unitary \(3\times 3\) mixing matrices and variants of non-unitary distortions.
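As a crude illustration of the statement that the interval matrices (1.14)-(1.15) include nonunitary cases, the sketch below sums the quoted \(3\sigma\) intervals of \(|U_{\alpha i}|^{2}\) row by row; a unitary matrix would require each row sum to equal one exactly. Treating the entries as independent intervals ignores their correlations, so this is only indicative; the variable names and the printout are our own.

```python
import numpy as np

# 3-sigma ranges of |U_alpha_i|^2 from Eq. (1.14) (current data)
lo = np.array([[0.606, 0.265, 0.020],
               [0.051, 0.198, 0.392],
               [0.028, 0.098, 0.140]])
hi = np.array([[0.742, 0.337, 0.024],
               [0.270, 0.484, 0.620],
               [0.469, 0.685, 0.929]])

# A unitary U needs each row of |U|^2 to sum to exactly 1; intervals that
# extend away from 1 leave room for nonunitary configurations.
for flavor, low, high in zip(["e", "mu", "tau"], lo.sum(axis=1), hi.sum(axis=1)):
    print(f"row {flavor}: sum of |U|^2 allowed in [{low:.3f}, {high:.3f}]")
```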
The number of model-building options available with discrete flavor symmetries is vast, and many have already been thoroughly reviewed in the literature. In Ref. [77], the authors have presented a pedagogical review of various non-Abelian discrete groups, including their characters, conjugacy classes, representation, and tensor products, which are essential for particle physics phenomenology. In Ref. [78], the authors discussed the application of non-Abelian finite groups to the theory of neutrino masses and mixing with \(\theta_{13}\sim 0\) to reproduce fixed mixing schemes like TBM and BM, based on finite groups like \(A_{4},S_{4}\), etc. After measurement of the reactor mixing angle, in Ref. [26], the authors reviewed various discrete family symmetries and their (in)direct model-building approaches. They also discussed combining grand unified theories with discrete family symmetry to describe all quark and lepton masses and mixing. In Ref. [90], the author reviewed the scenarios for flavor symmetry combined with generalized CP symmetry to understand the observed pattern of neutrino mixing and the related predictions for neutrino mixing angles and leptonic Dirac CP violation. Finally, along with conventional flavor symmetric approaches, in Ref. [91], the authors also reviewed the modular invariance approach to the lepton sector. Many other excellent reviews also partially cover issues discussed here [92, 93, 94, 95, 96, 97, 98]; see also the Snowmass contributions [99, 100, 101, 102]. Apart from the update to the mentioned reviews, our main focus in this review is a discussion on the phenomenology and testability of discrete flavor symmetries at the energy, intensity, and cosmic frontier experiments.
The rest of the review is organized as follows. In Chapter 2, we present a general framework for understanding neutrino masses and mixing with non-Abelian discrete flavor symmetries, discuss the compatibility of a few surviving mixing schemes with present neutrino oscillation data, and elaborate on explicit flavor models. We also mention various neutrino generation mechanisms and possible consequences once we augment them with discrete flavor symmetries. Then, we discuss the implications of combining flavor symmetries with CP, higher order discrete groups, Grand Unified Theories, extended Higgs sector, and finally, allude to the recently revived modular invariance approach to address the flavor problem. In Chapter 3, we discuss the impact of discrete flavor symmetries in intensity frontiers such as neutrino oscillation experiments, neutrinoless double beta decay, lepton flavor and universality violation. Then in Chapter 4, with some specific examples, we elaborate on the role of flavor symmetry at colliders, which includes a discussion on prospects of right-handed neutrino detection at colliders, lepton flavor violation and constraints on the \(h\to\gamma\gamma\) decay width. In Chapter 5, we elaborate on the consequences of flavor symmetry at cosmic frontier, including studies on DM, leptogenesis and gravitational waves. Finally, in Chapter 6, we summarize and conclude.
## 2 Flavor Symmetry and Lepton Masses and Mixing: Theory
From Eq. (1.1) we find that the neutrino mixing matrix is expressed in terms of mixing angles and CP violating phases, and we are yet to understand the experimentally observed mixing pattern [20]. The masses and mixing of the leptons (as well as of the quarks) are obtained from the Yukawa couplings related to the families. Therefore, _it is natural to ask whether any fundamental principle governs such a mixing pattern._
### General Framework
The primary approaches which try to address the issue of the neutrino mixing pattern include (i) random analysis without imposing prior theories or symmetries on the mass and mixing matrices [103, 104, 105]; (ii) more specific studies with imposed
mass or mixing textures for which models with underlying symmetries can be sought [106, 107, 108, 110], and finally, (iii) theoretical studies where some explicit symmetries at the Yukawa Lagrangian level are assumed and corresponding extended particle sector is defined. In the anarchy hypothesis (i), the leptonic mixing matrix manifests as a random draw from an unbiased distribution of unitary \(3\times 3\) matrices and does not point towards any principle or its origin. This hypothesis does not make any correlation between the neutrino masses and mixing parameters. However, it predicts probability distribution for the parameters which parameterize the mixing matrix. Though random matrices cannot solve fundamental problems in neutrino physics, they generate intriguing hints on the nature of neutrino mass matrices. For instance, in Ref. [111], preference has been observed towards random models of neutrino masses with sterile neutrinos. In the intermediate approach (ii), some texture zeros of neutrino mass matrices can be eliminated. For instance, in Ref. [109] 570 (298) inequivalent classes of texture zeros in the Dirac (Majorana) case were found. For both cases, about 75% of the classes are compatible with the data. In the case of maximal texture zeros in the neutrino and charged lepton mass matrices, there are only about 30 classes of texture zeros for each of the four categories defined by Dirac/Majorana nature and normal/inverted ordering of the neutrino mass spectrum. Strict texture neutrino mass matrices can also be discussed in phenomenological studies. For more, see section 2.3.
In what follows, we will discuss the symmetry-based approach (iii) to explain the non-trivial mixing in the lepton sector known as family symmetry or horizontal symmetry. Such fundamental symmetry in the lepton sector can easily explain the origin of neutrino mixing, which is considerably different from quark mixing. Incidentally, both Abelian and non-Abelian family symmetries have the potential to shed light on the Yukawa couplings. The Abelian symmetries (such as Froggatt-Nielsen symmetry [112]) only point towards a hierarchical structure of the Yukawa couplings, whereas non-Abelian symmetries are more equipped to explain the non-hierarchical structures of the observed lepton mixing as observed by the oscillation experiments.
If we consider a family symmetry \(G_{f}\), the three generations of leptons and quarks can be assigned to irreducible representations or multiplets, hence unifying the flavor of the generations. If \(G_{f}\) contains a triplet representation (3), all three fermion families can follow the same transformation properties. For example, let us consider that non-zero neutrino mass is generated through the Weinberg operator \(HL^{T}cLH\) where the lepton and Higgs doublets transform as a triplet (\(\mathbf{\bar{3}}\)) and a singlet under a family symmetry, say, \(SU(3)\). To construct an \(SU(3)\)-invariant operator, an additional scalar field \(\Phi\) (also known as a _flavon_) is introduced, and the effective operator takes the form \(HL^{T}\Phi^{T}\Phi LH\). A suitable vacuum alignment (\(\langle\Phi\rangle\propto(u_{1},u_{2},u_{3})^{T}\)) for the flavon is chosen in such a way that the obtained mass matrix yields the appropriate mixing pattern. As a result, \(G_{f}\) is spontaneously broken once the flavons acquire non-zero vacuum expectation values (VEVs). Continuous family symmetries such as \(U(3)\), \(O(3)\) (and their subgroups \(SU(3)\) and \(SO(3)\)) can in principle be used for this purpose to understand the neutrino mixing. However, the non-Abelian discrete flavor symmetric approach is much more convenient, as in such a framework the desired vacuum alignment (which produces the correct mixing) of the flavon can be obtained easily [113, 114]. At this point, it is worth mentioning that these non-Abelian discrete symmetries can also originate from a continuous symmetry [115, 116, 117, 118, 119, 120, 121, 122, 123]. For example, widely used discrete groups such as \(A_{4},S_{4},A_{5},\Delta(27),T_{7}\) can originate from the continuous group \(SU(3)\)[26]. In another example [120], the authors showed that continuous \(SO(3)\) can also give rise to \(A_{4}\), further broken into smaller \(Z_{3}\) and \(Z_{2}\) symmetries. A few years back, it was proposed that various non-Abelian discrete symmetries can also originate from superstring theory through compactification of extra dimensions, which is known as the modular invariance approach [94, 114, 124].
In this report, we concentrate on all these aspects of discrete family symmetries discussed above and their implications
for understanding lepton mixing and its extensions. The model building with flavor symmetries is not trivial since the underlying flavor symmetry group \(G_{f}\) must be broken. Usually, this symmetry \(G_{f}\) is considered to exist at some large scale (sometimes with proximity to GUT scale [78]) and to be broken at lower energies with residual symmetries of the charged lepton and neutrino sectors, represented by the subgroups \(G_{e}\) and \(G_{\nu}\), respectively. Therefore, to obtain definite predictions and correlations of the mixing, the choice of the non-Abelian discrete group \(G_{f}\) and its breaking pattern to yield remnant subgroups \(G_{e}\) and \(G_{\nu}\) shapes the model building significantly. Without any residual symmetry, the flavor \(G_{f}\) loses its predictivity markedly. For a detailed discussion on the choice of various discrete symmetries and their generic predictions, see Refs. [95, 26, 90]. In Tab. 2.1, we mention the basic details such as order or number of elements (first and second columns), irreducible representations (third column) and generators (fourth column) of small groups (which contain at least one triplet) such as \(A_{4},S_{4},T^{\prime}\)\(\Delta(27)\) and \(A_{5}\). A pedagogical review, including catalogues of the generators and multiplication rules of these widely used non-Abelian discrete groups, can be found in Ref. [77].
Now, for model building purposes, there exist various approaches based on the breaking pattern of \(G_{f}\) into its residual symmetries, also known as direct, semi-direct and indirect approaches [95, 26]. After breaking of \(G_{f}\), different residual symmetries exist for the charged lepton sector (typically \(G_{e}=Z_{3}\)) and the neutrino sector (typically \(G_{\nu}=Z_{2}\times Z_{2}\), also known as the Klein symmetry). This is known as the direct approach. In a semi-direct approach, one of the generators of the residual symmetry is assumed to be broken. On the contrary, in the indirect approach, no residual symmetry of the flavor group remains intact, and the flavons acquire special vacuum alignments guided by the flavor symmetry. Usually, different flavons take part in the charged lepton and neutrino sectors. To show how the family symmetry shapes the flavor model building, let us consider \(G_{f}=S_{4}\) as a guiding symmetry. Geometrically, this group can be seen as the symmetry group of a rigid cube, i.e., the group of permutations of four objects. Therefore, the order of the group is \(4!=24\) and the elements can be conveniently generated by the generators \(S,T\) and \(U\) satisfying the relations
\[S^{2}=T^{3}=U^{2}=1\ \ \text{and}\ \ (ST)^{3}=(SU)^{2}=(TU)^{2}=1. \tag{2.1}\]
In their irreducible triplet representations, these three generators can be written as [77, 78]
\[S=\frac{1}{3}\left(\begin{array}{ccc}-1&2&2\\ 2&-1&2\\ 2&2&-1\end{array}\right);T=\left(\begin{array}{ccc}1&0&0\\ 0&\omega^{2}&0\\ 0&0&\omega\end{array}\right)\ \ \text{and}\ \ U=\mp\left(\begin{array}{ccc}1&0&0\\ 0&0&1\\ 0&1&0\end{array}\right). \tag{2.2}\]
where \(\omega=e^{2i\pi/3}\). These generators can also be expressed as 3-dimensional irreducible (faithful) real representation matrices, given explicitly in Eq. (2.3) below.
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline Group & Order & Irreducible Representations & Generators \\ \hline \(A_{4}\) & 12 & \(1,1^{\prime},1^{\prime\prime},3\) & \(S,T\) \\ \(S_{4}\) & 24 & \(1,1^{\prime},2,3,3^{\prime}\) & \(S,T(U)\) \\ \(T^{\prime}\) & 24 & \(1,1^{\prime},1^{\prime\prime},2,2^{\prime},2^{\prime\prime},3\) & \(S,T(R)\) \\ \(\Delta(27)\) & 27 & \(1_{r,s}(r,s=0,1,2),3_{01,02}\) & \(C,D\) \\ \(A_{5}\) & 60 & \(1,3,3^{\prime},4,5\) & \(\tilde{S}\), \(\tilde{T}\) \\ \hline \end{tabular}
\end{table}
Table 2.1: Basic characteristics of a few small groups with triplet irreducible representations. For details, see Ref. [77]. For instance, a possible representation for generators of the \(S_{4}\) group is defined in the text, see Eqs. (2.1) and (2.2).
\[S=\left(\begin{array}{ccc}-1&0&0\\ 0&1&0\\ 0&0&-1\end{array}\right);T=\frac{1}{2}\left(\begin{array}{ccc}1&\sqrt{2}&1\\ \sqrt{2}&0&-\sqrt{2}\\ -1&\sqrt{2}&-1\end{array}\right)\ \ \text{and}\ \ U=\mp\left(\begin{array}{ccc}1&0&0\\ 0&1&0\\ 0&0&-1\end{array}\right). \tag{2.3}\]
In the direct approach the charged lepton mass matrix (\(M_{\ell}\)) respects the generator \(T\) whereas the neutrino mass matrix (\(M_{\nu}\)) respects the generators \(S,U\) satisfying the conditions
\[T^{\dagger}M_{\ell}^{\dagger}M_{\ell}T=M_{\ell}^{\dagger}M_{\ell},\ S^{T}M_{ \nu}S=M_{\nu}\ \text{and}\ U^{T}M_{\nu}U=M_{\nu}, \tag{2.4}\]
which leads to [95]
\[[T,M_{\ell}^{\dagger}M_{\ell}]=[S,M_{\nu}]=[U,M_{\nu}]=0. \tag{2.5}\]
The non-diagonal matrices \(S,U\) can be diagonalized by the TBM mixing matrix given in Eq. (1.9). Therefore, the TBM mixing scheme can be elegantly derived from the direct approach of the \(S_{4}\) group. For generic features of semi-direct and indirect approaches to the flavor model building, we refer the readers to [97, 26, 95]. The TBM mixing pattern explained here can be generated using various discrete groups. For detailed models and groups see \(A_{4}\)[113, 125, 126, 114], \(S_{4}\)[127, 128], \(\Delta(27)\)[129], \(T^{\prime}\)[130]. In addition, explicit models with discrete flavor symmetry for BM [131, 132, 133, 134], GR [73, 135], HG [136] mixing can easily be constructed.
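A short numerical cross-check of the triplet generators in Eq. (2.2) is sketched below: it verifies the relations of Eq. (2.1) (with the overall sign of \(U\) taken to be \(+\)) and shows that \(S\) and \(U\) become diagonal in the TBM basis, which is the content of the direct-approach statement above. The script is our own illustration, not part of the review.

```python
import numpy as np

w = np.exp(2j * np.pi / 3)
S = np.array([[-1, 2, 2], [2, -1, 2], [2, 2, -1]], dtype=complex) / 3
T = np.diag([1, w**2, w])
U = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)  # "+" sign choice

I = np.eye(3)
# Defining relations of Eq. (2.1)
print(np.allclose(S @ S, I),
      np.allclose(np.linalg.matrix_power(T, 3), I),
      np.allclose(U @ U, I),
      np.allclose(np.linalg.matrix_power(S @ T, 3), I))

u_tbm = np.array([[np.sqrt(2/3),  1/np.sqrt(3),  0],
                  [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)],
                  [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)]])

# S and U are diagonal in the TBM basis, so any m_nu commuting with both of
# them (Eq. (2.5)) is diagonalized by U_TBM
print(np.round(u_tbm.T @ S.real @ u_tbm, 10))
print(np.round(u_tbm.T @ U.real @ u_tbm, 10))
```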
### Flavor Symmetry, Nonzero \(\theta_{13}\) and Nonzero \(\delta_{\rm CP}\)
After precise measurement of the non-zero value of the reactor mixing angle \(\theta_{13}\)[62, 63, 81, 82, 83] the era of fixed patterns (such as BM, TBM, GR, HG mixing) of the lepton mixing matrix is over. Also, as mentioned earlier, long baseline neutrino oscillation experiments such as T2K [137] and NO\(\nu\)A [138] both hint at CP violation in the lepton sector. Therefore, each of the fixed patterns needs some modification to be consistent with the global fit of the neutrino oscillation data [19, 20, 21, 22, 23]. There are two distinct ways of generating a mixing pattern that appropriately deviates from fixed mixing schemes such as BM, TBM, GR, and HG. The first approach is based on symmetry assertion, which demands considering larger symmetry groups that contain a larger residual symmetry group compared to the fixed mixing schemes such as TBM [94, 139, 140, 141, 142, 143]. On the other hand, in the second approach, the setups for the BM, TBM, GR, and HG mixing schemes are supplemented by an additional ingredient which breaks these structures in a well-defined and controlled way [144]. This can be achieved in various ways. An apparent source for such corrections can be introduced through the charged lepton sector [145, 146, 147, 148, 149]. Thus, in models where in the neutrino sector the mass matrix solely reproduces the mixing scheme, a non-diagonal charged lepton sector will contribute to the PMNS matrix \(U=U_{\ell}^{\dagger}U_{\nu}\) where \(U_{\ell}\) and \(U_{\nu}\) respectively are the diagonalizing matrices of the charged lepton and neutrino mass matrix. In addition, one can also consider small perturbations around the BM/TBM/GR/HG vacuum-alignment conditions [150, 151, 152, 153], which can originate from higher dimensional operators in the flavon potential yielding the desired deviation. The minimal flavon field content can also be extended to incorporate additional contributions to the neutrino mass matrix to achieve the correct deviation from fixed mixing schemes [154, 155, 156, 157, 158, 159]. To summarize, the fixed mixing schemes can still be regarded as a first approximation, necessitating specific corrections to include non-zero \(\theta_{13}\) and \(\delta_{\rm CP}\). For example, even if the TBM mixing is obsolete, two successors are still compatible
with data. These are called TM\({}_{1}\) and TM\({}_{2}\) mixing and are given by
\[|U_{\text{TM}_{1}}|=\left(\begin{array}{ccc}\frac{2}{\sqrt{6}}&*&*\\ \frac{1}{\sqrt{6}}&*&*\\ \frac{1}{\sqrt{6}}&*&*\end{array}\right)\text{ and }|U_{\text{TM}_{2}}|= \left(\begin{array}{ccc}*&\frac{1}{\sqrt{3}}&*\\ *&\frac{1}{\sqrt{3}}&*\\ *&\frac{1}{\sqrt{3}}&*\end{array}\right), \tag{2.6}\]
respectively. Clearly, Eq. (2.6) shows that TM\({}_{1}\) and TM\({}_{2}\) mixings preserve the first and the second column of the TBM mixing matrix given in Eq. (1.9). Here, the reactor mixing angle becomes a free parameter, and the solar mixing angle can stick close to its TBM prediction.
To illustrate this, let us again consider the discrete flavor symmetry \(G_{f}=S_{4}\). In contrast to the breaking pattern mentioned in Eqs. (2.4), (2.5), \(S_{4}\) is considered to be broken spontaneously into \(Z_{3}=\{1,T,T^{2}\}\) (for the charged lepton sector) and \(Z_{2}=\{1,SU\}\) (for the neutrino sector) such that it satisfies
\[[T,M_{\ell}^{\dagger}M_{\ell}]=[SU,M_{\nu}]=0. \tag{2.7}\]
Following the above prescription, the matrix that diagonalizes \(SU\) (see Eq. (2.2)) can be written as \(U_{\text{TBM}}U_{23}(\theta,\gamma)\) where the '23' rotation matrix is given by (\(c_{\theta}=\cos\theta\), \(s_{\theta}=\sin\theta\) and \(\gamma\) is the associated phase factor)
\[U_{23}=\left(\begin{array}{ccc}1&0&0\\ 0&c_{\theta}&s_{\theta}e^{-i\gamma}\\ 0&-s_{\theta}e^{i\gamma}&c_{\theta}\end{array}\right). \tag{2.8}\]
The obtained effective mixing matrix is called \(U_{\text{TM}_{1}}\) and can be written as
\[U_{\text{TM}_{1}}=\left(\begin{array}{ccc}\frac{2}{\sqrt{6}}&\frac{c_{\theta}}{\sqrt{3}}&\frac{s_{\theta}}{\sqrt{3}}e^{-i\gamma}\\ -\frac{1}{\sqrt{6}}&\frac{c_{\theta}}{\sqrt{3}}+\frac{s_{\theta}}{\sqrt{2}}e^{i\gamma}&\frac{s_{\theta}}{\sqrt{3}}e^{-i\gamma}-\frac{c_{\theta}}{\sqrt{2}}\\ -\frac{1}{\sqrt{6}}&\frac{c_{\theta}}{\sqrt{3}}-\frac{s_{\theta}}{\sqrt{2}}e^{i\gamma}&\frac{s_{\theta}}{\sqrt{3}}e^{-i\gamma}+\frac{c_{\theta}}{\sqrt{2}}\end{array}\right). \tag{2.9}\]
The above matrix has the TM\({}_{1}\) mixing structure mentioned in Eq. (2.6). This is also an example of the semi-direct approach to the flavor model building. Similarly, the generic structure of the TM\({}_{2}\) mixing matrix can be written as
\[U_{\text{TM}_{2}}=\left(\begin{array}{ccc}\frac{2c_{\theta}}{\sqrt{6}}&\frac{1}{\sqrt{3}}&\frac{2s_{\theta}}{\sqrt{6}}e^{-i\gamma}\\ -\frac{c_{\theta}}{\sqrt{6}}+\frac{s_{\theta}}{\sqrt{2}}e^{i\gamma}&\frac{1}{\sqrt{3}}&-\frac{s_{\theta}}{\sqrt{6}}e^{-i\gamma}-\frac{c_{\theta}}{\sqrt{2}}\\ -\frac{c_{\theta}}{\sqrt{6}}-\frac{s_{\theta}}{\sqrt{2}}e^{i\gamma}&\frac{1}{\sqrt{3}}&-\frac{s_{\theta}}{\sqrt{6}}e^{-i\gamma}+\frac{c_{\theta}}{\sqrt{2}}\end{array}\right). \tag{2.10}\]
The above discussion shows special cases of the TBM mixing, which can still be relevant for models with discrete flavor symmetries. Now, imposing sufficient corrections to the other fixed mixing schemes like BM, GR, and HG we can make them consistent with observed data [161, 162, 163, 164, 165, 166]. The modified mixing matrix can be obtained by lowering the residual symmetry \(G_{\nu}\) for the neutrino sector. This generates a correction matrix for these fixed mixing patterns. The general form of these corrections can be summarized as [167, 90, 133]
\[U=U_{e}^{\dagger}U_{0}U_{p}, \tag{2.11}\]
where \(U_{0}\) is the general form of the relevant fixed-pattern mixing scheme mentioned in Eq. (1.6), \(U_{e}\) is the generic correction matrix and \(U_{p}\) is an additional phase matrix contributing to the Dirac and Majorana phases mentioned in Eq. (1.1). _These corrections help us to obtain interesting correlations among \(\sin\theta_{12},\sin\theta_{23},\sin\theta_{13}\) and \(\delta_{\text{CP}}\) of the PMNS mixing matrix_ [168].
In Tab. 2.2, we mention the typical predictions for TM\({}_{1}\) and TM\({}_{2}\) mixing matrices, including the Jarlskog invariant \(J_{CP}=c_{12}s_{12}c_{23}s_{23}c_{13}^{2}s_{13}\sin\delta_{\rm CP}\)[169].
Figure 2.1: \(\delta_{\rm CP}\) plotted against \(\sin^{2}\theta_{23}\) within TM\({}_{1}\) and TM\({}_{2}\) models using Eqs. (2.12) and (2.13) and with the NuFit 5.2 oscillation data [19; 20] from Tab. 1.1. The \(1\sigma\), \(2\sigma\), \(3\sigma\) regions (also in next figures) were derived from \(\chi^{2}\) tables (NuFit 5.2 data files [160] with SK atmospheric data) for corresponding two-dimensional projections of the global analysis, minimized for a given mass ordering. The blue (red) shaded region represents TM\({}_{1}\) (TM\({}_{2}\)) model predictions for \(\sin^{2}\theta_{13}\) in the \(3\sigma\) range.
Figure 2.2: \(\delta_{\rm CP}\) plotted against \(\sin^{2}\theta_{13}\) within TM\({}_{1}\) and TM\({}_{2}\) models using Eqs. (2.12) and (2.13) and with the NuFit 5.2 oscillation data [19; 20] from Tab. 1.1. The blue (red) shaded region represents TM\({}_{1}\) (TM\({}_{2}\)) model predictions for \(\sin^{2}\theta_{23}\) in the \(3\sigma\) range.
The correlations among the neutrino mixing angles \((\theta_{23},\theta_{12},\theta_{13})\) and phase \((\delta_{\rm CP})\) for \({\rm TM}_{1}\) and \({\rm TM}_{2}\) respectively can be written as [170]
\[{\rm TM}_{1} : s_{12}^{2}=\frac{1-3s_{13}^{2}}{3-3s_{13}^{2}},\quad\cos\delta_{ \rm CP}=\frac{(1-5s_{13}^{2})(2s_{23}^{2}-1)}{4s_{13}s_{23}\sqrt{2(1-3s_{13}^{ 2})(1-s_{23}^{2})}}, \tag{2.12}\] \[{\rm TM}_{2} : s_{12}^{2}=\frac{1}{3-3s_{13}^{2}},\quad\cos\delta_{\rm CP}=- \frac{(2-4s_{13}^{2})(2s_{23}^{2}-1)}{4s_{13}s_{23}\sqrt{(2-3s_{13}^{2})(1-s_{2 3}^{2})}}. \tag{2.13}\]
This helps us illuminate the feasibility of these models in the context of present neutrino oscillation data. In this regard, following Eqs. (2.12) and (2.13), we have plotted correlations in \(\sin^{2}\theta_{23}-\delta_{\rm CP}\), \(\sin^{2}\theta_{13}-\delta_{\rm CP}\) and \(\sin^{2}\theta_{12}-\sin^{2}\theta_{13}\) planes for both NO and IO in Figs. 2.1, 2.2 and 2.3, respectively. In these plots we have also shown the \(1\sigma\), \(2\sigma\), \(3\sigma\) allowed regions of neutrino oscillation data, based on the two degrees of freedom (2 dof) tabularized data given in Ref. [160]. The best-fit values are denoted by \(\star\) (\(\bullet\)) for NO (IO). The \(\sin^{2}\theta_{23}-\delta_{\rm CP}\) correlation plotted in Fig. 2.1 is important because of the existing ambiguities on octant of \(\theta_{23}\) (i.e., whether it is \(\theta_{23}>45^{\circ}\) or \(\theta_{23}<45^{\circ}\) ) and precise value of \(\delta_{\rm CP}\). Here the blue and red shaded regions represent the predicted correlation of \(\theta_{23}\) and \(\delta_{\rm CP}\) for \({\rm TM}_{1}\) and \({\rm TM}_{2}\) for \(3\sigma\) allowed range of \(\sin^{2}\theta_{13}\). For correlations \(\sin^{2}\theta_{13}-\delta_{\rm CP}\) in Fig. 2.2, the spreading of the shaded region depends on the \(3\sigma\) allowed range of \(\theta_{23}\). In Fig. 2.3, the \(\sin^{2}\theta_{12}-\sin^{2}\theta_{13}\) correlations for \({\rm TM}_{1}\) and \({\rm TM}_{2}\) mixing schemes yield tight constraints on the allowed ranges of \(\sin^{2}\theta_{12}\): NO (IO) \(0.3168-0.3195\) (\(0.3167-0.3195\)) for \({\rm TM}_{1}\) and \(0.3405-0.3413\) (\(0.3405-0.3413\)) for \({\rm TM}_{2}\). Furthermore, the \(\sin^{2}\theta_{12}\) prediction for the \({\rm TM}_{2}\) mixing lies at the edge of the \(3\sigma\) allowed region. Thus, a _precise measurement of \(\sin^{2}\theta_{12}\) can potentially rule out the \({\rm TM}_{2}\)_mixing scheme and corresponding flavor symmetric models.
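The correlations of Eqs. (2.12)-(2.13), which underlie Figs. 2.1-2.3, can be scanned directly; the sketch below (our own code, using rounded illustrative \(3\sigma\) input ranges rather than the exact NuFit tables) evaluates \(\sin^{2}\theta_{12}\) and \(\cos\delta_{\rm CP}\) for the TM\({}_{1}\) and TM\({}_{2}\) schemes on a small grid.

```python
import numpy as np

def tm1(s13sq, s23sq):
    """sin^2(theta12) and cos(delta_CP) for TM1, Eq. (2.12)."""
    s13, s23 = np.sqrt(s13sq), np.sqrt(s23sq)
    s12sq = (1 - 3 * s13sq) / (3 - 3 * s13sq)
    cosd = ((1 - 5 * s13sq) * (2 * s23sq - 1)
            / (4 * s13 * s23 * np.sqrt(2 * (1 - 3 * s13sq) * (1 - s23sq))))
    return s12sq, cosd

def tm2(s13sq, s23sq):
    """sin^2(theta12) and cos(delta_CP) for TM2, Eq. (2.13)."""
    s13, s23 = np.sqrt(s13sq), np.sqrt(s23sq)
    s12sq = 1 / (3 - 3 * s13sq)
    cosd = (-(2 - 4 * s13sq) * (2 * s23sq - 1)
            / (4 * s13 * s23 * np.sqrt((2 - 3 * s13sq) * (1 - s23sq))))
    return s12sq, cosd

# Rough 3-sigma grid (illustrative values only)
for s13sq in np.linspace(0.020, 0.024, 3):
    for s23sq in np.linspace(0.41, 0.60, 3):
        print(round(s13sq, 4), round(s23sq, 3), tm1(s13sq, s23sq), tm2(s13sq, s23sq))
```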
Explicit models to obtain \({\rm TM}_{1}\) and \({\rm TM}_{2}\) mixing can be found on various occasions in the literature [170, 171, 144, 172]. We present examples of such models for \({\rm TM}_{1}\) and \({\rm TM}_{2}\) mixing with \(A_{4}\) non-Abelian discrete flavor symmetry. Here, we will present an example of the \({\rm TM}_{1}\) mixing in the context of a hybrid flavor symmetric scoto-seesaw scenario (FSS) [173, 174] where effective neutrino mass is generated via both type-I seesaw and scotogenic contributions. In the next subsection, we will discuss various ways to obtain light neutrino masses. The particle content of our model and charge assignment under different symmetries are shown in Tab. 2.3. The role of each discrete symmetry, particle content and charge assignment in this table are described in detail in Refs. [173, 175]. In this setup, the charged lepton Lagrangian can be written up to
the leading order as
\[\mathcal{L}_{l} = \frac{y_{e}}{\Lambda}(\bar{L}\phi_{T})_{1}He_{R}+\frac{y_{\mu}}{\Lambda}(\bar{L}\phi_{T})_{1^{\prime}}H\mu_{R}+\frac{y_{\tau}}{\Lambda}(\bar{L}\phi_{T})_{1^{\prime\prime}}H\tau_{R}+h.c., \tag{2.14}\] \[= \frac{y_{e}}{\Lambda}(\bar{L}_{1}\phi_{T_{1}}+\bar{L}_{2}\phi_{T_{3}}+\bar{L}_{3}\phi_{T_{2}})He_{R}+\frac{y_{\mu}}{\Lambda}(\bar{L}_{3}\phi_{T_{3}}+\bar{L}_{1}\phi_{T_{2}}+\bar{L}_{2}\phi_{T_{1}})H\mu_{R} \tag{2.15}\] \[+\frac{y_{\tau}}{\Lambda}(\bar{L}_{2}\phi_{T_{2}}+\bar{L}_{1}\phi_{T_{3}}+\bar{L}_{3}\phi_{T_{1}})H\tau_{R}\]
where \(\Lambda\) is the cut-off scale of the FSS model and \(y_{e}\), \(y_{\mu}\) and \(y_{\tau}\) are the coupling constants. In Eq. (2.14), the terms in the first parentheses represent products of two \(A_{4}\) triplets forming a one-dimensional representation which further contract with \(1\), \(1^{\prime\prime}\) and \(1^{\prime}\) of \(A_{4}\), corresponding to \(e_{R}\), \(\mu_{R}\) and \(\tau_{R}\), respectively. Following the multiplication rules given in Appendix A, the complete \(A_{4}\) decomposition is written in Eq. (2.15). Now, when the flavon \(\phi_{T}\) gets a VEV in the direction \(\langle\phi_{T}\rangle=(\langle\phi_{T_{1}}\rangle,\langle\phi_{T_{2}}\rangle,\langle\phi_{T_{3}}\rangle)^{T}=(v_{T},0,0)^{T}\), the charged lepton Lagrangian can be written as
\[\mathcal{L}_{l} = \frac{y_{e}}{\Lambda}\bar{L}_{1}v_{T}He_{R}+\frac{y_{\mu}}{ \Lambda}\bar{L}_{2}v_{T}H\mu_{R}+\frac{y_{\tau}}{\Lambda}\bar{L}_{3}v_{T}H\tau _{R}. \tag{2.16}\]
Finally, when the SM Higgs field \(h\) also gets non-zero VEV as \(\langle h\rangle=v\), following Eq. (2.16), the diagonal charged lepton mass matrix can be written as
\[m_{l} = \frac{vv_{T}}{\Lambda}\begin{pmatrix}y_{e}&0&0\\ 0&y_{\mu}&0\\ 0&0&y_{\tau}\end{pmatrix}. \tag{2.17}\]
Now, the Lagrangian for neutrino mass contributions in FSS can be written as
\[\mathcal{L}=\frac{y_{N}}{\Lambda}(\bar{L}\phi_{S})_{1}\tilde{H}N_{R}+\frac{1} {2}M_{N}\bar{N}_{R}^{c}N_{R}+\frac{y_{s}}{\Lambda^{2}}(\bar{L}\phi_{A})_{1^{ \prime}}\xi i\sigma_{2}\eta^{*}f+\frac{1}{2}M_{f}\bar{f}^{c}f+h.c., \tag{2.18}\]
where \(y_{N}\) and \(y_{s}\) are coupling constants, \(M_{N}\) is the Majorana mass of the right-handed neutrino \(N_{R}\), while \(M_{f}\) is the mass of the fermion \(f\). Again, following Appendix A and Eq. (2.18), the \(A_{4}\) decomposition for the contribution to the neutrino sector can be written as
\[\mathcal{L} = \frac{y_{N}}{\Lambda}(\bar{L}_{1}\phi_{S_{1}}+\bar{L}_{2}\phi_{S_{3}}+\bar{L}_{3}\phi_{S_{2}})\tilde{H}N_{R}+\frac{1}{2}M_{N}\bar{N}_{R}^{c}N_{R} \tag{2.19}\] \[+\frac{y_{s}}{\Lambda^{2}}(\bar{L}_{3}\phi_{A_{3}}+\bar{L}_{1}\phi_{A_{2}}+\bar{L}_{2}\phi_{A_{1}})\xi i\sigma_{2}\eta^{*}f+\frac{1}{2}M_{f}\bar{f}^{c}f+h.c.,\] \[= \frac{y_{N}}{\Lambda}(\bar{L}_{2}v_{S}-\bar{L}_{3}v_{S})\tilde{H}N_{R}+\frac{1}{2}M_{N}\bar{N}_{R}^{c}N_{R}+\frac{y_{s}}{\Lambda^{2}}(\bar{L}_{1}v_{A}+2\bar{L}_{2}v_{A})\xi i\sigma_{2}\eta^{*}f+\frac{1}{2}M_{f}\bar{f}^{c}f+h.c. \tag{2.20}\]
In the above Lagrangian, we put VEVs of \(\phi_{S}\), \(\phi_{A}\) and \(\xi\) in directions \(\langle\phi_{S}\rangle=(\langle\phi_{S_{1}}\rangle,\langle\phi_{S_{2}}\rangle,\langle\phi_{S_{3}}\rangle)^{T}=(0,-v_{S},v_{S})^{T}\), \(\langle\phi_{A}\rangle=(\langle\phi_{A_{1}}\rangle,\langle\phi_{A_{2}}\rangle,\langle\phi_{A_{3}}\rangle)^{T}=(2v_{A},v_{A},0)^{T}\) and \(v_{\xi}\), respectively, getting the appropriate flavor structure.
\begin{table}
\begin{tabular}{c c c c c c c|c c c c} \hline Fields & \(e_{R}\), \(\mu_{R}\), \(\tau_{R}\) & \(L_{\alpha}\) & \(H\) & \(N_{R}\) & \(f\) & \(\eta\) & \(\phi_{S}\) & \(\phi_{A}\) & \(\phi_{T}\) & \(\xi\) \\ \hline \(A_{4}\) & \(1\), \(1^{\prime\prime}\), \(1^{\prime}\) & \(3\) & \(1\) & \(1\) & \(1\) & \(1\) & \(3\) & \(3\) & \(3\) & \(1^{\prime\prime}\) \\ \(Z_{4}\) & \(i\) & \(i\) & \(1\) & \(1\) & \(1\) & \(1\) & \(i\) & \(i\) & \(1\) & \(1\) \\ \(Z_{3}\) & \(\omega^{2}\) & \(\omega\) & \(1\) & \(\omega^{2}\) & \(1\) & \(1\) & \(\omega^{2}\) & \(\omega\) & \(\omega^{2}\) & \(1\) \\ \(Z_{2}\) & \(1\) & \(1\) & \(1\) & \(1\) & \(-1\) & \(1\) & \(1\) & \(1\) & \(1\) & \(-1\) \\ \hline \end{tabular}
\end{table}
Table 2.3: Field content and transformations under the symmetries of our model. The flavon fields in the second block of the table are introduced to implement the \(A_{4}\) symmetry.
Eq. (2.18), the Yukawa couplings for the Dirac neutrino and scotogenic contributions can be written as
\[Y_{N}=(Y_{N}^{e},Y_{N}^{\mu},Y_{N}^{\tau})^{T}=(0,y_{N}\frac{v_{S}}{\Lambda},-y_{N}\frac{v_{S}}{\Lambda})^{T}, \tag{2.21}\] \[Y_{F}=(Y_{F}^{e},Y_{F}^{\mu},Y_{F}^{\tau})^{T}=(y_{s}\frac{v_{\xi}}{\Lambda}\frac{v_{A}}{\Lambda},y_{s}\frac{v_{\xi}}{\Lambda}\frac{2v_{A}}{\Lambda},0)^{T}\equiv(\kappa,2\kappa,0)^{T}. \tag{2.22}\]
Finally, with the above Yukawa couplings, the total effective light neutrino mass matrix (with both type-I and scotogenic contributions) is given by
\[m_{\nu} = -\frac{v^{2}}{M_{N}}Y_{N}^{i}Y_{N}^{j}+\mathcal{F}(m_{\eta_{R}},m_{\eta_{I}},M_{f})M_{f}Y_{F}^{i}Y_{F}^{j} \tag{2.23}\] \[= \left(\begin{array}{ccc}b&2b&0\\ 2b&-a+4b&a\\ 0&a&-a\end{array}\right), \tag{2.24}\]
where \(a=y_{N}^{2}\frac{v^{2}}{M_{N}}\frac{v_{S}^{2}}{\Lambda^{2}}\), \(b=y_{s}^{2}\frac{v_{\xi}^{2}}{\Lambda^{2}}\frac{v_{A}^{2}}{\Lambda^{2}}\mathcal{F}(m_{\eta_{R}},m_{\eta_{I}},M_{f})M_{f}=\kappa^{2}\mathcal{F}(m_{\eta_{R}},m_{\eta_{I}},M_{f})M_{f}\), and the loop function \(\mathcal{F}\) is written as
\[\mathcal{F}(m_{\eta_{R}},m_{\eta_{I}},M_{f})=\frac{1}{32\pi^{2}}\Big{[}\frac{m _{\eta_{R}}^{2}\log\left(M_{f}^{2}/m_{\eta_{R}}^{2}\right)}{M_{f}^{2}-m_{\eta _{R}}^{2}}-\frac{m_{\eta_{I}}^{2}\log\left(M_{f}^{2}/m_{\eta_{I}}^{2}\right)}{ M_{f}^{2}-m_{\eta_{I}}^{2}}\Big{]}, \tag{2.25}\]
with \(m_{\eta_{R}}\) and \(m_{\eta_{I}}\) being the masses of the CP-even and CP-odd neutral components of \(\eta\), respectively. The total mass matrix \(m_{\nu}\) can therefore be diagonalized by a mixing matrix of the \(\text{TM}_{1}\) form, given by
\[U=\left(\begin{array}{ccc}\sqrt{\frac{2}{3}}&\frac{\cos\theta}{\sqrt{3}}& \frac{e^{-i\psi}\sin\theta}{\sqrt{3}}\\ -\frac{1}{\sqrt{6}}&\frac{\cos\theta}{\sqrt{3}}+\frac{e^{i\psi}\sin\theta}{ \sqrt{2}}&-\frac{\cos\theta}{\sqrt{2}}+\frac{e^{-i\psi}\sin\theta}{\sqrt{3}} \\ -\frac{1}{\sqrt{6}}&\frac{\cos\theta}{\sqrt{3}}-\frac{e^{i\psi}\sin\theta}{ \sqrt{2}}&\frac{\cos\theta}{\sqrt{2}}+\frac{e^{-i\psi}\sin\theta}{\sqrt{3}} \end{array}\right)U_{M} \tag{2.26}\]
where \(U_{M}\) is the Majorana phase matrix defined in Eq. (1.1). The correlation among the oscillation parameters is given in Eq. (2.12). In the literature, the \(\text{TM}_{1}\) mixing has been reproduced using various discrete groups such as \(S_{4}\) in the context of type-I or type-II seesaw scenarios [176, 177, 144].
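The TM\({}_{1}\) structure of Eq. (2.24) can also be cross-checked numerically: for arbitrary values of \(a\) and \(b\), the first TBM column \((2,-1,-1)^{T}/\sqrt{6}\) is an exact eigenvector of \(m_{\nu}\) with vanishing eigenvalue. The short Python sketch below illustrates this; the parameter values are placeholders and are not fitted to data.

```python
import numpy as np

# Placeholder (illustrative) values for the parameters a and b of Eq. (2.24).
a = 0.7 * np.exp(0.4j)
b = 0.2 * np.exp(-1.1j)

m_nu = np.array([[b,      2*b,      0],
                 [2*b, -a + 4*b,    a],
                 [0,       a,      -a]])

# First column of the TBM mixing matrix: (2, -1, -1)/sqrt(6).
v1 = np.array([2, -1, -1]) / np.sqrt(6)

# v1 is annihilated by m_nu, i.e. it is an eigenvector with m1 = 0,
# which is precisely the defining property of TM1 mixing.
print(np.allclose(m_nu @ v1, 0))   # True
```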
To reproduce the \(\text{TM}_{2}\) mixing, we adopt a modified version of the Altarelli-Feruglio (AF) model [114, 155]. In this scenario, light neutrino masses are generated entirely via the type-I seesaw, and hence three copies of RHNs are included, which we take to transform as a triplet under \(A_{4}\). With the \(A_{4}\) flavons \(\phi_{S},\phi_{T}\) (both triplets) and \(\xi\) (singlet), one can obtain the TBM mixing. To accommodate non-zero \(\theta_{13}\) and \(\delta_{\text{CP}}\), the desired \(\text{TM}_{2}\) mixing can be achieved with the inclusion of one additional flavon \(\xi^{\prime}\) (\(1^{\prime}\) under \(A_{4}\)) which contributes to the right-handed neutrino mass. In addition to the \(A_{4}\) discrete symmetry, we also consider a \(Z_{3}\) symmetry which forbids the exchange of \(\phi_{S}\) and \(\phi_{T}\) in the Lagrangian. The complete particle content and the transformations under the symmetries are given in Tab. 2.4.
Here we have considered the VEVs for the scalar fields as \(\langle\phi_{S}\rangle=(v_{S},v_{S},v_{S})^{T}\), \(\langle\phi_{T}\rangle=(v_{T},0,0)^{T}\), \(\langle\xi\rangle=v_{\xi}\), \(\langle\xi^{\prime}\rangle=v_{\xi^{\prime}}\)[114, 155, 178].
\begin{table}
\begin{tabular}{l|c c c c c c c c} \hline \hline Fields & \(e_{R}\), \(\mu_{R}\), \(\tau_{R}\) & \(L\) & \(N_{R}\) & \(H\) & \(\phi_{S}\) & \(\phi_{T}\) & \(\xi\) & \(\xi^{\prime}\) \\ \hline \(SU(2)\) & 1 & 2 & 1 & 2 & 1 & 1 & 1 & 1 \\ \(A_{4}\) & 1,1\({}^{\prime\prime}\), 1\({}^{\prime}\) & 3 & 3 & 1 & 3 & 3 & 1 & 1\({}^{\prime}\) \\ \(Z_{3}\) & \(\omega\) & \(\omega\) & \(\omega^{2}\) & 1 & \(\omega^{2}\) & 1 & \(\omega^{2}\) & \(\omega^{2}\) \\ \hline \hline \end{tabular}
\end{table}
Table 2.4: Transformation of the fields needed to realize the \(TM_{2}\) mixing. Here \(\omega\) is a cube root of unity, \(\omega=e^{2\pi i/3}\). The flavon \(\xi^{\prime}\) is essential to generate non-zero \(\theta_{13}\).
In Tab. 2.4, \(H\) is the \(SU(2)\) Higgs doublet (with VEV \(v\)) and singlet under \(A_{4}\). Now, with the symmetries and particle content present in Tab. 2.4, the Lagrangian for the charged leptons can be written as
\[\mathcal{L}_{CL} = \big{(}y_{e}(L\phi_{T})_{1}e_{R}+y_{\mu}(L\phi_{T})_{1^{\prime}} \mu_{R}+y_{\tau}(L\phi_{T})_{1^{\prime\prime}}\tau_{R}\big{)}\frac{H}{\Lambda}, \tag{2.27}\]
where \(\Lambda\) is the cutoff scale of the theory and \(y_{e},y_{\mu},y_{\tau}\) are the corresponding coupling constants. Note that each term inside the parentheses represents a product of the two \(A_{4}\) triplets \(L\) and \(\phi_{T}\), which further contracts with \(e_{R}\), \(\mu_{R}\), \(\tau_{R}\), charged under \(A_{4}\) as \(1\), \(1^{\prime\prime}\) and \(1^{\prime}\), respectively. Following the prescription given in Eq. (2.15), the charged lepton mass matrix can be obtained as
\[M_{\ell} = \frac{vv_{T}}{\Lambda}\begin{pmatrix}y_{e}&0&0\\ 0&y_{\mu}&0\\ 0&0&y_{\tau}\end{pmatrix}. \tag{2.28}\]
In the presence of the \(A_{4}\) flavons \(\phi_{s},\xi,\xi^{\prime}\) (with VEV \(\langle\phi_{S}\rangle=(v_{S},v_{S},v_{S})^{T}\), \(\langle\xi\rangle=v_{\xi}\), and \(\langle\xi^{\prime}\rangle=v_{\xi^{\prime}}\)), the Lagrangian for the neutrino sector can be written as
\[\mathcal{L}_{\nu} = y(LN_{R})H+(x_{A}\xi+x_{B}\phi_{S}+x_{N}\xi^{\prime})\overline{N^{c}}_{R}N_{R}, \tag{2.29}\] \[= y(L_{1}N_{R_{1}}+L_{2}N_{R_{3}}+L_{3}N_{R_{2}})H+x_{A}(\overline{N^{c}}_{R_{1}}N_{R_{1}}+\overline{N^{c}}_{R_{2}}N_{R_{3}}+\overline{N^{c}}_{R_{3}}N_{R_{2}})\] (2.30) \[+x_{B}\phi_{S_{1}}(2\overline{N^{c}}_{R_{1}}N_{R_{1}}-\overline{N^{c}}_{R_{2}}N_{R_{3}}-\overline{N^{c}}_{R_{3}}N_{R_{2}})/3+x_{B}\phi_{S_{2}}(2\overline{N^{c}}_{R_{2}}N_{R_{2}}-\overline{N^{c}}_{R_{1}}N_{R_{3}}-\overline{N^{c}}_{R_{3}}N_{R_{1}})/3\] \[+x_{B}\phi_{S_{3}}(2\overline{N^{c}}_{R_{3}}N_{R_{3}}-\overline{N^{c}}_{R_{1}}N_{R_{2}}-\overline{N^{c}}_{R_{2}}N_{R_{1}})/3+x_{N}\xi^{\prime}(\overline{N^{c}}_{R_{2}}N_{R_{2}}+\overline{N^{c}}_{R_{1}}N_{R_{3}}+\overline{N^{c}}_{R_{3}}N_{R_{1}})\]
where \(y\), \(x_{A}\), \(x_{B}\) and \(x_{N}\) are the coupling constants. After spontaneous breaking of the electroweak and flavor symmetries, we obtain the Dirac and Majorana mass matrices as
\[m_{D}=yv\begin{pmatrix}1&0&0\\ 0&0&1\\ 0&1&0\end{pmatrix},M_{R}=\begin{pmatrix}a+2b/3&-b/3&-b/3\\ -b/3&2b/3&a-b/3\\ -b/3&a-b/3&2b/3\end{pmatrix}+\begin{pmatrix}0&0&d\\ 0&d&0\\ d&0&0\end{pmatrix}, \tag{2.31}\]
where \(a=2x_{A}v_{\xi}\), \(b=2x_{B}v_{S}\) and \(d=2x_{N}v_{\xi^{\prime}}\). The Dirac and Majorana neutrino mass matrices are obtained from the Lagrangian written in Eqs. (2.29) and (2.30) following the \(A_{4}\) multiplication rules given in Appendix A. The light neutrino mass matrix can be obtained through the type-I seesaw mechanism using the relation \(M_{\nu}=-m_{D}^{T}M_{R}^{-1}m_{D}\). After rotating \(M_{\nu}\) with the tribimaximal mixing matrix \(U_{\rm TB}\) we find
\[M_{\nu}^{\prime} = U_{\rm TB}^{T}M_{\nu}U_{\rm TB} \tag{2.32}\] \[= y^{2}v^{2}\begin{pmatrix}\frac{2(a+b)-d}{2(a^{2}-b^{2}-ad+d^{2})}&0&\frac{-\sqrt{3}d}{2(a^{2}-b^{2}-ad+d^{2})}\\ 0&\frac{1}{a+d}&0\\ \frac{-\sqrt{3}d}{2(a^{2}-b^{2}-ad+d^{2})}&0&\frac{2(b-a)+d}{2(a^{2}-b^{2}-ad+d^{2})}\end{pmatrix}. \tag{2.33}\]
The above matrix is diagonal for \(d=0\) (the contribution corresponding to \(\xi^{\prime}\)). For non-zero \(d\), the light neutrino mass matrix is no longer diagonalized by \(U_{\rm TB}\) alone, and a further rotation in the 13-plane diagonalizes \(M_{\nu}^{\prime}\) given in Eq. (2.33). So the
final diagonalizing matrix for the light neutrino mass matrix can be written as
\[U = \begin{pmatrix}\sqrt{\frac{2}{3}}&\frac{1}{\sqrt{3}}&0\\ -\frac{1}{\sqrt{6}}&\frac{1}{\sqrt{3}}&-\frac{1}{\sqrt{2}}\\ -\frac{1}{\sqrt{6}}&\frac{1}{\sqrt{3}}&\frac{1}{\sqrt{2}}\\ \end{pmatrix}\begin{pmatrix}\cos\theta&0&\sin\theta e^{-i\psi}\\ 0&1&0\\ -\sin\theta e^{i\psi}&0&\cos\theta\\ \end{pmatrix}U_{M}, \tag{2.34}\] \[= \begin{pmatrix}\sqrt{\frac{2}{3}}\cos\theta&\frac{1}{\sqrt{3}}&\sqrt{\frac{2}{3}}e^{-i\psi}\sin\theta\\ -\frac{\cos\theta}{\sqrt{6}}+\frac{e^{i\psi}\sin\theta}{\sqrt{2}}&\frac{1}{\sqrt{3}}&-\frac{\cos\theta}{\sqrt{2}}-\frac{e^{-i\psi}\sin\theta}{\sqrt{6}}\\ -\frac{\cos\theta}{\sqrt{6}}-\frac{e^{i\psi}\sin\theta}{\sqrt{2}}&\frac{1}{\sqrt{3}}&\frac{\cos\theta}{\sqrt{2}}-\frac{e^{-i\psi}\sin\theta}{\sqrt{6}}\\ \end{pmatrix}U_{M},\]
where, as in the TM\({}_{1}\) case, \(U_{M}\) is the Majorana phase matrix defined in Eq. (1.1). The structure of the mixing matrix coincides with the TM\({}_{2}\) mixing obtained in the context of \(A_{4}\) non-Abelian discrete flavor symmetry.
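The TM\({}_{2}\) property of this construction can also be verified numerically: the trimaximal column \((1,1,1)^{T}/\sqrt{3}\) of \(U_{\rm TB}\) is an exact eigenvector of \(M_{\nu}\) for any values of \(a\), \(b\) and \(d\), since it is an eigenvector of \(M_{R}\) with eigenvalue \(a+d\). A minimal Python sketch, with arbitrary placeholder parameters and the prefactor \(yv\) set to one, is:

```python
import numpy as np

# Arbitrary placeholder values for a, b, d of Eq. (2.31); the prefactor y*v is set to 1.
a, b, d = 1.0 + 0.3j, 0.6 - 0.2j, 0.25 + 0.1j

mD = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=complex)
MR = (np.array([[a + 2*b/3, -b/3,      -b/3],
                [-b/3,       2*b/3,     a - b/3],
                [-b/3,       a - b/3,   2*b/3]])
      + np.array([[0, 0, d], [0, d, 0], [d, 0, 0]]))

Mnu = -mD.T @ np.linalg.inv(MR) @ mD     # type-I seesaw, M_nu = -m_D^T M_R^{-1} m_D

w = np.ones(3) / np.sqrt(3)              # second (trimaximal) column of U_TB
print(np.allclose(Mnu @ w, -w / (a + d)))   # True: eigenvector with eigenvalue -1/(a+d)
```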
As mentioned earlier, the mixing matrix in Eq. (1.9) and the corresponding mass matrix in Eq. (1.8) obey the underlying \(\mu-\tau\) symmetry (also known as the \(\mu-\tau\) permutation symmetry). Since this exact permutation symmetry is now ruled out by the observed non-zero \(\theta_{13}\), another class of flavor-CP models, known as \(\mu-\tau\) reflection symmetry [96], is of particular interest. This symmetry can be expressed as the transformation:
\[\nu_{e}\to\nu_{e}^{C},\ \nu_{\mu}\to\nu_{\tau}^{C},\ \nu_{\tau}\to\nu_{\mu}^{C} \tag{2.35}\]
where 'C' stands for the charge conjugation of the corresponding neutrino field, under which the neutrino mass term remains unchanged. The scheme leads to the predictions \(\theta_{23}=45^{\circ}\) and \(\delta_{\rm CP}=90^{\circ}\) or \(270^{\circ}\), and is still experimentally viable [20]. Under this \(\mu-\tau\) reflection symmetry, the elements of the lepton mixing matrix satisfy
\[|U_{\mu i}|=|U_{\tau i}|\ \ \ \ \ {\rm where}\ \ \ i=1,2,3. \tag{2.36}\]
Such a mixing scheme is also known as cobimaximal (CBM) mixing scheme [179]. Eq. (2.36) indicates that the moduli of \(\mu\) and \(\tau\) flavor elements of the \(3\times 3\) neutrino mixing matrix are equal. With these constraints, the neutrino mixing matrix can be parametrized as [180, 181]
\[U_{0} = \left(\begin{array}{ccc}u_{1}&u_{2}&u_{3}\\ v_{1}&v_{2}&v_{3}\\ v_{1}^{*}&v_{2}^{*}&v_{3}^{*}\\ \end{array}\right), \tag{2.37}\]
where the entries in the first row, the \(u_{i}\)'s, are real (and non-negative) with trivial (vanishing) values of the Majorana phases. The \(v_{i}\) satisfy the orthogonality condition \({\rm Re}(v_{j}v_{k}^{*})=\delta_{jk}-u_{j}u_{k}\). In Ref. [181] it was argued that the mass matrix leading to the mixing matrix given in Eq. (2.37) can be written as
\[M_{0} = \left(\begin{array}{ccc}a&d&d^{*}\\ d&c&b\\ d^{*}&b&c^{*}\\ \end{array}\right), \tag{2.38}\]
where \(a,b\) are real and \(d,c\) are complex parameters. As a consequence of the symmetry given in Eqs.(2.36)-(2.38), we obtain the predictions for maximal \(\theta_{23}=45^{\circ}\) and \(\delta_{\rm CP}=90^{\circ}\) or \(270^{\circ}\) in the basis where the charged leptons are considered to be diagonal. This scheme, however, still leaves room for nonzero \(\theta_{13}\). Realization of such a mixing pattern is possible with various discrete flavor symmetries (\(A_{4},\Delta(27)\), etc.), for example, see Refs. [182, 183, 184, 185, 186].
Earlier, we mentioned that fixed mixing schemes such as BM, TBM, GR, HG are ruled out and require specific corrections to accommodate non-zero \(\theta_{13}\) and \(\delta_{\rm CP}\). These corrections can also provide a deviation of \(\theta_{23}\) from
\(45^{\circ}\). For example, to achieve experimentally viable TM\({}_{1}\) and TM\({}_{2}\) mixing, we have shown instances where an additional contribution to the neutrino sector beyond the TBM structure generates the necessary corrections. Such corrections can, however, be realized in various ways; a non-trivial contribution in the charged lepton sector is one such possibility [150].
Note that the most recent best-fit values for \(\theta_{23}\) (Tab. 1.1) prefer the lower octant (\(\theta_{23}<45^{\circ}\)) for NO and the upper octant (\(\theta_{23}>45^{\circ}\)) for IO. Given this, it seems well motivated to introduce a partial \(\mu-\tau\) reflection symmetry, for which \(\delta_{\rm CP}\) and \(\theta_{23}\) are not fixed but correlated [187]
\[|U_{\mu 1}|=|U_{\tau 1}| : \cos\delta_{\rm CP}=\frac{(c_{23}^{2}-s_{23}^{2})(c_{12}^{2}s_{ 13}^{2}-s_{12}^{2})}{4c_{12}s_{12}c_{23}s_{23}s_{13}}, \tag{2.39}\] \[|U_{\mu 2}|=|U_{\tau 2}| : \cos\delta_{\rm CP}=\frac{(c_{23}^{2}-s_{23}^{2})(c_{12}^{2}-s_{ 12}^{2}s_{13}^{2})}{4c_{12}s_{12}c_{23}s_{23}s_{13}}. \tag{2.40}\]
The \(\delta_{\rm CP}-\theta_{23}\) correlations in Eqs. (2.39) and (2.40) for the partial \(\mu-\tau\) reflection symmetry (\(|U_{\mu 1}|=|U_{\tau 1}|\), \(|U_{\mu 2}|=|U_{\tau 2}|\)) are given in Fig. 2.4. These correlations partially overlap the \(1\sigma\), \(2\sigma\), \(3\sigma\) regions but do not include the best-fit values.
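For orientation, the correlations of Eqs. (2.39) and (2.40) are easy to evaluate numerically. The sketch below uses sample mixing angles close to (but not identical with) current best-fit values; the numbers are illustrative only.

```python
import numpy as np

# Sample mixing angles in degrees (illustrative values only, not a global fit).
th12, th13, th23 = np.deg2rad([33.4, 8.6, 49.0])
s12, c12 = np.sin(th12), np.cos(th12)
s13 = np.sin(th13)
s23, c23 = np.sin(th23), np.cos(th23)

den = 4 * c12 * s12 * c23 * s23 * s13
cosd_1 = (c23**2 - s23**2) * (c12**2 * s13**2 - s12**2) / den   # Eq. (2.39)
cosd_2 = (c23**2 - s23**2) * (c12**2 - s12**2 * s13**2) / den   # Eq. (2.40)

print(f"|U_mu1| = |U_tau1|:  cos(delta_CP) = {cosd_1:+.3f}")
print(f"|U_mu2| = |U_tau2|:  cos(delta_CP) = {cosd_2:+.3f}")
```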
A similar investigation of partial \(\mu-\tau\) reflection symmetry can also be performed for the 3+1 neutrino mixing scheme, denoted as (3+1)\(\nu\), leading to the \(\delta_{\rm CP}/\theta_{23}\) correlations as given below [182]
\[|U_{\mu 1}|=|U_{\tau 1}| : \cos\delta_{\rm CP}=\frac{(a_{1}^{2}+b_{1}^{2})-(c_{1}^{2}+d_{1 }^{2})}{2(c_{1}d_{1}-a_{1}b_{1})} \tag{2.41}\] \[|U_{\mu 2}|=|U_{\tau 2}| : \cos\delta_{\rm CP}=\frac{(a_{2}^{2}+b_{2}^{2})-(c_{2}^{2}+d_{2 }^{2})}{2(a_{2}b_{2}-c_{2}d_{2})}\] (2.42) \[|U_{\mu 3}|=|U_{\tau 3}| : \cos\delta_{\rm CP}=\frac{(a_{3}^{2}+b_{3}^{2})-(c_{3}^{2}+d_{3 }^{2})}{2(a_{3}b_{3}-c_{3}d_{3})}\] (2.43) \[|U_{\mu 4}|=|U_{\tau 4}| : \tan^{2}\theta_{24}=\sin^{2}\theta_{34} \tag{2.44}\]
where
\[a_{1} =c_{12}s_{13}s_{23}s_{24}s_{34}-c_{12}c_{23}c_{34}s_{13}, b_{1} =c_{34}s_{12}s_{23}-c_{12}c_{13}c_{24}s_{14}s_{34}+c_{23}s_{12}s_{24} s_{34}, \tag{2.45}\] \[c_{1} =c_{23}c_{24}s_{12}+c_{12}c_{13}s_{14}s_{24}, d_{1} =c_{12}c_{24}s_{13}s_{23},\] \[a_{2} =c_{12}(c_{34}s_{23}+c_{23}s_{24}s_{34})+s_{12}c_{13}c_{24}s_{14}s_ {34}, b_{2} =s_{12}(s_{13}s_{23}s_{24}s_{34}-c_{23}c_{34}s_{13}),\] \[c_{2} =c_{12}c_{23}c_{24}-s_{12}c_{13}s_{14}s_{24}, d_{2} =s_{12}c_{24}s_{13}s_{23},\] \[a_{3} =c_{13}(c_{23}c_{34}-s_{23}s_{24}s_{34}), b_{3} =c_{24}s_{13}s_{14}s_{34},\] \[c_{3} =s_{13}s_{14}s_{24}, d_{3} =c_{13}c_{24}s_{23}.\]
The results for the (3+1)\(\nu\) scenario are gathered in Fig. 2.5. From the overlapped regions in Fig. 2.5, it is clear that if we demand a total \(\mu-\tau\) reflection symmetry, i.e., \(|U_{\mu i}|=|U_{\tau i}|\) for all four columns, then the atmospheric mixing angle \(\theta_{23}\) is restricted to a narrow region around \(45^{\circ}\) and the Dirac CP phase is also restricted to values around maximal CP violation. Since the present best-fit values for \(\theta_{23}\) clearly favor a deviation from \(45^{\circ}\) (see Fig. 1.1), a partial \(\mu-\tau\) reflection symmetry in the (3+1)\(\nu\) scenario may accommodate the appropriate deviation. From Fig. 2.5, we find that the \(\delta_{\rm CP}-\theta_{23}\) correlations partly overlap the (\(1\sigma\), \(2\sigma\) and \(3\sigma\)) \(\delta_{\rm CP}-\theta_{23}\) regions for all three partial \(\mu-\tau\) reflection symmetries, but only for the \(|U_{\mu 1}|=|U_{\tau 1}|\) symmetry does the correlation region include the best-fit value for IO. In addition, it is worth mentioning that the partial \(\mu-\tau\) reflection symmetry in the fourth column (\(|U_{\mu 4}|=|U_{\tau 4}|\)) restricts \(\theta_{34}\) within the range \(4.9^{\circ}-9.8^{\circ}\) [182].
### Flavor Symmetry and Neutrino Mass Models
Apart from the observed pattern of neutrino mixing, the origin of the tiny neutrino mass is still unknown to us. For decades, the only evidence for non-zero neutrino masses has come from neutrino oscillation experiments, which are sensitive to the mass-squared differences and not to the absolute scale of neutrino masses (see Tab. 1.1). On the other hand, bounds on absolute neutrino masses come from cosmological surveys [50, 188] and the end-point spectrum of tritium beta decay. Combining all these results, we arrive at the neutrino mass ranges discussed in the Introduction, which are at most at the sub-electronvolt scale. Based on the established neutrino mass spectrum, different models
try to explain it. For a detailed discussion of many mechanisms to generate light neutrino masses, the readers are referred to Refs. [189, 190, 191, 192, 193, 194] and references therein. Here, we briefly mention a few of them that are frequently used in realistic flavor model building.
Seesaw models
One of the most popular ways to generate tiny neutrino mass is to start with the high scale suppressed lepton number violating Weinberg operator \(HL^{T}LH/\Lambda\) mentioned earlier, which is non-renormalizable. This can give rise to various seesaw mechanisms such as type-I, type-II, type-III, inverse and linear seesaw [195, 196, 197, 198, 199]. They can be embedded in the form of Eq. (1.12), see Ref. [200]. However, to elucidate the observed pattern of neutrino mixings, one must include additional ingredients such as discrete flavor symmetries. Examples of discrete flavor symmetric models for type-I, type-II, and inverse seesaw which are very efficient in explaining tiny mass as well as correct mixing as observed by the neutrino oscillation experiments can be found in Refs. [155, 156, 157, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215]. Based on their scale, these flavor symmetric seesaw mechanisms may include a wide range of phenomenological implications in lepton flavor violation, collider phenomenology or leptogenesis [216].
Radiative mass models
Another class of models that can be connected with flavor symmetries are radiative neutrino mass models in which masses of neutrinos are absent at the tree-level and are generated at 1- or higher-loop orders. These models explain the lightness of neutrino masses with sizable Yukawa couplings and suppression provided by the loop factor. A broad review of various radiative neutrino models can be found in Ref. [194]. The key feature of these models is that they can be verified experimentally because the masses of exotic particles that take part in the neutrino mass generation are in the TeV range, which the current colliders' experiments can probe. Furthermore, these models may contribute to electric dipole moments, anomalous magnetic moments and meson decays, matter-antimatter asymmetry [194, 217, 218]. Most interestingly, some radiative models naturally incorporate potential DM candidates [219, 220]. Additional symmetries that explain tiny neutrino masses also stabilize the DM. On top of that, radiative neutrino mass models with discrete flavor symmetries can also explain the observed lepton mixing for obvious reasons, for instance, modular \(S_{3}\) and \(A_{4}\)[221, 222, 223, 224, 225].
Neutrino mass sum rules
Neutrino mass mechanisms augmented with discrete flavor symmetries predict a range of neutrino masses and mixings and can yield interesting correlations between several observables, such as leptonic mixing angles, phases, and neutrino masses. Models that have these features enhance the testability at neutrino experiments. Discrete flavor symmetric models can give rise to sum rules for neutrino mixing angles and masses. The mixing sum rules relate the leptonic mixing angles to the Dirac CP-violating phase \(\delta_{\rm CP}\)[226, 227, 228, 145]. For example, considering appropriate deviations, the approximate mixing sum rules can be written as [229, 230, 231]
\[\sin^{2}\theta_{12} \simeq \frac{1}{2}+\sin\theta_{13}\cos\delta, \tag{2.46}\] \[\sin^{2}\theta_{12} \simeq \frac{1}{3}+\frac{2\sqrt{2}}{3}\sin\theta_{13}\cos\delta, \tag{2.47}\]
for BM and TBM mixing, respectively. For the implication of the mixing sum rules at the neutrino oscillation experiments, see Ref. [228]. On the other hand, the mass sum rules [232, 233, 234, 235, 236, 237], which describe the interrelation between the three complex neutrino eigenvalues are particularly important because of their substantial implication [238, 239, 240, 241] in the prediction of the effective mass parameter (\(m_{\beta\beta}\)) appearing in the neutrinoless double beta decay described in Eq. (1.5). Theoretically,
it is natural to wonder if these sum rules are related to residual or accidental symmetry. The neutrino mass sum rules can originate from discrete flavor symmetric models in which neutrino masses can arise from the aforementioned mass generation mechanisms. However, the most general mass sum rule can be written as [240]
\[A_{1}\tilde{m}_{1}^{p}e^{i\chi_{1}}+A_{2}\tilde{m}_{2}^{p}e^{i\chi_{2}}+A_{3} \tilde{m}_{3}^{p}e^{i\chi_{3}}=0, \tag{2.48}\]
where \(\tilde{m}_{i}\) are the three complex mass eigenvalues, \(p\neq 0\), \(\chi_{i}\in[0,2\pi]\) and \(A_{i}>0\). Here \(A_{i}\) and \(\chi_{i}\) stand for appropriate coefficients and phase factors (without the Majorana phases). The power \(p\) of the complex mass eigenvalues characterizes the sum rule. For example, a simple sum rule can be obtained when a type-I seesaw mass mechanism is augmented by \(A_{5}\) non-Abelian discrete flavor symmetry leading to [162]
\[\frac{1}{\tilde{m}_{1}}+\frac{1}{\tilde{m}_{2}}=\frac{1}{\tilde{m}_{3}}. \tag{2.49}\]
Comparing Eq. (2.48) and Eq. (2.49) we find that \(A_{i}=1,\chi_{1,2}=0,\chi_{3}=\pi\) and \(p=-1\). Such mass sum rules (i.e., when \(p=-1\)) are called _inverse sum rules_, obtained from a diverse combination of neutrino mass mechanisms and discrete flavor symmetries [114, 155, 232, 242, 243].
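To illustrate how such an inverse sum rule constrains the mass spectrum, one can combine Eq. (2.49) with the measured mass-squared differences: treating the Majorana phases as free, the rule is satisfiable only if \(1/\tilde{m}_{3}\) can close a triangle with \(1/\tilde{m}_{1}\) and \(1/\tilde{m}_{2}\). The Python sketch below does this for normal ordering, using sample (illustrative) values of \(\Delta m^{2}_{21}\) and \(\Delta m^{2}_{31}\).

```python
import numpy as np

# Sample mass-squared differences in eV^2 (illustrative values, normal ordering).
dm21, dm31 = 7.4e-5, 2.5e-3

m1 = np.linspace(1e-4, 0.2, 4000)        # lightest neutrino mass (eV)
m2 = np.sqrt(m1**2 + dm21)
m3 = np.sqrt(m1**2 + dm31)

# Eq. (2.49) with free Majorana phases is solvable iff the triangle inequality
# |1/m1 - 1/m2| <= 1/m3 <= 1/m1 + 1/m2 holds.
ok = (np.abs(1/m1 - 1/m2) <= 1/m3) & (1/m3 <= 1/m1 + 1/m2)
print(f"sum rule satisfiable for m1 >= {m1[ok].min():.4f} eV")
```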
In Tab. 2.5, we have mentioned various simple sum rules for the complex light neutrino mass eigenvalues (\(\tilde{m}_{i}\)) obtained with combinations of neutrino mass generation mechanism and discrete flavor symmetries [236, 240, 241]. Similar but less simple mass sum rules can also be obtained for models with modular symmetry. The authors reported in Ref. [285] four different mass sum rules for models based on modular symmetries where a residual symmetry in the lepton sector is preserved. The reported sum rules (SR) within these modular invariance approaches (which also follow the most general form given in Eq. (2.48)) are called SR 1 (Case I and Case II), SR 2, SR 3, and SR 4 [285]. Studies show that these sum
\begin{table}
\begin{tabular}{c|c|c} \hline Sum Rule & Group & Seesaw Type \\ \hline \(\tilde{m}_{1}+\tilde{m}_{2}=\tilde{m}_{3}\) & \(A_{4}\)[244, 245, 246, 247, 248]; \(S_{4}\)[249]; \(A_{5}\)[73] & Weinberg \\ \(\tilde{m}_{1}+\tilde{m}_{2}=\tilde{m}_{3}\) & \(\Delta(54)\)[250]; \(S_{4}\)[251] & Type II \\ \hline \(\tilde{m}_{1}+2\tilde{m}_{2}=\tilde{m}_{3}\) & \(S_{4}\)[252] & Type II \\ \hline \(2\tilde{m}_{2}+\tilde{m}_{3}=\tilde{m}_{1}\) & \(A_{4}\)[113, 114, 115, 234, 245, 246, 247, 248, 253, 254, 255, 256] & Weinberg \\ & \(S_{4}\)[128, 260]; \(T^{\prime}\)[261, 262, 263, 264, 265, 266, 267, 268, 269, 260, 261, 262, 264, 266, 267, 265] & \\ \(2\tilde{m}_{2}+\tilde{m}_{3}=\tilde{m}_{1}\) & \(A_{4}\)[266] & Type II \\ \hline \(\tilde{m}_{1}+\tilde{m}_{2}=2\tilde{m}_{3}\) & \(S_{4}\)[267] & Dirac \\ \(\tilde{m}_{1}+\tilde{m}_{2}=2\tilde{m}_{3}\) & \(L_{e}-L_{\mu}-L_{\tau}\)[268] & Type II \\ \hline \(\tilde{m}_{1}+\frac{\sqrt{3}+1}{2}\tilde{m}_{3}=\frac{\sqrt{3}-1}{2}\tilde{m} _{2}\) & \(A_{5}\)[269] & Weinberg \\ \hline \(\tilde{m}_{1}^{-1}+\tilde{m}_{2}^{-1}=\tilde{m}_{3}^{-1}\) & \(A_{4}\)[153]; \(S_{4}\)[244, 251]; \(A_{5}\)[76, 162] & Type I \\ \(\tilde{m}_{1}^{-1}+\tilde{m}_{2}^{-1}=\tilde{m}_{3}^{-1}\) & \(S_{4}\)[251] & Type III \\ \hline \(2\tilde{m}_{2}^{-1}+\tilde{m}_{3}^{-1}=\tilde{m}_{1}^{-1}\) & \(A_{4}\)[114, 153, 232, 234, 235, 242, 270, 271, 272, 273, 274, 275, 276, 277, 243] & Type I \\ \(\tilde{m}_{1}^{-1}+\tilde{m}_{3}^{-1}=2\tilde{m}_{2}^{-1}\) & \(A_{4}\)[279, 280, 281]; \(T^{\prime}\)[282] & Type I \\ \hline \(\tilde{m}_{3}^{-1}\pm 2i\tilde{m}_{2}^{-1}=\tilde{m}_{1}^{-1}\) & \(\Delta(96)\)[283] & Type I \\ \hline \(\tilde{m}_{1}^{1/2}-\tilde{m}_{3}^{1/2}=2\tilde{m}_{2}^{1/2}\) & \(A_{4}\)[233] & Type I \\ \(\tilde{m}_{1}^{1/2}+\tilde{m}_{3}^{1/2}=2\tilde{m}_{2}^{1/2}\) & \(A_{4}\)[284] & Scotogenic \\ \hline \(\tilde{m}_{1}^{-1/2}+\tilde{m}_{2}^{-1/2}=2\tilde{m}_{3}^{-1/2}\) & \(S_{4}\)[237] & Inverse \\ \hline \end{tabular}
\end{table}
Table 2.5: Sum rules for the complex light neutrino mass eigenvalues \(\tilde{m}_{i}\) defined in Eq. (2.48) obtained with various combinations of neutrino mass generation mechanisms and discrete flavor symmetries [236, 240, 241].
rules cannot be conclusively connected to a particular mass generation mechanism, discrete symmetry, or any remnant symmetry in the lepton sector [286]. Rather, they are more connected to a minimal breaking of the symmetries, which introduces a minimal number of parameters related to the non-zero neutrino mass eigenvalues. However, the Majorana phases (connected with the complex mass eigenvalues \(\tilde{m}_{i}\)) also appear in the mass sum rules, making neutrinoless double beta decay experiments an ideal place to test them. The occurrence of such sum rules severely constrains the prediction for \(m_{\beta\beta}\), see Fig. 3.3 in Section 3.2. A detailed discussion of the role of individual sum rules can be found in Refs. [238, 239, 240, 241, 286, 236].
Flavor Models: Majorana vs Dirac neutrinos
While neutrino oscillation experiments are insensitive to the nature of neutrinos, experiments looking for lepton number violating signatures can probe the Majorana nature of neutrinos. Neutrinoless double beta decay is one such lepton number violating process which has been searched for at several experiments without any positive result so far but giving stricter bounds on the effective neutrino mass, as discussed in Chapter 1. Although negative results at neutrinoless double beta decay experiments do not prove that the light neutrinos are of Dirac nature, it is nevertheless suggestive enough to come up with scenarios predicting Dirac neutrinos with correct mass and mixing. There have been several proposals already that can generate tiny Dirac neutrino masses [287, 288, 289, 290, 291, 292]. In Refs. [293, 294, 295, 296], the authors showed that it is possible to propose various seesaw mechanisms (type-I, inverse and linear seesaw) for Dirac neutrinos with \(A_{4}\) discrete flavor symmetry. Here the symmetry is chosen in such a way that it naturally explains the hierarchy among different terms in the neutrino mass matrix, contrary to the conventional seesaws where this hierarchy is ad-hoc.
Texture zeroes
When a flavor neutrino mass matrix contains zero elements, it is called a texture-zero mass matrix. Texture zeros make neutrino mass and mixing models simpler. Such constructions are interesting as they lead to a reduction of the number of independent mass parameters in the theory5. There is a vast literature on the subject, e.g. see the list of references in the recent work [301].
Footnote 5: The number of free parameters in neutrino models can also be reduced with the requirement of a zero mass determinant [297, 298] or the zero trace (“zero-sum” \(m_{\nu_{1}}+m_{\nu_{2}}+m_{\nu_{3}}=0\)) condition [299, 300].
In the three-generation scenario, the low energy Majorana neutrino mass matrix \(M_{\nu}\) is a \(3\times 3\) complex symmetric matrix having six independent elements given by
\[M_{\nu} = \begin{pmatrix}m_{ee}&m_{e\mu}&m_{e\tau}\\ m_{e\mu}&m_{\mu\mu}&m_{\mu\tau}\\ m_{e\tau}&m_{\mu\tau}&m_{\tau\tau}\end{pmatrix}. \tag{2.50}\]
For three generations and one-zero textures, all six one-zero textures \(G_{1}-G_{6}\) in Tab. 2.6 can accommodate the experimental data [302], see also Refs. [303, 304, 305].
The crosses "\(\times\)" stand for the non-zero entries and "\(-\)" represent symmetric elements (the matrices are assumed to be symmetric). There are 15 possible two-zero textures categorised in different classes6, as shown in Tab. 2.7. The sum of crosses in each matrix in Tabs. 2.6 and 2.7 gives the number of independent parameters of 5(4) for textures with one (two) zeroes. Textures with more than two independent zeroes appear to be excluded by the experiments7 (three independent
parameters are not enough to accommodate neutrino data).
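Testing a given texture against data amounts to reconstructing \(M_{\nu}\) from the mixing parameters and masses and checking which entries can (approximately) vanish. A minimal Python sketch of this reconstruction is given below; the oscillation parameters, phases and lightest mass are illustrative sample values, and the convention \(M_{\nu}=U^{*}\,{\rm diag}(m_{1},m_{2},m_{3})\,U^{\dagger}\) is one common choice.

```python
import numpy as np

# Illustrative sample values (not a global fit): sines of the mixing angles,
# Dirac phase, Majorana phases and the lightest mass for normal ordering.
s12, s13, s23 = np.sqrt([0.304, 0.022, 0.570])
c12, c13, c23 = np.sqrt(1 - np.array([s12, s13, s23])**2)
delta = np.deg2rad(197.0)
alpha1, alpha2 = 0.3, 1.1
m1 = 0.005                                   # eV
m2 = np.sqrt(m1**2 + 7.4e-5)
m3 = np.sqrt(m1**2 + 2.5e-3)

U23 = np.array([[1, 0, 0], [0, c23, s23], [0, -s23, c23]])
U13 = np.array([[c13, 0, s13*np.exp(-1j*delta)], [0, 1, 0],
                [-s13*np.exp(1j*delta), 0, c13]])
U12 = np.array([[c12, s12, 0], [-s12, c12, 0], [0, 0, 1]])
U = U23 @ U13 @ U12 @ np.diag([1, np.exp(1j*alpha1), np.exp(1j*alpha2)])

# Majorana mass matrix in the charged-lepton diagonal basis (one common convention).
Mnu = U.conj() @ np.diag([m1, m2, m3]) @ U.conj().T

flav = ['e', 'mu', 'tau']
for i in range(3):
    for j in range(i, 3):
        print(f"|m_{flav[i]}{flav[j]}| = {1e3*abs(Mnu[i, j]):6.3f} meV")
```

Scanning over the phases and the lightest mass then shows which of the two-zero textures can be realized within the experimentally allowed ranges.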
In the \(3\times 3\) scenario, among the 15 possible textures, only 7 are phenomenologically allowed [308, 309, 106]. In Ref. [310], nine patterns were compatible with data (two of them only marginally for the considered experimental data). The results, in general, depend strongly on the available data. After the measurement of non-zero \(\theta_{13}\), updated analyses can be found in Refs. [311, 312, 313] and the number of viable textures has been reduced significantly, namely:
1. Class \(A\) is allowed only for NH.
2. Class \(B\) is allowed for both NH and IH. \(B_{1}\) and \(B_{4}\) predict negative values of \(\cos\delta_{\rm CP}\) whereas \(B_{2}\) and \(B_{3}\) predict positive values of \(\cos\delta_{\rm CP}\). The textures \(B_{1}\) and \(B_{3}\) predict \(\theta_{23}\) in the lower octant and the textures \(B_{2}\) and \(B_{4}\) predict \(\theta_{23}\) in the upper octant for NH. The predictions are opposite for the IH.
3. Class \(C\) is allowed mainly in the IH. This class is marginally allowed in the NH when \(\theta_{23}\) is close to \(45^{\circ}\). In this class, when \(\theta_{23}<45^{\circ}\) one must have \(-90^{\circ}<\delta_{\rm CP}<90^{\circ}\), and when \(\theta_{23}>45^{\circ}\) one must have \(90^{\circ}<\delta_{\rm CP}<270^{\circ}\).
4. The textures in the classes \(D\), \(E\) and \(F\) are forbidden by the data.
In Ref. [108], a numerical scan over neutrino parameters using an adaptive Monte Carlo generator confirmed seven allowed patterns with two zeroes (the CP conserving case) and identified additional cases for non-degenerate neutrino masses (mass-ordered scenarios). In Ref. [314], the stability of the phenomenological consequences of texture zeros under radiative corrections in the type-I seesaw scenario is discussed. It has been shown that additional patterns are allowed under certain conditions due to these effects. Comparing these results with the classification of two-zero textures, three of the six forbidden textures turn out to agree with experimental data due to the renormalization group evolution of the Yukawa
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
\(A_{1}\) & \(A_{2}\) & \(D_{1}\) & \(D_{2}\) & & & \\ \hline
\(\left(\begin{array}{ccc}0&0&\times\\ 0&\times&\times\\ -&-&\times\end{array}\right)\) & \(\left(\begin{array}{ccc}0&\times&0\\ -&\times&\times\\ 0&-&\times\end{array}\right)\) & \(\left(\begin{array}{ccc}\times&\times&\times\\ -&0&0\\ -&0&\times\end{array}\right)\) & \(\left(\begin{array}{ccc}\times&\times&\times\\ -&\times&0\\ -&0&0\end{array}\right)\) & & & \\ \hline\hline
\(B_{1}\) & \(B_{2}\) & \(B_{3}\) & \(B_{4}\) & \(E_{1}\) & \(E_{2}\) & \(E_{3}\) \\ \hline
\(\left(\begin{array}{ccc}\times&\times&0\\ -&0&\times\\ 0&-&\times\end{array}\right)\) & \(\left(\begin{array}{ccc}\times&0&\times\\ 0&\times&\times\\ -&-&0\end{array}\right)\) & \(\left(\begin{array}{ccc}\times&0&\times\\ 0&0&\times\\ -&-&\times\end{array}\right)\) & \(\left(\begin{array}{ccc}\times&\times&0\\ -&\times&\times\\ 0&-&0\end{array}\right)\) & \(\left(\begin{array}{ccc}0&\times&\times\\ -&\times&\times\\ -&-&0\end{array}\right)\) & \(\left(\begin{array}{ccc}0&\times&\times\\ -&0&\times\\ -&-&\times\end{array}\right)\) & \(\left(\begin{array}{ccc}0&\times&\times\\ -&\times&0\\ -&0&\times\end{array}\right)\) \\ \hline\hline
\(C\) & \(F_{1}\) & \(F_{2}\) & \(F_{3}\) & & & \\ \hline
\(\left(\begin{array}{ccc}\times&\times&\times\\ -&0&\times\\ -&-&0\end{array}\right)\) & \(\left(\begin{array}{ccc}\times&0&0\\ 0&\times&\times\\ 0&-&\times\end{array}\right)\) & \(\left(\begin{array}{ccc}\times&0&\times\\ 0&\times&0\\ -&0&\times\end{array}\right)\) & \(\left(\begin{array}{ccc}\times&\times&0\\ -&\times&0\\ 0&0&\times\end{array}\right)\) & & & \\ \hline
\end{tabular}
\end{table}
Table 2.7: The fifteen possible two-zero textures of the Majorana neutrino mass matrix, grouped into the classes \(A\)–\(F\). Crosses “\(\times\)” denote non-vanishing entries and “\(-\)” the entries fixed by the symmetry of the matrix.
couplings
\[\begin{pmatrix}\times&0&0\\ 0&\times&\times\\ 0&-&\times\end{pmatrix},\begin{pmatrix}\times&0&\times\\ 0&\times&0\\ -&0&\times\end{pmatrix},\begin{pmatrix}\times&\times&0\\ -&\times&0\\ 0&0&\times\end{pmatrix}.\]
The matrices of the form
\[\begin{pmatrix}0&\times&\times\\ -&0&\times\\ -&-&\times\end{pmatrix},\begin{pmatrix}0&\times&\times\\ -&\times&\times\\ -&-&0\end{pmatrix},\begin{pmatrix}0&\times&\times\\ -&\times&0\\ -&0&\times\end{pmatrix},\]
remain forbidden. Tab. 2.8, extracted from Ref. [314], summarizes the situation. In that work, the quasi-degenerate cases are also considered; however, they are already excluded by cosmological and \((\beta\beta)_{0\nu}\) data, see Figs. 1.3 and 1.2 (and are therefore not repeated in Tab. 2.8).
The question is whether models with zeros in the neutrino mass matrix constructions (either in the effective mass matrix \(M_{\nu}\) or in \(M_{D},M_{R}\)) have anything to do with discrete symmetries. _The answer is positive_. There are examples in the literature where texture zeros are related to discrete symmetries. For instance, Ref. [315] discusses an \(A_{4}\)-based one-zero-texture neutrino mass model within the inverse seesaw mechanism that also accounts for DM. The obtained effective neutrino mass matrix is of the form
\[M_{\nu}=\begin{pmatrix}X+X^{{}^{\prime}}&0&\Delta+\Delta^{{}^{\prime}}\\ 0&\Delta^{{}^{\prime}}&X^{{}^{\prime}}\\ \Delta+\Delta^{{}^{\prime}}&X^{{}^{\prime}}&\Delta^{{}^{\prime\prime}}\end{pmatrix}. \tag{2.51}\]
Unprimed and doubly-primed elements collect the light neutrino mass matrix contributions, while primed elements come from the inverse seesaw mass mechanism construction. For more on the connection between mass matrix textures and discrete symmetries and for further references, see Ref. [316] (the inverse neutrino mass matrix with one texture zero and TM mixing) or Ref. [317] (textures of the neutrino mass matrix with \(S_{4}\) symmetry in which some pairs of mass matrix elements are equal, up to a sign).
The textures of neutrino matrices can also be studied using the matrix theory, in particular an inverse eigenvalue (singular value) problem (IEP). It is a method that reconstructs a matrix from a given spectrum [318, 319]. This is an important field on its own with many applications. One application which seems natural from the neutrino physics point of view is the reconstruction of the neutrino mass matrix from experimental constraints. Especially important would
\begin{table}
\begin{tabular}{l l l} \hline \hline Neutrino masses & Majorana phases & \\ \hline Normal ordering, \(m_{1}\approx 0\) & arbitrary & \(\left(\begin{array}{cc}\cdot&\cdot&\cdot\\ \cdot&\cdot&\cdot\end{array}\right)\) \\ \hline Inverted ordering, \(m_{3}\approx 0\) & \(\varphi_{1}\approx\varphi_{2}\) & \(\left(\begin{array}{cc}\cdot&\circ&\circ\\ \circ&\cdot&\cdot\end{array}\right)\) \\ & \(\varphi_{1}\not\approx\varphi_{2}\) & \(\left(\begin{array}{cc}\cdot&\cdot\\ \cdot&\cdot&\cdot\end{array}\right)\) \\ \hline \hline \end{tabular}
\end{table}
Table 2.8: Possible positions of radiatively generated texture zeros in the neutrino mass matrix, marked by a “\(\circ\)”. For \(\varphi_{1}\approx\varphi_{2}\approx\pi\), at most, 3 of the four zeros can be produced simultaneously. Table taken from Ref. [314] where a convention for Majorana phases in the PMNS matrix is \(\phi_{1,2}=-\alpha/2\), see Eq. (1.1).
be one class of IEP, namely the inverse eigenvalue problem with prescribed entries, whose goal can be stated as follows: given a set \(\mathcal{L}=\{(i_{\nu},j_{\nu})\}_{\nu=1}^{l}\), \(1\leq i_{\nu},j_{\nu}\leq n\), a set of \(l\) values \(\{a_{1},\ldots,a_{l}\}\) and a set of \(n\) values \(\{\lambda_{1},\ldots,\lambda_{n}\}\), find a matrix \(A\in\mathbb{M}^{n\times n}\) such that
\[\sigma(A)=\{\lambda_{1},\ldots,\lambda_{n}\}\,, \tag{2.52}\] \[A_{i_{\nu},j_{\nu}}=a_{\nu}\text{ for }\nu=1,\ldots,l\,, \tag{2.53}\]
where \(\sigma(A)\) is the spectrum of the matrix \(A\). Some of the classical results on the IEP are the Schur-Horn theorem [320, 321], the Mirsky theorem [322], and the Sing-Thompson theorem [323, 324]. For example, the Mirsky theorem says
**Theorem 2.1**.: _A square matrix with eigenvalues \(\lambda_{1},\ldots,\lambda_{n}\) and main diagonal elements \(a_{1},\ldots,a_{n}\) exists if and only if_
\[\sum_{i=1}^{n}a_{i}=\sum_{i=1}^{n}\lambda_{i}\,. \tag{2.54}\]
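The content of Theorem 2.1 can be made concrete with a small constructive example (not taken from the cited references): for \(n=2\), once the prescribed diagonal and eigenvalues share the same sum, the off-diagonal entries can always be chosen to match the required determinant.

```python
import numpy as np

# Prescribed diagonal entries and eigenvalues with equal sums (Mirsky's condition).
a1, a2 = 1.0, 4.0          # prescribed diagonal
l1, l2 = 2.0, 3.0          # prescribed eigenvalues; a1 + a2 = l1 + l2

# Fix A12 = 1 and choose A21 so that det(A) = l1*l2; trace and determinant
# then fix the characteristic polynomial and hence the spectrum.
A = np.array([[a1, 1.0],
              [a1*a2 - l1*l2, a2]])

print(np.sort(np.linalg.eigvals(A)))   # eigenvalues 2 and 3, as prescribed
```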
There are many extensions of these classical results, allowing arbitrary location of the prescribed elements, see for example Ref. [325]. This method can be applied to studies of discrete symmetries in the neutrino sector. As an example one can systematically examine the structure of the mass matrix in the texture-zeros approach, where one would like to reconstruct a matrix with a prescribed spectrum agreeing with the experimental observations and with zero matrix elements in specific locations. However, it is also well suited for studies of more general structures of the neutrino mass and mixing matrices. Some applications of the IEP in the neutrino sector have already been done, e.g., in Ref. [85] the discussion of the mass spectrum in the seesaw scenario is given. The application of the inverse singular value problem in the study of neutrino mixing matrices was considered in Ref. [89].
A different approach to modeling the structure of the neutrino mass matrix, also using tools from matrix theory, has been proposed in Ref. [326], where it has been applied to the Altarelli-Feruglio model [113, 114] and its perturbation away from the TBM regime. This approach is based on the observation that one of the invariants of an \(n\times n\) complex matrix \(A\) is the square of the Frobenius norm
\[R^{2}=\|A\|_{F}^{2}=Tr\left(AA^{\dagger}\right)=\sum_{i,j}^{n}|a_{ij}|^{2}\,, \tag{2.55}\]
which can be interpreted as the equation of an \(n^{2}\)-dimensional hyper-sphere. This makes it natural to express the elements of the matrix \(A\) in terms of spherical coordinates. For the physically interesting case of \(n=3\), the elements of the neutrino mass matrix \(M_{\nu}\) (assumed to be real) can be parametrized as
\[M_{11} =R\sin(\chi)\left(\prod_{i=1}^{6}\sin(\phi_{i})\sin(\phi_{7}) \right),\;M_{12}=R\sin(\chi)\left(\prod_{i=1}^{6}\sin(\phi_{i})\cos(\phi_{7}) \right)\,, \tag{2.56}\] \[M_{13} =R\sin(\chi)\left(\prod_{i=1}^{5}\sin(\phi_{i})\cos(\phi_{6}) \right),\;M_{21}=R\sin(\chi)\left(\prod_{i=1}^{4}\sin(\phi_{i})\cos(\phi_{5}) \right)\,,\] (2.57) \[M_{22} =R\sin(\chi)\left(\prod_{i=1}^{3}\sin(\phi_{i})\cos(\phi_{4}) \right),\;M_{23}=R\sin(\chi)\left(\prod_{i=1}^{2}\sin(\phi_{i})\cos(\phi_{3}) \right)\,,\] (2.58) \[M_{31} =R\sin(\chi)\sin(\phi_{1})\cos(\phi_{2})\,,\;M_{32}=R\sin(\chi) \cos(\phi_{1})\,,\;M_{33}=R\cos(\chi)\,. \tag{2.59}\]
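A quick numerical check (with randomly chosen angles, purely for illustration) confirms that the parametrization of Eqs. (2.56)-(2.59) automatically satisfies the Frobenius-norm constraint of Eq. (2.55):

```python
import numpy as np

rng = np.random.default_rng(1)
R = 2.3
chi, *phi = rng.uniform(0.1, 0.5 * np.pi, size=8)   # chi and phi_1 ... phi_7

sin_prod = lambda k: np.prod(np.sin(phi[:k]))        # sin(phi_1) ... sin(phi_k)
sc = np.sin(chi)

M = R * np.array([
    [sc*sin_prod(6)*np.sin(phi[6]), sc*sin_prod(6)*np.cos(phi[6]), sc*sin_prod(5)*np.cos(phi[5])],
    [sc*sin_prod(4)*np.cos(phi[4]), sc*sin_prod(3)*np.cos(phi[3]), sc*sin_prod(2)*np.cos(phi[2])],
    [sc*sin_prod(1)*np.cos(phi[1]), sc*np.cos(phi[0]),             np.cos(chi)],
])

print(np.isclose(np.linalg.norm(M), R))   # True: ||M||_F = R, Eq. (2.55)
```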
As a consequence of this parametrization, we get interrelations between different elements, and also, it is straightforward to produce texture-zeros by a particular choice of angles. Moreover, the Frobenius norm can also be expressed in terms
of singular values
\[\|A\|_{F}=\sqrt{\sum_{i=1}^{q}\sigma_{i}^{2}}\,, \tag{2.60}\]
where \(q\) is the rank of \(A\) and similarly to (2.55) can be interpreted as a \(q\)-dimensional sphere. For the normalized matrix in the case \(n=3\), \(\bar{A}=\frac{A}{\|A\|_{F}}\), the normalized singular values can then be defined as \(\bar{\sigma}_{1}=\sin\alpha\sin\beta\), \(\bar{\sigma}_{2}=\sin\alpha\cos\beta\), \(\bar{\sigma}_{3}=\cos\alpha\), where \(\alpha,\beta\in[0,\frac{\pi}{2}]\). Thus, one can see that only two angles \(\alpha\) and \(\beta\) are necessary to describe normalized singular values of \(\bar{A}\) reflecting the fact that only two independent mass ratios are relevant. As we have seen the 9-dimensional sphere (2.55) carries more information than the one given in the singular value space, requiring in total 8 angles to describe it fully. The additional 6 angles are related to the unitary matrices of the singular value decomposition of \(\bar{A}\)
\[\bar{A}=L^{\dagger}\Sigma R\,. \tag{2.61}\]
Finally, we can express angles \(\alpha\) and \(\beta\) in terms of singular values as
\[\sin\alpha=\sqrt{\frac{\bar{\sigma}_{1}^{2}+\bar{\sigma}_{2}^{2}}{\bar{\sigma }_{1}^{2}+\bar{\sigma}_{2}^{2}+\bar{\sigma}_{3}^{2}}}\,,\,\sin\beta=\sqrt{ \frac{\bar{\sigma}_{1}^{2}}{\bar{\sigma}_{1}^{2}+\bar{\sigma}_{2}^{2}}}\,. \tag{2.62}\]
Another interesting connection between discrete flavor symmetries and specific mixing matrices arises in the framework of so-called magic matrices. An \(n\times n\) matrix \(A\) is _magic_ if the row sums and the column sums are all equal to a common number \(\alpha\):
\[\sum_{i=1}^{n}A_{ij}=\sum_{j=1}^{n}A_{ij}=\alpha. \tag{2.63}\]
The neutrino mass matrix can be magic [327], in the sense that the sum of each column and the sum of each row are all identical, see also Ref. [328]. Zeros in the magic neutrino mass matrix are discussed in Ref. [329]. In Ref. [330], a magic neutrino mass model within the type-I and type-II seesaw paradigm is investigated. It is based on a neutrino mass model triggered by the \(A_{4}\) discrete flavor symmetry, where a minimal scenario with two right-handed neutrinos and broken \(\mu-\tau\) symmetry leads to leptogenesis.
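As a side remark (a simple linear-algebra fact, not specific to any of the cited models), equal row sums imply that \((1,1,1)^{T}\) is an eigenvector of a magic matrix; for a real symmetric magic mass matrix this means the trimaximal column \((1,1,1)^{T}/\sqrt{3}\) automatically appears in the diagonalizing matrix. A minimal check:

```python
import numpy as np

# A symmetric magic matrix: every row sum and column sum equals 1.0.
M = np.array([[0.3, 0.5, 0.2],
              [0.5, 0.1, 0.4],
              [0.2, 0.4, 0.4]])
print(M.sum(axis=0), M.sum(axis=1))        # identical row and column sums

v = np.ones(3) / np.sqrt(3)                # trimaximal direction
print(np.allclose(M @ v, M.sum(axis=1)[0] * v))   # True: eigenvector with eigenvalue alpha
```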
Neutrino and quark models
Neutrino and quark sectors are qualitatively different regarding their mixing and mass patterns. However, there are ambitious attempts to consider them together. For instance, in Ref. [331] the quark-lepton complementarity (QLC) relations [332, 333, 334, 335, 336] are considered with \(\theta_{12}+\theta_{12}^{q}\simeq 45^{\circ}\) and \(\theta_{23}+\theta_{23}^{q}\simeq 45^{\circ}\). The QLC relations indicate that there could be a quark-lepton symmetry based on a flavor symmetry. In Ref. [331] a discrete symmetry \(A_{4}\times Z_{2}\) is considered in the context of charged leptons and quarks, and tribimaximal neutrino mixing, see also Ref. [151]. We should mention that there is also an intriguing relation, the so-called King-Mohapatra-Smirnov (KMS) relation [333, 337, 338, 339], \(U=U_{CKM}^{\star}U_{TBM}^{\star}\), which gives \(|U_{e3}|\simeq|\sin\theta_{C}/\sqrt{2}|\simeq 0.156\). For a recent discussion on the possible origin of this relation, see Ref. [340]. The KMS relation will also be discussed in Section 2.6 on discrete symmetries and the GUT scale. For the application of dihedral groups to the lepton and quark sectors, see also Ref. [341].
### Flavor and Generalized CP Symmetries
As discussed earlier (see Eq. (1.1)), the PMNS mixing matrix is characterized by three mixing angles, which are well measured, as well as three CP phases, namely the Dirac CP phase (\(\delta_{\rm CP}\)) and the two Majorana phases, which are largely
unconstrained. The flavor symmetry approach can predict the three mixing angles, which makes it attractive to explore whether the CP phases can be predicted in a similar way. A hint of how this can be achieved can be seen in the transformations given in Eq. (2.35) for the \(\mu\)-\(\tau\) reflection symmetry. This overall symmetry operation can be seen as a canonical CP transformation augmented with the \(\mu\)-\(\tau\) exchange symmetry, which will later be seen to be a special case of a "generalized" CP transformation. This example predicts both the atmospheric mixing angle and \(\delta_{\rm CP}\) to be maximal. Therefore, such transformations represent an interesting extension of the discrete symmetry framework discussed earlier with an additional invariance under a CP symmetry. We will first discuss the basic properties of CP transformations. We then discuss combining CP and the flavor symmetry group \(G_{f}\) in a consistent framework.
The action of a generalized CP transformation \(X\) on a field operator is given by
\[\varphi^{\prime}(x_{0},\vec{x})\,=\,X\,\varphi^{*}(x_{0},-\vec{x}) \tag{2.64}\]
where \(X\) is a constant unitary matrix i.e. \(XX^{\dagger}=\mathbb{1}\). The requirement of CP invariance on the neutrino mass matrix \(M_{\nu}\) leads to the condition
\[X^{T}M_{\nu}X\,=\,M_{\nu}^{*} \tag{2.65}\]
For example, in the case of the \(\mu\)-\(\tau\) reflection symmetry, the mass matrix given in Eq. (2.38) is invariant under
\[\mathcal{S}^{T}M_{0}\mathcal{S}=M_{0}^{*}, \tag{2.66}\]
where the transformation matrix is given by
\[\mathcal{S} = \left(\begin{array}{ccc}1&0&0\\ 0&0&1\\ 0&1&0\end{array}\right). \tag{2.67}\]
On comparing Eq. (2.66) with Eq. (2.65), we can easily identify \(\mathcal{S}\) as the CP transformation \(X\) in this case. Note that if neutrinos are Dirac particles, the condition in Eq. (2.65) is replaced by \(X^{\dagger}m_{\nu}^{\dagger}m_{\nu}X\,=\,(m_{\nu}^{\dagger}m_{\nu})^{*}\). It can be shown explicitly that only a generalized transformation leads to vanishing CP invariants, thus leading to vanishing CP phases [342, 343].
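The identification of \(\mathcal{S}\) with \(X\) can be confirmed numerically: for any real \(a\), \(b\) and complex \(c\), \(d\), the matrix \(M_{0}\) of Eq. (2.38) satisfies the generalized CP condition of Eq. (2.65) with \(X=\mathcal{S}\). A short Python sketch with randomly drawn parameter values:

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.normal(size=2)                       # real parameters of Eq. (2.38)
c, d = rng.normal(size=2) + 1j * rng.normal(size=2)

M0 = np.array([[a,          d,  np.conj(d)],
               [d,          c,  b         ],
               [np.conj(d), b,  np.conj(c)]])

S = np.array([[1, 0, 0],
              [0, 0, 1],
              [0, 1, 0]])

# mu-tau reflection as a generalized CP symmetry: S^T M0 S = M0^*, Eq. (2.66).
print(np.allclose(S.T @ M0 @ S, np.conj(M0)))   # True
```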
Now consider the case with both flavor and CP symmetry. The existence of both discrete flavor and generalized CP symmetries determines the possible structure of the generalized CP transformation matrices. For example, if we consider a discrete flavor symmetry \(G_{f}\) in the lepton sector, the transformation matrix given in Eq. (2.67) must satisfy the _consistency condition_ given by [343, 344]
\[\mathcal{S}\rho(g)^{*}\mathcal{S}^{-1}=\rho(u(g)), \tag{2.68}\]
(or equivalently in the general case \(\mathcal{S}\to X\)) where \(u\) is an automorphism of the group \(G_{f}\) which maps an element \(g\in G_{f}\) into \(g^{\prime}=u(g)\in G_{f}\), where the latter belongs to the conjugacy class of \(g^{-1}\). Since this automorphism is class-inverting, it is an outer automorphism8 of \(G_{f}\). One can derive this condition by applying, in order, a generalized CP transformation, subsequently a flavor transformation associated with a group element \(g\in G_{f}\), and an inverse generalized CP transformation. As the Lagrangian remains unchanged, the resulting transformation must correspond to an element of \(G_{f}\). This can lead to interesting predictions for the leptonic CP phases (Dirac and Majorana) and the mixing angles.
Footnote 8: An outer automorphism is one that cannot be represented as conjugation by a group element, i.e., it is not of the form \(g\to hgh^{-1}\) with \(h\in G_{f}\).
Following Ref. [345], Fig. 2.6 indicates that the CP symmetry is typically considered in the neutrino sector. As an example, we consider \(G_{f}=S_{4}\) and CP [260, 346, 347, 348, 349]. Our discussion follows specifically the case \(G_{f}=S_{4}\), \(G_{\nu}=Z_{2}\times\,CP\) and \(G_{e}=Z_{3}\) (see Ref. [260] for details). Given this choice of residual group for \(G_{e}\), the only possible choice for a generator of the group is \(Q=T\). In this case, it is also found that all choices of generators \(Z\) and \(X\) are related through similarity transformations to the following three options: \(Z=S\), \(Z=SU\) and \(Z=U\). Using the consistency relation given in Eq. (2.68) along with the unitary and symmetric nature of \(X\), the following forms of CP transformations are obtained (in the 3-dim real representation of \(S_{4}\), see Eq. (2.3)): \(X_{1}\propto 1\), \(X_{2}\propto S\), \(X_{3}\propto U\), \(X_{4}\propto SU\), \(X_{5}\propto TST^{2}S\), \(X_{6}\propto T^{2}STS\), where \(S,T,U\) are the three generators of the group \(S_{4}\). Also note that the \(X_{i}\) being proportional to representation matrices of elements of \(G_{f}\) is just a coincidence in this case and does not hold generally.
A few other examples of generalized CP transformations combined with various discrete groups are listed here: \(\Delta(27)\)[350, 351], \(A_{4}\)[352], \(\Delta(48)\)[353, 354], \(\Delta(6n^{2})\)[355, 356], \(\Delta(96)\)[357], \(A_{5}\)[358, 359], and \(\Delta(3n^{2})\)[360].
### Higher Order Discrete Groups
In the above, we have discussed fixed mixing schemes (BM, TBM, GR, HG), which are already ruled out by data, and mixing schemes (TM\({}_{1}\), TM\({}_{2}\), CBM), which are consistent with observations. We have also seen that smaller discrete groups (such as \(A_{4},S_{4},\Delta(27),T_{7}\)) can still explain the correct mixing with 'appropriate adjustments' of the old flavor symmetric models. We will now discuss a few aspects of explaining lepton mixing with larger groups. In this approach, we look for new groups that predict a different leptonic mixing pattern. The strategy is to start with a much higher-order group \(G_{f}\), which essentially breaks down to two residual groups \(G_{e}\) and \(G_{\nu}\) for the charged lepton and neutrino sectors. For example, when we start with a discrete group of order less than 1536, and the residual symmetries are fixed at \(G_{e}=Z_{3}\) and \(G_{\nu}=Z_{2}\times Z_{2}\), the only surviving groups which can reproduce the correct neutrino mixing are \(\Delta(6n^{2})\) with \(n=10\), \((Z_{18}\times Z_{6})\rtimes S_{3}\) and \(\Delta(6n^{2})\) with \(n=16\)[361]. In a more updated study, considering discrete subgroups of \(U(3)\), it has been shown that the smallest group which satisfies the \(3\sigma\) allowed ranges of neutrino oscillation data is \(\Delta(6n^{2})\) with \(n=18\), i.e., the order of the group is 1944 [122]. This study also proposes an analytical formula to predict full columns of the lepton mixing matrix. For a review of similar studies, the reader is referred to Ref. [362] and references therein.
Figure 2.6: General scheme considered in literature for ways of merging discrete flavor groups with CP effects; figure taken from Ref. [345]. For a concrete example, see Section 4.1 and associated phenomenological studies in the subsequent Chapters.
### Flavor Symmetry and Grand Unified Theory
To have a complete understanding of the flavor problem in particle physics, there are proposals to combine family symmetry and Grand Unified Theory (GUT) [363, 364, 365]. These special versions of Grand Unified Theories of Flavor include a discrete flavor symmetry giving GUT predictions through the associated Clebsch factors [366, 367], explaining the observed lepton mixing. This construction can provide a novel connection, e.g. as discussed in Section 2.3 in the form of the KMS relation [333, 337, 338, 339], between the smallest lepton mixing angle, the reactor angle \(\theta_{13}\), and the largest quark mixing angle, the Cabibbo angle \(\theta_{C}\)
\[\theta_{13}=\theta_{C}/\sqrt{2}. \tag{2.69}\]
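Numerically, the relation of Eq. (2.69) is easy to confront with data; with \(\sin\theta_{C}\approx 0.225\) and a sample value \(\sin^{2}\theta_{13}\approx 0.022\) (both illustrative inputs rather than fit results) one finds:

```python
import numpy as np

sin_thC = 0.225                      # Cabibbo angle (illustrative input)
s13_sample = np.sqrt(0.022)          # sample sin(theta13) from sin^2(theta13) ~ 0.022

s13_GUT = sin_thC / np.sqrt(2)       # prediction of Eq. (2.69)
print(f"predicted sin(theta13) = {s13_GUT:.3f}")     # ~ 0.159
print(f"sample    sin(theta13) = {s13_sample:.3f}")  # ~ 0.148
```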
See other works on the subject [368, 369, 229, 340]. The symmetry choice makes several combinations of discrete flavor symmetry and GUT possible. For classification of such models, the readers are referred to the review [370] and references therein for explicit models. These models, however, greatly depend on the fields' symmetry-breaking pattern and vacuum alignment. In addition, such unified constructions have consequences in leptogenesis [371, 372, 373, 374, 375] and at LHC [376]. Recently, GUT frameworks have also been implemented in modular invariance approach [377, 378, 379, 380, 381].
### Flavor Symmetry and the Higgs Sector
There are many possibilities to extend the SM Higgs sector, e.g. by introducing singlet scalar fields, two- (2HDM) and multi-Higgs doublets (NHDM), and triplet multiplets. These models include charged and neutral Higgs bosons with rich phenomenology, including modifications of the SM-like Higgs couplings, flavour-changing neutral currents (FCNC), CP violation effects, and in cosmology, scalar DM candidates, and modification of the phase transitions in the early Universe. Treating the 2HDM and NHDM doublets as copies of the SM Higgs doublet, in analogy to fermion generations, is a natural choice. Also, many BSM models, including gauge unification models, supersymmetry, and even string theory constructions, inherently lead to several Higgs doublets at the electroweak scale [382]. However, SM scalar sector extensions lead to many free parameters. For example, the free parameters exceed one hundred in the three-Higgs-doublet model (3HDM). One can impose additional flavour symmetry to reduce the number of free parameters. To discuss BSM Higgs sectors in the context of flavor symmetries, the starting point is the Yukawa Lagrangian. The imposition of a flavor symmetry on the leptonic part of the Yukawa Lagrangian has been discussed in Refs. [370, 78, 383]. In the SM, the application of family symmetry is limited due to Schur's lemma [384], which implies that for three-dimensional mass matrices of charged leptons and neutrinos, their diagonalization matrices are proportional to the identity. Thus, the PMNS matrix becomes trivial. This drawback can be overcome in two ways. One approach is to break the family symmetry group by a scalar singlet, the flavon [26]. Despite many attempts, it has failed to reconstruct the PMNS matrix. In the second approach, a non-trivial mixing can be achieved by extending the Higgs sector by additional multiplets [385, 386, 387]. The general classification of which symmetry groups can be implemented in the scalar sector of the 2HDM is studied in Refs. [388, 389, 390]. An analogous analysis of the overall possible set of finite reparametrization symmetry groups in the 3HDM is presented in Ref. [391]. For the four-Higgs-doublet model, a systematic study of finite non-abelian symmetry groups that can be imposed on the scalar sector is given in Ref. [392].
Recent activities in the field of flavour symmetry and the Higgs sector are mostly related either to new analytical and model building studies (see e.g. Refs. [393, 394, 395]) or to numerical verification of existing ones. For the second approach, multi-Higgs doublet models were examined, which include two Higgs doublets (2HDM) [384] and three Higgs doublets
(3HDM) [396]. Finite, non-Abelian, discrete subgroups of the \(U(3)\) group up to order 1035 were investigated to search for specific groups that could explain the masses and mixing matrix elements of leptons. For both 2HDM and 3HDM models, both Dirac and Majorana neutrinos were examined. From the model building point of view, it was also assumed that the total Lagrangian has a full flavor symmetry, but the Higgs potential is only form invariant [397, 398], meaning that only the scalar potential coefficients may change while the terms in the potential do not vary. The scan was performed with the help of the computer algebra system GAP [399]. A restriction to subgroups with irreducible three-dimensional faithful representations was imposed to significantly reduce the enormous number of subgroups to be evaluated. The results were utterly negative for the 2HDM [384]: for such an extension of the SM, up to the highest-order discrete groups considered, there is no discrete symmetry that would fully match the masses and parameters of the mixing matrix for leptons. However, the 3HDM provides nontrivial relations among the lepton masses and mixing angles, leading to nontrivial results [396]. Namely, _some of the scanned groups provide either the correct neutrino masses and mixing angles or the correct masses of the charged leptons_. The group \(\Delta(96)\) is the smallest group compatible with the experimental data on neutrino masses and PMNS mixing, whereas \(S_{4}\) is an approximate symmetry for Dirac neutrino mixings, with parameters staying within \(3\sigma\) of the measured \(\theta_{12},\theta_{23},\theta_{13}\), and \(\delta_{\rm CP}\). Thus, phenomenological investigations based on the 3HDM results or further theoretical studies of discrete groups beyond the 3HDM are worthwhile directions for future study.
### Modular Symmetry
Recently, in an appealing proposal to understand the flavor pattern of fermions, the idea of modular symmetry [113, 124] has been reintroduced. In conventional approaches, a plethora of models exist based on non-Abelian discrete flavor symmetries and finite groups. The spectrum of models is so large that it is difficult to obtain a clear clue about the underlying flavor symmetry. Additionally, there are a few major disadvantages to this conventional approach. Firstly, the effective Lagrangian of a typical flavor model includes a large set of flavons. Secondly, although the vacuum alignment of the flavons essentially determines the flavor structure of quarks and leptons, auxiliary symmetries are often needed to forbid unwanted operators from contributing to the mass matrix. The third and most crucial disadvantage of conventional approaches is the flavor symmetry breaking sector, which brings many unknown parameters and hence compromises minimality. On the contrary, the primary advantage of models with modular symmetry [113, 124] is that flavon fields might not be needed, and the flavor symmetry can be uniquely broken by the vacuum expectation value of the modulus \(\tau\). Here, the Yukawa couplings are written as modular forms, functions of only one complex parameter, i.e., the modulus \(\tau\), which transforms non-trivially under the modular symmetry. Furthermore, all the higher-dimensional operators in the superpotential are completely determined by modular invariance if supersymmetry is exact; hence auxiliary Abelian symmetries are not needed in this case. As in the conventional model-building approach, models with modular symmetry can be highly predictive. The fundamental advantage is that the neutrino masses and mixing parameters can be expressed in terms of a few input parameters. Modular symmetry is a property of supersymmetric theories in which the action \(\mathcal{S}\) remains unchanged under the modular transformations [124, 400]
\[\gamma:\tau\rightarrow\frac{a\tau+b}{c\tau+d},\ \ \chi^{(I)}\rightarrow(c\tau+d)^{-k_{I}}\rho^{(I)}(\gamma)\chi^{(I)}, \tag{2.70}\]
here \(\gamma\) is an element of the modular group, with \(a\), \(b\), \(c\), and \(d\) integers satisfying \(ad-bc=1\), \(\tau\) is an arbitrary complex number in the upper half of the complex plane, \(\rho^{(I)}(\gamma)\) denotes the representation matrix of the modular transformation \(\gamma\), and \(k_{I}\) is the weight associated with the supermultiplet \(\chi^{(I)}\). With this setup, the superpotential \(\mathcal{W}(\tau,\chi)\) is also invariant
under the modular transformation and can be expanded in terms of the supermultiplets \(\chi^{(I_{i})}\) (for \(i=1,\cdots,n\)) as
\[\mathcal{W}(\tau,\chi)=\sum_{n}\sum_{\{I_{1},\ldots,I_{n}\}}Y_{I_{1} \ldots I_{n}}(\tau)\chi^{(I_{1})}\cdots\chi^{(I_{n})}\,, \tag{2.71}\]
where the modular forms transform as
\[Y_{I_{1}\ldots I_{n}}(\tau)\rightarrow(c\tau+d)^{k_{Y}}\rho_{Y} (\gamma)Y_{I_{1}\ldots I_{n}}(\tau). \tag{2.72}\]
Here the coefficients \(Y_{I_{1}\ldots I_{n}}(\tau)\) are modular forms and are the key elements of the modular symmetry approach. In the theory, \(k_{Y}\) and \(\rho_{Y}\) must satisfy \(k_{Y}=k_{I_{1}}+\cdots+k_{I_{n}}\), which can be used to constrain the charge assignments of superfields and modular forms. For example, for the modular group \(\Gamma_{4}\simeq S_{4}\), the functions \(Y(\tau)\) are modular forms of level \(N=4\) and weight \(2k_{I}\), with five linearly independent modular forms (\(Y_{i}(\tau)\) for \(i=1,2,\cdots,5\)). These five linearly independent forms \(Y_{i}(\tau)\) arrange themselves into two irreducible representations of \(S_{4}\), a doublet \(\mathbf{2}\) and a triplet \(\mathbf{3}^{\prime}\), which can be written as [379]
\[Y_{\mathbf{2}}(\tau)\equiv\begin{pmatrix}Y_{1}(\tau)\\ Y_{2}(\tau)\end{pmatrix}\,\quad Y_{3^{\prime}}(\tau)\equiv\begin{pmatrix}Y_{3}( \tau)\\ Y_{4}(\tau)\\ Y_{5}(\tau)\end{pmatrix}. \tag{2.73}\]
The modular forms \(Y_{\mathbf{2}}(\tau)\) and \(Y_{3^{\prime}}(\tau)\) dictate the flavor structure of the charged lepton and neutrino mass matrices, controlling the neutrino masses and mixing pattern. There are already several activities adopting this approach; for a few examples, see Refs. [400, 401, 412]. As modular symmetric models make it possible to understand fermionic mixing, there is scope for phenomenological studies of mixings and possible connections with the matter-antimatter asymmetry and DM [413, 414, 222, 408]. However, when testing modular symmetry in the context of current and future neutrino experiments like T2HK, JUNO, NO\(\nu\)A, and DUNE, it is very hard to obtain robust correlations among the neutrino oscillation parameters. A discussion in this direction can be found in Section 3.1. Nonetheless, the sum rules obtained in models based on modular symmetries with residual symmetries may shed some light in this regard [285].
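As a small illustration of the weight condition \(k_{Y}=k_{I_{1}}+\cdots+k_{I_{n}}\) discussed above, the following Python sketch checks whether a candidate superpotential term is allowed by modular invariance. The weight assignments are purely illustrative placeholders, not taken from any specific model.

```python
# Minimal sketch of the modular-weight bookkeeping for a superpotential term
# Y(tau) * chi_1 * ... * chi_n; the weight assignments below are illustrative only.

def term_is_modular_invariant(field_weights, modular_form_weight):
    """The term is allowed only if the modular form weight compensates
    the total weight carried by the superfields: k_Y = k_I1 + ... + k_In."""
    return modular_form_weight == sum(field_weights)

# Hypothetical assignment: lepton doublet (k=1), RHN (k=1), Higgs (k=0),
# coupled through a weight-2 modular form Y(tau).
print(term_is_modular_invariant([1, 1, 0], 2))   # True  -> term allowed
print(term_is_modular_invariant([1, 1, 0], 4))   # False -> term forbidden
```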
## 3 Flavor Symmetry at Intensity Frontier
Flavor symmetry can potentially provide an important link between the outstanding puzzle of neutrino mass-mixing with various other aspects of particle physics, cosmology, and astroparticle physics, such as neutrinoless double-beta decay, lepton flavor violating decays, the nature of DM, baryon asymmetry of the Universe, nonstandard interactions, etc. If a flavor symmetry connects these seemingly uncorrelated sectors, the constraints from various cosmological, collider, and neutrino experiments may probe the existence of such symmetry. Such connections and correlations allow us to probe discrete flavor symmetries at many frontiers. In this Chapter we discuss the so-called intensity frontier experiments which include low-energy rare processes like neutrinoless double beta decay, LFV, and long-baseline neutrino experiments. Since these rare LNV/LFV processes are predicted to be either absent or highly suppressed in the SM, even a single unambiguous event detection would signal new physics.
### Neutrino Oscillation Experiments
As discussed earlier, a wide class of models with various discrete flavor symmetry groups (\(G_{f}\)) exists. With high statistics and their ability to measure the mixing parameters more precisely, the current and future neutrino oscillation experiments
provide an excellent testing ground for flavor symmetry models. Such studies crucially depend on the breaking pattern of \(G_{f}\) into its residual subgroups for the charged lepton sector, \(G_{e}\), and the neutrino sector, \(G_{\nu}\). For example, in Ref. [415], the authors have studied the implications of breaking \(G_{f}\times CP\) (with \(G_{f}=A_{4},S_{4},A_{5}\)) into \(G_{e}=Z_{2}\), \(G_{\nu}=Z_{2}\times CP\) in the context of the ESSnuSB experiment [416]. Such a breaking pattern is usually encountered in the semi-direct approach to flavor model building [417]. In this approach the PMNS matrix depends on two free parameters for \(G_{f}=A_{4},S_{4}\), and \(A_{5}\) [39]. In a similar approach, Refs. [418, 419] considered the breaking pattern into \(G_{e}=Z_{k},k>2\) or \(Z_{m}\times Z_{n}\), \(m,n\geq 2\) and \(G_{\nu}=Z_{2}\times CP\) residual symmetries for the charged lepton and neutrino sectors, respectively, in the context of the ESSnuSB [420, 421], T2HK [422], DUNE [36], and JUNO [34] experiments. In this approach the PMNS matrix is more constrained and depends on a single free angle [423, 424, 425, 343]. In each case, distinct constraints were obtained on the neutrino oscillation parameters \(\delta_{\rm CP}\) and \(\theta_{23}\). In Ref. [415] it was demonstrated that out of the 11 (7) one-(two-)parameter models, five (five) are compatible with the present global data at \(3\sigma\).
In Fig. 3.1, we show the compatibility of one and two-parameter models with any potentially true values of \(\sin^{2}\theta_{23}\) and \(\delta_{\rm CP}\) in the context of ESSnuSB, T2HK, DUNE, and their combination [418]. The models are based on \(A_{5}\times CP\) (Model 1.1), \(A_{5}\times CP\) (Model 1.2), \(S_{4}\times CP\) (Model 1.3), \(S_{4}\times CP\) (Model 1.4), \(A_{5}\times CP\) (Model 1.5), \(A_{5}\) (Model 2.1), \(S_{4}\) (Model 2.2) and \(A_{5}\) (Model 2.3) discrete groups. A detailed discussion on the specification of each model and their individual compatibility with a combination of experiments can be found in Ref. [418].
Figure 3.1: Compatibility of one and two-parameter models with any potentially true values of \(\sin^{2}\theta_{23}\) and \(\delta_{\rm CP}\) in the context of ESSnuSB, T2HK, DUNE, and their combination. The dark (light) green, blue, and red shaded regions represent \(3\sigma\) (\(5\sigma\)) allowed regions for ESSnuSB, T2HK, and DUNE, respectively. Similarly, the continuous (dotted) cyan and yellow lines represent \(3\sigma\) (\(5\sigma\)) allowed regions for ESSnuSB combined with atmospheric data and ESSnuSB long-baseline experiments. The stars are the best fit values. Figure taken from the arXiv version of Ref. [418].
In Fig. 3.1, the dark (light) green, blue, and red shaded regions represent the \(3\sigma\) (\(5\sigma\)) allowed regions for ESSnuSB, T2HK, and DUNE, respectively. Similarly, the continuous (dotted) cyan and yellow lines represent the \(3\sigma\) (\(5\sigma\)) allowed regions for ESSnuSB combined with atmospheric data and ESSnuSB long-baseline experiments. The complementarity among these neutrino oscillation experiments offers some insight into distinguishing various classes of discrete flavor symmetric models. The best-fit values for models 1.1 and 1.2 fall outside the experimentally allowed range of parameters, whereas model 2.3 satisfies all experimental constraints. As shown in Fig. 3.2, the high-precision measurement of \(\sin^{2}\theta_{12}\) by JUNO will be crucial in discriminating among and excluding most of the considered models (see models 1.4 and 1.5 on the left and models 1.1, 1.2, and 1.4 on the right of the \(\sin^{2}\theta_{12}\) best-fit value).
Leveraging DUNE's excellent capability for \(\delta_{\rm CP}\) and \(\theta_{23}\) measurements, the prospects of generalized CP symmetries with texture zeros [426] and 'bi-large' mixing [427] have also been investigated. In another study [428], the authors have demonstrated an approach to construct operators for neutrino non-standard interactions based on \(A_{4}\) discrete symmetry and its feasibility at DUNE. For studies of consequences of partial \(\mu-\tau\) reflection symmetry at DUNE and Hyper-Kamiokande, see Refs. [182, 187, 429]. Guided by the considered discrete flavor symmetry \(G_{f}\), sum rules involving neutrino masses and mixing may also have inherent characteristics [99, 286, 430, 431] to be confronted with the neutrino experiments mentioned here. Usually, it is very hard to obtain specific correlations among the neutrino mixing parameters within the modular invariance approach and hence there are very few studies on the viability of these models in the context of neutrino oscillation experiments. Recently, in Ref. [432], the authors have discussed the implication of modular symmetry in neutrino oscillation experiments. In this work, three different \(A_{4}\) modular symmetric models were considered. The numerical predictions of these three models were tested in the context of T2HK, DUNE, and JUNO experiments, showing a relative comparison of the models and their compatibility with these experiments. Furthermore, for a discussion on testing non-standard neutrino interactions at neutrino oscillation originating from modular invariance approach (as well as flavor symmetry-based approach), see Ref. [433].
Figure 3.2: Compatibility of one-parameter models with any potentially true value of \(\sin^{2}\theta_{12}\) in the context of JUNO. The vertical gray line indicates the best-fit value of \(\sin^{2}\theta_{12}\) from global neutrino oscillation data discussed in Ref. [418]. Figure taken from the arXiv version of Ref. [418].
### Neutrinoless Double Beta Decay
Some flavor models based on discrete symmetries provide predictions for the absolute neutrino mass scale as well as the Majorana phases. Of particular interest are models that predict a correlation between the three complex neutrino mass eigenvalues and the Majorana phases, \(\tilde{m}_{i}=m_{i}e^{i\alpha_{i}}\), and hence allow only a certain portion of the parameter space shown in Fig. 1.2 for \(0\nu\beta\beta\). This has been extensively studied in the literature in the context of different flavor models, e.g. TBM [233], \(\mu-\tau\) [434], \(A_{4}\) [435, 436], \(S_{4}\) [249, 251, 437], \(A_{4}\times Z_{4}\) [438], and \(\Delta(3n^{2})\) [439]. A comprehensive summary of \(0\nu\beta\beta\) predictions from five categories of flavor models, namely, generalized CP, sum rules, charged lepton corrections, texture zeros, and modular symmetries, can be found in Ref. [440]. As discussed in Ref. [241], neutrino mass sum rules are present in over sixty flavor models [236, 240, 285, 286, 430]. Fig. 3.3 (updated from Ref. [241]) gives a representation of the predictions for \(m_{\beta\beta}\) from different mass sum rules, namely five sum rules connected with modular symmetry models (SR1-I, SR1-II, SR2, SR3, SR4) [285] and twelve sum rules connected with models that predict a correlation between the three complex neutrino mass eigenvalues [240]; see Section 2.3 for the explanation of the different sum rules. We can see clearly in Fig. 3.3 that the current KamLAND-Zen constraint [48] has already excluded some flavor models, and future experiments like nEXO [49] will be able to rule out the IO scenario for the remaining models, and also the NO scenario for some models.
Figure 3.3: A summary plot of the predictions for \(m_{\beta\beta}\) from different mass sum rules for NO and IO. The current experimental bound from KamLAND-Zen [48] is shown in gray and the future sensitivity of nEXO [49] is in purple. The different shading corresponds to different values of the nuclear matrix elements, leading to the weakest and strongest bounds on \(|m_{\beta\beta}|\). The figure is updated from Ref. [241].
### Lepton Flavor and Universality Violation
Flavor symmetry models can also make distinctive predictions for LFV and LFUV observables. The basic idea is that couplings between flavons and leptons can result in special flavor structures with specific LFV predictions. Thus, discovering an LFV signal can provide crucial information to distinguish flavor symmetries and new physics scenarios. See Refs. [239, 441] for reviews of the impact of flavor symmetry models on LFV processes. More recent studies of LFV in specific flavor models can be found in Refs. [442, 443, 444, 173, 445, 446]. For example, the FSS model [175] (reproducing TM\({}_{1}\) mixing with \(A_{4}\) discrete flavor symmetry) described in Section 2.2 also contributes to LFV decays such as \(\ell_{\alpha}\to\ell_{\beta}\gamma\) and \(\ell_{\alpha}\to 3\ell_{\beta}\) (\(\alpha,\beta=e,\mu,\tau\)). Predictions for these LFV decays crucially depend on the VEV alignment of the associated flavons, which also dictates the neutrino masses and mixing. Owing to the particular flavor structure given in Eqs. (2.21) and (2.22), the scotogenic part within this FSS framework only contributes to the LFV decays \(\mu\to e\gamma\) and \(\mu\to 3e\), and this puts a strong constraint on the allowed parameter space. In Fig. 3.4, we show the predictions for the branching ratios of \(\mu\to e\gamma\) (cyan dots, left panel) and \(\mu\to 3e\) (blue dots, right panel) against the scotogenic fermion (\(f\)) mass \(M_{f}\). In this FSS model, \(f\) is also a potential DM candidate. In both panels of Fig. 3.4, the dotted regions represent the \(3\sigma\) allowed regions which satisfy the correct neutrino masses and mixing [20] as well as the correct DM relic density. The horizontal red lines in both panels represent the current sensitivity of the MEG [447] and SINDRUM [448] experiments; they substantially constrain the allowed parameter space and restrict the DM mass to \(M_{f}\gtrsim 1750\) GeV. More interestingly, this \(A_{4}\) FSS model can be falsified by the future sensitivity (given by the horizontal magenta line) of the MEG II [449] and Mu3e Phase-I [450] experiments for the \(\mu\to e\gamma\) and \(\mu\to 3e\) decays, respectively. On the other hand, due to the considered flavor symmetry, the Yukawa couplings \(Y_{F}^{\tau}\), \(Y_{N}^{e}\) vanish and hence \(\tau\to 3e\) processes are strictly disallowed in this model. Thus, with the example of an \(A_{4}\) flavor symmetric scoto-seesaw framework, we find that flavor symmetry can have distinct consequences for lepton flavor violating decays.
In some cases, the flavor model predictions for LFV, especially in the tau sector, are expected to be probed in the
Figure 3.4: Branching ratios for \(\mu\to e\gamma\) (left panel) and \(\mu\to 3e\) (right panel) against scotogenic fermion mass \(M_{f}\) satisfying \(3\sigma\) allowed range of neutrino oscillation data [20] as well as correct DM relic density. The horizontal red and magenta lines represent current and future experimental sensitivity. Figure taken from Ref. [175].
near future by Belle II. The role of leptonic CP phases (which come out as a prediction in many flavor models) in the LFV observables has been explored recently in Ref. [451]. Now, considering the mixing of light and heavy RHNs, flavor symmetric models can also constrain the VEV of the flavon fields involved, which helps to realize the desired flavor structure to explain observed leptonic mixing. Following Eqs. (1.10)-(1.13), the leptonic non-unitarity can be defined as [452, 453, 454, 455]
\[U\simeq(\mathbb{I}-\eta)U_{\rm PMNS} \tag{3.1}\]
where the non-unitarity parameter is defined as \(\eta=\frac{1}{2}FF^{\dagger}\) with \(F=M_{D}M_{R}^{-1}\). The present bound on \(\eta\) obtained from various non-standard interactions can be summarized as [455, 456]
\[|\eta|\leq\left(\begin{array}{ccc}1.3\times 10^{-3}&1.2\times 10^{-5}&1.4\times 10^{-3}\\ 1.2\times 10^{-5}&2.2\times 10^{-4}&6.0\times 10^{-4}\\ 1.4\times 10^{-3}&6.0\times 10^{-4}&2.8\times 10^{-3}\end{array}\right). \tag{3.2}\]
In a flavor symmetric model, the mass matrices for the Dirac (\(M_{D}\)) and heavy Majorana (\(M_{R}\)) neutrinos appearing in \(\eta\) can be obtained with the involvement of the flavons. Hence, the constraints on \(\eta\) from Eq. (3.2) can, in principle, constrain the VEV of the associated flavon field. For example, considering an \(A_{4}\) flavor symmetric inverse seesaw scenario in Ref. [157], the authors showed that the flavon VEV (\(v_{f}\)) can be constrained as \(v_{f}\leq 6.15\lambda\) TeV, where \(\lambda\) is the ratio of the moduli of the coupling constants involved in \(M_{D}\) and \(M_{R}\), respectively. Along with the constraint on \(\eta\) given in Eq. (3.2), the Dirac CP phase \(\delta_{\rm CP}\) can also play an instrumental role in obtaining limits on \(v_{f}\); for a detailed discussion see Ref. [157].
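As a minimal numerical sketch of this constraint, the snippet below evaluates \(\eta=\frac{1}{2}FF^{\dagger}\) of Eq. (3.1) for purely illustrative (not fitted) Dirac and Majorana mass matrices and compares it entry by entry against the bound of Eq. (3.2).

```python
import numpy as np

# Illustrative (not fitted) Dirac and heavy Majorana mass matrices, in GeV.
M_D = 1.0e-2 * np.array([[1.0, 0.2, 0.1],
                         [0.2, 1.5, 0.3],
                         [0.1, 0.3, 2.0]])
M_R = np.diag([1.0e3, 2.0e3, 3.0e3])

F = M_D @ np.linalg.inv(M_R)        # F = M_D M_R^{-1}
eta = 0.5 * F @ F.conj().T          # eta = (1/2) F F^dagger, Eq. (3.1)

# Bounds quoted in Eq. (3.2).
eta_bound = np.array([[1.3e-3, 1.2e-5, 1.4e-3],
                      [1.2e-5, 2.2e-4, 6.0e-4],
                      [1.4e-3, 6.0e-4, 2.8e-3]])

print(np.abs(eta))
print("all entries within the bound:", bool(np.all(np.abs(eta) < eta_bound)))
```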
Flavor models can also shed some light on the recent hints for lepton flavor universality violation. In Ref. [457], a comprehensive analysis is given based on the flavor group \(G_{f}=D_{17}\times Z_{17}\) with scalar leptoquarks aiming to explain some anomalies in \(B\)-physics connected with lepton flavor universality between \(\tau\) and \(e,\mu\), as well as muon anomalous magnetic moment \(g-2\). For other studies of the LFUV in flavor symmetry models, see e.g. Refs. [458, 459, 460].
## 4 Flavor Symmetry at Energy Frontier
Flavor models with extended particle content can lead to interesting collider signals. For instance, in the flavor models with Majorana RHNs, the structure of the complex Dirac Yukawa couplings can be fixed by the flavor (and CP) symmetries. This in turn gives concrete predictions for the LNV/LFV signatures associated with the RHNs, depending on their mass spectrum. In this section, we will illustrate the collider phenomenology in a class of models with residual flavor and CP symmetries. The specific discrete flavor symmetry groups \(G_{f}\) chosen here are the series of groups \(\Delta(6\,n^{2})\)[461], known to give several interesting neutrino mixing patterns [355, 360, 361, 439, 462]. As discussed in Refs. [463, 464], this framework provides an excellent probe of flavor symmetries in collider experiments. For example, it can result in one of the three RHNs to be very long-lived at some special parameter points, termed as points of _enhanced residual symmetry_ (ERS), which can be searched through _long-lived particle_ (LLP) searches [465, 466]. In comparison, the remaining two RHNs can be probed via prompt/displaced vertex signals at the LHC [467, 468] or future hadron collider FCC-hh [469, 470]; see discussion in Section 4.2.
### Example Group: \(\Delta(6n^{2})\)
The discrete groups \(\Delta(6\,n^{2})\), (\(n\in Z\), \(n\geq 2\)) [461], can be characterized by four generators \(a\), \(b\), \(c\) and \(d\) along with the identity element \(e\), fulfilling the relations
\[a^{3}\,=\,e\,,\,\,\,c^{n}\,=\,e\,,\,\,\,d^{n}\,=\,e\,,\,\,\,c\,d\,=\,d\,c\,,\, \,\,a\,c\,a^{-1}\,=\,c^{-1}d^{-1}\,,\,\,\,a\,d\,a^{-1}\,=\,c\,,\]
\[b^{2}\,=\,e\,,\,\,\,(a\,b)^{2}\,=\,e\,,\,\,\,b\,c\,b^{-1}\,=\,d^{-1}\,,\,\,\,b \,d\,b^{-1}\,=\,c^{-1}\,. \tag{4.1}\]
Upon breaking of the flavor group \(G_{f}\) at low energies, the neutrino and charged lepton sectors are still invariant under residual flavor and CP symmetry groups. The residual symmetry in the charged lepton sector is chosen to be a diagonal abelian \(Z_{3}\) subgroup, i.e. \(G_{\ell}=Z_{3}^{\rm(D)}\), while in the neutrino sector, we choose the residual symmetry to be \(G_{\nu}=Z_{2}\times{\rm CP}\). The generators of the \(Z_{2}\) symmetry, \(Z(\mathbf{r})\), and of the CP symmetry, \(X(\mathbf{r})\), commute for all representations \(\mathbf{r}\) of \(G_{f}\). The transformations related to CP symmetry correspond to the automorphisms of the flavor group. The differences between the residual symmetries \(G_{\ell}\) and \(G_{\nu}\) determine the forms of the lepton mixing matrix, the charged lepton mass matrix, the neutrino Yukawa matrix \(Y_{D}\) and the RHN Majorana mass matrix \(M_{R}\). Since we explicitly choose the charged lepton mass matrix to be diagonal, the charged lepton sector does not contribute to the lepton mixing. As for the neutrino sector, we assume the Dirac neutrino Yukawa coupling matrix \(Y_{D}\) to be invariant under \(G_{\nu}\), while the Majorana matrix \(M_{R}\) breaks neither \(G_{f}\) nor CP. The light neutrino masses are obtained by the type-I seesaw formula [195, 196]:
\[M_{\nu}=-v^{2}Y_{D}M_{R}^{-1}Y_{D}^{T}\,, \tag{4.2}\]
where \(v\) is the SM Higgs VEV.
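For orientation, the following short sketch evaluates the seesaw relation of Eq. (4.2) numerically; the Yukawa couplings, the RHN mass scale and the choice \(v=174\) GeV for the Higgs VEV are assumptions made only for this illustration.

```python
import numpy as np

v = 174.0                               # assumed Higgs VEV convention, in GeV
Y_D = 1.0e-7 * np.array([[1.0, 0.3, 0.1],
                         [0.3, 2.0, 0.5],
                         [0.1, 0.5, 3.0]])   # illustrative O(10^-7) Yukawas
M_R = 250.0 * np.identity(3)            # illustrative degenerate TeV-scale RHNs, in GeV

M_nu = -v**2 * Y_D @ np.linalg.inv(M_R) @ Y_D.T    # type-I seesaw, Eq. (4.2)
masses_eV = np.abs(np.linalg.eigvalsh(M_nu)) * 1.0e9

print(masses_eV)   # sub-eV light neutrino masses, the expected ballpark
```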
As an example, we consider a particular case with generator of \(Z_{2}\) symmetry \(Z\,=\,c^{n/2}\), where \(c\) is one of the generators of the group \(\Delta(6n^{2})\) and the corresponding CP transformations reads \(X(s)=\,a\,b\,c^{s}\,d^{2s}\,P_{23}\), where \(s\) is a parameter that runs from \(0\) to \((n-1)\) and \(P_{23}\) is the permutation matrix in the 2-3 plane. For this case, the form of \(Y_{D}\) is given by
\[Y_{D}\,=\,\Omega^{s}(3)\,R_{13}(\theta_{L})\,\left(\begin{array}{ccc}y_{1}& 0&0\\ 0&y_{2}&0\\ 0&0&y_{3}\end{array}\right)\,R_{13}(-\theta_{R})\,\Omega^{s}(3^{\prime})^{ \dagger}\;, \tag{4.3}\]
where the angles \(\theta_{L}\) and \(\theta_{R}\) are free parameters, with values in the range \([0,\pi)\) and \(R_{ij}(\theta)\) denotes rotation by an angle \(\theta\) in the \(ij\) plane. \(\Omega^{s}(3)\) is a unitary matrix connected with \(X(3)(s)\), with the following structure:
\[\Omega^{s}(3)\,=\,e^{i\phi_{s}}\,U_{\rm TBM}\,\left(\begin{array}{ccc}1&0&0\\ 0&e^{-3i\phi_{s}}&0\\ 0&0&-1\end{array}\right) \tag{4.4}\]
(where \(\phi_{s}=\pi s/n\)), whereas the form of the unitary matrix \(\Omega^{s}(3^{\prime})\) depends on whether \(s\) is even or odd, i.e.
\[\Omega^{s\;{\rm even}}(3^{\prime})\,=\,U_{\rm TBM}\;,\,\,\,\Omega^{s\;{\rm odd }}(3^{\prime})\,=\,U_{\rm TBM}\,\left(\begin{array}{ccc}i&0&0\\ 0&1&0\\ 0&0&i\end{array}\right)\;, \tag{4.5}\]
with \(U_{\rm TBM}\) given in Eq. (1.9). Finally, we can find the form for the PMNS mixing matrix as
\[U\,=\,\Omega^{s}(3)\,R_{13}(\theta_{\rm L}-\psi)\,K_{\nu}\,, \tag{4.6}\]
where \(K_{\nu}\) is a diagonal matrix with entries equal to \(\pm 1\) and \(\pm i\), making neutrino masses non-negative, and the angle \(\psi\) is defined by
\[\tan^{2}\psi=\frac{m_{1}+m_{3}-\sqrt{m_{1}^{2}+m_{3}^{2}+2m_{1}m_{3} \cos(4\theta_{R})}}{m_{1}+m_{3}+\sqrt{m_{1}^{2}+m_{3}^{2}+2m_{1}m_{3}\cos(4 \theta_{R})}}\,. \tag{4.7}\]
\(\theta_{L}\) in Eq. (4.6) is determined by reproducing the best-fit values of the measured neutrino mixing angles (cf. Tab. 1.1).
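The parametrization of Eqs. (4.3)-(4.5) is easy to set up numerically. The sketch below does so, assuming the standard form of the tri-bimaximal matrix for \(U_{\rm TBM}\) (Eq. (1.9)); the Yukawa couplings, angles and group parameters are illustrative placeholders rather than a fit.

```python
import numpy as np

# Standard tri-bimaximal mixing matrix, assumed form of Eq. (1.9).
U_TBM = np.array([[ np.sqrt(2/3), 1/np.sqrt(3),  0.0],
                  [-1/np.sqrt(6), 1/np.sqrt(3), -1/np.sqrt(2)],
                  [-1/np.sqrt(6), 1/np.sqrt(3),  1/np.sqrt(2)]])

def R13(theta):
    """Rotation by angle theta in the 1-3 plane."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def Omega3(s, n):
    """Omega^s(3) of Eq. (4.4), with phi_s = pi*s/n."""
    phi = np.pi * s / n
    return np.exp(1j * phi) * U_TBM @ np.diag([1, np.exp(-3j * phi), -1])

def Omega3p(s, n):
    """Omega^s(3') of Eq. (4.5): different forms for s even and s odd."""
    return U_TBM if s % 2 == 0 else U_TBM @ np.diag([1j, 1, 1j])

def yukawa_YD(y, theta_L, theta_R, s, n):
    """Dirac Yukawa matrix of Eq. (4.3)."""
    return (Omega3(s, n) @ R13(theta_L) @ np.diag(y)
            @ R13(-theta_R) @ Omega3p(s, n).conj().T)

# Illustrative parameter point (not a fit): n = 2, s = 0, strong-NO-like y1 = 0.
Y = yukawa_YD(y=[0.0, 1.2e-7, 2.0e-7], theta_L=0.1, theta_R=np.pi/2 - 1e-3, s=0, n=2)
print(np.round(np.abs(Y), 10))
```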
As for the RHN Majorana mass matrix \(M_{R}\), since it leaves \(G_{f}\) and CP invariant, its form is simply
\[M_{R}\,=\,M_{N}\,\left(\begin{array}{ccc}1&0&0\\ 0&0&1\\ 0&1&0\end{array}\right)\,, \tag{4.8}\]
with \(M_{N}>0\) setting the overall mass scale of the RHNs. From Eq. (4.8), we see that the three RHNs are exactly degenerate in the flavor symmetry limit. However, if we want to successfully generate the BAU via resonant leptogenesis [471, 472] using the RHN freeze-out in the early universe, we need at least two quasi-degenerate RHNs. This can be achieved by introducing a small symmetry breaking term (that can be sourced from higher-dimensional operators)
\[\delta M_{R}=\kappa\,M_{N}\,\left(\begin{array}{ccc}2&0&0\\ 0&0&-1\\ 0&-1&0\end{array}\right) \tag{4.9}\]
with \(\kappa\ll 1\). Then the RHN masses acquire a (small) correction
\[M_{1}=M_{N}\,(1+2\,\kappa)\,\,\,\text{and}\,\,\,M_{2}=M_{3}=M_{N} \,(1-\kappa)\,, \tag{4.10}\]
thus making two RHN pairs quasi-degenerate, adequate for resonant leptogenesis (see Section 4.4).
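The splitting quoted in Eq. (4.10) can be checked directly by diagonalizing \(M_{R}+\delta M_{R}\); the values of \(M_{N}\) and \(\kappa\) below are illustrative.

```python
import numpy as np

M_N, kappa = 250.0, 1.0e-5      # illustrative RHN mass scale (GeV) and small breaking

M_R  = M_N * np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]], dtype=float)            # Eq. (4.8)
dM_R = kappa * M_N * np.array([[2, 0, 0], [0, 0, -1], [0, -1, 0]], dtype=float)  # Eq. (4.9)

masses = np.sort(np.abs(np.linalg.eigvalsh(M_R + dM_R)))
print(masses)                                                         # two quasi-degenerate states
print(np.sort(M_N * np.array([1 + 2*kappa, 1 - kappa, 1 - kappa])))   # Eq. (4.10)
```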
Above we sketched a typical construction connected with CP and discrete flavor symmetries in the lepton sector, leading to the parametrization of the Dirac Yukawa coupling matrix given by Eq. (4.3). This is a very predictive scenario, with only five real parameters determining the lepton mixing, namely, three Yukawa couplings \((y_{1},y_{2},y_{3})\) corresponding to the three light neutrino masses, and two rotation angles (\(\theta_{L}\) and \(\theta_{R}\)) for lepton mixing. Just two extra parameters (\(\kappa\) and \(M_{N}\)) are needed for explaining the BAU. For TeV-scale \(M_{N}\), this leads to interesting predictions that can be tested at both energy and intensity frontier experiments. In the following, we will briefly discuss some phenomenological features of this simple scenario. For more details, see Ref. [464].
### Decay Lengths and Branching Ratios of RHNs
TeV-scale Majorana RHNs give rise to spectacular multilepton signals at the LHC and future colliders [467, 473, 474, 468]. The general problem is to obtain a substantial production rate. In the context of the minimal type-I seesaw for our TeV-scale RHN scenario, the light neutrino masses and mixing require the Yukawa couplings to be significantly suppressed, with values on the order of \(10^{-7}\). This, in turn, suppresses the Drell-Yan production of the RHNs at the LHC for the smoking-gun signal of same-sign dilepton plus two jets without missing transverse energy [475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485, 486]. However, in typical UV-complete RHN models with additional gauge interactions, the production rate can be enhanced without relying on their mixing with the SM neutrinos [467]. For example, in \(U(1)_{B-L}\) extensions of the SM [487, 488], the RHNs can be pair-produced via the \(Z^{\prime}\)-mediated process: \(pp\to Z^{\prime}\to NN\) [489, 490, 491, 492, 493, 494, 495, 496]. Similarly, in the left-right symmetric models based on the \(SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}\) gauge group [497, 364, 498], the RHNs can be produced via the RH current:
\(pp\to W_{R}\to N\ell\)[475, 499, 500, 501, 502, 503, 504, 505, 506, 507, 508, 509, 510, 511]. Depending on the new gauge boson mass, cross sections up to a few fb are possible at the LHC.
After being produced, the RHNs typically decay into SM final states: \(N\to W\ell\), \(Z\nu\), \(h\nu\)[479]. The total decay width \(\Gamma_{i}\) of the RHN \(N_{i}\) at the tree-level depends on the Yukawa coupling \(Y_{D}\), and is given by
\[\Gamma_{i} = \frac{(Y_{D}^{\dagger}\,Y_{D})_{ii}}{8\,\pi}M_{i}. \tag{4.11}\]
Thus in flavor symmetry models, the RHN decay lengths depend indirectly on the choice of the generator \(Z(\mathbf{r})\) of the \(Z_{2}\) symmetry and the choice of the CP transformation \(X(\mathbf{r})\). For the example case considered above, see Eq. (4.3), the expressions for the decay widths are independent of the value of \(s\) and depend only on the Yukawa couplings \(y_{f}\) and the angle \(\theta_{R}\):
\[\Gamma_{1} = \frac{M_{N}}{24\,\pi}\,\left(2\,y_{1}^{2}\,\cos^{2}\theta_{R}+y_{ 2}^{2}+2\,y_{3}^{2}\,\sin^{2}\theta_{R}\right)\,, \tag{4.12a}\] \[\Gamma_{2} = \frac{M_{N}}{24\,\pi}\,\left(y_{1}^{2}\,\cos^{2}\theta_{R}+2\,y_ {2}^{2}+y_{3}^{2}\,\sin^{2}\theta_{R}\right)\,,\] (4.12b) \[\Gamma_{3} = \frac{M_{N}}{8\,\pi}\,\left(y_{1}^{2}\,\sin^{2}\theta_{R}+y_{3}^{ 2}\,\cos^{2}\theta_{R}\right)\,. \tag{4.12c}\]
We can convert these decay rates into decay lengths \(L_{i}=\gamma/\Gamma_{i}\) in the laboratory frame, where \(\gamma\) is the boost factor of the RHN, which can be determined depending on how the RHN is produced. Some numerical results for the decay lengths based on Eqs. (4.12) are plotted in Fig. 4.1 as a function of \(\theta_{R}\). Here we have assumed the production of RHNs via a \(Z^{\prime}\) with mass \(M_{Z^{\prime}}=4\) TeV in a \(U(1)_{B-L}\) model, which implies \(\gamma=M_{Z^{\prime}}/2M_{N}=8\) (13.3) for \(M_{N}=250\) (150) GeV. We find that the decay lengths are connected with the neutrino mass ordering, as strong NO arises for \(y_{1}=0\) so that \(m_{1}\) vanishes, \(m_{2}=y_{2}^{2}\,v^{2}/M_{N}\) and \(m_{3}=y_{3}^{2}\,|\cos 2\,\theta_{R}|\,v^{2}/M_{N}\), while strong IO arises for \(y_{3}=0\) so that \(m_{3}=0\), \(m_{1}=y_{1}^{2}\,|\cos 2\,\theta_{R}|\,v^{2}/M_{N}\) and \(m_{2}=y_{2}^{2}\,v^{2}/M_{N}\). As can be seen from the decay length expressions (4.12), for strong NO and strong IO corresponding to \(y_{1}=0\) and \(y_{3}=0\) respectively (i.e. when \(m_{0}=0\)), there are ERS points for \(\theta_{R}\to\pi/2\), \(3\pi/2\) (NO) or \(\theta_{R}\to 0\), \(\pi\) (IO), at which \(\Gamma_{3}\to 0\), i.e. the RHN \(N_{3}\) becomes long-lived. The closer \(\theta_{R}\) lies to an ERS point \(\theta_{R,0}\), i.e. the smaller the deviation \(\delta\theta_{R}=|\theta_{R}-\theta_{R,0}|\), the more long-lived \(N_{3}\) becomes. For \(10^{-4}\lesssim\delta\theta_{R}\lesssim 10^{-2}\), this could lead to displaced vertex signatures from \(N_{3}\) decay that are accessible to future dedicated LLP experiments, such as FASER [512] and MATHUSLA [463]. Most signals from \(N_{1,2}\) decays are prompt but can also be slightly displaced
Figure 4.1: Decay lengths for \(N_{1,2,3}\) are plotted against \(\theta_{R}\) for different values of the RHN mass scale \(M_{N}\) (with \(m_{0}=0\)). The left (right) panel is for NO (IO). The shaded (unshaded) region roughly indicates the displaced/long-lived (prompt) signal regime. Figure adapted from Ref. [464] under CC BY 4.0 license.
depending on the choice of \(\theta_{R}\). The distinction between the two cases (prompt vs displaced) is marked here by \(L=1\) cm (horizontal line in Fig. 4.1) for the LHC, although the exact value might vary slightly depending on the details of the detector (CMS vs ATLAS). The points of ERS are of particular relevance for phenomenology since \(\theta_{L}\) deviating from \(\theta_{L,0}=0\) or \(\pi\) leads to a non-zero value of the reactor mixing angle \(\theta_{13}\). ERS points are also relevant for leptogenesis, as discussed in Chapter 5.
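To make the ERS behaviour explicit, the sketch below evaluates the widths of Eqs. (4.12) and converts them to lab-frame decay lengths via \(L_{i}=\gamma\hbar c/\Gamma_{i}\) (the \(\hbar c\) factor restores units of length). The Yukawa values are illustrative, with \(y_{1}=0\) mimicking the strong NO limit, and \(\gamma=M_{Z^{\prime}}/2M_{N}=8\) as in Fig. 4.1.

```python
import numpy as np

hbar_c = 1.973e-16   # GeV * m, to convert a width in GeV to a length in meters

def decay_lengths(y1, y2, y3, theta_R, M_N, gamma):
    """Lab-frame decay lengths L_i = gamma * hbar*c / Gamma_i, widths from Eqs. (4.12)."""
    c2, s2 = np.cos(theta_R)**2, np.sin(theta_R)**2
    G1 = M_N / (24*np.pi) * (2*y1**2*c2 + y2**2 + 2*y3**2*s2)
    G2 = M_N / (24*np.pi) * (y1**2*c2 + 2*y2**2 + y3**2*s2)
    G3 = M_N / (8*np.pi)  * (y1**2*s2 + y3**2*c2)
    return [gamma * hbar_c / G for G in (G1, G2, G3)]

# Illustrative strong-NO-like point (y1 = 0), M_N = 250 GeV, gamma = M_Z'/(2 M_N) = 8.
for dth in (1e-1, 1e-2, 1e-4):
    L1, L2, L3 = decay_lengths(0.0, 1.2e-7, 2.0e-7, np.pi/2 - dth, 250.0, 8.0)
    print(f"delta_theta_R = {dth:.0e}:  L1 = {L1:.2e} m,  L3 = {L3:.2e} m")
```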
The underlying Yukawa structure \(Y_{D}\) not only predicts the decay lengths of the RHNs, but also their decay branching ratios (BRs), as the partial decay widths are proportional to \(\left|\left(Y_{D}\right)_{\alpha i}\right|^{2}\). Considering the decay of long-lived \(N_{3}\) at an LLP detector with \(m_{0}=0\) and \(M_{N}=250\) GeV, we find the following proportion of BRs:
\[\text{BR}(N_{3}\to e^{\pm}W^{\mp}):\text{BR}(N_{3}\to\mu^{\pm}W^{\mp}):\text{ BR}(N_{3}\to\tau^{\pm}W^{\mp})\,=\,\left\{\begin{array}{ll}1:27.7:18.1&( \text{NO})\\ 8.5:1:3.7&(\text{IO})\end{array}\right.\,, \tag{4.13}\]
independent of \(\theta_{R}\) and \(s\), and almost independent of \(M_{N}\), if \(M_{N}\gg m_{W}\). Thus, measuring these RHN decay BRs at an LLP detector for at least two charged lepton flavors provides an independent test of the neutrino mass hierarchy at the energy frontier. Decay signals of \(N_{1,2}\) at LHC (prompt or displaced vertex) can also be used to test mass hierarchy, but specifically, they depend on \(\theta_{R}\) as well as on the chosen CP symmetry \(X(s)\).
### Lepton Flavor Violation at Colliders
The decays of Majorana RHNs into charged leptons lead to LNV as well as LFV signals. Consider the \(Z^{\prime}\)-mediated production that leads to the same-sign dilepton final state [492]:
\[pp\to Z^{\prime}\to N_{i}N_{i}\to\ell_{\alpha}^{\pm}\ell_{\beta}^{\pm}+2W^{ \mp}\to\ell_{\alpha}^{\pm}\ell_{\beta}^{\pm}+4j\,. \tag{4.14}\]
Since the partial decay widths of RHNs depend on \(Y_{D}\), the LNV signal cross-section is affected by the choice for generator \(Z\) of the \(Z_{2}\) symmetry and the choice of the CP transformation \(X\), see discussion in Section 4.1.
We can probe the high-energy CP phases in the Yukawa coupling matrix at colliders by constructing simple observables out of the same-sign dilepton charge asymmetry. In particular, we can define two observables \(\sigma_{\text{LNV}}^{\alpha,-}\) (difference) and \(\sigma_{\text{LNV}}^{\alpha,+}\) (sum) of the same-sign charged-lepton final states of a given flavor \(\alpha\):
\[\sigma_{\text{LNV}}^{\alpha,\pm}\,=\sum_{i}\sigma_{\text{prod}}(pp\to N_{i}N_{ i})\left(\left[\text{BR}(N_{i}\to\ell_{\alpha}^{-}W^{+})\right]^{2}\pm\left[ \text{BR}(N_{i}\to\ell_{\alpha}^{+}W^{-})\right]^{2}\right)\times\left[\text{ BR}(W\to jj)\right]^{2}\,. \tag{4.15}\]
The flavored CP asymmetries \(\varepsilon_{i\alpha}\) relevant for leptogenesis turn out to be related to the ratio \(\sigma_{\text{LNV}}^{\alpha,-}/\sigma_{\text{LNV}}^{\alpha,+}\)[513, 514, 515]. Thus, measuring \(\sigma_{\text{LNV}}^{\alpha,-}/\sigma_{\text{LNV}}^{\alpha,+}\) can help measure the CP asymmetry, which is predicted by the group theory parameters. The normalized LNV cross sections \(\sigma_{\text{LNV}}(\ell_{\alpha}^{\pm}\ell_{\beta}^{\pm})\) with respect to the new gauge coupling \(g^{\prime}\) for our example case are shown in Fig. 4.2. It can be concluded that comparing the LNV final states with different charged-lepton flavor combinations can provide an independent, complementary test of the neutrino mass ordering at the high-energy frontier.
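Eq. (4.15) is simple arithmetic once the production cross section and the flavored branching ratios are known; the snippet below evaluates it for a single RHN and a single flavor, with all inputs being placeholders rather than model predictions.

```python
# Illustrative evaluation of Eq. (4.15) for one RHN species and one flavor alpha.
sigma_prod = 2.0       # assumed sigma(pp -> Z' -> N N) in fb
BR_W_jj    = 0.67      # hadronic W branching ratio
BR_minus   = 0.30      # assumed BR(N -> l_alpha^- W^+)
BR_plus    = 0.25      # assumed BR(N -> l_alpha^+ W^-)

sigma_plus  = sigma_prod * (BR_minus**2 + BR_plus**2) * BR_W_jj**2
sigma_minus = sigma_prod * (BR_minus**2 - BR_plus**2) * BR_W_jj**2

# The ratio sigma^- / sigma^+ is the collider proxy for the flavored CP asymmetry.
print(sigma_plus, sigma_minus, sigma_minus / sigma_plus)
```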
### Correlation between Collider Signals and Leptogenesis
Leptogenesis [516] provides an attractive link between two seemingly distinct hints for BSM physics, namely, neutrino masses and mixing, and the observed BAU, via the seesaw mechanism [196]. General details of the leptogenesis mechanism will be discussed in Section 5.2. Here, we briefly summarize the collider prospects of testing TeV-scale resonant leptogenesis [471] in the \(\Delta(6n^{2})\) flavor model discussed in Section 4.1, along with an extra \(U(1)_{B-L}\) so that TeV-scale
RHNs can be produced more efficiently at colliders than in the minimal type-I seesaw. In principle, we could also consider other gauge groups under which the RHNs and the SM are charged, such as the left-right symmetric framework. However, it turns out that in the left-right models, the additional washout effects induced by the right-handed gauge interactions impose a lower bound of \(M_{W_{R}}\gtrsim 20\) TeV to get successful leptogenesis, thereby precluding the possibility of testing it at the LHC [510, 517, 518, 519]. On the other hand, the corresponding leptogenesis bound in the \(U(1)_{B-L}\) model considered here is rather weak due to the double Boltzmann suppression of the \(Z^{\prime}\)-induced washout effects [520, 521, 514].
In the presence of the RHNs, the flavored CP asymmetries \(\varepsilon_{i\alpha}\) for resonant leptogenesis depend on the structure of \(Y_{D}\) (see Section 5.2), and therefore can be computed analytically using the form of \(Y_{D}\) given in Eq. (4.3) for the flavor model considered here. This CP asymmetry can then be translated into a BAU (\(\eta_{B}\)) as described in Section 5.2, and compared with the observed value \(\eta_{B}^{\rm obs}\). Since the CP asymmetry is a function of \(Y_{D}\), the choice of the generator \(Z\) of the \(Z_{2}\) symmetry and the choice of the CP transformation \(X\) determine the form of \(\varepsilon_{i}\) and, in turn, \(\eta_{B}\).
Fig. 4.3 shows the predictions for \(\eta_{B}/\eta_{B}^{\rm obs}\) in the \((M_{Z^{\prime}},M_{N})\) plane for both NO (left) and IO (right) in a particular
Figure 4.3: Prediction for BAU \(\eta_{B}\) relative to the observed value \(\eta_{B}^{\rm obs}\) in the \((M_{Z^{\prime}},M_{N})\) plane for a fixed \(g_{B-L}=0.1\) in a \(\Delta(6n^{2})\) model for strong NO (IO) in the left (right) panel, with \(\theta_{R}\) set to the respective ERS points. The red boxes correspond to \(\eta_{B}\) within 10% of \(\eta_{B}^{\rm obs}\). The contours show the RHN production cross sections (in ab) at the \(\sqrt{s}=14\) TeV LHC (solid) and at \(\sqrt{s}=100\) TeV FCC-hh (dashed). The vertical shaded region is the current exclusion from LHC dilepton data. Figure taken from Ref. [464].
Figure 4.2: Normalized LNV signals as a function of the RHN mass scale \(M_{N}\) at \(\sqrt{s}=14\) TeV LHC for all possible lepton flavor combinations in the strong NO (left) and strong IO (right) limit. Here we have fixed \(M_{Z^{\prime}}=4\) TeV. Figure taken from the arXiv version of Ref. [464].
case of the \(\Delta(6n^{2})\) model at ERS points [464]. The red boxes correspond to \(\eta_{B}\) within 10% of \(\eta_{B}^{\rm obs}\). Now to see whether these points are accessible at colliders, we superimpose the RHN pair-production cross sections \(\sigma(pp\to Z^{\prime}\to N_{i}N_{i})\) in attobarns (ab) for both \(\sqrt{s}=14\) TeV LHC (solid contours) and \(\sqrt{s}=100\) TeV FCC-hh (dashed contours). Successful leptogenesis for strong NO yields \(\sigma_{\rm prod}\lesssim 5\) ab at \(\sqrt{s}=14\) TeV LHC, which makes it difficult to get any observable events even with the final target luminosity of 3 ab\({}^{-1}\) at HL-LHC. The situation worsens for strong IO, where the cross sections are smaller by at least an order of magnitude compared to the strong NO case. On the other hand, a future 100 TeV hadron collider like FCC-hh [470] can reach a \(\sigma_{\rm prod}\) up to 2000 ab for the region of successful leptogenesis, which should yield up to 1000 LNV events with 30 ab\({}^{-1}\) integrated luminosity. In fact, by going to higher \(Z^{\prime}\) masses, the detection prospects at 100 TeV collider can be improved significantly. This is due to relaxed experimental limits on the new gauge coupling \(g_{B-L}\) for \(M_{Z^{\prime}}\gtrsim 6\) TeV [523]. For instance, at \(M_{Z^{\prime}}=7\) TeV, \(g_{B-L}\) can be as large as one. Since \(\sigma_{\rm prod}\) scales as \(g_{B-L}^{2}\) for \(M_{N}<M_{Z^{\prime}}/2\), apart from a mild suppression due to change in the \(Z^{\prime}\) mass, the cross-section gets enhanced by a factor of 100. The detection prospects will be even better in other \(Z^{\prime}\) model variants like the leptophobic case, where the LHC bound is somewhat weaker.
In general, without restricting to any particular flavor model, the current status and future prospects of testing low-scale leptogenesis at colliders and other laboratory experiments are summarized in Fig. 4.4 for the minimal type-I seesaw with either two or three RHNs [522]. It illustrates the fact that the LHC and future colliders provide ample opportunity to test the low-scale leptogenesis parameter space. One can also note a large enhancement in the allowed mixing-mass space for three RHNs compared to the two-RHN scenario. The exact picture depends strongly on the leptogenesis scenario with degenerate or non-degenerate RHNs, freeze-in and freeze-out transitions, thermal initial conditions, LFV and oscillation constraints (e.g. the mass of the lightest neutrino \(m_{0}\)). For an update, see also the talk in Ref. [524]. For other works showing the collider/laboratory prospects of testing low-scale leptogenesis, see Refs. [525, 526, 527, 528]. Phenomenological
Figure 4.4: Summary of LHC and future collider sensitivities to different low-scale leptogenesis scenarios with two and three RHNs. The upper (lower) shaded regions are excluded by laboratory (seesaw) constraints. In the legend, HNLs means ‘Heavy Neutral Leptons’, which are called RHNs in this review. The \(x\)-axis is the RHN mass scale, and the \(y\)-axis gives the square of the light-heavy neutrino mixing in the muon flavor, which has the best experimental prospects (compared to electron and tau flavors). Figure taken from the arXiv version of Ref. [522].
aspects of the high- and low-scale leptogenesis with RHNs in the context of BAU and discrete symmetries will be further discussed in Chapter 5.
### Collider Signals in Other Flavor Models
In the above discussions involving the group discussed in Section 4.1, it is clear that the analyses of the flavor symmetry models lead to the rich phenomenology connected with intensity and energy frontiers. Such phenomenological studies of discrete flavor symmetries can be extended to other discrete groups as well. We discuss below a few of these possibilities.
In Ref. [529], a model has been considered with two Higgs doublets and flavor symmetry, which can explain large leptonic mixing angles through a specific alignment of VEVs in the scalar sector. In Ref. [530], some phenomenological consequences of the model for collider physics and the DM problem were further explored. The -even scalar fields of the considered model give two generations of fields that couple completely off-diagonally to the charged leptons. Thus, the model predicts LFV processes and. As a consequence, LFV processes can also be searched for at hadron colliders (with an expectation of about ten events in each case) through the process, where denotes either of the neutral components of the second generation -even scalar field. There are also contributions to the diphoton decay width of the Higgs boson. Still, even with couplings to the charged scalars of order unity, the deviations from the SM prediction are beyond the reach of HL-LHC. However, these percent level deviations from SM for scalar masses around TeV could be studied with future lepton colliders. The non-standard sector of this model is rich enough to include DM candidates (scalars and Majorana neutrinos). As shown in the analysis, they can also be probed at hadron colliders, with the possibility to account for the observed relic density for DM with a mass between 47 and 74 GeV or in the interval 600 GeV and 3.6 TeV.
A comprehensive collider study of the parameter space of the flavor symmetry group was performed in Ref. [531]. At the leading order, breaking the flavor group into residual symmetries in the charged lepton (neutrino) sector generates the TBM mixing. The required fit to the observed PMNS matrix is achieved through slightly broken residual symmetries induced by a shift in one of the flavon VEVs. A thorough study in constraining the 6-dimensional model parameter space is conducted using the experimental data from, MEG, Higgs scalar mixing, Higgs width measurements, and a recast 8 TeV ATLAS analysis. The most stringent results are obtained from the LFV limit on - set by the MEG experiment, leading to approximately 60% parameter space to be excluded and the recast ATLAS analysis leading to 40% exclusion.
In another work [532], the authors consider studies on LFV Higgs decays in the context of 3HDM with symmetry. A remnant symmetry, which arises due to a specific vacuum alignment, leads to strongly suppressed FCNC. However, this symmetry is slightly broken by perturbations, leading to mixing between the scalars and, hence, to LFV Higgs decays. The original motivation for this work was to explain the anomaly in channel reported by CMS in 2015. They find if the extra scalars are light, the contribution to can be suppressed while the flavor-violating couplings are still allowed to be large. Due to the symmetry, sizable leads to enhanced branching fractions also for LFV decays. Another study correlating the LFV Higgs and boson decays to LFV in the charged lepton sector can be found in Ref. [533].
In Ref. [534], collider signatures of flavor symmetry with gauged are studied for a renormalizable two-parameter neutrino model. _So a mixture of discrete and continuous symmetries is also possible_. Specifically, prospects
for \(Z^{\prime}\) production and detection at LHC through decays into neutral Higgs scalars are studied, which subsequently decay into charged leptons with a specific flavor pattern determined from the flavor symmetry group.
As another example, collider signatures of vector-like fermions (VLF) with the non-abelian flavor symmetry group \(Q_{6}\times Z_{2}\) are studied in Ref. [535]. This group determines the fermion masses as well as the mixing. Only the third-generation fermions get their masses directly, while the rest obtain their masses via a seesaw-like mechanism. In this work, genetic algorithms are used to optimize the construction of neural networks that maximize the statistical significance of a possible discovery of these VLFs at the HL-LHC. While vector-like leptons can only be probed safely up to masses of about 200 GeV, the prospects for vector-like quarks are better, with sensitivity up to 3.8 TeV.
In a unified supersymmetric framework based on the GUT \(SU(5)\times A_{4}\) symmetry [376], studies of the muon anomalous magnetic moment \(g-2\) and DM have been performed in the context of LHC data, where the right-handed smuon mass is predicted to be around 100 GeV and the lightest (non-universal) gaugino mass around 250 GeV.
### Higgs to Diphoton Decay
The LHC data on the diphoton decay channel of the SM Higgs boson (with mass 125 GeV) [536, 537] can also have interesting phenomenological consequences in the context of discrete flavor symmetric scenarios. For example, for the scoto-seesaw FSS TM\({}_{2}\) model discussed in Section 2.2, the partial width of \(h\to\gamma\gamma\) receives additional and significant contributions due to the \(\eta^{\pm}\) and \(\eta_{R}\) one-loop effects [175]. The signal strength of \(h\to\gamma\gamma\) relative to the SM prediction is given by
\[R_{\gamma\gamma} = \frac{\left[\sigma(gg\to h)\times\text{Br}(h\to\gamma\gamma) \right]_{\text{Model}}}{\left[\sigma(gg\to h)\times\text{Br}(h\to\gamma\gamma) \right]_{\text{SM}}}=\frac{\Gamma_{\text{Total}}^{\text{SM}}\times\Gamma(h \to\gamma\gamma)_{\text{Model}}}{\Gamma_{\text{Total}}^{\text{Model}}\times \Gamma(h\to\gamma\gamma)_{\text{SM}}}. \tag{4.16}\]
While computing \(R_{\gamma\gamma}\) in our analysis, we have taken the total decay width of the Higgs boson in the SM as \(\Gamma_{\text{Total}}^{\text{SM}}=4.1\) MeV [538].
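Numerically, Eq. (4.16) only requires the model's diphoton partial width and total width in addition to the SM values; the sketch below uses the SM total width quoted above, an approximate SM \(h\to\gamma\gamma\) partial width, and placeholder model widths.

```python
# Illustrative evaluation of Eq. (4.16); the "model" widths are placeholders.
Gamma_total_SM     = 4.1e-3   # GeV, SM Higgs total width used in the text
Gamma_gamgam_SM    = 9.3e-6   # GeV, approximate SM h -> gamma gamma partial width
Gamma_gamgam_model = 1.0e-5   # GeV, assumed model partial width (charged-scalar loop included)
Gamma_total_model  = 4.2e-3   # GeV, assumed model total width

R_gamgam = (Gamma_total_SM * Gamma_gamgam_model) / (Gamma_total_model * Gamma_gamgam_SM)
print(round(R_gamgam, 3))     # to be compared with the ATLAS band 1.04 +0.10 / -0.09
```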
In Fig. 4.5, we have plotted the prediction for \(R_{\gamma\gamma}\) in the context of the \(A_{4}\) flavor symmetric FSS model
Figure 4.5: Predictions for the diphoton decay of the Higgs boson within the FSS model [175]. \(R_{\gamma\gamma}\) defined in (4.16) is plotted against \(m_{\eta^{+}}\) (left) and \(m_{\eta_{R}}\) (right). The white region is the current experimental allowed range measured by ATLAS [537].
against \(m_{\eta^{+}}\) (left panel) and \(m_{\eta_{R}}\) (right panel). The horizontal white region (\(R_{\gamma\gamma}=1.04^{+0.10}_{-0.09}\)) represents the current allowed region measured by the ATLAS experiment using 139 fb\({}^{-1}\) of \(pp\) collision data at \(\sqrt{s}=13\) TeV [537]. Here \(\lambda_{3}\) is a coupling for \((H^{\dagger}H)(\eta^{\dagger}\eta)\) interaction, \(\eta\) being the usual inert scalar doublet appearing in scotogenic models. As we can see, a wide range of scalar boson masses can be probed through the diphoton Higgs decay channel using the present LHC experimental results. With the increasing data collection at LHC and HL-LHC, the precision of \(R_{\gamma\gamma}\) will improve, giving prospects for better determination of allowed regions for specific flavor model parameters. Thus, phenomenology-based \(R_{\gamma\gamma}\) constraints can be used for further studies and predictions for producing exotic discrete flavor model signals at present and future colliders.
## 5 Flavor Symmetry and Cosmic Frontier
In this Chapter, we will discuss some implications of flavor symmetry on various astrophysical and cosmological observables, including DM, BAU and gravitational waves.
### Flavor Symmetry and Dark Matter
There is overwhelming astrophysical and cosmological evidence for the existence of DM, such as the large-scale structure data, gravitational lensing, and the rotation curves of galaxies [539]. However, a laboratory discovery is still awaited. The relic abundance of DM has been measured by WMAP [540], and more recently by PLANCK [188], which set it at 26.8% of the total energy budget of the Universe. However, a broad class of DM scenarios can satisfy this condition, such as weakly interacting massive particles (WIMPs) [541], feebly interacting massive particles (FIMPs) [542], strongly interacting massive particles (SIMPs) [543], asymmetric DM (ADM) [544], and so on. For reviews of various DM candidates, see e.g. Refs. [545, 546, 547, 548, 549]. Over the years, several attempts have been made to connect neutrino physics with DM [550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560]. In a toy example [557], the authors showed a one-to-one correspondence between WIMP DM and the type-I seesaw. Earlier, we discussed that discrete flavor symmetric constructions have the potential to explain neutrino masses and mixing as well as ensure the stability of DM.
For example [158, 159], with a vector-like singlet (\(\chi^{0}\))-doublet (\(\psi\)) DM particle spectrum assisted by additional scalars such as \(\phi\) and \(\eta\) (charged under a global \(U(1)\) flavor symmetry), the interaction between the DM and neutrino sectors can be written as
\[\mathcal{L}_{int}=\left(\frac{\phi}{\Lambda}\right)^{n}\bar{\psi}\tilde{H} \chi^{0}+\frac{(HL^{T}LH)\phi\eta}{\Lambda^{3}}. \tag{5.1}\]
The first term in Eq. (5.1), having a Yukawa-like configuration, acts like a Higgs portal coupling of the DM, potentially accessible at various ongoing and future direct search and collider experiments. The second term in Eq. (5.1) plays a crucial role in generating non-zero \(\theta_{13}\), as the existing \(A_{4}\) flavons of the theory ensure the TBM mixing. A schematic view is given in Fig. 5.1. Once the \(U(1)\) flavor symmetry is broken, it ensures both the stability of DM and the generation of non-zero \(\theta_{13}\). The coupling strength of the DM, \(\left(\frac{\phi}{\Lambda}\right)^{n}=\epsilon^{n}\), is constrained by the correct DM relic abundance, and \(\epsilon\) is proportional to the magnitude of \(\theta_{13}\). Additionally, the future precise measurement of the leptonic CP phase by the T2K and NO\(\nu\)A experiments will reduce the uncertainty in \(n\) [159]. A few other examples of studies of DM with discrete flavor symmetry can be found in Refs. [550, 551, 552, 553, 556, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570].
Apart from WIMP (or freeze-out) DM paradigm, discrete flavor symmetric constructions can also be extended to the FIMP (or freeze-in) mechanism of DM. In such a scenario, non-thermal DM populating the Universe via freeze-in
mechanism requires tiny dimensionless couplings (\(\sim 10^{-12}\)). On the other hand, if neutrinos have a tiny Dirac mass, it also requires a coupling of a similar order of magnitude. In Ref. [571], the authors have shown that such tiny couplings, required in both the dark and visible sectors, may originate from an \(A_{4}\) discrete flavor symmetry. In another effort [572], it has been shown that the non-zero value of the reactor mixing angle \(\theta_{13}\) can originate from Planck scale suppressed operators and can as well realize a super-WIMP DM scenario. The study of DM with non-Abelian discrete symmetries demands more attention to explore the connection, if any, between neutrino physics and DM.
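To make the order-of-magnitude statement concrete: if the suppression parameter \(\epsilon\) is tied to the size of \(\theta_{13}\), say \(\epsilon\sim 0.2\) (an assumption purely for illustration), a one-line calculation shows how large the power \(n\) must be for \(\epsilon^{n}\) to reach the \(\sim 10^{-12}\) couplings typical of freeze-in.

```python
import numpy as np

eps = 0.2            # illustrative flavor-breaking parameter, of order theta_13
target = 1.0e-12     # typical freeze-in coupling size quoted in the text

n_required = np.log(target) / np.log(eps)
print(n_required)                 # ~ 17, i.e. a fairly high power of the flavon insertion
print(eps ** np.ceil(n_required)) # resulting coupling, just below the target
```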
### Flavor Symmetry and Baryon Asymmetry of the Universe
Cosmological observations reveal that our Universe possesses a net excess of baryons over antibaryons [188, 50], which is often termed the baryon asymmetry of the Universe (BAU). This is measured by the ratio of the excess number density of baryons over antibaryons to that of photons at the present time [188]:
\[\eta_{B}^{\rm obs}=\frac{n_{B}-n_{\bar{B}}}{n_{\gamma}}=(6.12\pm 0.08)\times 10^{ -10}\,. \tag{5.2}\]
The mechanism of dynamical generation of the BAU is known as _baryogenesis_ (see Refs. [573, 574, 575] for reviews) and must satisfy the three basic Sakharov conditions, namely, baryon number violation, \(C\) and \(CP\) violation, and departure from thermal equilibrium [576]. Although the SM possesses all three ingredients, it is not sufficient to reproduce the observed BAU [577]. One attractive mechanism to produce the BAU is via _leptogenesis_ [516], where one first produces a lepton asymmetry and then uses the electroweak sphaleron processes [578, 579, 580] to convert it to a baryon asymmetry; see Refs. [581, 582, 583, 584] for reviews. This is particularly appealing because it relies on the seesaw mechanism, which also accounts for the neutrino masses and mixing, thus providing a link between two seemingly disparate pieces of evidence for BSM physics.
In Chapter 2, we have briefly reviewed various neutrino mass generation mechanisms, which, once augmented with discrete flavor symmetries, explain the origin of tiny neutrino masses as well as the observed neutrino mixing. Most of these mass-generation mechanisms include additional particles whose involvement also plays a crucial role in leptogenesis. The presence of various seesaw realizations of light neutrino masses in the models with discrete flavor symmetry enables us to study leptogenesis through the decay of the associated heavy particles. For example, in the type-I, II [195, 196, 197, 198, 8, 585, 586, 587], and III [588] seesaw mechanisms and their variants, fermion singlet RHNs (\(N_{i}\)), scalar triplets (\(\Delta_{i}\)), and fermionic triplets (\(\Sigma_{i}\))
Figure 5.1: A schematic representation of DM (\(\psi,\chi^{0}\)) interaction with SM to generate non-zero \(\theta_{13}\) in the presence of the \(U(1)\) flavor symmetry. The \(A_{4}\) flavons help in generating base TBM mixing. Figure adapted from the arXiv version of Ref. [158].
are respectively included. Each scenario can potentially lead to successful leptogenesis within a wide mass range of these additional particles [589].
#### Resonant Leptogenesis
For example, in the case of the simple type-I seesaw, lepton asymmetry can be elegantly generated through the out-of-equilibrium decay of RHNs in the early Universe. The CP asymmetry parameter \(\epsilon_{i\alpha}\) can be evaluated from the interference between the tree and one-loop level decay amplitudes of the RHN \(N_{i}\) decaying into a lepton doublet \(L_{\alpha}\) with specific flavor \(\alpha\) and the Higgs doublet (\(H\)), i.e.
\[\varepsilon_{i\alpha}=\frac{\Gamma(N_{i}\to L_{\alpha}H)-\Gamma(N_{i}\to \bar{L}_{\alpha}H^{c})}{\Gamma(N_{i}\to L_{\alpha}H)+\Gamma(N_{i}\to\bar{L}_{ \alpha}H^{c})}\,, \tag{5.3}\]
where the subscript \(c\) stands for the charge conjugation. There are two types of loop-level diagrams involving vertex and wave-function corrections [590]. It turns out that the wave-function corrections can be dominant when at least two RHNs have a small mass difference comparable to their widths [591, 592, 593]. This is known as the resonant leptogenesis [471, 472], which allows the RHN mass scale to be lowered down to the electroweak-scale [594, 595, 596].
In the resonant regime, Eq. (5.3) can be written in a compact form as [472]
\[\varepsilon_{i\alpha}\simeq\frac{1}{8\pi\left(Y_{D}^{\dagger}Y_{D}\right)_{ii }}\sum_{j\neq i}\mathrm{Im}\Big{[}(Y_{D}^{\star})_{\alpha i}(Y_{D})_{\alpha j} \Big{]}\mathrm{Re}\left[\left(Y_{D}^{\dagger}Y_{D}\right)_{ij}\right]\mathcal{ F}_{ij}\,, \tag{5.4}\]
where \(Y_{D}\) is the Yukawa coupling matrix in the basis where the RHN mass matrix is diagonal (this is typically indicated by \(\hat{Y}_{D}\), but we remove the hat for brevity). The resonant enhancement factor is given by
\[\mathcal{F}_{ij}\,=\,\frac{M_{i}M_{j}(M_{i}^{2}-M_{j}^{2})}{(M_{i}^{2}-M_{j}^ {2})^{2}+A_{ij}^{2}}. \tag{5.5}\]
Here the \(Y_{D}\) matrices are evaluated in the RHN mass basis, and \(A_{ij}\) regulates the behavior of the CP asymmetry in the limit \(\Delta M_{ij}\equiv|M_{i}-M_{j}|\to 0\). As pointed out in Refs. [597, 596], in the resonant regime there are two distinct contributions to the CP asymmetry from RHN mixing and oscillation effects, both of which can be effectively captured by Eq. (5.4) but with different regulators:
\[A_{ij}^{\mathrm{mix}}\,=\,M_{i}\Gamma_{j}\,,\quad A_{ij}^{\mathrm{osc}}\,=\, (M_{i}\Gamma_{i}+M_{j}\Gamma_{j})\left[\frac{\det\left(\mathrm{Re}\left(Y_{D }^{\dagger}Y_{D}\right)\right)}{\left(Y_{D}^{\dagger}Y_{D}\right)_{ii}\left(Y _{D}^{\dagger}Y_{D}\right)_{jj}}\right]^{1/2}\,. \tag{5.6}\]
The net CP asymmetry is then given by \(\varepsilon_{i\alpha}^{\mathrm{tot}}=\varepsilon_{i\alpha}^{\mathrm{mix}}+ \varepsilon_{i\alpha}^{\mathrm{osc}}\). This analytic approximation is in good agreement with the full quantum kinetic treatment [597, 598, 526] in the strong washout regime.
Since the regulator part is independent of the lepton flavor \(\alpha\), we can sum over \(\alpha\) to obtain the total CP asymmetry for a given RHN \(N_{i}\):
\[\varepsilon_{i}\;\equiv\;\sum_{\alpha}\varepsilon_{i\alpha}\,=\frac{1}{8\pi \left(Y_{D}^{\dagger}Y_{D}\right)_{ii}}\sum_{j\neq i}\mathrm{Im}\left[\left(Y_ {D}^{\dagger}Y_{D}\right)_{ij}\right]\mathrm{Re}\left[\left(Y_{D}^{\dagger}Y_ {D}\right)_{ij}\right]\mathcal{F}_{ij}\,. \tag{5.7}\]
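For orientation, Eqs. (5.4)-(5.7) are straightforward to evaluate numerically. The sketch below uses the mixing regulator of Eq. (5.6) and assumes the standard tree-level widths \(\Gamma_{i}=(Y_{D}^{\dagger}Y_{D})_{ii}M_{i}/(8\pi)\), which are not quoted above; the numerical inputs are placeholders.

```python
# Minimal numerical sketch of the flavor-summed CP asymmetry of Eq. (5.7),
# using the "mixing" regulator A_ij^mix = M_i * Gamma_j of Eq. (5.6).
# Assumption (not stated above): tree-level widths Gamma_i = (Y^dag Y)_ii M_i / (8 pi).
import numpy as np

def cp_asymmetry(Y_D, M):
    """Return epsilon_i for each RHN N_i, with Y_D given in the RHN mass basis."""
    h = Y_D.conj().T @ Y_D                          # (Y_D^dagger Y_D)_{ij}
    Gamma = np.real(np.diag(h)) * M / (8 * np.pi)   # assumed tree-level widths
    eps = np.zeros(len(M))
    for i in range(len(M)):
        acc = 0.0
        for j in range(len(M)):
            if j == i:
                continue
            A = M[i] * Gamma[j]                     # regulator A_ij^mix
            F = M[i] * M[j] * (M[i]**2 - M[j]**2) / ((M[i]**2 - M[j]**2)**2 + A**2)
            acc += np.imag(h[i, j]) * np.real(h[i, j]) * F
        eps[i] = acc / (8 * np.pi * np.real(h[i, i]))
    return eps

# Toy input: two nearly degenerate RHNs and a 3x2 Yukawa matrix (placeholder values).
M = np.array([1.0e4, 1.0001e4])                     # GeV
Y_D = 1e-4 * np.array([[1.0, 0.3 + 0.2j],
                       [0.5j, 0.8],
                       [0.2, 0.1 - 0.4j]])
print(cp_asymmetry(Y_D, M))
```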
Within a semi-analytic Boltzmann approach, the flavor-dependent lepton asymmetry parameter (\(\eta_{L_{\alpha}}\)) can be written as [595, 599]
\[\eta_{L_{\alpha}}\simeq\frac{3}{2z_{c}K_{\alpha}^{\mathrm{eff}}}\sum_{i} \epsilon_{i\alpha}d_{i} \tag{5.8}\]
where \(z_{c}=M_{N}/T_{c}\), \(T_{c}\) is the critical temperature below which the electroweak sphalerons freeze-out, \(K_{\alpha}^{\rm eff}\) are the effective washout factors and \(d_{i}\) are the corresponding dilution factors given in terms of ratios of thermally-averaged rates for decays and scatterings involving the RHNs [595, 596]. The obtained lepton asymmetry then can be converted into the observed BAU via \((B+L)\)-violating electroweak sphaleron processes [579, 580] which gives the final baryon asymmetry
\[\eta_{B}\simeq-0.013\sum_{\alpha}\eta_{L_{\alpha}}\,, \tag{5.9}\]
where the pre-factor is a product of the sphaleron conversion rate of 28/79 [580] and the entropy dilution factor of 1/27.3 [596]. The ratio of Eq. (5.9) to the observed value given in Eq. (5.2) is the quantity plotted in Fig. 4.3.
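As a quick arithmetic check of the quoted prefactor,

\[\frac{28}{79}\times\frac{1}{27.3}\simeq 0.354\times 0.0366\simeq 0.013\,,\]

consistent with the coefficient appearing in Eq. (5.9).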
Let us exemplify the usefulness of Eq. (5.4) by applying it to a particular case of the \(\Delta(6n^{2})\) group discussed in Section 4.1 with \(s\) even. We find that \(\varepsilon_{3\alpha}=0\), i.e., the RHN mass eigenstate \(N_{3}\) does not contribute to the CP asymmetry, and in the strong NO and IO limits,
\[\varepsilon_{1\alpha}^{\rm NO} \approx \frac{y_{2}\,y_{3}}{9}\,\left[-2\,y_{2}^{2}+y_{3}^{2}\,(1-\cos 2 \,\theta_{R})\right]\,\sin 3\,\phi_{*}\sin\theta_{R}\,\sin\theta_{L,\alpha}\, \mathcal{F}_{12}\,, \tag{5.10}\] \[\varepsilon_{1\alpha}^{\rm IO} \approx \frac{y_{1}\,y_{2}}{9}\,\left[-2\,y_{2}^{2}+y_{1}^{2}\,(1+\cos 2 \,\theta_{R})\right]\,\sin 3\,\phi_{*}\cos\theta_{R}\,\cos\theta_{L,\alpha}\, \mathcal{F}_{12}\,, \tag{5.11}\]
with \(\theta_{L,\alpha}=\theta_{L}+\rho_{\alpha}\,4\pi/3\) and \(\rho_{e}=0\), \(\rho_{\mu}=1\), \(\rho_{\tau}=-1\). For strong NO (IO) \(\varepsilon_{i\alpha}\) becomes very small, if \(\theta_{R}\approx 0,\,\pi\) (\(\theta_{R}\approx\pi/2,3\pi/2\)). In addition, \(\mathcal{F}_{ij}\) vanishes for \(\cos 2\,\theta_{R}=0\). The CP asymmetries \(\varepsilon_{2\alpha}=-\varepsilon_{1\alpha}\) with \(\mathcal{F}_{12}\leftrightarrow\mathcal{F}_{21}\). For \(s\) odd, similar expressions are obtained with \(\sin(3\,\phi_{s})\leftrightarrow-\cos(3\,\phi_{s})\).
These analytic forms of the CP asymmetry enable us to correlate the high- and low-energy CP phases. This is illustrated in Figure 5.2 for the \(\Delta(6n^{2})\) group with \(n=26\), where we plot the predictions for the BAU (which depend on the high-energy CP phases) and for the \(0\nu\beta\beta\) observable \(m_{\beta\beta}\) (which depends on the low-energy CP phases) corresponding to the \(s\) values from \(0\) to \(n-1\). The horizontal blue band corresponds to \(\eta_{B}\) values within 10% of the observed value. We find that there exist some \(s\) values for which the correct BAU can be obtained, while the \(m_{\beta\beta}\) predictions for the IO case are either already excluded or will be tested soon in the next-generation \(0\nu\beta\beta\) experiments.
Figure 5.2: Correlation between the predicted BAU \(\eta_{B}\) and the effective neutrino mass \(m_{\beta\beta}\) for a \(\Delta(6n^{2})\) model with \(n=26\) and \(0\leq s\leq n-1\) (as shown by the numbered points). The blue-shaded horizontal bar corresponds to \(\eta_{B}\) within 10% of \(\eta_{B}^{\rm obs}\)[188]. The vertical shaded bands for the IO case indicate the smallest \(m_{\beta\beta}\) value (including the NME uncertainties) that is either ruled out by KamLAND-Zen [48] (red) or will be accessible to nEXO [49] (green). Figure taken from Ref. [464].
#### ARS Mechanism and Unified Picture
Just like the DM relic density, which can be set either by freeze-out or by freeze-in, the deviation from equilibrium for the RHNs can also happen either via freeze-out (when the temperature drops below their mass), or via freeze-in (when the Yukawa couplings are so small that the equilibration rate stays much lower than the Hubble rate). The resonant leptogenesis mechanism discussed above is an example of the freeze-out scenario, while leptogenesis via oscillations (the so-called ARS mechanism) [600] is an example of the freeze-in scenario, which typically happens for GeV-scale RHNs [601, 602, 603, 604]. These two mechanisms may seem to be quite different, but it was recently shown that both can be described in a unified picture [526, 527, 605]. This can be achieved with matrix-valued quantum kinetic equations for the RHNs and the lepton asymmetries in different SM degrees of freedom. Solving these equations for RHN masses in a wide range from 50 MeV to 70 TeV, Ref. [606] obtained the range of total mixing angle \(U^{2}\) consistent with leptogenesis, as shown in Fig. 5.3. Here \(\kappa\) refers to the RHN mass splitting parameter, and the contours corresponding to positive (negative) BAU are shown by solid (dashed) lines. The black line indicates the canonical seesaw line. The red and blue shaded regions lead to unacceptably large corrections to the light neutrino masses induced by the RHN mass splitting.
#### Other Examples
As mentioned earlier, in models with discrete flavor symmetry, the structures of the Dirac Yukawa coupling, charged lepton and RHN mass matrices are entirely dictated by the associated symmetry. Hence, the involved discrete symmetry may have a significant impact on neutrino masses, mixing, and the Dirac CP phase, as well as on the high-energy CP violation required for leptogenesis. As TBM mixing was a potential candidate for the lepton mixing matrix, several analyses have been performed to explain TBM mixing and leptogenesis [607, 242, 608]. Interestingly, it was also observed [607] that in the exact TBM limit, the CP asymmetry vanishes since \(Y_{D}^{\dagger}Y_{D}\propto\mathbb{I}\) (where \(Y_{D}\) is the Dirac Yukawa coupling in the basis where the RHNs are diagonal). Therefore, in the exact TBM scenario, leptogenesis was realized either by introducing higher-order corrections in the Dirac Yukawa matrix [607, 242] or by introducing renormalization group effects [608]; see however Ref. [609], which derived a no-go theorem for the viability of the minimal radiative resonant leptogenesis. In the
Figure 5.3: Range of total mixing angle \(U^{2}\) consistent with leptogenesis for a wide range of RHN mass in a \(\Delta(6n^{2})\) model. Figure taken from the arXiv version of Ref. [606].
precision era of the neutrino mixing parameters, it is essential to study the effect of non-zero \(\theta_{13}\) and the CP phases. In the \(A_{4}\) type-I seesaw scenario [155], the authors showed that a minimal modification (adding a non-trivial \(A_{4}\) singlet) to the existing Altarelli-Feruglio [114] model may give rise to non-zero \(\theta_{13}\) as well as generate the observed BAU. Here, the high-energy CP phases or the Majorana phases appearing in the CP asymmetry also get constrained by the low-energy neutrino oscillation data, even though oscillation experiments are otherwise insensitive to these phases. In a more minimal study [201], the authors showed that with the presence of only three \(A_{4}\) flavons, both non-zero \(\theta_{13}\) and the BAU (incorporating renormalization group effects) can be generated. To unify all the sources of CP violation in the theory, the CP symmetry can be spontaneously broken by the VEV of a singlet field whose magnitude generates non-zero \(\theta_{13}\) and whose phase factor becomes directly proportional to the CP asymmetry parameter in a general \(A_{4}\) type-I+II scenario [156]. Recently, it has also been shown that the BAU can be successfully generated in a \(S_{4}\) discrete symmetry framework with TM\({}_{1}\) mixing [177], taking lepton flavor effects into consideration. Apart from the example discussed here for \(\Delta(6\,n^{2})\), a few other works exploring leptogenesis in similar flavor symmetry models are Refs. [606, 610, 611, 612]. For more recent studies on various discrete flavor symmetries and leptogenesis, see Refs. [613, 614, 615, 616, 617, 371, 618].
### Flavor Symmetry and Gravitational Waves
The recent breakthrough in gravitational wave (GW) observations by LIGO [619] provides us a new window into the early universe. A particularly interesting example of early Universe phenomena that can be a stochastic source of GWs is cosmological phase transition [620] whose observation may shed light on an array of BSM phenomena, from BAU to GUT physics and inflation [621]. As the Majorana mass term in neutrino mass models can be generated by \(B-L\) breaking at a high energy scale, the associated phase transition in the early Universe can produce a possibly observable stochastic GW background, thus providing a complementary probe of \(B-L\) physics like leptogenesis [622, 623, 624, 625].
As for the discrete flavor symmetry models, a direct test would be the observation of flavons involved in the spontaneous breaking of the discrete symmetry. This is usually assumed to happen at a high scale, which makes it challenging to test experimentally. Whatever the scale of the discrete flavor symmetry spontaneous breaking, it gives rise to degenerate vacua separated by energy barriers leading to a network of cosmic domain walls. This is a serious problem, if the walls are stable, as they could overclose the Universe [626, 627]. Solutions to this domain wall problem have been discussed in the context of non-Abelian discrete symmetries such as \(A_{4}\), which include explicit breaking terms, such that the domain walls collapse in a certain period of time before the symmetry breaking [628, 629, 630]. Such collapse of domain walls in the early Universe could lead to stochastic GWs [631, 632, 633, 634, 635], thus offering novel probes to test the discrete flavor symmetry models at current and future GW experiments (such as aLIGO/VIRGO, LISA, DECIGO, BBO, ET and CE) [636]. See also Ref. [637] for other GW imprints of flavor models, such as multipeaked stochastic GW signal from a series of cosmological phase transitions that could be a unique probe of the mechanism behind flavor hierarchies.
## 6 Summary and Outlook
The origin of neutrino masses and mixing is a fundamental question in particle physics. In this review, we have discussed flavor model building strategies based on discrete family symmetries aimed at explaining the patterns of lepton masses and flavor mixing. We consider fixed patterns (BM, TBM, GR, HG) and more elaborate symmetry groups with unbroken residual symmetries (e.g., \(A_{4}\), \(S_{4}\), \(A_{5}\), \(T^{\prime}\), \(\Delta(27)\), \(\Delta(6n^{2})\)), motivated by the increasingly precise results from neutrino
oscillation experiments. In particular, a 'large' reactor mixing angle \(\theta_{13}\) has been determined, the Dirac CP phase \(\delta\) is preferred to be nonzero and the normal mass ordering seems to be mildly favored in the current oscillation data. We discuss the far-reaching implications of these considerations in flavor model building and phenomenology. We also discussed flavor symmetry breaking and mechanisms of mass generation, and flavor symmetry in multi-Higgs doublet models.
As there are plenty (hundreds) of possible flavor symmetry models, a natural question is: How to falsify or validate any of these models? Generally, such symmetries are broken at a high scale and beyond our experimental reach. Nonetheless, phenomenological study connected with flavor symmetry effects is rich and models can be probed effectively in low-energy intensity frontier experiments (like neutrino oscillation experiments, neutrinoless double beta decay, and lepton flavor violation searches), high energy colliders (such as LHC and future lepton/hadron colliders), as well as at the cosmological frontier (via baryogenesis, dark matter and stochastic gravitational wave signals). One of the key tests for such models comes from the seesaw mechanism, namely, the presence of heavy right-handed Majorana neutrinos. Their Yukawa couplings are relevant for collider signals, LFV and leptogenesis. We give an example of such studies for the \(\Delta(6n^{2})\) group. Other groups (e.g. \(A_{4}\)) and models (e.g. scotogenic) are also discussed in the context of LFV and dark matter effects.
The concept of flavor symmetry is developing all the time. New ideas related to family symmetries also come from modular symmetries or texture zeroes. In the review, we have updated many strict correlations and predictions in models based on TM\({}_{1}\), TM\({}_{2}\) mixing, \(\mu-\tau\) reflection symmetries and status of light neutrino mass sum rules in the context of neutrinoless double beta decay. However, given the plethora of flavor models and the rich phenomenology they offer, we cannot discuss all possibilities here. Our goal in this review was to give a gist of the flurry of activities going on in this field, illustrated with a few example scenarios. With neutrino physics entering the precision era, there is an exciting prospect for more intensive studies of discrete flavor symmetries in the future for Majorana and Dirac neutrino scenarios.
## Acknowledgements
This work has been supported in part by the Polish National Science Center (NCN) under grant 2020/37/B/ST2/02371 and the Freedom of Research (Swoboda Badan) initiative of the University of Silesia in Katowice. The work of GC is supported by the U.S. Department of Energy under the award number DE-SC0020250 and DE-SC0020262. The work of PSBD is supported in part by the U.S. Department of Energy under grant No. DE-SC 0017987. We thank Julia Gehrlein for discussion on sum rules and for providing Fig. 3.3. We thank Joy Ganguly and Satabrata Mahapatra for insights on the FSS model discussed here. For the purpose of Open Access, the authors have applied a CC-BY public copyright licence to any Author Accepted Manuscript (AAM) version arising from this submission.
## Appendix A \(A_{4}\) symmetry
The \(A_{4}\) discrete symmetry group is a group of even permutations of four objects. Geometrically, it can be considered as an invariance group of a tetrahedron. It has 12 elements which can be generated by two basic objects \(S\) and \(T\) which obey the relation \(S^{2}=T^{3}=(ST)^{3}=1\).
The \(A_{4}\) group has three one-dimensional irreducible representations 1,1\({}^{\prime}\) and 1\({}^{\prime\prime}\) and one three dimensional irreducible representation 3. For a detailed discussion on \(A_{4}\) character table and three-dimensional unitary representation of the
generators, see Refs. [77, 78]. The multiplication rules of the singlets and triplets are given by [77, 78]
\[1 \otimes 1=1;\ 1^{\prime}\otimes 1^{\prime\prime}=1,\] (A.1) \[1^{\prime} \otimes 1^{\prime}=1^{\prime\prime};\ 1^{\prime\prime}\otimes 1^{\prime\prime}=1^{\prime},\] (A.2) \[3 \otimes 3=1\oplus 1^{\prime}\oplus 1^{\prime\prime}\oplus 3_{s}\oplus 3_{a},\] (A.3)
where the subscripts "\(s\)" and "\(a\)" denote the symmetric and antisymmetric parts, respectively. In the \(T\)-diagonal basis [78], writing two triplets as \((x_{1},x_{2},x_{3})\) and \((y_{1},y_{2},y_{3})\) respectively, we can write their products explicitly as
\[1 \sim x_{1}y_{1}+x_{2}y_{3}+x_{3}y_{2},\] (A.4) \[1^{\prime} \sim x_{3}y_{3}+x_{1}y_{2}+x_{2}y_{1},\] (A.5) \[1^{\prime\prime} \sim x_{2}y_{2}+x_{1}y_{3}+x_{3}y_{1},\] (A.6) \[3_{s} \sim \frac{1}{3}\begin{pmatrix}2x_{1}y_{1}-x_{2}y_{3}-x_{3}y_{2}\\ 2x_{3}y_{3}-x_{1}y_{2}-x_{2}y_{1}\\ 2x_{2}y_{2}-x_{1}y_{3}-x_{3}y_{1}\end{pmatrix},\] (A.7) \[3_{a} \sim \frac{1}{2}\begin{pmatrix}x_{2}y_{3}-x_{3}y_{2}\\ x_{1}y_{2}-x_{2}y_{1}\\ x_{3}y_{1}-x_{1}y_{3}\end{pmatrix}.\] (A.8)
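As a quick numerical sanity check of the presentation \(S^{2}=T^{3}=(ST)^{3}=1\) and of the singlet product rule (A.4), one can use an explicit three-dimensional representation in the \(T\)-diagonal basis. The matrices below are the commonly used ones from the literature (an assumption here, since the text only cites Refs. [77, 78] for them).

```python
# Numerical check of the A_4 presentation S^2 = T^3 = (ST)^3 = 1 in the
# T-diagonal basis, using the standard explicit matrices from the literature
# (an assumption; the explicit representation is not reproduced in the text above).
import numpy as np

w = np.exp(2j * np.pi / 3)                       # cube root of unity
T = np.diag([1, w, w**2])
S = (1 / 3) * np.array([[-1, 2, 2],
                        [2, -1, 2],
                        [2, 2, -1]], dtype=complex)

I3 = np.eye(3)
assert np.allclose(S @ S, I3)                              # S^2 = 1
assert np.allclose(np.linalg.matrix_power(T, 3), I3)       # T^3 = 1
assert np.allclose(np.linalg.matrix_power(S @ T, 3), I3)   # (ST)^3 = 1

# The singlet contraction of Eq. (A.4) built from two triplets x and y
# is invariant under x -> S x, y -> S y (and trivially under T).
x, y = np.random.rand(3), np.random.rand(3)
inv = x[0]*y[0] + x[1]*y[2] + x[2]*y[1]
xs, ys = S @ x, S @ y
assert np.allclose(inv, xs[0]*ys[0] + xs[1]*ys[2] + xs[2]*ys[1])
```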
|
2309.14321 | Lifelong Robot Learning with Human Assisted Language Planners | Large Language Models (LLMs) have been shown to act like planners that can
decompose high-level instructions into a sequence of executable instructions.
However, current LLM-based planners are only able to operate with a fixed set
of skills. We overcome this critical limitation and present a method for using
LLM-based planners to query new skills and teach robots these skills in a data
and time-efficient manner for rigid object manipulation. Our system can re-use
newly acquired skills for future tasks, demonstrating the potential of open
world and lifelong learning. We evaluate the proposed framework on multiple
tasks in simulation and the real world. Videos are available at:
https://sites.google.com/mit.edu/halp-robot-learning. | Meenal Parakh, Alisha Fong, Anthony Simeonov, Tao Chen, Abhishek Gupta, Pulkit Agrawal | 2023-09-25T17:45:55Z | http://arxiv.org/abs/2309.14321v2 | # Lifelong Robot Learning with Human Assisted Language Planners
###### Abstract
Large Language Models (LLMs) have been shown to act like planners that can decompose high-level instructions into a sequence of executable instructions. However, current LLM-based planners are only able to operate with a fixed set of skills. We overcome this critical limitation and present a method for using LLM-based planners to query new skills and teach robots these skills in a data and time-efficient manner for rigid object manipulation. Our system can re-use newly acquired skills for future tasks, demonstrating the potential of open world and lifelong learning. We evaluate the proposed framework on multiple tasks in simulation and the real world. Videos are available at: [https://sites.google.com/mit.edu/halp-robot-learning](https://sites.google.com/mit.edu/halp-robot-learning)
## I Introduction
A dream shared by many roboticists is to instruct robots using simple language commands such as "clean up the sink." Large language models (LLMs) can support this dream by decomposing an abstract task into a sequence of executable actions or "skills" [15]. Several LLM-based works use a _fixed_ set of skills (i.e., _skill library_) for planning [1, 14]. However, the available skills may not suffice in certain task scenarios. For instance, given the task, "clean up the sink", an LLM may plan a sequence of picks and places that move all the dishes to a dishrack. Suppose one cup contains water which must be emptied before the robot puts it away. Without access to an "empty cup" skill, the system is fundamentally incapable of achieving this task variation. On detecting failure, LLM planners may attempt to expand their abilities - the system could _request_ a new skill for "pouring" if it detects water in the cup. However, unless the robot can also _execute_ new skills, the problem remains unsolved.
Based on the tasks and scenarios the robot encounters, the planner must have the capacity to request and acquire _new skills_. Further, such skill acquisition ought to be _quick_ - a system that requires days, weeks, or months to acquire the new skill is of little utility. Concurrent to our work, the ability of an LLM-based planner to acquire new skills has been demonstrated in the virtual domain of Minecraft [35]. However, in virtual domains, new skills can be simply represented as code that can execute high-level and abstract actions. In contrast, learning a new skill for a robot also involves finding low-level actions that can affect the physical world. To the best of our knowledge, the ability to add skills to a skill library in a time and data-efficient manner and utilize them for future tasks, especially in the context of LLM-based planners, has not been demonstrated.
Existing LLM-based robotic systems struggle with online skill acquisition because common mechanisms for learning skills (e.g., end-to-end behavior cloning or reinforcement learning) typically require a large amount of data and/or training time. Some methods are able to acquire new skills in a more data-efficient manner in limited scenarios such as in-plane manipulation (e.g., TransporterNets [38]), but these skills are insufficient for 6-DoF actions (e.g., "grasp the mug from the side", "hang the mug on a rack" or "stack a book in a bookshelf"). Another body of work such as in few-shot imitation learning can efficiently solve new instances from a task family but requires large amounts of pre-training data [10, 26] which is seldom available for new skills. We first present a method that allows LLMs to request new skills to complete the given task. Second, we propose to use Neural Descriptor Fields (NDFs) [30] to realize these new skills. We choose NDFs as they require only 5-10 demonstrations to perform rigid body manipulation in the full space of 3D translations and rotations.
Our system works by prompting an LLM with a textual scene description obtained by a perception system, a library of skills expressed as Python functions, and a natural language task specification. With this information, the LLM plans and produces a sequence of skills (in the form of code) that achieves the task. Along with the skills in the skill library, we also provide the LLM with a special function for requesting a new skill to be added to the library. When the LLM plans call this learn_skill function, it returns a new skill name and a docstring description of the skill. However, such a skill is abstract and is not mapped to actions. NDFs allow the user to quickly realize this new skill by providing a few demonstrations, after which the skill is added to the skill library so that it can be re-used on future tasks. In summary, this work demonstrates a proof-of-concept implementation of an LM-powered robotic planning agent that can interactively grow its skill library based on the needs of the task. We show an instance of such a system using NDFs and perform experiments that highlight the abilities of our system.
## II Related Work
**LLMs as Zero-Shot Planners** Prior work that uses large language models (LLMs) as planners includes SayCan [1], InnerMonologue [14], NLMap-SayCan [4] and Socratic Models [37]. These methods make significant contributions: [1] and [37] use LLMs as planners; [14] emphasizes the importance of feedback; and [4] improves upon [1] by introducing open-vocabulary detection for grounding using CLIP and ViLD features [27][13]. The planners in these methods either generate the plan in textual
format or choose the next step based on a given set of actions described through text. Another set of methods [21][34][32][22] using LLM as planners chose to output the plans directly using a Python or symbolic API, given the function documentation and sufficiently expressive function names.
**End-to-End Language Conditioned Manipulation** Another class of methods processes inputs from different modalities such as vision, language, and sound, and trains an LLM to use these inputs to output robot actions end-to-end (e.g., CLIPort [28], Interactive Language [23], RT-1 [2], PerAct [29] and VIMA [17]). Another end-to-end approach is PaLM-E [9], which generates textual steps as output that are assumed to map to a small set of low-level policies. One main advantage these offer is more faithful LLM grounding, in contrast to modular approaches that only list the objects in the scene and sometimes fail due to partial scene descriptions. However, they each suffer from requiring a large amount of data for training or fine-tuning. Such large data requirements also make it difficult to achieve generalization. Finally, many of these works are limited to performing 3-DoF (top-down) manipulation actions.
**Low-Level Robot Primitives** The modular approaches [1][14][4][37][21][34] use a predefined set of primitive skills, often hardcoded or learned from behavior cloning. These low-level primitives can also be learned through methods such as [16], [11], [6], [38]. While these skills can be composed to perform a wide range of actions, often a required skill cannot be composed from the primitive set, and adding a new primitive may require careful engineering or a large number of demonstrations. Thus, we employ [30] to incorporate new skills at runtime using only a few demonstrations, with the drawback that the skills are limited to known object categories.
## III Method
In the spirit of prior work on performing long-horizon tasks wherein a high-level planning algorithm chains together different low-level skills [12, 20, 24, 37], our system has explicit modules for perception, planning, and control (Fig. 1). The modularity of our system allows us to take advantage of state-of-the-art (SOTA) models like SAM [18] for segmentation and GPT-4 [25] for planning skill sequences. At a high level, our perception module describes the scene from RGB and depth observations, generating a language-based scene description containing information about the objects in the scene and the spatial relationship between them. Given the scene description and a library of skills, the planning module plans a sequence of steps to solve the task based on the scene description and task requirements. The skill sequence corresponds to a set of executable behaviors on the robot.
In contrast to previous work that uses LLMs in robotics, our planning module can request to learn a new skill when it determines that the existing skills are insufficient, and a data-efficient skill learning method can be used to extend the skill library with this new executable behavior. With an expanded skill library, the planner can utilize both the original primitive skills and the newly learned skills when completing subsequent tasks. Thus, our approach endows the system with a form of continual learning. In the following subsections, we describe each module in detail.
### _Perception_
The perception module (Fig. 2a) processes RGBD images to obtain and store information about the scene objects. First, the module identifies objects using an open-vocabulary object detector [39]. We also perform segmentation to obtain object masks using SAM [18] and combine them with the depth images to obtain object point clouds. In addition to object labels and segmentation masks, the planner may require additional information about the spatial arrangement of the scene. For example, if a robot needs to empty a mug, it first needs to know whether there is an object _in_ the mug, and only execute the skill of emptying it if there is. We generate spatially-grounded scene descriptions automatically by computing positional relationships between objects using the object point clouds. A scene description that is not spatially grounded only describes the objects present in the scene, without specifying the spatial relationship between them. Lastly, to enable open-vocabulary language commands that target specific object instances, we extract CLIP embeddings of each segmented object in the scene. In this way, given a scene with multiple mugs, if the task is to "pick up the red mug," we are able to identify the object that corresponds to the description of a "red mug" (additional examples in Appendix). Overall, our perception components output segmented object point clouds with associated detection labels, inter-object relations, and CLIP embeddings.
**Spatially-grounded Textual Scene Description** To inform the planner about the environment state, we format the perception outputs into a language-based scene description with information about the scene objects and their inter-object relations. This involves constructing a string with the names of the objects along with the relations that hold between them. The description is akin to a textual rendering of a "scene graph". Please see the Appendix for further details. Note that the particular method of describing the scene is not critical to our work, and in the future vision-language models capable of describing objects and the relationships between them can replace this system.
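A minimal sketch of how such relations and the resulting description string could be computed from segmented object point clouds is given below; the relation vocabulary, thresholds, and helper names are illustrative assumptions, not the exact implementation.

```python
# Illustrative sketch: derive coarse inter-object relations from segmented point
# clouds and format them into a textual scene description.  The relation names
# and thresholds here are assumptions for illustration only.
import numpy as np

def relation(pc_a, pc_b, margin=0.01):
    """Return a coarse spatial relation of object a with respect to object b."""
    min_a, max_a = pc_a.min(axis=0), pc_a.max(axis=0)
    min_b, max_b = pc_b.min(axis=0), pc_b.max(axis=0)
    overlap_xy = np.all(max_a[:2] > min_b[:2] + margin) and np.all(min_a[:2] < max_b[:2] - margin)
    if overlap_xy and min_a[2] >= max_b[2] - margin:
        return "on"
    if overlap_xy and max_a[2] <= max_b[2] + margin and min_a[2] >= min_b[2] - margin:
        return "in"
    return None

def scene_description(objects):
    """objects: dict mapping object label -> (N, 3) point cloud in the world frame."""
    lines = ["Objects detected: " + ", ".join(objects) + "."]
    labels = list(objects)
    for i, a in enumerate(labels):
        for b in labels[i + 1:]:
            rel = relation(objects[a], objects[b])
            if rel:
                lines.append(f"The {a} is {rel} the {b}.")
    return "\n".join(lines)

# Toy example (units: meters): a mug resting on a table surface.
table = np.random.uniform([0.0, 0.0, 0.00], [1.0, 1.0, 0.02], size=(500, 3))
mug = np.random.uniform([0.4, 0.4, 0.02], [0.5, 0.5, 0.12], size=(200, 3))
print(scene_description({"mug": mug, "table": table}))
```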
### _Planning and Control_
Given the language command and the textual scene description from the perception system, GPT-4 is used to plan a sequence of steps to be executed. The inputs and outputs of the LLM are structured as follows:
**Skill Definitions via Code API** One way to design a planner is to output a plan in natural language. However, a more machine-friendly alternative is to have the planner output programming code [21, 32]. Having an LLM planner directly produce code avoids the need to map a textual plan to a robot-executable plan. In addition, communicating with the LLM in a programming language allows a human to give prompts in the form of comments, docstrings, and usage examples, which helps the planner understand how each skill
operates. To take advantage of these benefits, we define each skill as a Python function that takes input arguments such as object identifiers and environment locations. We provide the planner with a description and set of input/output examples for each function. The code API is initialized with a skill library \(\mathcal{S}_{0}\) containing five primary functions: find, pick, get_place_position, place, and learn_skill:
* find(object_label=None, visual_description=None, location_description=None): searches with the perception system for an object based on category, visual property, or location. Returns an object-id.
* pick(object_id): uses Contact-GraspNet [33] to find a 6-DoF grasp for the object point cloud associated with the object_id and executes the grasp.
* get_place_position(object_id, reference_id, relation): for the object given by object_id, returns the \((x,y,z)\) location determined by the text description relation relative to reference_id.
* place(object_id, place_position): places the object at the \((x,y,z)\) value given in place_position.
* learn_skill(skill_name): returns a new executable skill function and a docstring describing the skill behavior.
The above API functions also output a signal indicating whether or not the function executes properly (i.e., to catch and correct runtime errors due to syntax mistakes). If new skills are learned (discussed in Sec. III-C), the library is updated \(\mathcal{S}_{i}=\mathcal{S}_{i-1}\cup\{\pi_{i}\}\) where \(\pi_{i}\) denotes the new skill.
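For concreteness, one way such a growing skill library and its code-API prompt could be organized is sketched below; the class structure and formatting are illustrative assumptions rather than the exact implementation, and only the skill names mirror the API above.

```python
# Illustrative sketch of a skill library S_i that can grow at runtime and
# regenerate the code-API prompt shown to the planner.  The structure and
# formatting are assumptions; only the function names mirror the text above.
import inspect

class SkillLibrary:
    def __init__(self, base_skills):
        self.skills = dict(base_skills)          # name -> callable (S_0)

    def register(self, fn):
        """Add a newly learned skill pi_i, so S_i = S_{i-1} union {pi_i}."""
        self.skills[fn.__name__] = fn

    def api_prompt(self):
        """Render the library as a code API the LLM can be prompted with."""
        entries = []
        for name, fn in self.skills.items():
            sig = inspect.signature(fn)
            doc = (fn.__doc__ or "").strip()
            entries.append(f"def {name}{sig}:\n    \"\"\"{doc}\"\"\"")
        return "\n\n".join(entries)

def pick(object_id):
    """Grasp the object with the given id using a 6-DoF grasp."""

def place(object_id, place_position):
    """Place the held object at the given (x, y, z) position."""

library = SkillLibrary({"pick": pick, "place": place})

def pick_mug_by_handle(object_id):
    """Grasp a mug by its handle (learned from a few demonstrations)."""

library.register(pick_mug_by_handle)             # the library now contains the new skill
print(library.api_prompt())
```

On subsequent tasks, the regenerated prompt automatically exposes the learned skill alongside the primitives.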
**Full Planner Input/Output and Skill Execution** The planner is prompted to produce the plan in two steps. First, given the scene and task description, the planner generates a sequence of steps described in natural language. Next, the planner is provided with the code API of skills as discussed above and tasked to write code for executing the task using the given skills. For example, if the first step in the plan is to "find" a mug with the find function, the planner may output object_id = find("mug"). Since our system uses an LLM planner, the human user can interact with the planner at either stage of the planning to further refine the plan or correct mistakes. An example of the interaction between the user, planner, and robot is shown in Fig. 2. We qualitatively observe this two-step process helps the model generate higher-quality plans, as compared to producing the full plan directly. The two-step breakdown potentially helps in the same way "chain-of-thought" prompting has helped LLMs find better responses [36].
The code returned by the LLM is executed using the exec construct in Python. For skills involving robot actions, the skill function calls a combination of inverse kinematics (IK), motion planning, and trajectory following using a joint-level PD controller.
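A minimal sketch of this execution step, with the skill functions exposed in the namespace seen by exec and runtime errors converted into execution feedback, might look as follows; the stub skills and feedback format are assumptions for illustration.

```python
# Illustrative sketch: run the LLM-generated code with exec, exposing only the
# skill functions, and turn runtime errors into execution feedback.
def execute_plan(plan_code, skills):
    namespace = dict(skills)                     # names visible to the generated code
    try:
        exec(plan_code, namespace)
        return True, "plan executed without runtime errors"
    except Exception as err:                     # runtime or syntax mistakes become feedback
        return False, f"{type(err).__name__}: {err}"

# Stub skills standing in for the real find/pick/place implementations.
skills = {
    "find": lambda label: f"{label}_0",
    "pick": lambda object_id: print(f"picking {object_id}"),
    "place": lambda object_id, pos: print(f"placing {object_id} at {pos}"),
}

plan = """
mug_id = find("mug")
pick(mug_id)
place(mug_id, (0.5, 0.1, 0.02))
"""
ok, feedback = execute_plan(plan, skills)
print(ok, feedback)
```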
Fig. 1: Our system consists of three modules: _perception_, _planning_, and _control_. The _perception_ module processes RGB-D images and outputs a textual scene description that identifies objects and their spatial relationships. The _planning_ module uses GPT-4 to plan a sequence of steps based on the available skills and the task command. We added a learn_skill(skill_name) function to the planner so that it can plan to learn a new skill if such learning is necessary for completing the task. Finally, the _control_ module executes the planned steps using the available skills or starts learning a new skill.
Fig. 2: (a) From RGBD images, our perception module obtains information about the objects and their relations, creates an object information dictionary, and generates a scene description (detection, object_pairs corresponding to given object relations, and the template is in black). (b) An example showing the interaction between the robot, the user, and the planner.
### _Learning New Skills and Expanding the Skill Library_
**Requesting New Abilities with the learn_skill function** The code API for learn_skill contains a docstring detailing the role of the function and also includes a few examples of the desired output of using the learn_skill function. The reason for providing examples is to exploit the in-context learning ability of LLMs - these examples help the LLM figure out how to use the learn_skill function. More details are in the Appendix. The learn_skill(skill_name) call returns the handle to a new executable skill function along with a docstring that describes the behavior of the function. The function is parameterized by either one or two object_ids - one for specifying which object skill_name acts upon, and another for specifying a reference object for relational skills (e.g., pick(bottle_id) vs. insert(peg_id, hole_id)). The exact parameterization is decided by the LLM. When learn_skill is called, the returned function is added to the skill library so that the new skill can be reused in the future.
**Data- and time-efficient skill grounding with NDFs** Our framework is agnostic to the specific method used to ground newly learned skills into actions. It can be end-to-end learning with reinforcement learning, or behavior cloning from demonstrations. In this work, we choose to use NDFs [30] to learn new skills because they allow a skill to be learned efficiently from just a few (\(\leq\)10) demonstrations. NDFs also facilitate a degree of category-level generalization across novel object instances, as well as generalization to novel object poses due to built-in rotation equivariance. More information on NDFs can be found in [30, 31].
**Learning from Feedback** If we specify a task the system cannot solve using the available skills (such as "pick up the mug by the handle", when the available "pick" skill grasps the mug from the rim), we would expect the LLM to directly request a new skill with learn_skill. While this occurs the majority of the time (see Experiments Section), the planner sometimes directly attempts the task using a skill that does not satisfy the task requirements. In these cases, if a user provides the _outcome_ of a task attempt (e.g., "the mug was grasped by the rim"), the planner can use this information to register its usage of an incorrect skill and subsequently call learn_skill to expand its abilities. The system can then attempt the task with the newly learned skill.
This highlights the need for feedback mechanisms that, in addition to detecting runtime errors, also inform the planner about the state of the environment after skill execution. To achieve this, we allow a human operator to manually but _optionally_ provide feedback before and after code execution. We allow the human to provide feedback after the execution of every step in the code. The combination of _outcome_ feedback from the user and the _execution_ feedback from the skill functions enables the system to detect failures, replan and if necessary expand its skillset using learn_skill.
**Continual Learning** Learning new skills allows one to execute a task that was previously not possible. However, the full potential of learning new skills is realized when we allow the system to _continually_ acquire and _re-use_ skills to solve future tasks. This creates a system with ever-expanding capabilities. There are many ways this can be achieved - our implementation involves simply adding a new skill function expressed as a code API to the skill library, and using the updated library for future tasks.
## IV Experiments
**Environment Design and Setup** We design our experiments to achieve three goals: (1) Show a proof-of-concept implementation of LLM-based task planning and execution with interactive skill learning in the real world, (2) Evaluate the abilities of current LLMs to appropriately request and re-use new skills based on the needs of different manipulation tasks, and (3) Compare the performance of the system when different components (such as object relations) are included vs. removed.
In the real world, we tested our system on the Franka Panda robot with a Robotiq 2F-140 parallel jaw gripper. We used four calibrated RealSense cameras to obtain RGB-D images and point clouds. We also evaluated the LLM planner in isolation with a set of manually crafted tasks, scene descriptions, and success criteria. To perform additional system ablations, we evaluate our approach in simulation using PyBullet [7] and the AIRobot library [5]. Our environment includes a tabletop-mounted Panda with the default gripper, and synthetic cameras for obtaining RGB-D images and segmentation masks. We use a combination of ShapeNet [3] and manually-generated objects for experiments in simulation.
### _Real-world tasks requiring learn_skill_
We first showcase the benefits of incorporating learn_skill. The system is deployed to perform three tasks in the real world: (1) grasping a mug by a specific part, such as the handle, (2) placing a bottle in a container that must fit on a small shelf, and (3) emptying a mug from a "sink". Each task can be completed in multiple ways, some of which do not fulfill the full set of task requirements. Our reference point for comparison is the overall system with no feedback mechanism and no learn_skill capability. This version directly attempts each task using the base set of primitive skills. Below, we discuss the differences between this baseline and the full version of our system. The full set of planner inputs/outputs for these tasks can be found in the Appendix.
#### IV-A1 Learning and requesting new pick and pick-place skills
**Task 1: Grasp mug by handle** Our warm-up task that highlights how learning new skills can benefit our system is to perform grasping by a specific part. In this case, we ask the system to "grasp the mug by the handle" (see Fig. 3A). Without learn_skill, the planner directly calls pick on the mug. This triggers a grasp detector [33] to output a set of grasps on the corresponding mug point cloud. Since most of these grasps are along the rim of the mug, the robot executes a grasp along the rim of the mug, and the task finishes.
If an incorrect skill is used, the human can prompt the system with feedback. By telling the system "the mug was
picked up by the rim", the planner puts the mug back down and requests to learn a new pick_mug_by_handle skill. We teach this as a side-grasp at the handle using NDFs with five demonstrations. After collecting the demos, we add pick_mug_by_handle to the skill library. Finally, the LLM directly calls pick_mug_by_handle and finishes the task successfully.
**Task 2: Place bottle in flat tray** Our next task is to place a bottle in a container that must eventually fit on a small shelf. Here, we prompt the system to "place the bottle sideways in the container" (see Fig. 3B). When the pipeline runs using the base set of skills, the robot uses the only available "place" skill, which places the bottle upright in the tray.
Instead, when we provide the feedback "the bottle was placed upright in the tray", the LLM calls learn_skill to acquire a place_bottle_sideways_in_tray skill. This is implemented via NDFs as a side grasp on the bottle along with a reorientation and placement inside of the tray. Once this new skill has been added, the robot is able to successfully complete the task.
#### IV-A2 Continual learning by re-using previously-learned skills
**Task 3: Empty mug from sink** Finally, we prompt the system with the abstract objective of emptying a "sink" by removing a mug from the container and placing it on the table (see Fig. 3C). This task implicitly requires _emptying_ the mug before placing it. We test the LLM's ability to satisfy this requirement by placing an additional small object (banana) inside the mug (ensuring the object is at least visible by the cameras, but difficult to pick up directly). The baseline system directly calls a combination of pick on the mug and place to put the mug down on the table.
However, with access to learn_skill and the dynamic skill library, the planner _reuses_ pick_mug_by_handle learned in Task 1 and immediately requests to learn tilt_mug so it can first move any objects in the mug to the trash container. We again use NDFs to teach tilt_mug, which reorients the mug above the tray. After emptying, the system plans to place the mug back _into_ the sink. The user tells the system "the sink is not empty, put the mug to
Fig. 3: High-level plan and images for three tasks requiring a new skill: (A) Grasp mug by the handle, (B) Place bottle in container on its side, and (C) Empty the sink. The gray comments represent execution feedback while the green text is human feedback. When learn_skill is not available, the robot fails to complete the tasks. However, by learning new skills, the planner expands its abilities and satisfies each task requirement.
the right of the sink". Finally, the LLM re-plans with this feedback and achieves the final placement on the table.
### _LLM-only skill learning evaluation_
In this section, we examine the isolated ability of the LLM-planner to utilize the learn_skill function and to appropriately re-use and/or _not_ re-use newly-learned skills on subsequent runs. This enables further analysis of GPT-4's ability to interpret manipulation scenarios represented via textual scene descriptions and correctly use the available skills provided in the code API. For each task in the following subsections, we provide a manually-constructed scene description (that does not correspond to any particular real-world scene) along with a task prompt and the skill API. We ask the planner to output code that completes the task using the API functions. The code output is manually evaluated as correct/incorrect by a human.
**Requesting new skills when needed** First, we study the ability to either (i) properly call learn_skill when the base skill set is insufficient for the task, or (ii) properly _not_ call learn_skill when it is sufficient. We report the fraction of attempts that correctly use or ignore learn_skill in a scenario where human feedback is not provided. The results are shown in the top two sections of Table I. The 91% success rate for using learn_skill without feedback indicates GPT-4 can be used for requesting an expanded skill set in a purely feed-forward fashion. Similarly, the LLM usually does not call learn_skill when it is not needed (87% success). However, some performance gap remains in both settings.
**Re-using new skills with varying level-of-detail skill descriptions** Next, we focus on the ability to properly re-use the previously-learned skills on subsequent runs, when they can either be applied or when they specifically should _not_ be applied (e.g., in scenarios where they are inappropriate or infeasible). We consider varying levels of detail in the description that accompanies the newly-learned skill as it is added to the code API. For instance, we can provide minimal information and only add the name of the new skill, or we can modify the return values of learn_skill so that the LLM writes its own docstring/function description to accompany the new skill when we add it to the API. The results are shown in the last two rows of Table I. The success rates indicate that the language model correctly uses the newly-learned skills with higher frequency when the skill descriptions also include docstrings. This makes intuitive sense, as it provides extra context for both the ability and applicability of the newly learned skill, which the LLM can attend to when generating the output code for executing the task (mimicking the chain-of-thought and "let's think step-by-step" improvements observed in prior work [19, 36]).
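For concreteness, the two levels of detail compared here correspond roughly to API entries of the following form; the docstring wording is only illustrative (in the system it is written by the LLM when learn_skill is called).

```python
# Two ways a newly learned skill can be exposed in the code API on later runs.
# The docstring text is illustrative; in the full system it is generated by the LLM.

# (a) Name only:
def pick_mug_by_handle(object_id): ...

# (b) Name plus an LLM-written docstring giving context for when to use the skill:
def pick_mug_by_handle(object_id):
    """Pick up a mug by grasping its handle from the side.

    Use this instead of pick() when the task requires holding the handle."""
```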
Despite the performance increase when describing newly-added skills in more detail, the LLM only achieves moderate overall performance (75% success rate). We observe this is due to a combination of sometimes using new skills when they should not be used (e.g., calling a side_pick_bottle skill even when the scene description says "the bottle _cannot_ be reached from the side") and re-learning the same skill multiple times (while occasionally calling it a very similar name) rather than directly utilizing the function that is already available in the API. We deem this as a somewhat negative result which points to potential gaps in such a method of LLM-based task planning. Namely, directly outputting a sequence of high-level skills (or exhaustively scoring them with a language model) does not allow more information about the operation of high-level skills (such as scenarios when they are or are _not_ applicable) to be provided or utilized during planning/reasoning.
## V Limitations
While our system takes advantage of SOTA components, they sometimes fail and trigger compounding inaccuracies in the downstream pipeline. For example, the LLM heavily depends on an accurate description of the scene, which can sometimes contain erroneous detections and incorrect object relations. We also leverage human feedback to obtain environment descriptions that inform task success and skill acquisition. Humans can provide accurate descriptions that inform when to learn new skills, but repeated user interaction makes the system less autonomous and slower to execute tasks. Leveraging learned success detectors would make the system more autonomous and self-sufficient. Similarly, human verification is typically needed to confirm the overall success or failure of a task, making it difficult to run system evaluation experiments at scale and limiting our evaluations primarily to qualitative demonstrations.
## VI Conclusion
This paper presents a modular system for achieving high-level tasks specified via natural language. Our framework can actively request and learn new manipulation capabilities, leading to an ever-expanding set of available skills to use during planning. We show how an LLM planner can use this ability to adapt its skill set to the demands of real-life task scenarios via both feed-forward reasoning and environmental feedback. In conjunction with perceptual scene representations obtained from off-the-shelf components and a data-efficient method for learning 6-DoF manipulation skills, we provide an example of a complete system. Our results demonstrate how this combination of full-stack modularity, spatially-grounded scene description, and online learning enables a qualitatively improved ability to perform manipulation tasks specified at a high level.
| Eval Metric | Variation | Success Rate |
| --- | --- | --- |
| Correct use of learn_skill | – | 0.91 |
| Correctly did _not_ use learn_skill | – | 0.87 |
| Correct re-use of new skill (varying skill description) | Name only | 0.50 |
| | Name + docstring | 0.75 |

TABLE I: Success rates for the LLM-only learn_skill evaluations.
## VII Acknowledgement
We thank the members of Improbable AI for their feedback on the project. This work is supported by Sony, Amazon Robotics Research Award, and MIT-IBM Watson AI Lab. Anthony Simeonov is supported in part by an NSF Graduate Research Fellowship.
### _Author Contributions_
**Meenal Parakh** Co-led the project, developed the core LLM-planning framework and full-stack system, set up and ran experiments in simulation and the real world, and drafted the paper.
**Alisha Fong** Co-led the project, integrated NDF-based skill learning into the LLM-planning framework, set up and conducted experiments in the real world and simulation, helped evaluate the LLM in isolation, and drafted the paper.
**Anthony Simeonov** helped integrate NDF-based skills into the framework, supported real robot experiments and LLM-only evaluation, and helped revise the paper.
**Tao Chen** engaged in brainstorming and discussion about system implementation and experiment design, mentored Meenal Parakh, and helped draft the paper.
**Abhishek Gupta** was involved with technical discussions, advised Meenal Parakh, and helped with project brainstorming in the early phases.
**Pulkit Agrawal** advised the project and facilitated technical discussions throughout, helped refine the project focus on interactive skill learning with LLMs, and revised the paper.
|
2309.12988 | On $p$-adic modularity in the $p$-adic Heisenberg algebra | We establish existence theorems for the image of the normalized character map
of the $p$-adic Heisenberg algebra $S$ taking values in the algebra of Serre
$p$-adic modular forms $M_p$. In particular, we describe the construction of an
analytic family of states in $S$ whose character values are the well-known
$\Lambda$-adic family of $p$-adic Eisenstein series of level one built from
classical Eisenstein series. This extends previous work treating a
specialization at weight $2$, and illustrates that the image of the character
map contains nonzero $p$-adic modular forms of every $p$-adic weight. In a
different direction, we prove that for $p=2$ the image of the rescaled
character map contains every overconvergent $2$-adic modular form of weight
zero and tame level one; in particular, it contains the polynomial algebra
$\mathbf{Q}_2[j^{-1}]$. For general primes $p$, we study the square-bracket
formalism for $S$ and develop the idea that although states in $S$ do not
generally have a conformal weight, they can acquire a $p$-adic weight in the
sense of Serre. | Cameron Franc, Geoffrey Mason | 2023-09-22T16:33:57Z | http://arxiv.org/abs/2309.12988v1 | # On \(p\)-adic modularity in the \(p\)-adic Heisenberg algebra
###### Abstract.
We establish existence theorems for the image of the normalized character map of the \(p\)-adic Heisenberg algebra \(S\) taking values in the algebra of Serre \(p\)-adic modular forms \(M_{p}\). In particular, we describe the construction of an analytic family of states in \(S\) whose character values are the well-known \(\Lambda\)-adic family of \(p\)-adic Eisenstein series of level one built from classical Eisenstein series. This extends previous work treating a specialization at weight \(2\), and illustrates that the image of the character map contains nonzero \(p\)-adic modular forms of every \(p\)-adic weight. In a different direction, we prove that for \(p=2\) the image of the rescaled character map contains every overconvergent \(2\)-adic modular form of weight zero and tame level one; in particular, it contains the polynomial algebra \(\mathbf{Q}_{2}[j^{-1}]\). For general primes \(p\), we study the square-bracket formalism for \(S\) and develop the idea that although states in \(S\) do not generally have a conformal weight, they can acquire a \(p\)-adic weight in the sense of Serre.
###### Contents
* 1 Introduction
* 2 Background and notation
* 3 The square bracket formalism and the operator \(L[0]\)
* 4 A \(\Lambda\)-adic family of states
* 5 The character map in weight zero
* 6 Powers of \(h[-2]\)
* 7 Continuous action of \(S_{\text{alg}}[\ ]\) on \(S_{R}\)
## 1. Introduction
The occurrence of _modularity_ in the theory of vertex operator algebras (VOAs) is by now a well-known, even commonplace, phenomenon. 'Modularity' usually refers to elliptic modular forms although many other types of modular and automorphic objects intervene in the theory. One runs across elliptic functions, Siegel modular forms, quasimodular forms, Jacobi forms and mock modular forms for example, not to mention vector-valued versions of these. Absent from this list, however, are \(p\)-adic modular forms and their variants.
The last several decades have witnessed an explosion of work in number theory using \(p\)-adic methods to solve outstanding classical problems, as well as introducing new \(p\)-adic versions of old results and conjectures. Of relevance to the present paper is the theory of \(p\)-adic modular forms as introduced by Serre and Katz [11, 16]. Among other applications, these new modular forms are frequently used to define \(p\)-adic variants of \(L\)-functions via the theory of \(p\)-adic interpolation. See for example [9] for an overview of this theory, and [17] for a concrete discussion of the theory
of overconvergent modular forms. Some of the essential facts are also reviewed in Section 2 below.
Given the close connection between VOAs and modular forms, it is natural to hypothesize a \(p\)-adic theory of VOAs that would extend this connection to \(p\)-adic modular forms. In [6] we introduced such a theory by adopting a set of axioms that naturally arises by completing the usual axioms of a standard VOA (or, as we shall call them, _algebraic_ VOAs) with respect to some \(p\)-adic norm. This is very different from simply tensoring with the field \(\mathbf{Q}_{p}\) of \(p\)-adic numbers, a situation that is mirrored in Serre's theory [15]. Indeed, Serre notes that merely tensoring with \(\mathbf{Q}_{p}\) essentially produces nothing new since the space of modular forms of level one has a basis defined over the integers. Similarly, although the resulting completed \(p\)-adic VOAs of [6] have many properties akin to those of an algebraic VOA, we emphasize that a \(p\)-adic VOA is _not_ a VOA in the usual sense: for example, one axiom of algebraic VOAs (the truncation property of vertex operators) is that certain generating series (fields applied to states) are finite-tailed Laurent series. By contrast, in our \(p\)-adic theory these series can have essential singularities, although the coefficients of the polar terms tend to zero in the underlying \(p\)-adic topology. While novel in the algebraic theory of VOAs, such series are commonplace in \(p\)-adic analysis and geometry.
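To illustrate the contrast with the algebraic truncation axiom, consider a series of the schematic shape (this example is purely illustrative and is not taken from [6])

\[v(z)\;=\;\sum_{n\geq 0}c_{n}z^{n}\;+\;\sum_{n\geq 1}p^{n}d_{n}z^{-n},\qquad c_{n},d_{n}\in\mathbf{Z}_{p}.\]

The polar part is infinite, so such a series cannot arise from a field of an algebraic VOA, yet its polar coefficients \(p^{n}d_{n}\) tend to zero \(p\)-adically, which is exactly the behaviour tolerated in the \(p\)-adic setting.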
Heisenberg algebras are central objects in both mathematics and physics, providing one of the most basic and amenable examples of relevance. Use of the word 'algebra' in this context is a shorthand for either 'Heisenberg Lie algebra' or 'Heisenberg vertex operator algebra'. It is convenient to denote the algebraic Heisenberg VOA of rank \(1\) over \(\mathbf{Q}_{p}\) by \(S_{\mathrm{alg}}\). This is to distinguish it from its \(p\)-adic completion \(S\), called the \(p\)-adic Heisenberg algebra that was constructed in [6] and shown to be a \(p\)-adic VOA. They both occur in the following commuting diagram that we will shortly explain and which goes to the heart of the main results of the present paper.
\[\begin{array}{ccc}S&\stackrel{\hat{f}}{\longrightarrow}&M_{p}\\ \uparrow&&\uparrow\\ S_{\mathrm{alg}}&\stackrel{f}{\longrightarrow}&\mathbf{Q}_{p}[E_{2},E_{4},E_{6}]\end{array} \tag{1}\]
The left vertical map is the natural containment of \(S_{\mathrm{alg}}\) in its \(p\)-adic completion \(S\). \(M_{p}\) is the space of (Serre) \(p\)-adic modular forms of tame level \(1\). It is the \(p\)-adic completion of \(\mathbf{Q}_{p}[E_{4},E_{6}]\), however, \(E_{2}\) also lies in \(M_{p}\) and the right vertical map is the natural containment. \(\mathbf{Q}_{p}[E_{2},E_{4},E_{6}]\) is the ring of quasimodular forms of level \(1\).
As for the horizontal maps, \(f\) is a renormalized version of the usual character map (\(1\)-point function) of algebraic VOAs that associates a formal \(q\)-series (a certain graded trace) to a state in the VOA. This is an important, though technical, aspect of the general theory, and we say more about it in Subsection 3.2. It is known [5, 13, 14] that \(f\) maps into the ring of quasimodular forms, and in fact the last two references prove that \(f\) is a _surjection_. Furthermore, it is proved in [6] that \(f\) is \(p\)-adically continuous, so that it extends to the completions; this extension is the upper map \(\hat{f}\), which is a continuous linear map of \(p\)-adic Banach spaces.
This diagram also helps explain our emphasis in the present paper on the Heisenberg algebra. In principle we would like to consider similar diagrams in which \(S_{\mathrm{alg}}\) is replaced with other algebraic VOAs and \(S\) with their \(p\)-adic completions. However,
we presently do not, in general, have a good understanding of either the vertex operator structure of the completion or the image of \(f\), and this greatly complicates any study of \(\hat{f}\), for example a description of its image.
As it is, since Diagram (1) commutes and \(f\) is surjective (see below), \(\operatorname{im}\hat{f}\) contains \(\mathbf{Q}_{p}[E_{2},E_{4},E_{6}]\), and we can ask for a description of the precise image of this map. The most natural expectation is that \(\hat{f}\) _surjects onto_ \(M_{p}\). This conclusion -- if true -- remains open, and the main purpose of the present paper is to develop techniques and some explicit results that contribute towards its affirmative resolution.
We already proved in [6] that the image of \(\hat{f}\) is _strictly larger_ than the space of quasimodular forms, i.e., the image contains new \(p\)-adic modular forms, and additional examples of a similar nature are developed in [1]. The main arithmetic results of the present paper are encapsulated in
**Theorem 1.1**.: _The following hold:_
1. _For every prime_ \(p\) _and for every even weight_ \(k\) _in_ \(p\)_-adic weight space_ \(X\)_,_ \(\operatorname{im}\hat{f}\) _contains the_ \(p\)_-adic Eisenstein series_ \(G_{k}^{*}\) _of weight_ \(k\)_. In fact, there exists a_ \(\Lambda\)_-adic family of continuously varying states in the_ \(p\)_-adic Heisenberg algebra that lifts this family of Eisenstein series._
2. _When_ \(p=2\)_,_ \(\operatorname{im}\hat{f}\) _contains the space of_ \(2\)_-adic, weight_ \(0\)_, overconvergent modular forms_ \(M_{0}^{\dagger}(7/4)\)_. In particular,_ \(\operatorname{im}\hat{f}\) _contains_ \(\mathbf{Q}_{2}(j^{-1})\)_._
For the benefit of readers unfamiliar with the language of \(p\)-adic and \(\Lambda\)-adic modular forms, the relevant terms are explained in Section 2 and (19).
The reason why we do not obtain surjectivity in weight zero when \(p=2\) is that the states in \(S\) that we use as preimages of the powers \(j^{-n}\) do not have sufficiently nice asymptotic behaviour as \(n\) varies. See Corollary 6.7 and the surrounding discussion for more details on this point. The reason why we restrict to the prime \(p=2\) is to simplify computations, as in this case the space of \(2\)-adic modular forms of tame level one has a particularly simple description as the Tate algebra \(\mathbf{Q}_{2}\langle j^{-1}\rangle\) of series in \(j^{-1}\) with coefficients that tend to zero \(2\)-adically [17]. See the start of Section 5 for a discussion of this point. A very similar description of weight zero forms holds when \(p=3\) and \(p=5\) so that in principle one might be able to extend our computations to these primes. However, it seems that new ideas would be required for a significantly more general result, and so for this reason we have contented ourselves here with the restrictions and results stated in Theorem 1.1.
We now discuss some additional aspects of the proof of the Theorem. This may be done with the aid of a diagram similar to (1), namely
\[\begin{array}{ccc}u\in S&\xrightarrow{\ \hat{f}\ }&g\\ \big\uparrow& &\big\uparrow\\ u_{i}\in S_{\mathrm{alg}}&\xrightarrow{\ f\ }&g_{i}\end{array} \tag{2}\]
Here, \(g\) is a designated \(p\)-adic modular form that we want to show lies in \(\operatorname{im}\hat{f}\); \(g\) is the \(p\)-adic limit of a Cauchy sequence of classical modular forms \(g_{i}\), say of weight \(k_{i}\). So the right vertical arrow is just notation. We want to pull this back to the Heisenberg VOA. Because \(f\) is a _graded_ surjection (again, see below) there are states \(u_{i}\in(S_{\operatorname{alg}})_{[k_{i}]}\) mapping onto \(g_{i}\). Because \(f\) has a very large kernel there is generally not a unique choice for \(u_{i}\) and in any case the resulting sequence of states \((u_{i})\) may not be a Cauchy
sequence in \(S_{\rm alg}\). As long as it is, however, and if its limit \(u\) is contained in \(S\), then the \(p\)-adic continuity of \(\hat{f}\) ensures that \(\hat{f}(u)=g\) as required. The \(p\)-adic modular forms intervening in the statement of Theorem 1.1 are emblematic of cases for which we can make this procedure work.
We turn to a discussion of weights in \(S\) and \(S_{\rm alg}\) and first note that when the situation of (2) prevails, Serre [16] tells us that \((k_{i})\) is a Cauchy sequence having a limit, say \(k\), in \(p\)-adic weight space \(X\). This is the \(p\)-adic weight of \(g\) and it is natural to say that \(u\) also has a weight \(k\) in order to make \(\hat{f}\) a weight-preserving map. Actually, we say that \(u\) has \(X\)_-weight_\(k\) so as to distinguish it from the other types of weights occurring in a VOA (cf. the next paragraph). This notion of \(X\)-weight in \(S\) generalizes the usual square bracket weight of a state in \(S_{\rm alg}\). The latter type of weight belongs to \(\mathbf{Z}\), which is embedded in \(X\) as a dense subspace. So we see that not only Fock space itself, but also the modular forms and the \(X\)-weights of states all arise from the process of \(p\)-adic completion.
Weights of states in a VOA such as \(S_{\rm alg}\) are usually described as eigenvalues of some specific operators, and they are integers. The notion of the \(X\)-weight of a limit state in \(S\) introduced above is quite different. \(X\)-weights lie in the \(1\)-dimensional \(p\)-adic Lie group \(X\) and they are not eigenvalues of anything. In order to reconcile these various notions of weights we recall some details about weights in an algebraic VOA.
VOA theorists will be familiar with the fact that in \(S_{\rm alg}\), indeed in any VOA \(V\), there are two special semisimple operators \(L(0)\) and \(L[0]\) with point spectra in \(\mathbf{Z}\) such that
\[V=\oplus_{n}V_{n}=\oplus_{n}V_{[n]}\]
and \(V_{n}\), \(V_{[n]}\) are the eigenspaces for \(L(0)\) and \(L[0]\), respectively, with eigenvalue \(n\). Further details are given in Section 3. The grading on \(V\) conferred by \(L(0)\) is usually called the _conformal grading_, and if \(v\in V_{n}\) we say that \(v\) has (conformal) weight \(n\). The second grading has, for some reason, not (yet) acquired an official name. We will call it the _square bracket grading_ and say that \(v\in V_{[n]}\) has _square bracket weight_\(n\). The importance of the square bracket grading arises from the fact that, with respect to it, \(f\) in (1) becomes a _graded map_, a fact that we already alluded to above. Thus if \(u\in(S_{\rm alg})_{[k]}\) then \(f(u)\) is a quasimodular form of weight \(k\).
Unlike their behaviour in \(S_{\rm alg}\), and as far as the \(p\)-adic VOA \(S\) is concerned, the spectral properties of \(L(0)\) and \(L[0]\) diverge significantly. \(S\) does _not_ carry a conformal grading extending that on \(S_{alg}\). On the contrary it is shown in [6, Proposition 7.3] that all \(L(0)\)-eigenstates in \(S\) are already contained in \(S_{\rm alg}\). On the other hand, \(L[0]\) is described by an infinite sum (10) which does not converge on \(S\). To circumvent this apparent difficulty we utilize a family of \(p\)-adic Banach spaces introduced in [6] and denoted by \(S_{r}\) for \(r\in\mathbf{R}\). They are defined by imposing growth conditions on power series coefficients, similar to the types of growth conditions that arise in the work of Katz [11] and the theory of overconvergent modular forms which dates back to work of Dwork, Coleman and many others -- see [17] for an accessible survey of some of this theory. Moreover, these spaces \(S_{r}\) are nested
\[S_{\rm alg}\subseteq S_{r_{1}}\subseteq S_{r_{2}}\subseteq S=S_{1}\]
whenever \(r_{1}\geq r_{2}\geq 1\). The spaces \(S_{r}\) have a norm \(\left|\cdot\right|_{r}\) that is _stronger_ than the supremum norm \(\left|\cdot\right|_{1}\) giving rise to \(S\) itself. We show that \(L[0]\) operates continuously on \(S_{r}\) for \(r\geq p^{1/p}\) (Theorem 3.4) and we can study the point spectrum for this action. It turns out that the \(p\)-adic states \(u\) implicitly intervening in Theorem 1.1_do_ lie in an
\(S_{r}\) for large enough \(r\), and there they not only have an \(X\)-weight but are also eigenstates for \(L[0]\). We show in Theorem 4.4 that _every_ \(p\)-adic integer occurs as an eigenvalue of \(L[0]\). One expects that, in stark contrast to its action on \(S_{\mathrm{alg}}\), \(L[0]\) typically has infinite-dimensional weight spaces, and we show in Corollary 6.9 that, at least when \(p=2\), the \(0\)-weight space is indeed infinite-dimensional.
We leave open the problem of establishing a more direct connection between the submodules \(S_{r}\) and the theory of overconvergent modular forms. Such a connection could follow from a reformulation of the \(p\)-adic axioms introduced in [6] using instead analogs of the so-called genus zero axioms for algebraic VOAs, as discussed in [8] and emphasized in [7]. In the \(p\)-adic theory, the algebro-geometric foundations would naturally be replaced by the theory of rigid analytic geometry [2]. Likewise, Serre's description of \(p\)-adic modular forms would need to be replaced by Katz's perspective of \(p\)-adic modular forms as sections of bundles living over the ordinary parts of classical modular curves. Such an intrinsic and geometric description of the foundations of \(p\)-adic VOAs could lead to the introduction of less explicit, and therefore potentially more general, techniques for studying \(p\)-adic characters of \(p\)-adic VOAs.
The paper is organized as follows: in Section 2 we explain some notations that we use throughout and provide additional background on a range of topics. These include \(p\)-adic weight space; special numbers such as Stirling and Bernoulli-type numbers, which are ubiquitous thanks to the nature of Zhu's exponential change-of-variables formula; modular forms, both classical and \(p\)-adic; \(\Lambda\)-rings; Heisenberg algebras. In Section 3 we cover the square bracket formalism, explain \(X\)-weights, determine in Theorem 3.4 when \(L[0]\) is \(p\)-adically continuous, and construct the Cauchy sequences \((u_{i})\) of Diagram (2) that lead to the \(p\)-adic Eisenstein series in Theorem 1.1(1). The proof of this part of the Theorem is completed in Section 4. In the more elaborate Sections 5 and 6 we prove Theorem 1.1(2) and show in Corollary 6.9 that when \(p=2\) the \(L[0]\)-eigenspace for eigenvalue \(0\) is infinite-dimensional. In the final Section 7 we treat the actions on \(S_{r}\) of the square bracket modes \(h[n]\) of the weight \(1\) Heisenberg state \(h\). This is not needed for the proof of Theorem 1.1 but is included here both because of the similarity to previous calculations and because we anticipate that it will be helpful in identifying new \(p\)-adic VOAs in the future.
## 2. Background and notation
Fix a prime \(p\). For a nonzero integer \(n\) with \(n=p^{k}m\) and \(\gcd(m,p)=1\) we write \(\nu(n)\coloneqq k\) and \(|n|\coloneqq p^{-k}\). These are the \(p\)-adic _valuation_ and _absolute value_ of \(n\), respectively. We omit a subscript of \(p\) on these notations in order to avoid a profusion of subscripts.
### Weight space
Let \(\mathbf{Z}_{p}\) and \(\mathbf{Q}_{p}\) denote the ring of \(p\)-adic integers and its quotient field of \(p\)-adic numbers, respectively. Following Serre [16], we denote \(p\)_-adic weight space_ as
\[X=\varprojlim_{m}\mathbf{Z}/p^{m}(p-1)\mathbf{Z}\cong\mathbf{Z}_{p}\times \mathbf{Z}/(p-1)\mathbf{Z}, \tag{3}\]
so that when \(p=2\) we have simply \(X=\mathbf{Z}_{2}\). The space \(X\) is a one-dimensional \(p\)-adic Lie group that contains \(\mathbf{Z}\) as a dense subgroup embedded diagonally. Points \((x,y)\) and \((u,v)\) are close in weight space if \(x\equiv u\pmod{p^{N}}\) for some \(N\gg 0\), and \(y\equiv v\pmod{p-1}\). In particular, integers \(a\) and \(b\) are close in weight space if \(a\equiv b\pmod{\phi(p^{N+1})}\) for some \(N\gg 0\), where \(\phi\) denotes Euler's totient function satisfying \(\phi(p^{N+1})=(p-1)p^{N}\).
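As a purely illustrative aside, not part of the original discussion, the following small Python sketch shows how congruence of integer weights modulo \(\phi(p^{N+1})\) translates into \(p\)-adic proximity of the associated characters on units; the prime, the level and the sampled units are arbitrary choices made only for the example.

```python
# Sketch (illustration only): integers a, b that are congruent mod phi(p^(N+1))
# induce characters v -> v^a, v -> v^b on units that agree mod p^(N+1).
# This is Euler's theorem; p, N and the sampled units are arbitrary choices.
p, N = 5, 3
phi = (p - 1) * p**N                 # phi(p^(N+1))
a, b = 4, 4 + 7 * phi                # a and b are close in weight space

for v in (2, 3, 7, 11, 23):          # integers prime to p, i.e. units in Z_p^x
    assert pow(v, a, p**(N + 1)) == pow(v, b, p**(N + 1))
print("v^a = v^b (mod p^(N+1)) for all sampled units v")
```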
We will also regard elements of \(X\) as \(p\)-adic characters of the unit group \(\mathbf{Z}_{p}^{\times}\). This latter group occurs as the middle term of a (split) short exact sequence
\[1\to 1+p\mathbf{Z}_{p}\to\mathbf{Z}_{p}^{\times}\to\mathbf{Z}/(p-1)\mathbf{Z}\to 1. \tag{4}\]
We set \(V_{p}\coloneqq\operatorname{Hom}(\mathbf{Z}_{p}^{\times},\mathbf{Z}_{p}^{ \times})\), the set of continuous endomorphisms of \(\mathbf{Z}_{p}^{\times}\) equipped with the topology of uniform convergence. Then there is a natural continuous homomorphism
\[\varepsilon\colon X\to V_{p}\]
that extends the natural map on \(\mathbf{Z}\) taking an integer \(n\) to the endomorphism \(v\mapsto v^{n}\) for \(v\in\mathbf{Z}_{p}^{\times}\). More generally if \(k\in X\), to describe how \(\varepsilon(k)\) acts on \(v\) we write \(k=(s,u)\) according to the direct product decomposition (3) and decompose \(v=v_{1}v_{2}\) where \(v_{1}^{p-1}=1\) and \(v_{2}\equiv 1\pmod{p}\) (cf. (4)). Then:
\[v^{k}\coloneqq\varepsilon(k)(v)=v_{1}^{k}v_{2}^{k}=v_{1}^{u}v_{2}^{s}.\]
The map \(\varepsilon\) is injective if \(p=2\) and bijective if \(p\) is odd.
The _even_ weights are the elements of \(2X\), equivalently, those \(k\in X\) with \((-1)^{k}=1\). For odd \(p\) this is equivalent to the component \(u\in\mathbf{Z}/(p-1)\mathbf{Z}\) of \(k\) being even. When \(p=2\) the even weights are the elements of \(2\mathbf{Z}_{2}\). Naturally, an odd weight is one that is not even, and one sees that \(k\) is even if, and only if, \(1+k\) is odd.
### Special numbers
The following sequences will play an important role in our computations, just as they do in the theory of \(p\)-adic \(L\)-functions, cf. [18]. _Bernoulli numbers_ are defined by the series
\[\sum_{k\geq 0}\frac{B_{k}}{k!}z^{k}\coloneqq\frac{z}{e^{z}-1},\]
and more generally _generalized Bernoulli polynomials_ are defined by
\[\sum_{k\geq 0}B_{k}^{(\ell)}(x)\frac{z^{k}}{k!}\coloneqq e^{zx}\left(\frac{z}{e ^{z}-1}\right)^{\ell}.\]
_Stirling numbers of the first kind \(s(n,k)\)_ are coefficients of the falling factorial
\[\sum_{k=0}^{n}s(n,k)z^{k}\coloneqq z(z-1)...(z-n+1).\]
_Stirling numbers of the second kind \(\genfrac{\{}{\}}{0.0pt}{}{n}{k}\) are defined for nonnegative \(k\) by the series_
\[\sum_{n\geq k}\genfrac{\{}{\}}{0.0pt}{}{n}{k}\frac{z^{n}}{n!}\coloneqq\frac{1 }{k!}(e^{z}-1)^{k}.\]
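For the reader who wishes to experiment, the following short Python sketch (an illustration only, not used anywhere in the arguments) recovers a few of these special numbers from the definitions just given, using exact arithmetic.

```python
# Sketch: recover a few Bernoulli and Stirling numbers from the definitions above.
from fractions import Fraction as F
from math import comb, factorial

def bernoulli(N):
    # z/(e^z - 1) = sum B_k z^k / k!  is equivalent to the recursion
    # sum_{j=0}^{k} C(k+1, j) B_j = 0  for k >= 1, with B_0 = 1.
    B = [F(1)]
    for k in range(1, N + 1):
        B.append(-sum(F(comb(k + 1, j)) * B[j] for j in range(k)) / (k + 1))
    return B

def stirling1_row(n):
    # coefficients of z(z-1)...(z-(n-1)) in ascending powers of z, i.e. s(n, k)
    poly = [1]
    for i in range(n):
        poly = [(poly[j - 1] if j > 0 else 0) - i * (poly[j] if j < len(poly) else 0)
                for j in range(len(poly) + 1)]
    return poly

def stirling2(n, k):
    # {n over k} = (1/k!) sum_j (-1)^(k-j) C(k, j) j^n,
    # the coefficient extraction of (e^z - 1)^k / k!
    return sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(k + 1)) // factorial(k)

print([str(b) for b in bernoulli(6)])       # 1, -1/2, 1/6, 0, -1/30, 0, 1/42
print(stirling1_row(4))                     # [0, -6, 11, -6, 1]
print([stirling2(4, k) for k in range(5)])  # [0, 1, 7, 6, 1]
```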
### Elliptic modular forms
We use several different normalizations of classical level \(1\) Eisenstein series. For _even_\(k\geq 2\) we set
\[E_{k}\coloneqq-\frac{B_{k}}{k!}+\frac{2}{(k-1)!}\sum_{n\geq 1}\sigma_{k-1}(n)q^{n}\]
while for odd \(k\) we adopt the convention that \(E_{k}=0\). We also introduce, for positive integers \(r\), \(s\),
\[\widehat{E}_{r+s}\coloneqq(-1)^{r+1}r\binom{r+s-1}{s}E_{r+s}. \tag{5}\]
This is _symmetric_ in \(r\) and \(s\); however, care is needed when manipulating these normalizations. Although \(\widehat{E}_{r+s}\) is a scalar multiple of \(E_{r+s}\), the scalar in question depends on \(\{r,s\}\). These normalizations occur naturally in character formulas for VOAs, for example in (11).
We also set

\[G_{k}\coloneqq\frac{(k-1)!}{2}E_{k}=-\frac{B_{k}}{2k}+\sum_{n\geq 1}\sigma_{k-1}(n)q^{n}.\]
We also use
\[Q \coloneqq 240G_{4}=1+240\sum_{n\geq 1}\sigma_{3}(n)q^{n},\] \[R \coloneqq-504G_{6}=1-504\sum_{n\geq 1}\sigma_{5}(n)q^{n}.\]
The eta-function is
\[\eta(\tau)=\eta(q)=q^{1/24}\prod_{n=1}^{\infty}(1-q^{n})\]
and the discriminant is
\[\Delta(\tau)=\eta(\tau)^{24}=\frac{1}{1728}(Q^{3}-R^{2}).\]
Finally, the absolute modular invariant is
\[j(\tau)=\frac{Q^{3}}{\Delta}.\]
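It may help later to have the first few Fourier coefficients of these series at hand. The Python sketch below (purely illustrative; the truncation order and the consistency checks are our own ad hoc choices) computes truncated \(q\)-expansions of \(Q\), \(R\), \(\Delta\) and \(j^{-1}\) with exact arithmetic and verifies the identities \(\Delta=\eta^{24}=(Q^{3}-R^{2})/1728\) to that order.

```python
# Sketch: truncated q-expansions of Q, R, Delta and j^{-1} = Delta/Q^3,
# with a check of Delta = eta(q)^24 = (Q^3 - R^2)/1728 up to the truncation.
from fractions import Fraction as F

N = 12

def mul(f, g):
    h = [F(0)] * N
    for i, a in enumerate(f):
        if a:
            for j in range(N - i):
                h[i + j] += a * g[j]
    return h

def inv(f):
    g = [F(0)] * N
    g[0] = 1 / f[0]
    for k in range(1, N):
        g[k] = -sum(f[i] * g[k - i] for i in range(1, k + 1)) / f[0]
    return g

def sigma(k, n):
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

Q = [F(1)] + [F(240 * sigma(3, n)) for n in range(1, N)]
R = [F(1)] + [F(-504 * sigma(5, n)) for n in range(1, N)]
Q3 = mul(mul(Q, Q), Q)
Delta = [(a - b) / 1728 for a, b in zip(Q3, mul(R, R))]

# eta(q)^24 = q * prod_{n >= 1} (1 - q^n)^24, expanded as a q-series
eta24 = [F(0)] * N
eta24[1] = F(1)
for n in range(1, N):
    for _ in range(24):
        eta24 = [eta24[m] - (eta24[m - n] if m >= n else 0) for m in range(N)]

assert Delta == eta24
j_inv = mul(Delta, inv(Q3))               # j^{-1} = Delta / Q^3
print([int(c) for c in j_inv[:5]])        # [0, 1, -744, 356652, -140361152]
```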
### \(p\)-adic modular forms
In [16] Serre defined the ring of \(p\)-adic modular forms of tame level one as the completion of the ring of modular forms \(\mathbf{Q}_{p}[Q,R]\) of level one with respect to the supremum norm taken on \(p\)-adic Fourier coefficients. This is a \(p\)-adic Banach algebra that we denote by \(M_{p}\). Moreover for any \(k\) in weight space \(M_{p,k}\) denotes the subspace of weight \(k\) forms. The ring \(M_{p}\) contains many new series, some of classical origin. For example, the \(p\)-adic Eisenstein series are defined for _nonzero, even_\(k\in X\) by
\[G_{k}^{*}(\tau)\coloneqq G_{k}(\tau)-p^{k-1}G_{k}(p\tau)\]
and we have \(G_{k}^{*}\in M_{p,k}\). Notice that for even integers \(k\geq 2\), each \(G_{k}^{*}\) is a classical modular form on \(\Gamma_{0}(p)\), whereas Serre showed that they are \(p\)-adic modular forms of tame level one. In fact, Serre showed more generally that every classical form on \(\Gamma_{0}(p)\) with rational Fourier coefficients is a \(p\)-adic modular form of tame level one.
The Fourier expansion for this \(p\)-adic family of Eisenstein series can be reexpressed as
\[G_{k}^{*}=\tfrac{1}{2}\zeta_{p}(1-k)+\sum_{n=1}^{\infty}\sigma_{k-1}^{*}(n)q^{ n}, \tag{6}\]
where \(\zeta_{p}\) is the Kubota-Leopoldt \(p\)-adic zeta function [16, 18] and
\[\sigma_{k-1}^{*}(n)=\sum_{\begin{subarray}{c}d|n\\ \gcd(d,p)=1\end{subarray}}d^{k-1}.\]
The \(p\)-adic zeta function is an analytic function on the set of _odd_ elements of weight space (where it vanishes) and for _even integers_\(k\geq 2\) it satisfies
\[\zeta_{p}(1-k)=-(1-p^{k-1})\frac{B_{k}}{k}.\]
Note that \(\zeta_{p}(s)\) has a pole of order \(1\) at \(s=1\), just as the complex zeta function does. Formula (6) shows that the Fourier coefficients of the series \(G_{k}^{*}\) vary analytically over weight-space as a function of \(k\). This idea was formalized by Wiles [19] as the standard example of a \(\Lambda\)-adic family of Eisenstein series; see also the book [9] of Hida for an accessible introduction to this subject.
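The interpolation underlying (6) is easy to witness numerically. The sketch below (again an illustration only; the prime \(p=5\), the target weight and the precision are arbitrary choices) takes classical weights \(k_{i}\equiv k_{0}\pmod{\phi(p^{i+1})}\) growing with \(i\) and checks that the higher Fourier coefficients of \(G_{k_{i}}\) stabilize \(p\)-adically towards those of \(G_{k_{0}}^{*}\); the constant terms, which are governed by the Kubota-Leopoldt zeta function, are not checked here.

```python
# Sketch: for integer weights k_i = k0 + i*(p-1)*p^i (so k_i -> k0 in weight
# space), the coefficients sigma_{k_i-1}(n) approach sigma*_{k0-1}(n), the
# coefficients of the p-adic Eisenstein series G*_{k0}.
p, k0 = 5, 4

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

for i in range(1, 6):
    prec = p ** (i + 1)
    k_i = k0 + i * (p - 1) * p**i
    for n in range(1, 8):
        full = sum(pow(d, k_i - 1, prec) for d in divisors(n)) % prec
        star = sum(pow(d, k0 - 1, prec) for d in divisors(n) if d % p) % prec
        # divisors divisible by p contribute terms divisible by p^(k_i - 1), so:
        assert (full - star) % p**i == 0
    print(f"sigma coefficients agree mod {p}^{i} for weight k_{i} = {k_i}")
```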
Katz [11] described geometric foundations for the theory of \(p\)-adic modular forms that generalize Serre's work. In this optic, \(p\)-adic modular forms are defined as sections of the vector-bundles underlying classical modular forms, but restricted to lie over certain rigid analytic subsets of modular curves deprived of discs around supersingular elliptic curves. Serre's theory arises, in essence, by removing discs of \(p\)-adic radius one around each supersingular elliptic curve (which are finite in number). By shrinking these discs, one focuses attention on smaller spaces of \(p\)-adic modular forms that may be better behaved from an arithmetic perspective, thanks to their improved radius of convergence. Indeed, these spaces have a limit, called the space of _overconvergent modular forms_ (cf. section 3.5 of [17]). The overconvergent space possesses the useful property that the classical Hecke operator \(U_{p}\) has a discrete spectrum when acting on the space of \(p\)-adic overconvergent modular forms, whereas this discreteness fails more generally for the full ring of \(p\)-adic modular forms of tame level one as defined by Serre. See [17] for a full and accessible discussion containing many concrete examples and references to the literature on overconvergent modular forms and their applications in number theory.
### The \(\Lambda\)-ring
In this section we follow Chapter 7 of [18] or, for slightly more generality, Chapter 3 of [3]. Let \(\mathcal{G}\) denote a profinite group, and let \(\Lambda(\mathcal{G})\) denote the corresponding _completed group algebra_, defined as the inverse limit
\[\Lambda(\mathcal{G})\cong\varprojlim_{\mathcal{H}\subseteq\mathcal{G}}\mathbf{ Z}_{p}[\mathcal{G}/\mathcal{H}],\]
where the inverse limit is taken with respect to the set of open subgroups \(\mathcal{H}\subseteq\mathcal{G}\), which are necessarily also closed and of finite index by compactness of \(\mathcal{G}\). There are natural transition maps and the corresponding inverse limit \(\Lambda(\mathcal{G})\) is called the _Iwasawa algebra_ of \(\mathcal{G}\). This ring was first used by Iwasawa [10] in investigations of \(p\)-adic \(L\)-functions. In this application one takes \(\mathcal{G}=\mathbf{Z}_{p}\) or more generally \(\mathcal{G}=\mathbf{Z}_{p}^{\times}\), and constructs the \(p\)-adic zeta function by using compatibilities between Stickelberger elements in group rings
\[\mathbf{Z}_{p}[\mathbf{Z}_{p}/p^{n}\mathbf{Z}_{p}]\cong\mathbf{Z}_{p}[ \mathbf{Z}/p^{n}\mathbf{Z}],\]
cf. Chapter 6 of [18], to construct an element of \(\Lambda(\mathbf{Z}_{p})\) that _is_ the \(p\)-adic zeta function (or at least, one of its branches on weight space). For this reason the ring \(\Lambda=\Lambda(\mathbf{Z}_{p})\) is sometimes simply called the _Iwasawa algebra_. Key to its role in defining \(p\)-adic \(L\)-functions as analytic functions is the fact that there is an isomorphism
\[\Lambda\cong\mathbf{Z}_{p}[[T]]\]
defined by the _Mahler transform_, cf. Definition 3.3.2 and Theorem 3.3.3 of [3]. Thus, by the strong triangle inequality, elements of \(\Lambda\) can be interpreted as analytic functions defined by power-series that converge for all \(z\in p\mathbf{Z}_{p}\) (note that since the \(p\)-adic
zeta function has a pole at \(s=1\), it actually defines an element of the fraction field of \(\Lambda\)).
In Section 4 below we define a family of states in the \(p\)-adic Heisenberg algebra with coefficients that are analytic functions on weight-space, and thus these coefficients can be interpreted as elements of \(\Lambda(\mathbf{Z}_{p}^{\times})\) by the Mahler transform, cf. Section 3.6 of [3]. The corresponding family maps onto the classical \(\Lambda\)-adic family of Eisenstein series discussed in [16], and in this way we construct a VOA-theoretic avatar of this \(\Lambda\)-adic family of modular forms.
### Heisenberg algebras
Let \(h(-m)\) denote independent indeterminates for all integers \(m\geq 1\) and consider the infinite polynomial ring
\[S_{\mathrm{alg}}\coloneqq\mathbf{Q}_{p}[h(-1),h(-2),\ldots].\]
We endow \(h(-m)\) with degree \(m\), so that the degree \(n\) graded piece of \(S_{\mathrm{alg}}\) has dimension equal to the number of partitions of \(n\). This ring \(S_{\mathrm{alg}}\) can be equipped with the structure of a vertex operator algebra (over \(\mathbf{Q}_{p}\)) called the _rank-one Heisenberg algebra_, or _Heisenberg VOA_. The subscript 'alg' is used to distinguish the classical VOA from its \(p\)-adic counterpart.
We shall also frequently use a slightly different way to represent states in \(S_{alg}\), one in keeping with its origins as a highest weight module over the Heisenberg Lie algebra and ubiquitous throughout VOA theory. Namely, \(h\) is promoted from a mere cypher to a state of weight \(1\) in \(S_{alg}\), and its vertex operator is
\[Y(h,z)\coloneqq\sum_{n\in\mathbf{Z}}h(n)z^{-n-1},\]
so that the modes \(h(n)\) are operators on \(S_{alg}\). They satisfy the canonical commutator relations of quantum mechanics, i.e., \([h(m),h(n)]=m\delta_{m+n,0}Id\). The canonical vacuum state is \(\mathbf{1}\) and \(S_{alg}\) has a natural basis consisting of states
\[h(-n_{1})h(-n_{2})...h(-n_{t})\mathbf{1}\qquad\qquad(n_{1}\geq n_{2}\geq...\geq n_{t}\geq 1). \tag{7}\]
The \(p\)-adic Heisenberg algebra (or VOA) \(S\) is defined as a certain completion of \(S_{alg}\). To describe this completion, for each real number \(R\geq 1\) we introduce a norm on \(S_{\mathrm{alg}}\) following [6] as
\[\left|\sum_{I}a_{I}h^{I}\right|_{R}\coloneqq\sup_{I}|a_{I}|\,R^{|I|},\]
where \(I\) runs over all finite multi-subsets of \(\mathbf{Z}_{<0}\), and \(|I|=-\sum_{i\in I}i\). Let \(S_{R}\) denote the corresponding completion of \(S_{\mathrm{alg}}\), and write \(S=S_{1}\). Then [6, Proposition 9.1] shows that
\[S_{R}=\left\{\sum_{I}a_{I}h^{I}\in\mathbf{Q}_{p}[h(-1),h(-2),\ldots]\mid\lim_{ |I|\to\infty}|a_{I}|\,R^{|I|}=0\right\}.\]
The space \(S\) is the \(p\)-adic Heisenberg algebra, and if \(R_{1}<R_{2}\) then \(S_{R_{2}}\subseteq S_{R_{1}}\). In particular, for all \(R_{1}\geq R_{2}\geq 1\) we have [6, Corollary 9.2] containments
\[S_{\mathrm{alg}}\subseteq S_{R_{1}}\subseteq S_{R_{2}}\subseteq S.\]
The spaces \(S_{R}\) will be significant for discussing certain aspects of the \(p\)-adic extension of the square-bracket formalism of \(S_{\mathrm{alg}}\) discussed in Section 3. In particular, we will require the following simple lemma. Actually, we will at times require slight strengthenings of this Lemma, but we thought it useful to include a basic example first.
**Lemma 2.1**.: _Let \(v=\sum_{I}a_{I}h^{I}\in S\), and suppose that for \(|I|\gg 0\) we have \(a_{I}/(|I|!)\in\mathbf{Z}_{p}\). Then \(v\in S_{R}\) for all \(R\) in the range \(1\leq R<p^{1/(p-1)}\)._
Proof.: By Legendre's formula, for integers \(m\geq 0\) we have
\[\nu(m!)\geq\frac{m}{p-1}-(p-1)\log_{p}(m).\]
Therefore, for all multisets \(I\) with \(|I|\) large enough, our hypothesis on \(v\) states that
\[\nu(a_{I})\geq\frac{|I|}{p-1}-(p-1)\log_{p}(|I|).\]
Multiplying by \(-1\) and exponentiating (recall that \(|a_{I}|=p^{-\nu(a_{I})}\)) yields
\[|a_{I}|\leq(p^{1/(p-1)})^{-|I|}\cdot p^{(p-1)\log_{p}(|I|)}\]
Hence if \(1\leq R<p^{1/(p-1)}\) and we write \(R=p^{\alpha}\) where \(0\leq\alpha<1/(p-1)\), then we find that
\[|a_{I}|\,R^{|I|}\leq p^{(\alpha-\frac{1}{p-1})|I|+(p-1)\log_{p}(|I|)}.\]
But, since linear growth outpaces logarithmic growth, and since \(\alpha-\frac{1}{p-1}<0\), this goes to zero as \(|I|\) grows, which proves the lemma.
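The growth estimate used here is also easy to observe numerically. The following Python sketch (illustrative only; the prime and the exponent \(\alpha\) are arbitrary choices) tracks the exponent of \(p\) in \(|m!|\,R^{m}\) for \(R=p^{\alpha}\) with \(\alpha<1/(p-1)\), which is the case \(a_{I}=|I|!\) of the Lemma.

```python
# Sketch: for a_I = |I|!, the quantity |a_I| * R^{|I|} tends to 0 whenever
# R = p^alpha with alpha < 1/(p-1).  We track the exponent of p, namely
#   log_p(|m!| R^m) = -v(m!) + alpha*m,
# which should tend to -infinity.
p = 3
alpha = 0.9 / (p - 1)          # any alpha < 1/(p-1)

def v_factorial(m):
    # Legendre's formula: v(m!) = sum_{k>=1} floor(m / p^k)
    s, q = 0, p
    while q <= m:
        s += m // q
        q *= p
    return s

for m in (10, 100, 1000, 10000):
    print(m, round(-v_factorial(m) + alpha * m, 2))
```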
## 3. The square bracket formalism and the operator \(L[0]\)
This Section, which to some extent is a continuation of the previous Section, deals with the so-called square bracket formalism, sometimes also referred to as VOAs on a cylinder, or genus one VOAs. In particular, we look closely at a certain operator \(L[0]\). The nature of its point spectrum is the fulcrum upon which the calculations of the present paper rest.
### The square bracket VOA
Although we are mainly concerned with the case of the rank \(1\) Heisenberg VOA \(S_{\mathrm{alg}}\), the general case is not much different so we work generally, at least at the outset. The basic idea was introduced by Zhu [20] and further discussion may be found in [4, 13]. It is important to point out that while these references all work over the field \(\mathbf{C}\) the theory is unchanged if we work over any base field of characteristic \(0\) such as \(\mathbf{Q}_{p}\), the main case of interest to us.
Given a VOA \((V,Y,\mathbf{1},\omega)\) there is a second quadruple \((V,Y[\ ],\mathbf{1},\tilde{\omega})\), where the underlying space \(V\) as well as the vacuum vector \(\mathbf{1}\) coincide and the square bracket conformal vector is \(\tilde{\omega}:=\omega-\frac{c}{24}\mathbf{1}\); here \(c\) is the central charge of \(V\), which is equal to \(1\) for \(S_{alg}\). The critical point here is the definition of the new vertex operator \(Y[\ ]\), defined by
\[Y[v,z]:=e^{kz}Y(v,e^{z}-1)=:\sum_{n\in\mathbf{Z}}v[n]z^{-n-1}\ \ (v\in V_{k}). \tag{8}\]
Here, we write \(V=\oplus_{k}V_{k}\) and extend the definition of \(Y[v,z]\) to all \(v\in V\) by linearity in \(v\). In particular, for the Virasoro vector the notation is
\[\sum_{n\in\mathbf{Z}}L[n]z^{-n-2}:=Y[\tilde{\omega},z].\]
For example, one can check from the definitions (loc. cit.) that
\[L[-1] =L(0)+L(-1), \tag{9}\] \[L[0] =L(0)+\sum_{n\geq 1}\frac{(-1)^{n+1}}{n(n+1)}L(n). \tag{10}\]
We shall make good use of both of these formulas.
Now the quadruple \((V,Y[\ ],\mathbf{1},\tilde{\omega})\) is itself a VOA, indeed it is isomorphic to the original VOA \(V\). This is not obvious, and was first proved in [20] in some special cases including the case at hand when \(V=S_{\mathrm{alg}}\). The main utility of this fact for us right now is that we obtain a second integral grading on the space \(V\), i.e., the square bracket conformal grading defined by
\[V=\oplus_{k}V_{[k]}\]
where \(V_{[k]}:=\{v\in V\mid L[0]v=kv\}\). It is true [4] that for each integer \(n\) we have
\[\oplus_{k\leq n}V_{k}=\oplus_{k\leq n}V_{[k]}\]
but in practice it is awkward to express a given state \(v\in V_{[k]}\) as a linear combination of states in \(V_{n}\) for \(n\leq k\) and vice-versa.
### The character map
In this Subsection we will consider the trace functions and \(q\)-expansions associated to a VOA of central charge \(c\). We begin with a general VOA that carries the conformal grading \(V=\oplus_{k}V_{k}\). For a state \(v\in V_{k}\), the _zero mode_ of \(v\) is defined by
\[o(v)=v(k-1)\]
and extended by linearity to all \(v\in V\). It is well-known that zero modes preserve the homogeneous spaces \(V_{k}\), that is, \(o(v):V_{k}\to V_{k}\). Then we can define a formal \(q\)-expansion as follows:
\[Z(v)=Z(v,\tau)=Z(v,q):=\operatorname{Tr}_{V}o(v)q^{L(0)-c/24}=\sum_{k} \operatorname{Tr}_{V_{k}}o(v)q^{k-c/24}.\]
The character map, or \(1\)-point function, for \(V\) is the linear map
\[Z:V\to q^{-c/24}\mathbf{Q}_{p}[q^{-1}][[q]]\]
For example, taking \(v=\mathbf{1}\) and \(V=S_{\mathrm{alg}}\) we have
\[Z(\mathbf{1})=\frac{1}{\eta(\tau)}.\]
Continuing with this special case, the _normalized character map_ \(f\) for \(S_{alg}\) that appears in (1) is defined to be \(\eta Z\), so that
\[f(v)=\eta(\tau)\,Z(v)=\frac{Z(v)}{Z(\mathbf{1})}.\]
Now we come to the Mason-Tuite theorem [13, 14] that gives a complete and explicit description of the character map for \(S_{\mathrm{alg}}\). This comes in two parts. The first part says that, as explained in the Introduction, \(f\) induces a linear map
\[f:\oplus_{k}(S_{alg})_{[k]}\longrightarrow\mathbf{Q}_{p}[E_{2},E_{4},E_{6}].\]
Much more is true, however. As discussed in the Introduction, the normalized character map \(f\) is graded in the sense that if \(v\in V_{[k]}\) then \(f(v)=g(\tau)\) for some quasi-modular form \(g(\tau)\) of weight \(k\). This is a fundamental feature when considering the
corresponding \(p\)-adic VOAs. Finally, \(f\) surjects onto the full algebra of quasimodular forms.
The second part of the Theorem is an _explicit_ formula for \(Z(v)\). This is crucial for the applications in this paper. To describe this we need some notation. Let \(h\in(S_{\mathrm{alg}})_{1}\) be the canonical weight \(1\) state (cf. Subsection 2.6). Then the corresponding square bracket VOA is also a rank \(1\) Heisenberg VOA with the same canonical generator \(h\in(S_{\mathrm{alg}})_{[1]}\). The square bracket analog of (7) is
\[v=h[-n_{1}]....h[-n_{t}].\mathbf{1},\qquad\qquad(n_{1}\geq n_{2}\geq...\geq n_{ t}\geq 1,\ k=\sum_{i}n_{i})\]
for a state \(v\in(S_{\mathrm{alg}})_{[k]}\). This defines a basis of \(S_{\mathrm{alg}}\) as we range over all partitions. Then we have
\[f(v)=\sum_{\sigma}\prod_{(rs)}\widehat{E}_{n_{r}+n_{s}}(\tau) \tag{11}\]
where \(\sigma\in\Sigma_{t}\) ranges over all fixed-point free involutions of the symmetric group \(\Sigma_{t}\) of degree \(t\), \((rs)\) ranges over the transpositions in \(\Sigma_{t}\) whose product is equal to \(\sigma\), and we are using the notational conventions of Subsection 2.3 for the Eisenstein series.
Some special cases of (11) will be particularly useful for us. The first already played a role in [6, Theorem 10.1].
**Lemma 3.1**.: _For an odd positive integer \(r\) introduce the square bracket states_
\[v_{r}:=\frac{(r-1)!}{2}h[-r]h[-1]\mathbf{1}.\]
_Then_
\[f(v_{r})=G_{r+1}.\]
**Lemma 3.2**.: _For positive integers \(m\), \(n\) introduce the square bracket states_
\[u_{m,n}:= (-1)^{m+n}\frac{120^{m}1008^{n}}{(2m-1)!!(2n-1)!!}h[-2]^{2m}h[-3]^{2n} \mathbf{1}.\]
_Then_
\[f(u_{m,n})=Q^{m}R^{n}.\]
Proof.: Use the formula (11) with \(v=h[-2]^{2m}h[-3]^{2n}\mathbf{1}\). To be clear, this will involve a sum of terms each of which involves products of \(\widehat{E}_{2+2}\), \(\widehat{E}_{2+3}\), \(\widehat{E}_{3+3}\). However \(\widehat{E}_{5}=0\) so the formula produces a modular form that is a constant times \(Q^{m}R^{n}\). The constant in question is determined by (5) and is equal to
\[(2m-1)!!(-6)^{m}(2n-1)!!(30)^{n}.\]
All in all, this shows that
\[f(h[-2]^{2m}h[-3]^{2n}\mathbf{1},\tau)= (2m-1)!!(-6)^{m}(2n-1)!!(30)^{n}\left(\frac{1}{720}\right)^{m}\left(\frac{-1}{60\cdot 504}\right)^{n}Q^{m}R^{n}\] \[= (2m-1)!!(2n-1)!!\left(\frac{-1}{120}\right)^{m}\left(\frac{-1}{1008}\right)^{n}Q^{m}R^{n},\]
and the Lemma follows.
### \(X\)-weights in the \(p\)-adic Heisenberg VOa
We have already discussed two types of weights of states in \(S_{\mathrm{alg}}\), namely the conformal weights of states given by eigenvalues of the round bracket operator \(L(0)\) and similarly the square bracket weights which are eigenvalues of \(L[0]\). Such weights are always rational integers. Contemplation of diagram (2) gives rise to new notions of weights of states in the completion \(S\). This was discussed in the Introduction and we shall take up this phenomenon in the next few Subsections.
The _\(X\)-weight_ of a state in \(S\) arises directly from Serre's result that \(p\)-adic modular forms carry a weight that lies in weight space \(X\), cf. Subsection 2.1. Suppose that \((u_{i})\) is a sequence of states in \(S_{\mathrm{alg}}\) such that each \(u_{i}\) has square bracket weight \(k_{i}\), i.e., \(L[0]u_{i}=k_{i}u_{i}\). And suppose further that \((u_{i})\) is a Cauchy sequence in \(S_{\mathrm{alg}}\). Let \(u\coloneqq\lim_{i\to\infty}u_{i}\in S\). Because \(f\) is \(p\)-adically continuous then \((f(u_{i}))\) is a Cauchy sequence in \(\mathbf{Q}_{p}[E_{2},E_{4},E_{6}]\) where each term \(f(u_{i})\) has a weight \(k_{i}\). The limit of \((f(u_{i}))\) is thus a Serre \(p\)-adic modular form. Then by Serre's work, the sequence \((k_{i})\) is a Cauchy sequence and has a limit \(k\in X\). This argument shows that the following definition is not vacuous.
**Definition 3.3**.: Let \((u_{i})\) be a Cauchy sequence of square bracket eigenstates as above and let \(u\coloneqq\lim_{i\to\infty}u_{i}\). Then we call \(k\) the _\(X\)-weight_ of the \(p\)-adic state \(u\).
### Continuity of \(L[0]\)
We continue to consider the \(X\)-weight of \(u\) as in Definition 3.3. In stark contrast to the weights of a state in an algebraic VOA given by eigenvalues of \(L(0)\) or \(L[0]\), the \(X\)-weight of \(u\) is not defined to be an eigenvalue of \(L[0]\) (or any other operator for that matter). In order to relate \(X\)-weights to the point-spectrum of \(L[0]\) we must resort to indirect means. The nub of the problem is this: we would like to understand \(L[0]v\), however this is not necessarily defined. This is due to the fact that, unlike its round bracket counterpart \(L(0)\), the operator \(L[0]\) is _not a bounded operator on \(S\)_. Indeed it does not converge in the algebra of operators on \(S_{\mathrm{alg}}\) and in view of the expression (10) this is hardly surprising. This circumstance means that we cannot rely on \(L[0]\) to map the Cauchy sequence \((u_{i})\) to another convergent sequence.
A solution to this dilemma is to ascertain a large enough closed subspace of \(S\) on which \(L[0]\)_is_ continuous, and then only work with Cauchy sequences \((u_{i})\) in this subspace. At any rate this is the strategy we employ here. It motivates the next result, where we take the closed subspace to be one of the \(p\)-adic Banach spaces \(S_{r}\) that we recalled in Subsection 2.6.
**Theorem 3.4**.: \(L[0]\) _is a bounded operator on \(S_{r}\) whenever \(r\geq p^{1/p}\)._
Let us first emphasize that the norm \(\left|\cdot\right|_{r}\) on \(S_{r}\) is not the same as that for \(S=S_{1}\). It is stronger than \(\left|\cdot\right|_{1}\) in the sense that a Cauchy sequence in \(S\) may not be Cauchy in \(S_{r}\). We begin the proof of Theorem 3.4 with
**Lemma 3.5**.: _Suppose that \(u\in S_{\mathrm{alg}}\). Then each mode \(u(n)\) leaves \(S_{r}\) invariant for all integers \(n\) and all \(r\geq 1\)._
Proof.: We employ the notation of Subsection 2.6. We may assume without loss that \(u=h^{J}\) for some \(J\). Then \(u(n)h^{I}=\sum_{K}m_{IK}h^{K}\) with each \(m_{IK}\in\mathbf{Z}\) and \(\left|K\right|=\left|I\right|+\left|J\right|-n-1\). Let \(v\in S_{r}\) with \(v\coloneqq\sum_{I}a_{I}h^{I}\). Then \(\lim_{\left|I\right|\to\infty}\left|a_{I}\right|r^{\left|I\right|}=0\).
Because \(r\geq 1\) then \(S_{r}\subseteq S_{1}\) and \(u(n)\) preserves limits in \(S_{1}\). Therefore
\[u(n)v=\sum_{I}a_{I}u(n)h^{I}=\sum_{I}\sum_{K}a_{I}m_{IK}h^{K}.\]
Then for fixed \(n\) and \(J\),
\[\lim_{|K|\to\infty}\left|\sum_{I}a_{I}m_{IK}\right|r^{|K|} =\lim_{|I|\to\infty}\left|\sum_{I}a_{I}m_{IK}\right|r^{|I|+|J|-n-1}\] \[\leq r^{|J|-n-1}\lim_{|I|\to\infty}\sup_{|I|}|a_{I}|\,r^{|I|}=0.\]
This shows that \(u(n)v\in S_{r}\), thus proving the Lemma.
_Remark 3.6_.: The Lemma says that each \(S_{r}\) \((r\geq 1)\) is a _weak module_ for \(S_{\mathrm{alg}}\). That is, a module with no assumed properties _vis-à-vis_ the conformal grading.
Proof of Theorem 3.4.: Let \(w\coloneqq\sum_{I}a_{I}h^{I}\) be a state in \(S_{r}\) for some \(r\geq 1\). Thus
\[\lim_{|I|\to\infty}|a_{I}|\,r^{|I|}=0.\]
Each Virasoro mode \(L(n)\) leaves \(S_{r}\) for \(r\geq 1\) invariant by Lemma 3.5. Thus using equation (10), in order to prove the Theorem it suffices to show that the operators \(\frac{1}{n(n+1)}L(n)\) have uniformly bounded operator norms on \(S_{r}\) for suitable \(r\).
Now \(|w|_{r}\coloneqq\sup_{I}|a_{I}|\,r^{|I|}\). Set \(L(n)h^{I}=\sum_{K}m_{nIK}h^{K}\) with \(m_{nIK}\in\mathbf{Z}\) and \(|K|=|I|-n\). Then we must consider
\[\sup_{n\geq 1}\left|\tfrac{1}{n(n+1)}L(n)w\right|_{r}= \sup_{n\geq 1}\left|\sum_{I,K}\frac{a_{I}m_{nIK}}{n(n+1)}h^{K}\right|_{r}\] \[\leq \sup_{n\geq 1}\sup_{I}\left|\frac{a_{I}}{n(n+1)}\right|r^{|I|-n}= \sup_{n\geq 1}r^{-n}\left|\frac{1}{n(n+1)}\right|\,\left|w\right|_{r}.\]
Thus we have to show that for any fixed \(r\geq p^{1/p}\), the expression
\[E_{n}\coloneqq r^{-n}\left|\frac{1}{n(n+1)}\right|\]
is uniformly bounded for \(n\geq 1\). If \(n(n+1)\) is coprime to \(p\) then \(E_{n}=r^{-n}\leq p^{-n/p}\leq 1\). Suppose that \(n+1=p^{k}m\) where \(k\geq 1\) and \(m\) is an integer coprime to \(p\). Then \(E_{n}=r^{-n}p^{k}\leq r^{-n}p^{(n+1)/p}\leq p^{-n/p+(n+1)/p}=p^{1/p}\). Finally, if \(p\mid n\) then by a similar argument we again get an (even smaller) upper bound. We skip the details. Thus we have \(|E_{n}|\leq p^{1/p}\) for all \(n\geq 1\), and this completes the proof of Theorem 3.4.
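The uniform bound just established can also be tested by brute force. The following Python sketch (illustrative only; the primes and the range of \(n\) are arbitrary choices) evaluates \(E_{n}\) on the exponent scale for \(r=p^{1/p}\) and confirms that it never exceeds \(p^{1/p}\).

```python
# Sketch: the bound E_n <= p^{1/p} from the proof, tested numerically on the
# exponent scale: log_p E_n = -n/p + v(n(n+1)) when r = p^{1/p}.
def vp(n, p):
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

for p in (2, 3, 5, 7):
    worst = max(-n / p + vp(n * (n + 1), p) for n in range(1, 20000))
    assert worst <= 1 / p + 1e-12
    print(p, worst)        # the maximum is 1/p, attained e.g. at n = p - 1
```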
As previously discussed, we now have
**Theorem 3.7**.: _Let \(r\) be a real number satisfying \(r\geq p^{1/p}\). Suppose that \((u_{i})\) is a sequence of states in \(S_{\mathrm{alg}}\) satisfying \(L[0]u_{i}=k_{i}u_{i}\) and assume further that \((u_{i})\) is a Cauchy sequence in \(S_{r}\) with limit \(u\). Then \(u\in S_{r}\) has an \(X\)-weight, say \((s,u)\in\mathbf{Z}_{p}\times\mathbf{Z}/(p-1)\mathbf{Z}\) (cf. Subsection 2.1) and we have_
\[L[0]u=su.\]
Proof.: We first point out that because \((u_{i})\) is Cauchy in \(S_{r}\) it is also Cauchy in \(S_{1}\), and it follows from the discussion in Subsection 2.1 that \(u\) has an \(X\)-weight \(k=(s,u)\) which is the limit of the sequence \((k_{i})\) in \(X\). Using Theorem 3.4 we calculate
\[L[0]u=\lim_{i\to\infty}L[0]u_{i}=\lim_{i\to\infty}k_{i}u_{i}=\left(\lim_{i\to\infty}k_{i}\right)u=su\]
where the last limit is in \(\mathbf{Z}_{p}\). The Theorem is proved.
_Remark 3.8_.: We shall see in Section 4 that _every_ weight \(k\in X\) occurs as the \(X\)-weight of a state in \(S_{r}\) for choices of \(r\) that permit application of Theorem 3.7. Then we shall be able to deduce that every \(p\)-adic integer lies in the point spectrum of \(L[0]\).
We complete this Section with a related Lemma that we will use later.
**Lemma 3.9**.: _For each integer \(n\) and each real number \(r\geq 1\), \(h(n)\) is a bounded operator on \(S_{r}\) satisfying_
\[\left|h(n)\right|_{r}\leq r^{-n}.\]
Proof.: Note that \(h(n)\) acts on \(S_{r}\) by Lemma 3.5. As before, let \(w\mathrel{\mathop{:}}=\sum_{I}a_{I}h^{I}\in S_{r}\), so that \(\left|w\right|_{r}=\sup_{I}\left|a_{I}\right|r^{\left|I\right|}\) and suppose first that \(n>0\). Then there are rational integers \(e_{I}\) such that \(h(n)w=n\sum_{J}a_{I}e_{I}h^{J}\) and \(J\) ranges over those index sets satisfying \(-n\in I\) and \(J=I\setminus\{-n\}\). Then we find that
\[\left|h(n)w\right|_{r}=\left|n\sum_{J}a_{I}e_{I}h^{J}\right|_{r}\leq\sup_{I}\left|a_{I}\right|r^{\left|I\right|-n}=\left|w\right|_{r}r^{-n}.\]
This completes the proof of the Lemma in case \(n>0\). If \(n<0\) the proof is very similar but easier because \(h(n)\) is then a creation operator. If \(n=0\) then \(h(0)=0\) and the result is obvious. This completes the proof of the Lemma in all cases.
## 4. A \(\Lambda\)-adic family of states
We begin with the square bracket states \(v_{r}\in S_{\mathrm{alg}}\) defined in Lemma 3.1. Let us recall from Section 10, and especially Corollary 10.3 of [6] that we have
\[u_{r}\mathrel{\mathop{:}}=\frac{(1-p^{r})}{2}\sum_{m=0}^{\infty}c(r,m)h(-m-1) h(-1)\mathbf{1}-\frac{(1-p^{r})}{2}\frac{B_{r+1}}{r+1}\mathbf{1}=(1-p^{r})v_{r}\]
where
\[c(r,m)\mathrel{\mathop{:}}=\sum_{j=0}^{m}(-1)^{m+j}\binom{m}{j}(j+1)^{r-1}=m! \left\{\begin{matrix}r\\ m+1\end{matrix}\right\}\!.\]
(Cf. Subsection 2.2 for notation.) Notice that if \(r\) converges to some \(p\)-adic value through a sequence of positive integers, then \(u_{r}\) will have a \(p\)-adic limit if, and only if, \(v_{r}\) does, in which case the two limits are equal.
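The identity relating \(c(r,m)\) to Stirling numbers of the second kind is easy to confirm numerically; the Python sketch below (an illustration only, with ranges chosen arbitrarily) checks it for small \(r\) and \(m\).

```python
# Sketch: numerical confirmation of the identity
#   sum_{j=0}^{m} (-1)^(m+j) C(m, j) (j+1)^(r-1)  =  m! * {r over m+1}.
from math import comb, factorial

def stirling2(n, k):
    return sum((-1) ** (k - j) * comb(k, j) * j ** n for j in range(k + 1)) // factorial(k)

def c(r, m):
    return sum((-1) ** (m + j) * comb(m, j) * (j + 1) ** (r - 1) for j in range(m + 1))

for r in range(1, 12):
    for m in range(12):
        assert c(r, m) == factorial(m) * stirling2(r, m + 1)
print("c(r, m) = m!*S(r, m+1) checked for 1 <= r <= 11, 0 <= m <= 11")
```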
In order to effect an analytic continuation of \(c(r,m)\) to all of weight space, we make two adjustments above. First define
\[c_{p}(r,m)\mathrel{\mathop{:}}=\sum_{\begin{subarray}{c}j=0\\ p\nmid j+1\end{subarray}}^{m}(-1)^{m+j}\binom{m}{j}(j+1)^{r-1}.\]
**Lemma 4.1**.: _For all \(r\in X\) and \(m\in\mathbf{Z}_{\geq 0}\) we have \(c_{p}(r,m)\in m!\mathbf{Z}_{p}\)._
Proof.: Notice that if \(r\) is a positive integer then \(c(r,m)\in m!\mathbf{Z}\). Let \(r\in X\), and let \(r_{n}\) denote an increasing sequence of positive integers that converges to \(r\) in weight space. Then in particular \(r_{n}\equiv r\pmod{p-1}\) for all \(n\). Then the terms \((j+1)^{r_{n}-1}\) for \(p\mid(j+1)\) converge to \(0\) in \(\mathbf{Z}_{p}\) as \(n\) grows, and we see that
\[\lim_{n\to\infty}c(r_{n},m)=c_{p}(r,m),\]
by Euler's theorem that \((j+1)^{\phi(p^{n})}\equiv 1\pmod{p^{n}}\) when \(\gcd(p,j+1)=1\). Since each term \(c(r_{n},m)\) is contained in the closed subset \(m!\mathbf{Z}_{p}\) of \(\mathbf{Z}_{p}\), so is the limit \(c_{p}(r,m)\). This concludes the proof.
Next, for each \(r\in X\setminus\{-1\}\) we write
\[g\langle r+1\rangle\coloneqq\frac{1}{2}\sum_{m=0}^{\infty}c_{p}(r,m)h(-m-1)h(-1)\mathbf{1}+\frac{1}{2}\zeta_{p}(-r)\mathbf{1}. \tag{12}\]
The coefficients of \(g\langle r+1\rangle\) can be interpreted as analytic functions in the \(\Lambda\)-ring via the Mahler transform, as discussed in Subsection 2.5. Thus we view \(g\langle r+1\rangle\) as a \(p\)-adic analytic family of states in the Heisenberg algebra.
We will see below that \(g\langle r+1\rangle\) arises as the limit of \(v_{r}\) if \(r+1\) converges to a point in weight space through a sequence of positive integers of increasing size. Notice that each term \(c_{p}(r,m)\) is continuous for all \(r\in X\), while the vacuum term is continuous on \(X\setminus\{-1\}\). Initially this is nothing but a formal series, but observe the following:
**Proposition 4.2**.: _For every \(r\in X\setminus\{-1\}\) and all real numbers \(R\) in the range \(1\leq R<p^{1/(p-1)}\) we have_
\[g\langle r+1\rangle\in S_{R}.\]
Proof.: The monomial \(h(-m-1)h(-1)\) is equal to \(h^{I}\) for \(I=(-m-1,-1)\) with \(|I|=m+2\). Write \(R=p^{\alpha}\) for \(\alpha<\frac{1}{p-1}\) as in the proof of Lemma 2.1, and notice that in this case Lemma 4.1 and Legendre's theorem for \(\nu_{p}(m!)\) gives us:
\[|c_{p}(r,m)|\,R^{|I|} \leq p^{-\frac{m}{p-1}+(p-1)\log_{p}(m)}\cdot p^{\alpha(m+2)}\] \[\leq p^{(\alpha-\frac{1}{p-1})m+(p-1)\log_{p}(m)+2\alpha}\]
In particular, since \(\alpha-\frac{1}{p-1}<0\), the linear term in \(m\) of the exponent dominates as \(m\) grows and \(\lim_{m\to\infty}|c_{p}(r,m)|\,R^{|I|}=0\), as we wanted to show.
The next result explains why we have chosen to use the notation \(g\langle r+1\rangle\) for this interpolated family of \(p\)-adic states.
**Theorem 4.3**.: _Let \(\hat{f}\) denote the renormalized character map for the \(p\)-adic Heisenberg algebra \(S\) (cf. (1)). Then for all odd weights \(r\in X\setminus\{-1\}\), we have_
\[\hat{f}(g\langle r+1\rangle)=G_{r+1}^{*}\]
_and \(\operatorname{Res}_{r=-1}g\langle r+1\rangle=\frac{1}{p}-1\)._
Proof.: For the residue computation, notice that each \(c_{p}(r,m)\) term is analytic on all of \(X\), and so the residue comes entirely from \(\zeta_{p}(-r)\), which is known to have a residue of \(\frac{1}{p}-1\) at \(r=-1\).
For the character computation we proceed by \(p\)-adic approximation following [16], Example 1.6. Choose a sequence \(r_{n}\geq 3\) of increasing positive integers that converge to \(r\) in weight space, such that \(r_{n}\equiv r\pmod{\phi(p^{n+1})}\) and also \(r_{n}\geq n+1\) for all \(n\). Then as in [16], top of page 206, \(G_{r_{n}+1}\to G_{r+1}^{*}\). We have \(f(v_{r_{n}})=G_{r_{n}+1}\) by Lemma 3.1 so our result will follow by continuity of the map \(f\) if we can show that the sequence \(v_{r_{n}}\) converges to \(g\langle r+1\rangle\).
The vacuum term is handled by the theory of the \(p\)-adic zeta function as described in Example 1.6 of [16]. Thus it remains to show that the terms \((1-p^{r_{n}})c(r_{n},m)\) converge to \(c_{p}(r,m)\) uniformly in \(m\). Euler's identity \(x^{\phi(p^{m+1})}\equiv 1\pmod{p^{m+1}}\) whenever \(p\nmid x\) implies that
\[(1-p^{r_{n}})c(r_{n},m)\equiv c_{p}(r,m)\pmod{p^{n+1}}\]
for all \(m\). Thus we obtain the desired convergence \(v_{r_{n}}\to g\langle r+1\rangle\) and this concludes the proof.
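The congruence used in this proof can also be observed directly. The Python sketch below (illustrative only; the choices \(p=3\), \(r=3\) and the ranges of \(n\) and \(m\) are ours) checks it for the explicit sequence \(r_{n}=r+n\,\phi(p^{n+1})\).

```python
# Sketch: the congruence  (1 - p^{r_n}) c(r_n, m) = c_p(r, m)  (mod p^{n+1})
# for the sequence r_n = r + n*phi(p^(n+1)) of integer weights tending to r.
from math import comb

p, r = 3, 3

def c(rr, m):          # the sum defining c(rr, m)
    return sum((-1) ** (m + j) * comb(m, j) * (j + 1) ** (rr - 1) for j in range(m + 1))

def c_p(rr, m, mod):   # the truncated sum defining c_p(rr, m), reduced mod `mod`
    return sum((-1) ** (m + j) * comb(m, j) * pow(j + 1, rr - 1, mod)
               for j in range(m + 1) if (j + 1) % p) % mod

for n in range(1, 5):
    mod = p ** (n + 1)
    r_n = r + n * (p - 1) * p**n
    for m in range(8):
        assert ((1 - p**r_n) * c(r_n, m)) % mod == c_p(r, m, mod)
    print(f"congruence verified mod {p}^{n+1} for r_{n} = {r_n}, m = 0,...,7")
```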
Now we can prove
**Theorem 4.4**.: _Let \(r\) be a real number satisfying \(p^{1/p}\leq r<p^{1/(p-1)}\). Then the following hold:_
1. _For any weight_ \(t\in X\) _there is a nonzero state_ \(u\in S_{r}\) _that has_ \(X\)_-weight_ \(t\)_._
2. _For any_ \(s\in\mathbf{Z}_{p}\) _there is a nonzero state_ \(u\in S_{r}\) _such that_ \(L[0]u=su\)_._
Proof.: Assuming the truth of part (a), (b) is an immediate consequence of Theorem 3.7. It is here that we use the condition \(r\geq p^{1/p}\).
As for (a), we first treat the case of _even_ weights \(t\). Indeed, if \(t\neq 0\) then we may take \(u=g\langle t\rangle\) by Proposition 4.2 and Theorem 4.3. On the other hand, if \(t=0\) we can simply choose \(u=\mathbf{1}\). It remains to prove (a) in the case that \(t\) is an _odd_ weight. With this in mind, assume that \((u_{i})\) is a Cauchy sequence in \(S_{r}\) with \(u\coloneqq\lim_{i\to\infty}u_{i}\) such that \(u\) has an _even_ \(X\)-weight, call it \(k\in X\). We have already seen that such limit states exist in \(S_{r}\). We will show that as long as \(u\neq\mathbf{1}\) then \(L[-1]u\) is a nonzero state in \(S_{r}\) with corresponding \(X\)-weight \(1+k\). This will suffice to complete the proof of the Theorem for all weights except \(t=1\). But here we can simply choose the weight \(1\) generator \(h=h[-1]\mathbf{1}\) of \(S_{\mathrm{alg}}\).
Our jumping-off point is the expression (9) for the operator \(L[-1]\). Because both \(L(-1)\) and \(L(0)\) are bounded operators on \(S\) then the same is true of \(L[-1]\). In particular, if \((u_{i})\) is as above then \(L[-1]u=\lim_{i\to\infty}L[-1]u_{i}\in S_{r}\). To verify that the \(X\)-weight of this limit state is \(1+k\), we proceed as follows. Because the square bracket vertex operators close on an (algebraic Heisenberg) VOA, then in particular the square bracket Virasoro modes \(\{L[n]\}\) close on a Virasoro algebra of central charge \(1\). What we really need from this is the identity \([L[0],L[-1]]=L[-1]\). Thus if \(u_{i}\) has \(L[0]\)-weight \(k_{i}\) then
\[L[0]L[-1]u_{i} =[L[0],L[-1]]u_{i}+L[-1]L[0]u_{i}\] \[=L[-1]u_{i}+k_{i}L[-1]u_{i}\] \[=(1+k_{i})L[-1]u_{i}.\]
So each \(L[-1]u_{i}\) has \(L[0]\)-conformal weight \(1+k_{i}\), and this immediately implies that \(L[-1]u\) has \(X\)-weight \(1+k\).
It remains to show that \(L[-1]u\neq 0\). This follows from Lemma 4.6 below, thereby completing the proof of the Theorem.
_Remark 4.5_.: We point out that the states \(L[-1]u\) above of odd \(X\)-weight always satisfy \(\hat{f}(L[-1]u)=0\) so that we do not get additional \(p\)-adic modular forms by this route. The reason for this is that it is well-known that for \(u_{i}\in S_{\mathrm{alg}}\) we always have \(f(L[-1]u_{i})=0\). Hence this vanishing of characters remains true after taking limits, by continuity of the character map.
**Lemma 4.6**.: _If \(u\in S\) satisfies \(L[-1]u=0\) then \(u\) is a scalar multiple of \(\mathbf{1}\)._
Proof.: As explained in the proof of [6, Proposition 7.3], we may write
\[u=\sum_{i\geq 0}u_{i}\]
where each \(u_{i}\) is a state in \((S_{alg})_{i}\) and \(u_{i}\to 0\). We calculate
\[0=L[-1]u=\sum_{i}L[-1]u_{i}=\sum_{i}(L(-1)+L(0))u_{i}=\sum_{i}(L(-1)u_{i}+iu_{ i}).\]
If the Lemma is false then there is a _least positive_ integer \(j\) such that \(u_{j}\neq 0\). We will derive a contradiction. Since \(L(-1):(S_{\mathrm{alg}})_{i}\to(S_{\mathrm{alg}})_{i+1}\) it follows from the previous display that \(ju_{j}\) is equal to a linear combination of states \(u_{i}\) of weight _greater_ than \(j\). Since the \(u_{i}\) are linearly independent and \(j\neq 0\), this can only happen if \(u_{j}=0\), and this is the desired contradiction.
## 5. The character map in weight zero
Recall from the arithmetic theory of elliptic curves that there are finitely many supersingular \(j\)-invariants for each prime \(p\), and they are all contained in \(\mathbf{F}_{p^{2}}\). When \(p=2,3,5\) we have that \(j=0\) is the only supersingular \(j\)-invariant and it will thus suffice below to work over \(\mathbf{Q}_{p}\) for such primes. More generally, since some supersingular \(j\)-invariants may be defined over \(\mathbf{F}_{p^{2}}\) rather than over \(\mathbf{F}_{p}\) for arbitrary \(p\), in general one needs to work over the quadratic unramified extension of \(\mathbf{Q}_{p}\). The theory of [6] extends to this setting without change.
The following example is discussed at the bottom of page 202 of [15].
**Proposition 5.1**.: _When \(p=2,3,5\), we have_
\[M_{p,0}=\mathbf{Q}_{p}\langle j^{-1}\rangle.\]
In order to utilize Proposition 5.1, we now restrict to \(p=2,3,5\); eventually we will in fact take \(p=2\) for simplicity.
First we recall how to view \(j^{-n}\) as a \(p\)-adic modular form of level \(1\) and weight \(0\). Bearing in mind our notation for Eisenstein series (cf. Subsection 2.3), for \(p=2,3,5\) we visibly have \(Q\equiv 1\pmod{p}\). Thus,
\[Q^{-1}=\lim_{m\to\infty}Q^{p^{m}-1}.\]
It follows that
\[j^{-n}=\Delta^{n}Q^{-3n}=\lim_{m\to\infty}\Delta^{n}Q^{3n(p^{m}-1)}.\]
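This \(2\)-adic convergence can be watched coefficientwise. The Python sketch below (purely illustrative and not part of the argument; the case \(n=1\), the truncation order and the range of \(m\) are arbitrary choices) compares truncated \(q\)-expansions of \(\Delta^{n}Q^{3n(2^{m}-1)}\) with that of \(j^{-n}\).

```python
# Sketch: coefficientwise 2-adic convergence of Delta^n * Q^{3n(2^m - 1)} to j^{-n},
# shown for n = 1 with truncated q-expansions.
from fractions import Fraction as F

N, p, n = 10, 2, 1

def mul(f, g):
    h = [F(0)] * N
    for i, a in enumerate(f):
        if a:
            for j in range(N - i):
                h[i + j] += a * g[j]
    return h

def power(f, e):
    out = [F(1)] + [F(0)] * (N - 1)
    for _ in range(e):
        out = mul(out, f)
    return out

def inv(f):
    g = [F(0)] * N
    g[0] = 1 / f[0]
    for k in range(1, N):
        g[k] = -sum(f[i] * g[k - i] for i in range(1, k + 1)) / f[0]
    return g

def sigma(k, m):
    return sum(d ** k for d in range(1, m + 1) if m % d == 0)

def v2(x):  # 2-adic valuation of a nonzero rational
    num, den, v = abs(x.numerator), x.denominator, 0
    while num % 2 == 0:
        num //= 2
        v += 1
    while den % 2 == 0:
        den //= 2
        v -= 1
    return v

Q = [F(1)] + [F(240 * sigma(3, m)) for m in range(1, N)]
R = [F(1)] + [F(-504 * sigma(5, m)) for m in range(1, N)]
Delta = [(a - b) / 1728 for a, b in zip(power(Q, 3), mul(R, R))]
target = mul(power(Delta, n), inv(power(Q, 3 * n)))        # j^{-n}

for m in range(1, 5):
    approx = mul(power(Delta, n), power(Q, 3 * n * (p**m - 1)))
    gap = min(v2(a - b) for a, b in zip(approx, target) if a != b)
    print(m, gap)     # the 2-adic valuation of the discrepancy grows with m
```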
Suppose that we can find states
\[J_{n,m}\in S_{\mathrm{alg}}\]
such that the following properties hold:
1. \(f(J_{n,m})=\Delta^{n}Q^{3n(p^{m}-1)}\);
2. \(J_{n}:=\lim_{m\to\infty}J_{n,m}\) exists for each \(n\);
3. there exists a bound \(B\) such that \(|J_{n}|\leq B\) for all \(n\).
Assuming that these properties hold, by continuity of the \(p\)-adic character map we will have
\[\hat{f}(J_{n})=\hat{f}(\lim_{m}J_{n,m})=\lim_{m}f(J_{n,m})=\lim_{m}\left( \Delta^{n}Q^{3n(p^{m}-1)}\right)=j^{-n}.\]
It will then follow that if \(\sum_{n\geq 0}b_{n}j^{-n}\in M_{p,0}\) for a sequence of scalars \(b_{n}\) converging to \(0\), then the state \(\sum_{n}b_{n}J_{n}\) exists in \(S\) and \(\hat{f}(\sum b_{n}J_{n})=\sum b_{n}j^{-n}\). Therefore, if we can find states \(J_{n,m}\in S_{\mathrm{alg}}\) with properties (1)-(3) above, then we will have established surjectivity of the map \(\hat{f}\) in weight zero.
As it is, we will not quite achieve this goal. Instead we will establish an estimate slightly weaker than (3). This will suffice to obtain many new \(2\)-adic modular forms in the image of \(\hat{f}\), though not all. A precise statement is given below as Corollary 6.7 and reiterated in Theorem 1.1(2) of the introduction.
### Some special states in \(S_{\rm alg}\)
In the following \(u_{m,n}\) refers to the states introduced in Lemma 3.2.
**Lemma 5.2**.: _For nonnegative integers \(a\), \(b\) introduce the square bracket state_
\[U_{ab}\coloneqq (-1)^{b}(588)^{a}(120)^{b}\times\] \[\sum_{i=0}^{a}\binom{a}{i}\frac{1}{(6i+2b-1)!!(4a-4i-1)!!}\left( \frac{250}{147}\right)^{i}h[-2]^{6i+2b}h[-3]^{4a-4i}\mathbf{1}.\]
_Then_
\[f(U_{ab})=\Delta^{a}Q^{b}.\]
Proof.: Using Lemma 3.2 we obtain
\[\Delta^{a}Q^{b} =\left(\frac{1}{12}\right)^{3a}(Q^{3}-R^{2})^{a}Q^{b}\] \[= 12^{-3a}\sum_{i=0}^{a}(-1)^{i}\binom{a}{i}Q^{3i+b}R^{2a-2i}\] \[= 12^{-3a}\sum_{i=0}^{a}(-1)^{i}\binom{a}{i}\,\eta\,Z(u_{(3i+b),(2a-2i)})\] \[= 12^{-3a}\sum_{i=0}^{a}\binom{a}{i}\left\{(-1)^{b}\frac{(120)^{3i+b}(1008)^{2(a-i)}}{(6i+2b-1)!!(4(a-i)-1)!!}\,\eta\,Z(h[-2]^{6i+2b}h[-3]^{4(a-i)}\mathbf{1})\right\}\] \[= (-1)^{b}(588)^{a}(120)^{b}\,\eta\ \times\] \[\sum_{i=0}^{a}\binom{a}{i}\frac{1}{(6i+2b-1)!!(4a-4i-1)!!}\left(\frac{250}{147}\right)^{i}Z(h[-2]^{6i+2b}h[-3]^{4a-4i}\mathbf{1}).\]
This computation suggests that we set
\[J_{n,m}\coloneqq U_{n,3n(p^{m}-1)}\]
so that \(a=n\) and \(b=3n(p^{m}-1)\). Thus,
\[J_{n,m}=(-1)^{n(p+1)}(588\cdot 120^{3(p^{m}-1)})^{n}\] \[\sum_{i=0}^{n}\frac{\binom{n}{i}250^{i}}{(147)^{i}\cdot(6(i+np^{m} -n)-1)!!\cdot(4(n-i)-1)!!}h[-2]^{6(i+n(p^{m}-1))}h[-3]^{4(n-i)}\mathbf{1}. \tag{13}\]
As discussed previously, if we can show that the limit
\[J_{n}\coloneqq\lim_{m\to\infty}J_{n,m}\]
exists, and that the states \(J_{n}\) are uniformly bounded in \(n\), then we will obtain new \(p\)-adic modular forms of weight zero in the image of the character map. The most complicated term in the definition of \(J_{n,m}\) that involves the limit variable \(m\) is the power \(h[-2]^{6i+6n(p^{m}-1)}\). Therefore, in the next section we give a detailed study of these square-bracket states and their \(p\)-adic properties.
## 6. Powers of \(h[-2]\)
Recall that the modes \(h[n]\) are defined in (8) via the formal series
\[Y[h,z]\coloneqq\sum_{n\in\mathbf{Z}}h[n]z^{-n-1}=e^{z}Y(h,e^{z}-1)\]
where we have taken \(k=1\) because \(h\in(S_{\text{alg}})_{1}\). Of course, by definition of the algebraic Heisenberg algebra, \(Y(h,z)=\sum_{n\in\mathbf{Z}}h(n)z^{-n-1}\). Therefore,
\[h[-2] =\operatorname{Res}_{z}(z^{-2}e^{z}Y(h,e^{z}-1))\] \[=\operatorname{Res}_{z}\left(z^{-2}e^{z}\sum_{n\in\mathbf{Z}}h(n) (e^{z}-1)^{-n-1}\right)\] \[=\operatorname{Res}_{z}\left(z^{-2}e^{z}\sum_{n\geq-2}h(n)(e^{z}- 1)^{-n-1}\right)\] \[=\sum_{n\geq-2}h(n)\operatorname{Res}_{z}\left(\frac{e^{z}}{z^{2 }(e^{z}-1)^{n+1}}\right)\]
Recalling the definition of generalized Bernoulli polynomials from Subsection 2.2, we have
\[\frac{e^{z}}{z^{2}(e^{z}-1)^{n+1}}=z^{-n-3}e^{z}\left(\frac{z}{e^{z}-1}\right) ^{n+1}=\sum_{m\geq 0}B_{m}^{(n+1)}(1)\frac{z^{m-n-3}}{m!}\]
The residue arises when \(m=n+2\). This shows that
\[h[-2]=\sum_{n\geq-2}\frac{B_{n+2}^{(n+1)}(1)}{(n+2)!}h(n). \tag{14}\]
More generally for \(r\geq 2\):
\[h[-r]=\sum_{n\geq-r}\frac{B_{n+r}^{(n+1)}(1)}{(n+r)!}h(n). \tag{15}\]
_Remark 6.1_.: In principle these generalized Bernoulli numbers can have denominators that are divisible by \(p\). Computations suggest that the Clausen-von-Staudt theorem generalizes as follows:
\[p^{\lfloor\log_{2}(m+1)\rfloor}B_{n}^{(m)}(1)\in\mathbf{Z}_{p}.\]
We have proved the following slightly weaker form of this:
\[p^{\lfloor\log_{p}(mn+1)\rfloor}B_{n}^{(m)}(1)\in\mathbf{Z}_{p}.\]
We will not need such estimates below.
Now, we want to take powers of these square-bracket states. We notice that \(h(n)\) for \(n\geq 3\) all commute with each other, and they commute with \(h(-2)\), \(h(-1)\), \(h(0)\), \(h(1)\) and \(h(2)\). Therefore, let us write:
\[A =\sum_{n=-2}^{3}\frac{B_{n+2}^{(n+1)}(1)}{(n+2)!}h(n),\] \[B =\sum_{n\geq 4}\frac{B_{n+2}^{(n+1)}(1)}{(n+2)!}h(n).\]
Then \(A\) and \(B\) commute, so that we have
\[h[-2]^{u}=(A+B)^{u}=\sum_{r=0}^{u}{u\choose r}A^{r}B^{u-r}\]
Eventually we need to consider \(h[-2]^{u}h[-3]^{v}\mathbf{1}\). This will be an element in the algebraic Heisenberg with a messy description. To evaluate this, let us also write
\[C\coloneqq\sum_{n=-3}^{3}\frac{B_{n+3}^{(n+1)}(1)}{(n+3)!}h(n),\ \ D\coloneqq \sum_{n\geq 4}\frac{B_{n+3}^{(n+1)}(1)}{(n+3)!}h(n).\]
Then we have \([C,B]=0\), \([C,D]=0\), \([D,B]=0\), \([D,A]=0\) and hence
\[h[-2]^{u}h[-3]^{v}=\sum_{r=0}^{u}\sum_{t=0}^{v}{u\choose r}{v\choose t}A^{r}B^ {u-r}D^{v-t}C^{t}\ \ =\sum_{r=0}^{u}\sum_{t=0}^{v}{u\choose r}{v\choose t}A^{r}C^{t}B^{u-r}D^{v-t}\]
Since the operators \(B\) and \(D\) annihilate the vacuum vector \(\mathbf{1}\), it follows that
\[h[-2]^{u}h[-3]^{v}\mathbf{1}=A^{u}C^{v}\mathbf{1}. \tag{16}\]
In order to evaluate this, let us observe that:
\[A =h(-2)+h(-1)+\tfrac{1}{12}h(0)-\tfrac{1}{240}h(2)+\tfrac{1}{240}h( 3),\] \[C =h(-3)+\tfrac{3}{2}h(-2)+\tfrac{1}{2}h(-1)+\tfrac{1}{240}h(1)- \tfrac{1}{480}h(2)+\tfrac{1}{945}h(3)\]
As operators for \(r\geq 1\) we have \(h(r)=r\frac{\partial}{\partial h(-r)}\) and \(h(0)=0\) so that this simplifies to
\[A =h(-2)+h(-1)-\tfrac{1}{120}\tfrac{\partial}{\partial h(-2)}+\tfrac{1}{80}\tfrac{\partial}{\partial h(-3)},\] \[C =h(-3)+\tfrac{3}{2}h(-2)+\tfrac{1}{2}h(-1)+\tfrac{1}{240}\tfrac{\partial}{\partial h(-1)}-\tfrac{1}{240}\tfrac{\partial}{\partial h(-2)}+\tfrac{1}{315}\tfrac{\partial}{\partial h(-3)}.\]
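As an independent sanity check (not part of the argument above), these numerical coefficients can be recomputed directly from the generating function \((t/(e^{t}-1))^{\ell}e^{t}\) of the generalized Bernoulli polynomials; a short SymPy sketch:

```python
from sympy import symbols, exp, series

t = symbols('t')
ORDER = 10  # more than enough terms for the coefficients needed here

def coeff_in_bracket(n, r):
    """Coefficient of h(n) in h[-r], i.e. B_{n+r}^{(n+1)}(1)/(n+r)!, read off from
    the generating function (t/(e^t-1))^(n+1) * e^t  (cf. equations (14) and (15))."""
    f = ((t / (exp(t) - 1))**(n + 1) * exp(t)).series(t, 0, ORDER).removeO()
    return f.coeff(t, n + r)

print("h[-2]:", {n: coeff_in_bracket(n, 2) for n in range((-2), 4)})
print("h[-3]:", {n: coeff_in_bracket(n, 3) for n in range((-3), 4)})
```

The printed values reproduce the coefficients in the two displays above, including the vanishing of the \(h(1)\) term in \(A\) and of the \(h(0)\) term in \(C\).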
If we combine equations (13) and (16) we obtain the expression:
\[J_{n,m}= (-1)^{n(p+1)}(588\cdot 120^{3(p^{m}-1)})^{n} \tag{17}\] \[\sum_{i=0}^{n}{n\choose i}\frac{250^{i}}{(147)^{i}\cdot(6n(p^{m}- 1)+6i-1)!!\cdot(4n-4i-1)!!}A^{6i+6n(p^{m}-1)}C^{4n-4i}\mathbf{1}.\]
To begin our analysis, notice the following:
**Lemma 6.2**.: _The partial differential operators \(A\) and \(C\) commute._
Proof.: This can be checked directly, say on a computer, or it can be deduced from the fact, discussed in Subsection 3.1, that the algebraic VOA structures on the Heisenberg algebra using either the square or round bracket modes are isomorphic. Since \(h(-2)\) and \(h(-3)\) commute by definition, this result implies that \(h[-2]\) and \(h[-3]\) also commute, and the Lemma is a consequence of this.
The preceding Lemma implies that while studying the limit over \(m\) of equation (17), we may apply powers of \(A\) before applying powers of \(C\). To this end, let us define a recursive sequence of polynomials \(p_{n}\) by setting \(p_{0}=\mathbf{1}\) and \(p_{n}=Ap_{n-1}\) for \(n\geq 1\). For simplicity let us now write \(a=h(-1)\) and \(b=h(-2)\). Then with this notation, \(A\) acts as the operator \(a+b-\tfrac{1}{120}\frac{d}{da}\).
**Proposition 6.3**.: _We have for \(k\geq 0\):_
\[p_{2k}= \sum_{i=0}^{k}(2(k-i)-1)!!(-120)^{i-k}\binom{2k}{2(k-i)}(a+b)^{2i},\] \[p_{2k+1}= \sum_{i=0}^{k}(2(k-i)-1)!!(-120)^{i-k}\binom{2k+1}{2(k-i)}(a+b)^{2i +1}.\]
Proof.: The proof is by induction. First notice that
\[Ap_{2k}= \sum_{i=0}^{k}(2(k-i)-1)!!(-120)^{i-k}\binom{2k}{2(k-i)}(a+b)^{2i+1}+\] \[\sum_{i=1}^{k}(2(k-i)-1)!!(-120)^{i-k-1}(2i)\binom{2k}{2(k-i)}(a+b)^{2i-1}\] \[= \sum_{i=0}^{k}(2(k-i)-1)!!(-120)^{i-k}\binom{2k}{2(k-i)}(a+b)^{2i+1}+\] \[\sum_{i=0}^{k-1}(2(k-i)-3)!!(-120)^{i-k}(2i+2)\binom{2k}{2(k-i-1)}(a+b)^{2i+1}\] \[= (a+b)^{2k+1}+\sum_{i=0}^{k-1}(2(k-i)-3)!!(-120)^{i-k}\left\{(2(k-i)-1)\binom{2k}{2(k-i)}+(2i+2)\binom{2k}{2(k-i-1)}\right\}(a+b)^{2i+1}\]
Since
\[(2(k-i)-1)\binom{2k}{2(k-i)}+(2i+2)\binom{2k}{2(k-i-1)}\] \[= (2(k-i)-1)\frac{(2k)!}{(2(k-i))!(2i)!}+(2i+2)\frac{(2k)!}{(2(k-i- 1))!(2(i+1))!}\] \[= (2(k-i)-1)\left\{\frac{(2k)!}{(2(k-i))!(2i)!}+\frac{(2k)!}{(2k-2i -1)!(2i+1)!}\right\}\] \[= (2(k-i)-1)\left\{\binom{2k}{2i}+\binom{2k}{2i+1}\right\}\] \[= (2(k-i)-1)\binom{2k+1}{2(k-i)}\]
Hence we indeed have \(Ap_{2k}=p_{2k+1}\) by induction. The proof that \(Ap_{2k+1}=p_{2k+2}\) is analogous.
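Proposition 6.3 is also easy to confirm symbolically for small \(n\) by iterating the operator \(A=a+b-\tfrac{1}{120}\tfrac{d}{da}\) directly; a minimal SymPy sketch, offered purely as a check and using the notation \(a=h(-1)\), \(b=h(-2)\) introduced above:

```python
from sympy import symbols, diff, binomial, factorial2, Rational, expand

a, b = symbols('a b')

def apply_A(p):
    # A acts on polynomials in a, b as multiplication by (a+b) minus (1/120) d/da
    return expand((a + b) * p - Rational(1, 120) * diff(p, a))

def p_closed(n):
    # closed form of Proposition 6.3 (even and odd cases combined)
    k, odd = divmod(n, 2)
    return sum(factorial2(2*(k - i) - 1) * Rational(-120)**(i - k)
               * binomial(n, 2*(k - i)) * (a + b)**(2*i + odd)
               for i in range(k + 1))

p = 1
for n in range(1, 9):
    p = apply_A(p)
    assert expand(p - p_closed(n)) == 0, n
print("Proposition 6.3 verified for n <= 8")
```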
**Corollary 6.4**.: _For all \(n,m\geq 1\) we have_
\[(-1)^{n(p+1)}120^{3n(p^{m}-1)}A^{6n(p^{m}-1)}\mathbf{1}\] \[= \sum_{j=0}^{3n(p^{m}-1)}(6n(p^{m}-1)-2j-1)!!(-120)^{j}\binom{6n(p ^{m}-1)}{2j}(h(-1)+h(-2))^{2j}\]
Proof.: This follows immediately from Proposition 6.3 by setting \(2k=6n(p^{m}-1)\)
Let us summarize these computations in the following result:
**Theorem 6.5**.: _For all \(n,m\geq 1\) we have_
\[J_{n,m}= \sum_{i=0}^{n}{n\choose i}\frac{2^{2n+i}\cdot 3^{n-i}\cdot 5^{3i} \cdot 7^{2(n-i)}}{(6n(p^{m}-1)+6i-1)!!\cdot(4n-4i-1)!!}A^{6i}C^{4n-4i}\] \[\sum_{j=0}^{3n(p^{m}-1)}(6n(p^{m}-1)-2j-1)!!(-120)^{j}{6n(p^{m}-1 )\choose 2j}(h(-1)+h(-2))^{2j}.\]
Proof.: The theorem follows by combining equation (17) with Corollary 6.4.
The integrality properties of the double factorials appearing above are particularly easy to analyze if \(p=2\), because in that case they are \(2\)-adic units. Therefore at this stage we will now restrict to the case \(p=2\).
### Completion of the proof when \(p=2\)
Notice now that
\[\frac{1}{(6n(p^{m}-1)+6i-1)!!}=\frac{1}{(6np^{m}-6(n-i)-1)!!}=\frac{\prod_{j=0 }^{3(n-i)-1}((3n)2^{m+1}-2j-1)}{((3n)2^{m+1}-1)!!}\]
The product defining the double factorial \(((3n)2^{m+1}-1)!!\) contains \(3n\) copies of each representative of the unit group \((\mathbf{Z}/2^{m+1}\mathbf{Z})^{\times}\). Hence for \(m\geq 2\) we have
\[((3n)2^{m+1}-1)!!\equiv 1\pmod{2^{m+1}}.\]
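This congruence is easy to test numerically for small values; the following is an illustrative check only, not part of the proof:

```python
def odd_double_factorial_mod(N, mod):
    """(N)!! mod `mod` for odd N, i.e. the product 1*3*5*...*N reduced mod `mod`."""
    r = 1
    for k in range(1, N + 1, 2):
        r = (r * k) % mod
    return r

for n in range(1, 4):
    for m in range(2, 8):
        N = 3 * n * 2**(m + 1) - 1
        assert odd_double_factorial_mod(N, 2**(m + 1)) == 1, (n, m)
print("((3n)2^(m+1) - 1)!! = 1 mod 2^(m+1) for all tested n and m >= 2")
```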
By combining these observations we find that for \(p=2\):
\[\lim_{m\to\infty}\frac{1}{(6n(p^{m}-1)+6i-1)!!}=\prod_{j=0}^{3(n-i)-1}(-2j-1)= (-1)^{n-i}(6(n-i)-1)!!\]
Therefore, if we write
\[D_{n,m}=\sum_{i=0}^{n}{n\choose i}\frac{2^{2n+i}\cdot 3^{n-i} \cdot 5^{3i}\cdot 7^{2(n-i)}}{(6n(p^{m}-1)+6i-1)!!\cdot(4n-4i-1)!!}A^{6i}C^{4n-4i},\] \[E_{n,m}=\sum_{j=0}^{3n(p^{m}-1)}(6n(p^{m}-1)-2j-1)!!(-120)^{j}{6 n(p^{m}-1)\choose 2j}(h(-1)+h(-2))^{2j}\]
so that \(J_{n,m}=D_{n,m}(E_{n,m})\) by Theorem 6.5, we find that the following limit exists for each \(n\geq 1\):
\[D_{n}:=\lim_{m\to\infty}D_{n,m}=\sum_{i=0}^{n}(-1)^{n-i}{n\choose i}\frac{2^{ 2n+i}\cdot 3^{n-i}\cdot 5^{3i}\cdot 7^{2(n-i)}\cdot(6(n-i)-1)!!}{(4(n-i)-1)!!}A^{6i}C^ {4n-4i}.\]
This limit acts on the \(p\)-adic Heisenberg algebra, as it is defined by a finite sum and so it can only change \(2\)-adic valuations by a bounded amount for each \(n\). In fact, these are continuous operators on the \(p\)-adic Heisenberg algebra, and so if we can likewise show that \(\lim_{m\to\infty}E_{n,m}\) exists, then we will deduce that \(\lim_{m\to\infty}J_{n,m}\) exists.
First observe that because \(p=2\):
\[\binom{6n(p^{m}-1)}{2j}= \frac{(6n(2^{m}-1))!}{(2j)!(6n(2^{m}-1)-2j)!}\] \[= \frac{2^{3n(2^{m}-1)}(3n(2^{m}-1))!(6n(2^{m}-1)-1)!!}{2^{j}j!(2j-1)!!2^{3n(2^{m}-1)-j}(3n(2^{m}-1)-j)!(6n(2^{m}-1)-2j-1)!!}\] \[= \binom{3n(2^{m}-1)}{j}\frac{(6n(2^{m}-1)-1)!!}{(2j-1)!!(6n(2^{m}- 1)-2j-1)!!}\]
and thus
\[E_{n,m}=\sum_{j=0}^{3n(2^{m}-1)}(-120)^{j}\binom{3n(2^{m}-1)}{j}\frac{(6n(2^{m }-1)-1)!!}{(2j-1)!!}(h(-1)+h(-2))^{2j} \tag{18}\]
As above, we have the following \(2\)-adic identity
\[\lim_{m\to\infty}(6n(2^{m}-1)-1)!!=\lim_{m\to\infty}\frac{((3n)2^{m+1}-1)!!}{ \prod_{j=0}^{3n-1}((3n)2^{m+1}-2j-1)}=(-1)^{n}\frac{1}{(6n-1)!!}\]
Likewise one shows by elementary means that
\[\lim_{m\to\infty}\binom{3n(2^{m}-1)}{j}=\binom{-3n}{j}.\]
In particular, being a limit of \(2\)-adic integers, the values \(\binom{-3n}{j}\) are also \(2\)-adic integers. Thus for each \(n\), the series \(E_{n,m}\) converge to
\[E_{n}:=\lim_{m\to\infty}E_{n,m}=(-1)^{n}\frac{1}{(6n-1)!!}\sum_{j=0}^{\infty} \binom{-3n}{j}\frac{1}{(2j-1)!!}(-120(h(-1)+h(-2))^{2})^{j}.\]
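Numerically, the \(2\)-adic convergence used here is easy to observe: the \(2\)-adic valuation of \(\binom{3n(2^{m}-1)}{j}-\binom{-3n}{j}\) grows with \(m\). An illustrative check for one choice of \(n\) and \(j\) (the specific values are arbitrary):

```python
from sympy import binomial

def nu2(x):
    """2-adic valuation of a nonzero integer."""
    x, v = int(x), 0
    while x % 2 == 0:
        x //= 2
        v += 1
    return v

n, j = 2, 5
for m in range(2, 11):
    d = binomial(3*n*(2**m - 1), j) - binomial(-3*n, j)
    print(m, nu2(d) if d != 0 else "exact")
```

The printed valuations increase roughly linearly in \(m\), consistent with the stated limit.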
We have proved most of the following theorem.
**Theorem 6.6**.: _Let \(p=2\), and define differential operators and series for all \(n\geq 1\) as follows:_
\[D_{n} =\sum_{i=0}^{n}(-1)^{n-i}\binom{n}{i}\frac{2^{2n+i}\cdot 3^{n-i} \cdot 5^{3i}\cdot 7^{2(n-i)}\cdot(6(n-i)-1)!!}{(4(n-i)-1)!!}A^{6i}C^{4n-4i},\] \[E_{n} =(-1)^{n}\frac{1}{(6n-1)!!}\sum_{j=0}^{\infty}\binom{-3n}{j}\frac {1}{(2j-1)!!}(-120(h(-1)+h(-2))^{2})^{j},\] \[J_{n} =D_{n}(E_{n}).\]
_Then the following properties hold:_
1. _For every_ \(n\geq 1\)_, the series_ \(J_{n}\) _is contained in the_ \(2\)_-adic Heisenberg algebra. More precisely, if_ \(1\leq r<2^{3/4}\)_, then_ \(J_{n}\in S_{r}\)_._
2. _The character of_ \(J_{n}\) _satisfies_ \(\hat{f}(J_{n})=j^{-n}\)_._
3. _We have_ \(|J_{n}|\leq 2^{21n}\) _for all_ \(n\geq 1\)_, where the absolute value is the_ \(2\)_-adic supremum norm._
Proof.: We have already explained why the first part of part (1) holds. For the second part of (1), notice that \(D_{n}\) is a finite differential operator that preserves each subspace \(S_{r}\). Therefore, to establish (1) it suffices to show that \(E_{n}\in S_{r}\) for values of \(r\) in the specified range. Since \(p=2\) we can ignore the double factorials when analyzing the integrality properties of \(E_{n}\). Then the second part of (1) follows immediately from the definition of \(S_{r}\) thanks to the factor of \(2^{3j}\) appearing in the coefficients via the factor \((-120)^{j}\).
Part (2) also follows from the previous discussion, so it only remains to discuss part (3). For this, notice that \(2^{4}A\) and \(2^{4}C\) are both \(2\)-adically integral. Therefore \(2^{21n}D_{n}\) preserves \(2\)-adic integrality. Since \(E_{n}\) is \(2\)-adically integral, so then is \(2^{21n}J_{n}\) and thus \(|J_{n}|\leq 2^{21n}\) as claimed.
Recall from equation (73) of [17] that for real numbers \(r\geq 0\) the space of \(r\)-overconvergent \(2\)-adic modular forms of tame level \(1\) and weight \(0\) can be described as
\[M_{0}^{\dagger}(r)\coloneqq\left\{\sum_{n\geq 0}a_{n}j^{-n}\mid|a_{n}|\,2^{12nr }\to 0\right\}. \tag{19}\]
**Corollary 6.7**.: _When \(p=2\) the overconvergent space \(M_{0}^{\dagger}(7/4)\) is contained in the image of the normalized character map \(\hat{f}\)._
Proof.: This follows immediately from Theorem 6.6, the continuity of the character map as established in [6], and the description of \(M_{0}^{\dagger}(r)\).
_Remark 6.8_.: Corollary 6.7 could be improved if the bound in part (3) of Theorem 6.6 could be improved. For example, surjectivity of the \(2\)-adic character map in weight \(0\) would follow from an absolute bound on the series \(J_{n}\), independent of \(n\). We do not know if, or by how much, part (3) of Theorem 6.6 could be improved. A different approach could be to work with the Hauptmodul \(\Delta(2\tau)/\Delta(\tau)\) on \(\Gamma_{0}(2)\) in place of \(j^{-1}\), as explained above equation (77) in [17]. Note that by [16], all classical forms on \(\Gamma_{0}(2)\) are \(2\)-adic modular forms of tame level one. Hence \(\Delta(2\tau)/\Delta(\tau)\) is contained in Serre's ring \(M_{p}\), and it could conceivably be in the image of the \(2\)-adic character map for the Heisenberg algebra. A first step in this direction would be to provide a concrete description of a state in the \(2\)-adic Heisenberg algebra whose character is \(\Delta(2\tau)/\Delta(\tau)\).
Regarding the weights of relevant states, we have
**Corollary 6.9**.: _Assume \(p=2\) and suppose that \(2^{1/2}\leq r<2^{3/4}\). Then each \(J_{n}\) is contained in \(S_{r}\), it has \(X\)-weight \(0\), and it satisfies \(L[0]J_{n}=0\). In particular, \(S\) has an infinite-dimensional square bracket \(0\)-weight space._
Proof.: Thanks to Theorem 6.6(1), (2) we may apply Theorem 3.7. Then the Corollary follows.
## 7. Continuous action of \(S_{\rm alg}[\,]\) on \(S_{r}\)
In this section we let \(p\) denote an arbitrary prime. The main result is
**Theorem 7.1**.: _If \(R\geq p^{1/p}\) then each square bracket Heisenberg mode \(h[n]\) acts continuously on \(S_{R}\)._
The proof is broken into two pieces, treating the annihilating and creative modes separately.
### The operators \(h[m]\) for positive \(m\)
In this Section we consider the operators \(h[m],m>0\). They are easier to handle \(p\)-adically than the same operators for \(m<0\) just because their expressions in terms of \(h(j)\) are easier to describe. We will prove
**Theorem 7.2**.: _The following hold for all \(m>0\):_
_(a) If_ \(R>1\) _then_ \(h[m]\) _is a bounded operator on_ \(S_{R}\)_._
_(b) If_ \(R\geq p^{2}\) _the operators_ \(h[m]\) _have uniformly bounded norms on_ \(S_{R}\)_, indeed_ \(|h[m]|_{R}\leq p^{-1}\)_._
It follows directly from [12, equations (16), (17)] that for \(m\geq 0\) we have
\[h[m+1]=(m+1)!\sum_{j\geq m}\frac{s(j+1,m+1)}{(j+1)!}h(j+1) \tag{20}\]
where \(s(i,m)\) is a Stirling number of the first kind (cf. Subsection 2.2).
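The coefficients in (20) are governed by the classical generating function \(\sum_{j\geq k}s(j,k)\,w^{j}/j!=(\log(1+w))^{k}/k!\) for Stirling numbers of the first kind, which can be checked quickly; the following SymPy snippet is an illustrative verification, not part of the proof:

```python
from sympy import symbols, log, series, factorial
from sympy.functions.combinatorial.numbers import stirling

w = symbols('w')

def s1(n, k):
    # signed Stirling numbers of the first kind
    return (-1)**(n - k) * stirling(n, k, kind=1)

for k in range(1, 5):
    f = (log(1 + w)**k).series(w, 0, 10).removeO()
    for j in range(k, 10):
        assert f.coeff(w, j) == factorial(k) * s1(j, k) / factorial(j), (j, k)
print("Stirling generating-function identity verified for small j, k")
```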
We treat the case when \(m=0\) separately.
**Lemma 7.3**.: _The operator \(h[1]\) is a contraction operator on each \(S_{R}\) for \(R\geq 1\), and indeed \(|h[1]|_{R}\leq R^{-1}\)._
Proof.: We have \(h[1]=h(1)\) so this is a special case of Lemma 3.9.
Completion of proof of Theorem 7.2.: The indexing in (20) is chosen so as to conform to [12, Theorem 6] where Komatsu-Young give the following lower bound on \(\nu(s(j+1,m+1))\): given integers \(j\geq m\geq 1\), let \(r\) be such that \(mp^{r}\leq j<mp^{r+1}\). Then
\[\nu(s(j+1,m+1))\geq\nu(j!)-\nu\left(\lfloor j/p^{r}\rfloor!\right)-mr.\]
Consequently, if \(d_{j,m}\) is the coefficient of \(h(j+1)\) in equation (20), then
\[\nu(d_{j,m})\geq\nu((m+1)!)-\nu(j+1)-\nu\left(\lfloor j/p^{r}\rfloor!\right)-mr.\]
Let \(t\) be the least positive integer such that \(m\leq p^{t}\). Then \(\nu(j+1)\leq t+r+1\) and
\[\nu_{p}(d_{j,m}) \geq\nu((m+1)!)-(t+r+1)-\nu\left(\lfloor j/p^{r}\rfloor!\right)-mr\] \[\geq\nu(m!)-\nu((mp)!)-(t+r+1+mr)\] \[=-(t+r+1+mr+m)\]
where the last equality follows from two applications of Legendre's formula for \(\nu(n!)\).
Thus in order to determine whether \(h[m+1]\) is bounded on \(S_{R}\) we must consider the expression
\[\sup_{r\geq 0}p^{t+r+1+mr+m}\left|h(j+1)\right|_{R} \leq p^{t+r+1+mr+m}R^{-(j+1)} \tag{21}\] \[\leq\sup_{r\geq 0}p^{mr+m+t+r+1}R^{-1-mp^{r}}\]
where the second inequality comes from an application of Lemma 3.9.
Suppose now that we assume that \(R>1\). Then there is a positive integer \(k\) such that \(R\geq p^{1/p^{k}}\) and we have
\[p^{r(m+1)}R^{-1-mp^{r}}\leq p^{r(m+1)-p^{-k}-mp^{r-k}}\to 0.\]
and the right-hand side of this inequality goes to \(0\) as \(r\) tends to infinity. By equation (20) this shows that if \(R>1\) then for fixed \(m\geq 1\) indeed \(h[m+1]\) is bounded. This completes the proof of part (a) of the Theorem.
Turning to part (b), let us reconsider the expression (21). We assert that the supremum is achieved for \(r=0\) under the assumption \(R\geq p^{2/(p-1)}\). To see this, for any \(r\geq 1\) we have
\[1+p+...+p^{r-1}\geq r\] \[\Rightarrow 2\frac{p^{r}-1}{p-1}\geq 2r\geq r\left(1+\frac{1}{m}\right)\] \[\Rightarrow R^{m(p^{r}-1)}\geq p^{mr+r}\] \[\Rightarrow \frac{p^{m+t+1}}{R^{1+m}}\geq\frac{p^{mr+m+t+r+1}}{R^{1+mp^{r}}},\]
and this proves the assertion. Therefore equation (21) shows that
\[\left|h[m+1]\right|_{R}\leq p^{m+t+1}R^{-1-m}\]
as long as \(R\geq p^{2/(p-1)}\). By definition of \(t\) we certainly have \(t\leq m\). So if we now assume that \(R\geq p^{2}\) then the last displayed expression is bounded by \(p^{-1}\) for any \(m\).
### The operators \(h[m]\) for negative \(m\)
In this Section we consider the operators \(h[m]\) for \(m<0\). For \(m=-1\), \(-2\), \(-3\) we examined continuity properties of the square-bracket modes in Section 6 via a direct arithmetic analysis. In this section we treat the general case via a different method that relies on properties of the \(L[-1]\)-operator. We will prove
**Theorem 7.4**.: _If \(R\geq p^{1/p}\) then each operator \(h[-n]\) for \(n\geq 1\) is bounded on \(S_{R}\)._
Proof.: To begin the proof of the Theorem, we note that for all integers \(n\) we have the identities
\[[L[-1],h[n]]=-nh[n-1]. \tag{22}\]
On the other hand we see immediately from equation (9) and Lemma 3.5 that \(L[-1]\) acts continuously on \(S_{R}\) for each \(R\geq 1\). It then follows from equation (22) that if \(h[-1]\) acts continuously on \(S_{R}\) for some \(R\geq 1\), then the same is true for all \(h[-n]\) for \(n\geq 1\). As a consequence, in proving Theorem 7.4 it suffices to treat the case of \(h[-1]\).
We have seen that
\[Y[h,z] =\sum_{n\in\mathbf{Z}}h[n]z^{-n-1}=e^{z}Y(h,e^{z}-1)\] \[=e^{z}\sum_{n\in\mathbf{Z}}h(-n-1)(e^{z}-1)^{n}\]
and therefore as in Section 6 we deduce that
\[h[-1]=h(-1)+\sum_{n>1}h(n-1)\frac{B_{n}^{(n)}(1)}{n!}\]
From the definitions in Subsection 2.2 we have
\[B_{n}^{(\ell)}(1) =\sum_{r\geq 0}\binom{n}{r}B_{r}^{(\ell)},\qquad B_{r}^{(\ell)} \coloneqq B_{r}^{(\ell)}(0).\]
In particular,
\[B_{m}^{(m)}(1)=\sum_{r\geq 0}\binom{m}{r}B_{r}^{(m)},\]
and therefore
\[h[-1]=h(-1)+\sum_{m=2}^{\infty}\sum_{r=0}^{m}\frac{1}{r!(m-r)!}B_{r}^{(m)}h(m-1).\]
Now there is a standard equality
\[s(m,m-r)=\binom{m-1}{m-r-1}B_{r}^{(m)}.\]
Thus we see that
\[h[-1] =h(-1)+\sum_{m=2}^{\infty}\sum_{r=0}^{m}\frac{1}{r!(m-r)!}s(m,m-r )\frac{(m-r-1)!r!}{(m-1)!}h(m-1)\] \[=h(-1)+\sum_{m=2}^{\infty}\left\{\sum_{r=0}^{m}\frac{1}{(m-r)(m-1 )!}s(m,m-r)\right\}h(m-1)\] \[=h(-1)+\sum_{j=1}^{\infty}\left\{\sum_{r=0}^{j+1}\frac{1}{(j+1-r) j!}s(j+1,j+1-r)\right\}h(j)\] \[=h(-1)+\sum_{j=1}^{\infty}\left\{\sum_{m=-1}^{j}\frac{1}{(m+1)j!} s(j+1,m+1)\right\}h(j).\]
We must compute
\[\sup_{j,m}\left|\left\{\frac{1}{(m+1)j!}s(j+1,m+1)\right\}h(j)\right|.\]
Now we've already seen that \(\nu(s(j+1,m+1))\geq\nu(j!)-\nu(\lfloor j/p^{r}\rfloor)-mr\) where \(mp^{r}\leq j<mp^{r+1}\). Therefore we must consider
\[\sup_{j,m}p^{mr-\nu(m+1)+\nu(\lfloor j/p^{r}\rfloor)}\left|h(j)\right|_{R} \leq\sup_{j,m}p^{mr+\nu(\lfloor j/p^{r}\rfloor)}\left|h(j)\right|_{R}.\]
If \(m\leq p^{t}\) then \(\lfloor j/p^{r}\rfloor<mp\leq p^{t+1}\), so that \(p^{\nu(\lfloor j/p^{r}\rfloor)}\leq p^{t}\)
Arguing as before we will get the boundedness of \(h[-1]\) just as long as \(p^{mr+t}R^{-j}\) converges to \(0\) as \(j\) goes to infinity. But \(R\geq p^{1/p}\), so that
\[p^{mr+t}R^{-j}\leq p^{mr+t-j/p}\leq p^{mr+t-mp^{r-1}}\]
and the right-hand side of this inequality goes to \(0\) as \(r\) goes to infinity. Now the required limit follows, and Theorem 7.4 is proved.
|
2309.11259 | Sequence-to-Sequence Spanish Pre-trained Language Models | In recent years, significant advancements in pre-trained language models have
driven the creation of numerous non-English language variants, with a
particular emphasis on encoder-only and decoder-only architectures. While
Spanish language models based on BERT and GPT have demonstrated proficiency in
natural language understanding and generation, there remains a noticeable
scarcity of encoder-decoder models explicitly designed for sequence-to-sequence
tasks, which aim to map input sequences to generate output sequences
conditionally. This paper breaks new ground by introducing the implementation
and evaluation of renowned encoder-decoder architectures exclusively
pre-trained on Spanish corpora. Specifically, we present Spanish versions of
BART, T5, and BERT2BERT-style models and subject them to a comprehensive
assessment across various sequence-to-sequence tasks, including summarization,
question answering, split-and-rephrase, dialogue, and translation. Our findings
underscore the competitive performance of all models, with the BART- and
T5-based models emerging as top performers across all tasks. We have made all
models publicly available to the research community to foster future
explorations and advancements in Spanish NLP:
https://github.com/vgaraujov/Seq2Seq-Spanish-PLMs. | Vladimir Araujo, Maria Mihaela Trusca, Rodrigo Tufiño, Marie-Francine Moens | 2023-09-20T12:35:19Z | http://arxiv.org/abs/2309.11259v2 | # Sequence-to-Sequence Spanish Pre-trained Language Models
###### Abstract
In recent years, substantial advancements in pre-trained language models have paved the way for the development of numerous non-English language versions, with a particular focus on encoder-only and decoder-only architectures. While Spanish language models encompassing BERT, RoBERTa, and GPT have exhibited prowess in natural language understanding and generation, there remains a scarcity of encoder-decoder models designed for sequence-to-sequence tasks involving input-output pairs. This paper breaks new ground by introducing the implementation and evaluation of renowned encoder-decoder architectures, exclusively pre-trained on Spanish corpora. Specifically, we present Spanish versions of BART, T5, and BERT2BERT-style models and subject them to a comprehensive assessment across a diverse range of sequence-to-sequence tasks, spanning summarization, rephrasing, and generative question answering. Our findings underscore the competitive performance of all models, with BART and T5 emerging as top performers across all evaluated tasks. As an additional contribution, we have made all models publicly available to the research community, fostering future exploration and development in Spanish language processing.
## 1 Introduction
Spanish ranks among the most extensively used languages globally. This fact has captured the interest of the NLP community, prompting efforts towards resource development for this NLP domain. Consequently, a number of pre-trained language models tailored for Spanish have emerged in recent years, predominantly employing encoder-only (Canete et al., 2020; De la Rosa et al., 2022; Araujo et al., 2023) and decoder-only (Gutierrez-Fandino et al., 2022) architectures. These models have demonstrated exemplary performance in natural language understanding across several tasks (Canete et al., 2020) and benchmarks (Araujo et al., 2022). Nevertheless, there has been limited advancement in addressing tasks revolving around generating new sentences depending on a given input, such as summarization, generative question answering, dialogue, or translation.
Encoder-decoder models primarily serve for addressing sequence-to-sequence tasks, and over recent years, numerous architectures have emerged. The pretraining of these models is often based on the full transformer architecture (Vaswani et al., 2017) and entails more intricate learning objectives than those of encoder or decoder-only models individually. For instance, BART (Lewis et al., 2020) is specifically trained to reconstruct text that has been intentionally corrupted, while T5 (Raffel et al., 2020) is designed to adeptly fill in missing sections of text, simulating a scenario where text spans have been omitted. These models have predominantly been developed for the English language, and there have been recent efforts to pre-train them in languages other than English (Kamal Eddine et al., 2021; Sarti and Nissim, 2022). Unfortunately, when it comes to the Spanish language, there is a notable scarcity of such models that may be valuable to the NLP community.
In this paper, with the goal of democratizing sequence-to-sequence models for the NLP community in Spanish, we present BARTO and T5S, which are the Spanish versions of the BART and T5 models. We pre-train these models exclusively on Spanish corpora, aligning with their self-supervised methodology. Furthermore, we introduce models in the style of BERT2BERT (Rothe et al., 2020), utilizing well-established BERT (Devlin et al., 2019) and RoBERTa (Liu et al., 2019) models in Spanish as baselines. Subsequently, we curate a diverse set of sequence-to-sequence tasks to comprehensively assess the capabilities of our models. Our findings reveal that all models deliver competitive performance across these tasks. Notably, BARTO and T5S stand out as top performers, particularly excelling in tasks involving extended sequence summarization, split-and-rephrase tasks, and generative question answering. We are making these models available to the public to promote further research and application of these resources in Spanish NLP1.
Footnote 1: [https://github.com/vgaraujov/Seq2Seq-Spanish-PLMs](https://github.com/vgaraujov/Seq2Seq-Spanish-PLMs)
## 2 Related Work
### Language-specific Pre-trained Language Models
Pre-trained language models represent a class of advanced language models trained through self-supervised learning on large text corpora, making them versatile for various applications. Notably, two prominent models are BERT Devlin et al. (2019), an encoder-only model, and GPT Radford and Sutskever (2018); Radford et al. (2019), a decoder-only model. These models have established robust baselines for a wide range of NLP tasks in the English language.
Numerous language-specific BERT-based and GPT-based models have emerged in recent times. Examples include CamemBERT Martin et al. (2020) and GPT-fr Simoulin and Crabbe (2021) tailored for French, RobBERT Delobelle et al. (2020) designed for Dutch, FinBERT Virtanen et al. (2019) for Finnish, GePpeTto Mattei et al. (2020) for Italian, and several others. These models have consistently outperformed their multilingual counterparts, highlighting the value of their existence for language-specific tasks.
In the context of the Spanish language, we find BETO Canete et al. (2020) and ALBETO Canete et al. (2022), a BERT and ALBERT model, respectively, pre-trained on the SUC corpus Canete et al. (2020). BERTIN De la Rosa et al. (2022), a RoBERTa base model trained on the Spanish portion sampled from mC4 Xue et al. (2021). Furthermore, MarIA Gutierrez-Fandino et al. (2022) introduces a family of models, including RoBERTa and GPT-2 models trained on the corpus crawled by the National Library of Spain. A more recent model, RigoBERTa Serrano et al. (2022), follows the DeBERTa He et al. (2020) architecture and was trained with several corpora, including OSCAR, SUC, and mC4-es. Nevertheless, a notable gap exists in the availability of encoder-decoder models exclusively trained with Spanish data.
### Sequence-to-Sequence Pre-trained Language Models
A sequence-to-sequence model aims to map a fixed-length input with a fixed-length output where the length of the input and output may differ Sutskever et al. (2014). It comprises an encoder, which concurrently processes the entire input sequence, and a decoder, which receives the representations computed by the encoder and generates the output sequence in an autoregressive manner. These model types have proven to be valuable in addressing tasks including machine translation, dialogue systems, question answering, and text summarization.
Following the paradigm of pre-training and self-supervision, several models have been proposed. One of the first models is MASS Song et al. (2019), which uses a transformer to reconstruct an input sequence where a contiguous span of tokens is masked and mapped to a sequence consisting of the missing tokens. Later, T5 Raffel et al. (2020) proposed pre-training on a multitask combination of supervised and self-supervised tasks, the latter being a task of filling in dropped-out spans of text from documents. BART Lewis et al. (2020) is similar to T5 but uses only a self-supervised objective in which spans are masked from the input while the complete output is predicted, to improve the decoder's language modeling ability. Moreover, Rothe et al. (2020) proposed the utilization of encoder or decoder-only pre-trained checkpoints for initializing new encoder-decoder models, showcasing competitive performance compared to purely encoder-decoder pre-trained models.
More recently, there has been a notable surge in endeavors to deploy sequence-to-sequence models for languages beyond English. BART, for instance, has been released for many other languages, including French Kamal Eddine et al. (2021), Greek Evdaimon et al. (2023), Indic Dabre et al. (2022), Arabic Kamal Eddine et al. (2022) and various other languages Shao et al. (2021); Tran et al. (2022); La Quatra and Cagliero (2023). Furthermore, T5 has been pre-trained in Italian Sarti and Nissim (2022), Arabic Nagoudi et al. (2022), Indic Aralikatte et al. (2023), among others. While the aforementioned recent models encompass a broad range of languages, the availability of Spanish models remains limited.
## 3 Sequence-to-Sequence Spanish Pre-trained Language Models
In this section, we begin by presenting our data collection and preparation procedures for pre-training our models. Subsequently, we provide detailed descriptions of each model and outline the corresponding pre-training processes.
### Pre-training Data
We employ the OSCAR 21.09 corpus, which includes a deduplicated Spanish dataset of approximately 160GB of text. Furthermore, we utilize the mC4-es corpus Xue et al. (2021), specifically adopting the Gaussian perplexity sampling subset proposed by De la Rosa et al. (2022), which boasts an extensive 500GB text dataset and has demonstrated superior model consistency. Additionally, we incorporate SUC, the corpus utilized for pre-training BETO, comprising around 14GB of raw text from diverse sources. Note that we exclude Wikipedia text from SUC, instead opting for an updated Spanish Wikipedia dump2, resulting in approximately 10GB of text.
Footnote 2: [https://dumps.wikimedia.org/eswiki/latest/](https://dumps.wikimedia.org/eswiki/latest/)
As established by prior research Liu et al. (2019); Raffel et al. (2020), the corpus quality significantly impacts the outcomes of pre-training models. Consequently, we closely follow the preprocessing methodologies previously established for both English Raffel et al. (2020) and Spanish models Gutierrez-Fandino et al. (2022); Serrano et al. (2022). Below, we describe the procedure.
1. **Document-level Formatting:** We ensure that all data adheres to a document-level format, which means that each instance is a document containing several contiguous coherent sentences. This is crucial for enabling the models to capture extensive contextual dependencies.
2. **Data Filtering:** To enhance data quality, we employ straightforward and cost-effective filtering methods. For example, we eliminate very short documents based on sentence and document length, filter out text containing repeated characters or special characters not commonly used in Spanish, and exclude pages containing code and sensitive content.
3. **Encoding Correction:** Some text samples may employ inconsistent encodings or exhibit encoding issues. To address this, we utilize the ftfy3 tool to rectify encoding errors.
Footnote 3: [https://ftfy.readthedocs.io/](https://ftfy.readthedocs.io/)
4. **Deduplication:** As a final step, we employ a deduplication process across all corpora using the text-dedup4 library. Due to its computational intensity, this step is performed at the end of the data preparation pipeline.
Footnote 4: [https://github.com/ChenghaoMou/text-dedup](https://github.com/ChenghaoMou/text-dedup)
Footnote 5: [https://github.com/google/sentencepiece](https://github.com/google/sentencepiece)
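A minimal sketch of how the filtering and encoding-correction steps above (steps 2-3, followed by deduplication over the surviving documents) might be realized in Python is shown below. The word-count threshold and character set are illustrative assumptions only; the paper does not specify the exact filter values.

```python
import ftfy

MIN_WORDS = 50                # illustrative threshold, not the exact value used for the corpora
BAD_CHARS = set("<>{}[]|^~")  # illustrative stand-in for the "uncommon character" filter

def clean_document(text: str):
    """Apply encoding repair and simple quality filters to a single document."""
    text = ftfy.fix_text(text)                    # step 3: fix encoding issues
    if len(text.split()) < MIN_WORDS:             # step 2: drop very short documents
        return None
    if any(ch in BAD_CHARS for ch in text):       # step 2: drop text with unwanted characters
        return None
    return text

raw_documents = ["un documento de ejemplo ...", "otro documento ..."]  # placeholder corpus
corpus = [doc for doc in map(clean_document, raw_documents) if doc is not None]
# Step 4 (deduplication) would then be run over `corpus`, e.g. with the text-dedup library.
```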
### BARTO Model
BARTO follows the BART base architecture (Lewis et al., 2020), which consists of an encoder and a decoder with 6 layers each. Also, it has 12 attention heads and 768 hidden dimensions in both the encoder and decoder. BARTO is pre-trained by denoising corrupted input documents. As suggested by Lewis et al. (2020), we use a combination of text infilling and sentence permutation transformations for robust performance.
We use the sentencepiece5 library to build a BPE tokenizer of 50,264 tokens. Furthermore, we rely on the fairseq6 library to perform the training. BARTO is pre-trained for 100,000 steps on 8 NVIDIA A100 GPUs with input texts of 1024 and a batch size of 2048. We use the Adam optimizer Kingma and Ba (2015), a warm-up of 10,000 steps, a dropout of 0.1, and FP16 to speed up training.
Footnote 6: [https://github.com/facebookresearch/fairseq](https://github.com/facebookresearch/fairseq)
Footnote 7: [https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md)
### T5S Model
T5S follows the T5.1.17 base version of the T5 model, which includes some improvements. This model consists of an encoder and decoder with 12 layers, 12 attention heads, and 768 hidden dimensions each. Like T5.1.1, we pre-train only using the denoising objective by filling in dropped-out spans of text from documents.
Footnote 8: [https://github.com/PiotrNawrot/nanoT5](https://github.com/PiotrNawrot/nanoT5)
We use the sentencepiece library to build a unigram tokenizer of 32,000 tokens. We rely on the nanoT58 library, which allows pre-training T5 models on a limited budget. Our T5S is pre-trained for 80,000 steps on 4 NVIDIA A100 GPUs with input texts of 512 and a batch size of 320. Additionally, we use the AdamW optimizer Loshchilov and Hutter (2019), a warm-up of 10,000 steps, a dropout of 0, and BF16 to speed up training.
### BERT2BERT-style Models
Our BERT2BERT-style models follow the procedure proposed by Rothe et al. (2020), which consists of initializing encoder-decoder models with pre-trained encoder and/or decoder-only checkpoints. We use two configurations: BERT2BERT, which is an encoder initialized by a BERT-type checkpoint paired with a decoder initialized with the same checkpoint, and BERTShare, which is similar to BERT2BERT but the parameters between the encoder and decoder are shared.
We rely on the transformers library (Wolf et al., 2020) to initialize models based on two well-known architectures, BERT and RoBERTa. On the one hand, by leveraging the BETO checkpoint, we initialize a BETO2BETO and a BETOShare model. On the other hand, by leveraging the RoBERTa checkpoint from MarIA, we initialize a RoBERTa2RoBERTa and a RoBERTaShare model. Note that these models do not require additional pre-training; instead, they are fine-tuned directly on downstream tasks. We will delve into this process in more detail in the following section.
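For concreteness, the warm-starting just described can be reproduced with the transformers library roughly as follows. The BETO checkpoint identifier is an assumption on our part and should be replaced by the checkpoint actually used; the `tie_encoder_decoder=True` flag is what distinguishes the shared-parameter (BETOShare-style) variant from BETO2BETO.

```python
from transformers import EncoderDecoderModel, AutoTokenizer

beto = "dccuchile/bert-base-spanish-wwm-cased"   # assumed BETO checkpoint id

tokenizer = AutoTokenizer.from_pretrained(beto)

# BETO2BETO: encoder and decoder both warm-started from BETO (cross-attention starts random)
beto2beto = EncoderDecoderModel.from_encoder_decoder_pretrained(beto, beto)

# BETOShare: as above, but encoder and decoder parameters are tied
beto_share = EncoderDecoderModel.from_encoder_decoder_pretrained(beto, beto, tie_encoder_decoder=True)

# minimal configuration needed before fine-tuning / generation
for m in (beto2beto, beto_share):
    m.config.decoder_start_token_id = tokenizer.cls_token_id
    m.config.pad_token_id = tokenizer.pad_token_id
```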
## 4 Evaluation
This section outlines the downstream tasks selected for benchmarking our sequence-to-sequence models. These datasets involve generative tasks, where input and output sequence texts are provided.
### Generative Tasks
**Abstractive Summarization.** Summarization involves creating a concise version of a document while retaining its key information. We consider short and long-form abstractive summarization tasks to evaluate our models. On the one hand, we consider MLSUM (Scialom et al., 2020) and WikiLingua (Ladhak et al., 2020), which are datasets with short articles and summaries. MLSUM is a collection of 226k articles with an average length of \(\sim\)1325 tokens, while WikiLingua counts 76k articles with \(\sim\)800 tokens on average. The average summary length represents approximately 2.5% and 9.5% of the article length for MLSUM and WikiLingua, respectively. On the other hand, we use XL-Sum (Hasan et al., 2021) and EUR-Lex-Sum (Aumiller et al., 2022), which contain longer and more complex articles of about \(\sim\)18560 and \(\sim\)28972 tokens. The summary proportion of the XL-Sum and EUR-Lex-Sum datasets is approximately 3.06% and 6.74%, respectively. Although these proportions are similar to those of MLSUM and WikiLingua, note that due to the greater length of the XL-Sum and EUR-Lex-Sum articles, their summaries are longer, making these datasets more challenging.
**Split and Rephrase.** The split-and-rephrase task consists of rewriting the content of a long sentence into shorter and less verbose sentences. To evaluate this task, we use the Spanish subset of the BiSECT dataset (Kim et al., 2021). The subset contains 290k instances defined based on the OPUS collection (Tiedemann and Nygaard, 2004). The average number of tokens in the input sentences is approximately 43, while after rephrasing into two sentences, the average increases to 45 tokens across the pair.
**Generative Question Answering.** To the best of our knowledge, there is currently no dataset tailored for abstractive question answering in Spanish. In line with prior work, we utilize discriminative question answering datasets to train the models to generate the correct answers rather than predicting the specific token positions of the answer. We rely on the MLQA (Lewis et al., 2019) and SQAC (Gutierrez-Fandino et al., 2021) datasets for this evaluation. MLQA presents a collection of parallel multi-lingual articles extracted from Wikipedia and offers a development set and test set professionally translated into Spanish. Unlike MLQA, SQAC was created exclusively for Spanish evaluation and contains articles extracted purely from Spanish sources.
### Fine-tuning
We follow the fine-tuning procedures proposed for the English version models (Lewis et al., 2020; Raffel et al., 2020). Because BARTO and T5S have an autoregressive decoder, they can be directly fine-tuned for sequence generation tasks. Specifically, their encoders take a complete input, and then their decoders generate a target output autoregressively. For BETO2BETO and similar models, we initialize a transformer with the BETO checkpoint in both the encoder and decoder. Note that the transformer decoder has cross-attention layers that are randomly initialized. Subsequently, we fine-tune these models following a similar process as BARTO and T5S.
We fine-tune all the models on an RTX 3090 GPU for each task using the transformers library implemented in PyTorch. For a fair comparison, we use the same hyperparameters with the exception of the batch size, learning rate, and the number of training epochs. While optimal hyperparameter settings can be task-dependent, we found a range of values that proved effective across tasks:
* Batch size: 4, 8, 16.
* Learning rate: 3e-5, 5e-5.
* Epochs: 3, 6.
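For illustration, values from these ranges can be plugged into a standard transformers Seq2SeqTrainer loop as sketched below; the checkpoint name is an assumed Hub identifier for BARTO, and the tiny dataset is a placeholder standing in for a preprocessed task corpus such as MLSUM.

```python
from datasets import Dataset
from transformers import (AutoModelForSeq2SeqLM, AutoTokenizer, DataCollatorForSeq2Seq,
                          Seq2SeqTrainer, Seq2SeqTrainingArguments)

checkpoint = "vgaraujov/bart-base-spanish"   # assumed checkpoint id; substitute the released BARTO model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# two-line toy corpus standing in for a real summarization dataset
toy = Dataset.from_dict({
    "text": ["El modelo genera un resumen corto a partir de un articulo largo en espanol."],
    "summary": ["Genera resumenes cortos."],
})

def preprocess(batch):
    enc = tokenizer(batch["text"], max_length=1024, truncation=True)
    enc["labels"] = tokenizer(text_target=batch["summary"], max_length=128, truncation=True)["input_ids"]
    return enc

tokenized = toy.map(preprocess, batched=True, remove_columns=toy.column_names)

args = Seq2SeqTrainingArguments(
    output_dir="barto-summarization-demo",
    per_device_train_batch_size=8,    # batch size, learning rate and epochs from the ranges above
    learning_rate=5e-5,
    num_train_epochs=3,
    predict_with_generate=True,
)

Seq2SeqTrainer(model=model, args=args, tokenizer=tokenizer,
               data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
               train_dataset=tokenized, eval_dataset=tokenized).train()
```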
## 5 Results
**Abstractive Summarization.** Table 1 presents a comparison of the results achieved by T5S, BARTO, and the BERT2BERT-style models on the MLSUM and WikiLingua tasks, measured in terms of ROUGE scores Lin (2004). In particular, BARTO shows the top performance with an average of 25.48 across the tasks and ROUGE metrics. T5S is the second one with an average of 24.97, showing a difference of 2.1% with respect to BARTO. Interestingly, in some cases, T5S outperforms BARTO on the ROUGE-L metric, indicating that the words in the generated summary appear in exactly the same order as in the target summary. Finally, BETO2BETO is the third one, with an average ROUGE of 24.39 and a difference of 4.4% with respect to BARTO.
The results for long-form summarization are presented in Table 2. In the case of XLSum, both BARTO and BETOShare demonstrate superior performance compared to the other models. However, BARTO exhibits an 8.5% advantage over BETOShare across these metrics. Regarding EUR-Lex-Sum, the contrast between BARTO and T5S and the other models is particularly striking. BARTO is again the top performer, followed by T5S with a 6.3% difference. Interestingly, BARTO shows a 69% difference with respect to RoBERTa2RoBERTa. These results can be attributed to the specific demands of EUR-Lex-Sum, which necessitates both the processing of lengthy articles and the generation of extensive summaries. BARTO and T5S exhibit superior performance in this task, primarily due to the use of document-level formatting during pre-training. This approach equips the models with enhanced capabilities for processing extensive sequences, particularly when dealing with longer documents.
**Split and Rephrase.** Table 3 shows the comparison of the models in terms of SARI Xu et al. (2016) and BLEU Papineni et al. (2002) scores for the BiSECT dataset. We can observe that T5S exhibits the highest performance for both metrics, showing
\begin{table}
\begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{3}{c}{**MLSUM**} & \multicolumn{3}{c}{**WikiLingua**} \\ & R1 & R2 & RL & R1 & R2 & RL \\ \hline beto2beto & 28.46/28.09 & 10.87/10.34 & 22.89/22.51 & 38.02/37.92 & 17.57/17.43 & 29.38/29.24 \\ betoShare & 28.51/27.84 & 10.90/10.19 & 22.99/22.30 & 37.74/37.68 & 17.41/17.29 & 29.19/29.03 \\ roberta2Roberta & 27.94/27.69 & 9.66/9.25 & 21.92/22.07 & 35.68/35.58 & 14.53/14.51 & 26.49/26.37 \\ robertaShare & 28.43/27.86 & 10.17/9.53 & 22.54/21.92 & 35.83/35.70 & 14.95/14.75 & 26.85/26.62 \\ T5 & 29.18/28.60 & 11.42/10.82 & 23.78/23.20 & 36.83/36.63 & 18.25/18.01 & 30.88/32.05 \\ barto & 29.65/29.12 & 10.96/10.32 & 22.71/12.18 & 39.48/39.37 & 19.65/19.46 & 32.33/30.63 \\ \hline \hline \end{tabular}
\end{table}
Table 1: Summary task results on the development/test sets for all models using the ROUGE metric.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & \multicolumn{3}{c}{**XLSum**} & \multicolumn{3}{c}{**EUR-Lex-Sum**} \\ & R1 & R2 & RL & R1 & R2 & RL \\ \hline beto2beto & 28.76/28.88 & 8.92/9.02 & 20.85/21.03 & 42.76/43.46 & 13.89/14.17 & 22.77/23.07 \\ betoShare & 28.96/29.24 & 9.17/9.22 & 21.08/21.27 & 41.66/42.76 & 13.42/13.73 & 22.66/22.95 \\ roberta2Roberta & 26.92/27.22 & 6.98/7.32 & 19.01/19.23 & 44.70/45.63 & 14.58/14.93 & 22.86/23.06 \\ robertaShare & 26.89/27.08 & 6.99/7.15 & 18.94/19.11 & 44.24/44.22 & 13.84/13.90 & 22.55/22.65 \\ T5S & 28.13/28.31 & 10.29/10.48 & 21.61/21.70 & 62.95/61.01 & 47.18/45.27 & 53.08/50.47 \\ barto & 31.02/31.26 & 10.68/10.72 & 21.96/23.81 & 66.49/65.91 & 49.99/48.39 & 56.01/54.15 \\ \hline \hline \end{tabular}
\end{table}
Table 2: Long-form summary task results on the development/test sets for all models using the ROUGE metric.
\begin{table}
\begin{tabular}{l c c} \hline \hline & \multicolumn{3}{c}{**BiSECT**} \\ & SARI & BLEU \\ \hline beto2beto & 49.45/49.27 & 37.79/37.14 \\ betoShare & 49.72/49.37 & 38.38/37.62 \\ roberta2Roberta & 50.98/50.56 & 36.00/35.22 \\ robertaShare & 51.49/51.09 & 37.19/36.16 \\ T5S & 55.95/55.62 & 43.12/42.30 \\ barto & 50.45/50.13 & 39.48/38.97 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Split-and-rephrase task results on the development/test sets for all models using SARI and BLEU.
a difference of 9.7% with respect to BARTO. Interestingly, our BARTO model achieves the fourth-highest score for SARI and the second-highest score for BLEU. The strong SARI performance of T5S may be related to the observation from the previous task that it tends to generate words in the same order as the target, since SARI explicitly measures the goodness of words that are added, deleted, and kept. We also hypothesize that the span-filling objective of T5 fosters the ability to split sentences, explaining the improved performance.
**Generative Question Answering.** Table 4 presents the outcomes of our experiments focused on generative question answering. Although SQAC and MLQA were originally designed for discriminative tasks, our results indicate that they serve as a suitable benchmark even for generative question answering.
BARTO and T5S show the best performance on all metrics and tasks. Notably, BARTO and T5S present an advantage of 3 and 5, respectively, with respect to the third-best performer, RoBERTaShare. This significant difference could be explained by the faster adaptation of BARTO and T5S to the task, while the others may need more fine-tuning steps. The self-supervised objectives of BART and T5 may also contribute to the improvement on this task compared to those of BERT and RoBERTa Lewis et al. (2020); Raffel et al. (2020).
## 6 Conclusions
This work presents a range of sequence-to-sequence models along with a diverse set of datasets to facilitate their evaluation. Specifically, we introduce BART, T5, and BERT2BERT-based models, exclusively pre-trained in Spanish. Our evaluation, encompassing tasks such as summarization, split-and-rephrase, and generative question answering, has illustrated the capability of these models to effectively tackle these challenges, with BARTO and T5S emerging as the top performers.
We believe this work establishes new benchmarks for future research in encoder-decoder architectures within the Spanish language domain. Looking ahead, we envision the pre-training of larger-scale versions of BARTO and T5S, as well as the release of other valuable variants like LED Beltagy et al. (2020), FlanT5 Chung et al. (2022), CPT Shao et al. (2021), and more. Furthermore, we see potential in addressing missing sequence-to-sequence tasks by creating dedicated datasets. Our aim is to contribute to the ongoing advancement of NLP in Spanish and beyond.
|
2308.00069 | Modular Differential Equations with Movable Poles and Admissible RCFT
Characters | Studies of modular linear differential equations (MLDE) for the
classification of rational CFT characters have been limited to the case where
the coefficient functions (in monic form) have no poles, or poles at special
points of moduli space. Here we initiate an exploration of the vast territory
of MLDEs with two characters and any number of poles at arbitrary points of
moduli space. We show how to parametrise the most general equation precisely
and count its parameters. Eliminating logarithmic singularities at all the
poles provides constraint equations for the accessory parameters. By taking
suitable limits, we find recursion relations between solutions for different
numbers of poles. The cases of one and two movable poles are examined in detail
and compared with predictions based on quasi-characters to find complete
agreement. We also comment on the limit of coincident poles. Finally we show
that there exist genuine CFT corresponding to many of the newly-studied cases.
We emphasise that the modular data is an output, rather than an input, of our
approach. | Arpit Das, Chethan N. Gowdigere, Sunil Mukhi, Jagannath Santara | 2023-07-31T18:44:23Z | http://arxiv.org/abs/2308.00069v3 | # Modular Differential Equations with Movable Poles and Admissible RCFT Characters
###### Abstract
Studies of modular linear differential equations (MLDE) for the classification of rational CFT characters have been limited to the case where the coefficient functions (in monic form) have no poles, or poles at special points of moduli space. Here we initiate an exploration of the vast territory of MLDEs with two characters and any number of poles at arbitrary points of moduli space. We show how to parametrise the most general equation precisely and count its parameters. Eliminating logarithmic singularities at all the poles provides constraint equations for the accessory parameters. By taking suitable limits, we find recursion relations between solutions for different numbers of poles. The cases of one and two movable poles are examined in detail and compared with predictions based on quasi-characters to find complete agreement. We also comment on the limit of coincident poles. Finally we show that there exist genuine CFT corresponding to many of the newly-studied cases. We emphasise that the modular data is an output, rather than an input, of our approach.
Keywords: Conformal Field Theory
* 1 Introduction
* 2 MLDEs in \(\tau\)-space
* 2.1 Bases of modular forms
* 2.2 Generic \((n,\ell)\) MLDE
* 2.3 \((2,\ell)\) MLDE
* 3 MLDEs in \(j\)-space
* 3.1 Generic \((n,\ell)\) MLDE in \(j\)-space
* 3.2 \((2,\ell)\) MLDE in \(j\)-space
* 3.3 Reduction of \(\ell=6r+4\) to \(\ell=6r\)
* 3.4 Determining accessory parameters: the \((2,6)\) case
* 3.5 Determining accessory parameters: the general case
* 3.6 Admissible range of central charges for \((2,\ell)\) solutions
* 4 Detailed solution for the case of one movable pole
* 4.1 Solving the MLDE with one movable pole
* 4.2 Brief review of quasi-characters
* 4.3 Comparison of quasi-character and MLDE results
* 4.4 Analysis of the accessory equation
* 5 Discussion of the case of two movable poles
* 5.1 The \((2,12)\) MLDE and constraints on accessory parameters
* 5.2 \((2,12)\) admissible characters
* 6 Beyond the genericity assumption: merging of poles
* 6.1 \((2,6)\) solutions as \(p_{1}\to 0\)
* 6.2 \((2,6)\) solutions as \(p_{1}\to 1728\)
* 6.3 \((2,12)\) solutions as \(p_{1}\to p_{2}\)
* 6.4 Analogous considerations for \((2,8)\) and \((2,14)\) cases
* 6.5 Merging of movable poles in the general case
* 7 Illustrative examples of genuine CFTs
* 8 Discussion and conclusions
* Critical indices at the poles
* \(\ell\) is even for \(2\)-character solutions
* Frobenius solutions of MLDEs
* \((2,0)\) and \((2,2)\) MLDEs
* \((2,6)\) and \((2,8)\) MLDEs.
* Some (2, 8) MLDE solutions and quasi-characters
## 1 Introduction
The method of Modular Linear Differential Equations (MLDEs) for the classification of Rational Conformal Field Theories (RCFT) in 2d [1; 2] has experienced a significant resurgence, with several important results having appeared in recent times [3; 4; 5; 6; 7; 8; 9; 10; 11; 12; 13; 14; 15; 16; 17; 18; 19; 20; 21]. This approach implements modular invariance and positivity of degeneracies of states to constrain possible consistent partition functions of RCFT having small numbers of characters. When combined with additional information, it leads to classifications of all RCFT within specified regions of parameter space (see for example [18; 22; 23]).
Originally the method was applied to cases with two or three independent characters satisfying what is now called the "non-zero Wronskian condition" 1, which is the vanishing of a non-negative integer \(\ell\) proportional to the number of zeroes of a certain determinant (we explain this in more detail in the following Section). \(\ell\) is known as the Wronskian index. The work of [1] completely classified admissible solutions to the two-character MLDE with \(\ell=0\). Here "admissible" means the solutions have non-negative integral Fourier coefficients, and also that the identity character is normalised to start with unity, reflecting uniqueness of the vacuum state. For two characters, it turned out that the admissible solutions all correspond to CFTs though there are a couple of subtleties that we will not go into here.
Footnote 1: Some authors refer to this case as “monic”, e.g. [13; 24], to indicate that the MLDE when written in monic form is free of poles.
The three-character case, again with \(\ell=0\), was investigated for the first time in [2] where several interesting results were found, but not a complete classification of admissible solutions. These results were extended many years later in [25; 26] and then more completely in [13; 14; 15; 10] resulting in a complete set of admissible characters of which a large fraction could be identified as CFTs. In [22], additional information was used to tabulate the complete set of CFTs with three characters and \(\ell=0\).
Studying MLDE and admissible solutions beyond \(\ell=0\) is more difficult and there are very few papers in this direction. For the two-character case, solutions with \(\ell=2\) were considered in [25; 27; 28], while solutions with \(\ell=4\) were analysed from the MLDE perspective in [9; 27; 29]. Reference [14] studied three characters for \(\ell=2\) and [13] classified solutions with three, four and five characters and \(\ell=0\). To our knowledge, no analysis of the \(\ell\geq 6\) case has been carried out even for two characters. Indeed there seems to be a consensus that the MLDE approach is intractable for \(\ell\geq 6\), and to our knowledge no attempt has been made to formulate and solve the MLDE in such cases 2. This will be the main focus of the present work.
Footnote 2: However there were some remarkably prescient observations in this direction in the concluding section of [27].
When \(\ell<6\) the poles in the MLDE, if any, must be located at the special points \(\tau=\rho\) or \(\tau=i\) in moduli space. On the other hand for \(\ell\geq 6\), the MLDE can have poles at generic points in moduli space. Hence we refer to the \(\ell\geq 6\) case as having "movable poles". To be clear, this only means that their locations are free parameters in the equation, but of course for any particular admissible solution the poles will take fixed values. Also it is important to emphasise that the solutions of the MLDE have no poles and are regular everywhere, leading to completely regular candidate partition functions. The poles are present only in the coefficient functions of the MLDE itself.
There exist other approaches beyond MLDE that have provided insights into admissible characters with \(\ell\geq 6\). These approaches avoid explicitly solving, or even formulating, an \(\ell\geq 6\) MLDE. For example, [3] employed a novel construction of Hecke operators on vector-valued modular forms. On the other hand, [6; 30] proposed the method of "quasi-characters" about which we will say more below.
The primary motivation of the present work is to study modular differential equations and their admissible solutions for arbitrary \(\ell\geq 6\). We will restrict our attention to the case of two characters (second-order equations), though some results will be more general. Despite the complications due to the presence of both movable poles and "accessory parameters", we will be able to make progress using the following strategy. We first of all parametrise such generic MLDEs in a useful way, which in fact can easily be extended beyond the case of two characters. Next we impose single-valuedness of the solutions around all the poles of the MLDE, leading to a set of equations relating the accessory parameters to the locations of the poles. These equations define a hypersurface in the space of poles and accessory parameters. Looking at the asymptotic region of this hypersurface relates the MLDE for a given \(\ell\) to that for \(\ell-6\), corresponding to one of the poles migrating to infinity. This allows us to determine the possible critical exponents for all \(\ell\geq 6\) in terms of those for \(\ell=0,2,4\), which are already known. This in turn makes it easier to solve the MLDE explicitly, as we show in the cases of \(\ell=6,8\). Once there are two or more movable poles this becomes more difficult, so we follow a slightly different strategy in the \(\ell=12\) case.
In [6] a complete classification scheme for admissible characters for the case of two characters and arbitrary \(\ell\) was provided in terms of "quasi-characters", using inspiration from mathematical works [31; 32]. The present work, based on MLDE, provides an alternate route to the same result and the two can therefore be compared. We do so at various stages and find complete agreement. This encourages us to hope that the present approach can lead to new results for three or more characters where a full classification based on quasi-characters is not available (though partial results can be found in [8]).
Before going on, let us mention two important points that will provide some context for our work. First, there exists an elegant approach to the classification of vector-valued modular forms due to Bantay and Gannon [33; 34]. This approach relies on the classification of modular data, which we do not assume in our work, so it may be considered a complementary point of view. Recently this approach was applied to the classification of \(n\leq 4\) character solutions in [23].
The second point, already alluded to earlier, is that classifying admissible characters is necessary but far from sufficient to classify CFT. In particular we know of infinite families of admissible characters that cannot correspond to any CFT. One of the most explicit tools to find genuine CFT within families of admissible characters is the coset construction [35; 36; 37]. A version of this where the numerator is a meromorphic CFT [38; 39] was applied to the explicit construction of new CFT with small numbers of characters, and their classification, in [22; 23; 25; 40; 18]. In the present work we do not address the problem of classifying actual CFTs within the space of admissible characters, rather our focus is purely on admissible MLDE solutions. Nevertheless, towards the end we will provide explicit examples of genuine CFT corresponding to the characters we construct, which makes it clear that the sub-space of CFT within the space of admissible characters is well-populated even for \(\ell\geq 6\).
## 2 MLDEs in \(\tau\)-space
We now move on to the construction of MLDE of \(n\)th order and \(\ell>0\) and the study of their admissible solutions. We label such MLDE by \((n,\ell)\).
### Bases of modular forms
We will choose a convenient basis of holomorphic modular forms of SL(2,Z). These can have any non-negative even weight \(w\neq 2\). A generic modular form of this weight is denoted \(M_{w}(\tau)\).
These form a multiplicative ring generated by the Eisenstein series \(E_{4}(\tau),E_{6}(\tau)\), which we normalise so that their \(q\)-expansion starts with 1. We also use the cusp form:
\[\Delta=\frac{E_{4}^{3}-E_{6}^{2}}{1728}=q+{\cal O}(q^{2}) \tag{1}\]
The Klein \(j\)-invariant, which will play a key role later on, is given by:
\[j(\tau)=\frac{1728\,E_{4}^{3}}{E_{4}^{3}-E_{6}^{2}}=\frac{E_{4}^{3}}{\Delta}=q^ {-1}+744+{\cal O}(q) \tag{2}\]
Torus moduli space has cusps at \(\tau=\rho\equiv e^{\frac{2\pi i}{3}}\) and \(\tau=i\). We have \(E_{4}(\rho)=E_{6}(i)=0\). In the first case it is a fractional zero of order \(\frac{1}{3}\) and in the second, of order \(\frac{1}{2}\). Thus \(E_{4}^{3}\) and \(E_{6}^{2}\), of weight 12, both have a single full zero. The most general modular form with a single full zero is a linear combination of these two. Alternatively, and more usefully for us, it can be parametrised up to an overall constant as \(E_{4}^{3}-p\,\Delta\) for some (in principle complex) number \(p\). In this form the leading coefficient in a \(q\)-expansion is unity independent of \(p\), which follows from the cusp-form nature of \(\Delta\). Additionally, it vanishes at the point \(\tau_{p}\) in the \(\tau\)-plane where \(p=\frac{E_{4}^{3}}{\Delta}(\tau_{p})=j(\tau_{p})\). Thus \(p\) has a clear geometric meaning as the location in the \(j\)-plane of the zero of the corresponding form. Generically it is a complex number.
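These normalisations, and the statement that \(E_{4}^{3}-p\,\Delta\) has leading coefficient \(1\) for any \(p\), are easy to confirm from the first few terms of the \(q\)-expansions; a short SymPy check (illustrative only, with an arbitrary truncation order):

```python
from sympy import symbols, divisor_sigma, expand, series

q, p = symbols('q p')
N = 6  # truncation order; all expansions below are reliable up to q^(N-1)

E4 = 1 + 240*sum(divisor_sigma(n, 3)*q**n for n in range(1, N))
E6 = 1 - 504*sum(divisor_sigma(n, 5)*q**n for n in range(1, N))

Delta = expand((E4**3 - E6**2) / 1728)
print(series(Delta, q, 0, 4))                  # q - 24*q**2 + 252*q**3 + ...

print(series(expand(E4**3) / Delta, q, 0, 2))  # j = 1/q + 744 + 196884*q + ...

print(expand(E4**3 - p*Delta).coeff(q, 0))     # leading coefficient is 1, independent of p
```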
We will need a convenient parametrisation for modular forms of arbitrary weight. To construct a suitable basis we proceed by dividing all possible \(M_{w}\) into three classes:
1. \(\{0\leq w<12\}\cup\{w=14\}\). Here \(M_{w}\) is generated by one of the following: 1, \(E_{4}\), \(E_{6}\), \(E_{4}^{2}\), \(E_{4}E_{6}\), \(E_{4}^{2}E_{6}\).
2. \(\{w\geq 12,w\not\equiv\,2\,{\rm mod}\,12\}\). Let \(w=2(6r+u)\), then we have: \(M_{w}=M_{2u}\ \prod\limits_{I=1}^{r}(E_{4}^{3}-p_{I}\Delta)=M_{2u}\,\Delta^{r} \prod\limits_{I=1}^{r}(j-p_{I})\) and \(M_{2u}\) is one of the following: 1, \(E_{4}\), \(E_{6}\), \(E_{4}^{2}\), \(E_{4}E_{6}\). Here \(p_{I}\) are arbitrary complex numbers.
3. \(\{w>14,\,w\equiv 2\,{\rm mod}\,12\}\). For this case we write \(w=12r+2\), then \(M_{w}=E_{4}^{2}E_{6}\prod\limits_{I=1}^{r-1}(E_{4}^{3}-p_{I}\Delta)=E_{4}^{2}E_{6}\,\Delta^{r-1}\prod\limits_{I=1}^{r-1}(j-p_{I})\), where again \(p_{I}\) are arbitrary complex numbers.
### Generic \((n,\ell)\) MLDE
We formulate the MLDE for the case of \(n\) characters and arbitrary Wronskian index \(\ell\). The most general such equation takes the form:
\[\Big{(}D^{n}+\sum\limits_{s=1}^{n}\mu_{2s}\,\phi_{2s}(\tau)D^{n-s}\Big{)}\chi=0 \tag{3}\]
where the covariant derivative in \(\tau\), denoted \(D\), is defined by:
\[D\equiv\frac{1}{2\pi i}\frac{\partial}{\partial\tau}-\frac{w}{12}E_{2}(\tau) \tag{4}\]
when acting on a form of weight \(w\). \(\mu_{2s}\) are arbitrary parameters and the \(\phi_{2s}\) are meromorphic modular functions of weight \(2s\) whose poles are governed by the zeroes of the Wronskian, and whose overall normalisations are specified so that their leading term is unity. Explicitly we have:
\[\mu_{2s}\,\phi_{2s}=(-1)^{s}\frac{W_{n-s}}{W_{n}} \tag{5}\]
where:
\[W_{s}(\tau)\equiv\begin{vmatrix}\chi_{0}(\tau)&\chi_{1}(\tau)&\cdots&\chi_{n- 1}(\tau)\\ \vdots&\vdots&\vdots&\vdots\\ D_{\tau}^{s-1}\chi_{0}(\tau)&D_{\tau}^{s-1}\chi_{1}(\tau)&\cdots&D_{\tau}^{s- 1}\chi_{n-1}(\tau)\\ D_{\tau}^{s+1}\chi_{0}(\tau)&D_{\tau}^{s+1}\chi_{1}(\tau)&\cdots&D_{\tau}^{s+ 1}\chi_{n-1}(\tau)\\ \vdots&\vdots&\vdots&\vdots\\ D_{\tau}^{n}\chi_{0}(\tau)&D_{\tau}^{n}\chi_{1}(\tau)&\cdots&D_{\tau}^{n} \chi_{n-1}(\tau)\end{vmatrix} \tag{6}\]
and \(D\) is defined in Eq. (4).
It is easy to see from the definition that \(W_{n-1}=DW_{n}\). From Eq. (5) we find the useful relation:
\[\mu_{2}\phi_{2}=-\frac{W_{n-1}}{W_{n}}=-D\log W_{n} \tag{7}\]
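The manipulations below (for instance the explicit forms in Eq. (12) and the identity Eq. (27)) repeatedly use the Ramanujan identities \(DE_{4}=-\tfrac{1}{3}E_{6}\), \(DE_{6}=-\tfrac{1}{2}E_{4}^{2}\) and \(D\Delta=0\). A short verification sketch (ours, with standard \(q\)-expansions assumed; the helper names `eis` and `reliable` are purely illustrative) is:

```python
from sympy import symbols, diff, expand
from sympy.ntheory import divisor_sigma

q = symbols('q')
N = 8

def eis(k, coeff):
    # E_k = 1 + coeff * sum_n sigma_{k-1}(n) q^n, truncated at order q^(N-1)
    return 1 + coeff*sum(divisor_sigma(n, k - 1)*q**n for n in range(1, N))

E2, E4, E6 = eis(2, -24), eis(4, 240), eis(6, -504)
Delta = expand((E4**3 - E6**2)/1728)

def D(f, w):
    # covariant derivative of Eq. (4) on a weight-w q-series:  q d/dq - (w/12) E2
    return q*diff(f, q) - w*E2*f/12

def reliable(f):
    # keep only the q-coefficients that are trustworthy given the truncation
    f = expand(f)
    return sum(f.coeff(q, k)*q**k for k in range(N))

print(reliable(D(E4, 4) + E6/3))       # 0   (D E4 = -E6/3)
print(reliable(D(E6, 6) + E4**2/2))    # 0   (D E6 = -E4^2/2)
print(reliable(D(Delta, 12)))          # 0   (D Delta = 0)
```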
The _Wronskian index_ \(\ell\) is defined to be an integer \(\ell\) such that \(\frac{\ell}{6}\) is the number of zeroes of \(W_{n}\). This number does not have to be an integer because of the possibility of fractional zeroes at the cusps of moduli space, where a zero at \(\tau=\rho\) counts as \(\frac{2}{3}\) of a full zero, and at \(\tau=i\) counts as \(\frac{1}{2}\) of a full zero. Thus for general RCFT with \(n\) characters, \(\ell\) can be any non-negative integer other than 1. Note that if the total number of zeroes is fractional then they must necessarily occur at the cusps, while if the total number is integral (i.e. \(\ell\) is a multiple of 6) then they can occur anywhere in the fundamental region. More generally the fractional part of \(\frac{\ell}{6}\) describes the zeroes fixed at the cusps, while the integral part describes zeroes that are allowed to be at generic points of moduli space (including possibly the cusps). This motivates us to define \(\ell_{\rho},\ell_{i},\ell_{\tau}\) to be the contribution to \(\ell\) from the zeroes at \(\rho,i\) and generic points respectively. Here \(\ell_{\rho}\) is even, \(\ell_{i}\) is a multiple of 3 and \(\ell_{\tau}\) is a multiple of 6 (see footnote 3), and these quantities satisfy:
Footnote 3: Each of these quantities is six times the corresponding quantity \(w_{\rho},w_{i},w_{\tau}\) defined in [27].
\[\ell_{\rho}+\ell_{i}+\ell_{\tau}=\ell \tag{8}\]
The goal is to classify all possible MLDEs of the form Eq. (3) and then find suitable solutions to them. These take the form:
\[\chi_{i}(q)=q^{\alpha_{i}}\sum_{k=0}^{\infty}a_{i,k}\,q^{k} \tag{9}\]
We call them _admissible_ when \(a_{i,k}\) are integers \(\geq 0\) for all \(i,k\), which means they potentially correspond to degeneracies of states. Additionally \(a_{0,0}=1\), reflecting non-degeneracy of the vacuum state. In what follows we will establish several properties of the equations for generic \((n,\ell)\), including a count of the parameters on which they depend. After that we will restrict to \(n=2\) and consider certain values of \(\ell\geq 6\) in some detail and examine families of solutions.
We will start by making a _genericity assumption_ - that for any given \(\ell\), the largest possible number of zeroes of \(W_{n}\) are at generic, distinct points in moduli space, away from each other and from the special points \(\tau=\rho,i\). With this assumption, \(\ell_{\rho}\) takes its minimum allowed values of \(0,2,4\) and \(\ell_{i}\) takes its minimum allowed values of \(0,3\). Later we will consider what happens when the zeroes merge.
Writing \(\ell=6r+u,\ 0\leq u\leq 5\), the possible cases are:
| \(u\) | \((\ell_{\rho},\ell_{i},\ell_{\tau})\) |
|:---:|:---:|
| \(0\) | \((0,0,6r)\) |
| \(1\) | \((4,3,6(r-1))\) |
| \(2\) | \((2,0,6r)\) |
| \(3\) | \((0,3,6r)\) |
| \(4\) | \((4,0,6r)\) |
| \(5\) | \((2,3,6r)\) |
We will also require that the solutions of the MLDE furnish irreducible representations \(\varrho\) of the modular group PSL(2,Z). By definition,
\[\varrho(T)=\exp\left[2\pi i\,{\rm diag}\big{(}-\tfrac{c}{24},-\tfrac{c}{24}+h_ {1},\cdots,-\tfrac{c}{24}+h_{n-1}\big{)}\right] \tag{10}\]
This matrix will be reducible if it has two coincident entries, i.e. any of the \(h_{i}\) is integral or any two \(h_{i}\) differ by an integer. Using \(\varrho(S^{2})=\varrho\big{(}(ST)^{3}\big{)}=1\) it can then be shown [10] that \(S\) also has two equal eigenvalues and the representation is reducible. It follows that in irreducible representations, none of the \(h_{i}\) is integral and no two of them differ by an integer.
Now we turn to the parametrisation of the coefficient functions \(\phi_{2s}(\tau)\) for \(s\geq 2\). Writing \(\ell=6r+u,\ u=0,1,\cdots,5\) as above, we consider the different \(u\) values separately as they have slightly different characteristics. From the definition of \(\phi_{2s}\) in Eq. (5) and the fact that \(W_{n}\) has precisely \(\tfrac{\ell}{6}\) zeroes, it follows that \(\phi_{2s}\) can be expressed as a ratio of holomorphic modular
forms such that the denominator has weight \(2\ell\). This follows from the fact, mentioned earlier, that a full zero (\(\ell=6\)) is achieved by a general weight 12 modular form \(E_{4}^{3}-p\Delta\). To achieve the desired modular weight, the numerator of \(\phi_{2s}\) must be modular of weight \(2\ell+2s\).
From Sub-section 2.1, we find that, under the genericity assumption, these denominators of weight \(2\ell\) can be parametrised as follows:
\[\begin{split}\ell=6r\colon&\quad\prod_{I=1}^{r}(E_ {4}^{3}-p_{I}\Delta)\\ \ell=6r+1\colon&\quad E_{4}^{2}E_{6}\,\prod_{I=1}^{ r-1}(E_{4}^{3}-p_{I}\Delta)\\ \ell=6r+2\colon&\quad E_{4}\,\prod_{I=1}^{r}(E_{4}^ {3}-p_{I}\Delta)\\ \ell=6r+3\colon&\quad E_{6}\,\prod_{I=1}^{r}(E_{4}^ {3}-p_{I}\Delta)\\ \ell=6r+4\colon&\quad E_{4}^{2}\,\prod_{I=1}^{r}(E_ {4}^{3}-p_{I}\Delta)\\ \ell=6r+5\colon&\quad E_{4}E_{6}\,\prod_{I=1}^{r}(E_ {4}^{3}-p_{I}\Delta)\end{split} \tag{11}\]
Thus for all \(u\neq 1\) the denominators have exactly \(r\) full zeroes, whose locations as a function of \(\tau\) are determined by the \(r\) parameters \(p_{I}\), as well as \(u\) fractional zeroes whose locations are fixed and hence they are not associated to any free parameters. For \(u=1\) we instead have \(r-1\) full zeroes, two zeroes of order \(\frac{1}{3}\) at \(\tau=\rho\) and a zero of order \(\frac{1}{2}\) at \(\tau=i\).
Applying Eq. (7), we find:
\[\begin{split}\ell=6r:\quad\mu_{2}\,\phi_{2}&=E_{4} ^{2}E_{6}\sum_{I=1}^{r}\frac{1}{E_{4}^{3}-p_{I}\Delta}\\ \ell=6r+1:\quad\mu_{2}\,\phi_{2}&=\frac{2E_{6}}{3E _{4}}+\frac{E_{4}^{2}}{2E_{6}}+E_{4}^{2}E_{6}\sum_{I=1}^{r-1}\frac{1}{E_{4}^{ 3}-p_{I}\Delta}\\ \ell=6r+2:\quad\mu_{2}\,\phi_{2}&=\frac{E_{6}}{3E _{4}}+E_{4}^{2}E_{6}\sum_{I=1}^{r}\frac{1}{E_{4}^{3}-p_{I}\Delta}\\ \ell=6r+3:\quad\mu_{2}\,\phi_{2}&=\frac{E_{4}^{2}}{ 2E_{6}}+E_{4}^{2}E_{6}\sum_{I=1}^{r}\frac{1}{E_{4}^{3}-p_{I}\Delta}\\ \ell=6r+4:\quad\mu_{2}\,\phi_{2}&=\frac{2E_{6}}{3E _{4}}+E_{4}^{2}E_{6}\sum_{I=1}^{r}\frac{1}{E_{4}^{3}-p_{I}\Delta}\\ \ell=6r+5:\quad\mu_{2}\,\phi_{2}&=\frac{E_{6}}{3E _{4}}+\frac{E_{4}^{2}}{2E_{6}}+E_{4}^{2}E_{6}\sum_{I=1}^{r}\frac{1}{E_{4}^{3}- p_{I}\Delta}\end{split} \tag{12}\]
By inspection we see that in every case, the expression has a leading term \(\frac{\ell}{6}\) as \(q\to 0\). Since we are normalising every \(\phi_{2s}\) to start with 1, it follows that \(\mu_{2}=\frac{\ell}{6}\).
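As a small check of this normalisation (a sketch of ours, standard \(q\)-expansions assumed), one can expand two of the combinations in Eq. (12) and read off the leading coefficients \(\tfrac{2}{6}=\tfrac{1}{3}\) and \(\tfrac{6}{6}=1\):

```python
from sympy import symbols, series
from sympy.ntheory import divisor_sigma

q, p = symbols('q p')
N = 4
E4 = 1 + 240*sum(divisor_sigma(n, 3)*q**n for n in range(1, N))
E6 = 1 - 504*sum(divisor_sigma(n, 5)*q**n for n in range(1, N))
Delta = (E4**3 - E6**2)/1728

ell2 = E6/(3*E4)                          # the ell = 2 entry of Eq. (12) (r = 0)
ell6 = E4**2*E6/(E4**3 - p*Delta)         # the ell = 6 entry of Eq. (12) (r = 1)

print(series(ell2, q, 0, 2))                          # 1/3 - 248*q + O(q**2)
print(series(ell6, q, 0, 2).removeO().coeff(q, 0))    # 1
```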
We now consider the behaviour of solutions around \(\tau\to i\infty\), where the appropriate coordinate is \(q=e^{2\pi i\tau}\to 0\), by inserting the leading behaviour \(\chi_{i}\sim q^{\alpha_{i}}+\mathcal{O}(q^{\alpha_{i}+1})\) into the MLDE. In a CFT, the exponents \(\alpha_{i}\) determine the central charge \(c\) and the conformal dimensions \(h_{i}\) via:
\[\alpha_{i}=-\frac{c}{24}+h_{i},\quad i=0,1,\cdots,n-1 \tag{13}\]
where \(h_{0}=0\), corresponding to the identity primary. Expanding:
\[\phi_{2}(\tau)=\sum_{n=0}^{\infty}\phi_{2,n}\,q^{n} \tag{14}\]
and inserting this as well as Eq. (9) into the MLDE Eq. (3), at leading order we find the indicial equation:
\[\alpha^{n}+\left(\mu_{2}\,\phi_{2,0}-\frac{n(n-1)}{12}\right)\alpha^{n-1}+ \cdots=0 \tag{15}\]
If the roots of this equation are \(\alpha_{i},i=0,1,\cdots,n-1\) then we see that:
\[\sum_{i=0}^{n-1}\alpha_{i}=\frac{n(n-1)}{12}-\frac{\ell}{6} \tag{16}\]
where we used \(\mu_{2}=\frac{\ell}{6}\) and \(\phi_{2,0}=1\). The above equation is the valence (or Riemann-Roch) formula.
The lower order terms in Eq. (15) are straightforward but tedious to write explicitly, and they similarly allow us to determine the parameters \(\mu_{4},\mu_{6},\cdots\mu_{2n}\) in terms of the critical exponents \(\alpha_{i},\ i=0,1,\cdots,n-1\). We refer to the parameters \(\mu_{2s}\) as _rigid parameters_ since they are completely determined by the critical exponents. Conversely if we know the \(\mu_{i}\) then they determine the critical exponents.
### \((2,\ell)\) MLDE
In this paper we will work with two characters, yet keeping the Wronskian index \(\ell\) arbitrary. To our knowledge this region of \((n,\ell)\) space has not previously been investigated barring some insightful observations in [27]. Let us mention that for two characters, the concept of movable poles essentially corresponds to the "non-rigid" case from the perspective of Fuchsian differential equations. The rigid cases, by contrast, are those where the parameters in the equation are uniquely determined by the exponents of the solutions. Technically the rigid cases among the \((2,\ell)\) family arise for \(\ell=0,2\). However as we will argue below, admissible characters for \(\ell=4\) are completely determined in terms of those for \(\ell=0\). Thus the non-rigid cases of interest
start at \(\ell=6\), which is also where movable poles first arise. So for practical purposes we can think of "non-rigid" \((2,\ell)\) MLDE as being equivalent to "MLDE having movable poles". This justifies our use of "rigid" for the parameters \(\mu_{2s}\) of the previous sub-section and "non-rigid" for the rest.
The general \((2,\ell)\) MLDE is:
\[\Big{(}D^{2}+\mu_{2}\,\phi_{2}(\tau)D+\mu_{4}\,\phi_{4}(\tau)\Big{)}\chi=0 \tag{17}\]
where \(\phi_{2},\phi_{4}\) are meromorphic modular forms of weight 2 and 4 respectively.
Now consider the coefficient function \(\phi_{4}\). Its denominator must have (at most) the zeroes of the Wronskian \(W_{n}\). The numerator is then a general modular form of weight 4 higher. Also the form must be normalised so that its \(q\)-expansion starts with 1. This implies that it takes the form:
\[\begin{split}\ell=6r\colon&\phi_{4}=\frac{E_{4} \prod_{I=1}^{r}(E_{4}^{3}-b_{4,I}\Delta)}{\prod_{I=1}^{r}(E_{4}^{3}-p_{I}\Delta) }\\ \ell=6r+2\colon&\phi_{4}=\frac{E_{4}^{2}\prod_{I=1} ^{r}(E_{4}^{3}-b_{4,I}\Delta)}{E_{4}\prod_{I=1}^{r}(E_{4}^{3}-p_{I}\Delta)}= \frac{E_{4}\prod_{I=1}^{r}(E_{4}^{3}-b_{4,I}\Delta)}{\prod_{I=1}^{r}(E_{4}^{3} -p_{I}\Delta)}\\ \ell=6r+4\colon&\phi_{4}=\frac{\prod_{I=1}^{r+1}(E _{4}^{3}-b_{4,I}\Delta)}{E_{4}^{2}\prod_{I=1}^{r}(E_{4}^{3}-p_{I}\Delta)}\end{split} \tag{18}\]
The \(b_{4,I}\) are "accessory parameters" about which we will have a lot to say in the rest of this paper (they carry the subscript 4 because they arise in a weight-four modular function). Notice that for the middle case there is no pole at \(\tau=\rho\) due to cancellation of an \(E_{4}\) between the numerator and denominator. Also the last case has an extra power of \((E_{4}^{3}-b_{4,r+1}\Delta)\) in the numerator.
Returning to the MLDE Eq. (17), we have already determined that \(\mu_{2}=\frac{\ell}{6}\). Also, the leading term of \(\phi_{4}\) in a \(q\)-expansion is normalised to unity. Then the indicial equation determines:
\[\mu_{4}=\alpha_{0}\alpha_{1} \tag{19}\]
where \(\alpha_{i}\) are the critical exponents around \(q=0\). Also it is known [27] that with two characters, \(\ell\) is always even. Hence \(W\) cannot have an odd number of zeroes at \(\tau=i\). Then, recalling that \(\ell=6r+u\), one is restricted to even values of \(u\). With all the above information,
we write the general \((2,\ell)\) MLDE as follows:
\[\ell=6r:\] \[\left(D^{2}+\left(E_{4}^{2}E_{6}\sum_{I=1}^{r}\frac{1}{E_{4}^{3}-p_{I}\Delta}\right)\!D+\alpha_{0}\alpha_{1}\,E_{4}\frac{\prod_{I=1}^{r}(E_{4}^{3}-b_{4,I}\Delta)}{\prod_{I=1}^{r}(E_{4}^{3}-p_{I}\Delta)}\right)\chi(\tau)=0\] \[\ell=6r+2:\] \[\left(D^{2}+\left(\frac{E_{6}}{3E_{4}}+E_{4}^{2}E_{6}\sum_{I=1}^{r}\frac{1}{E_{4}^{3}-p_{I}\Delta}\right)\!D+\alpha_{0}\alpha_{1}\,E_{4}\frac{\prod_{I=1}^{r}(E_{4}^{3}-b_{4,I}\Delta)}{\prod_{I=1}^{r}(E_{4}^{3}-p_{I}\Delta)}\right)\chi(\tau)=0 \tag{20}\] \[\ell=6r+4:\] \[\left(D^{2}+\left(\frac{2E_{6}}{3E_{4}}+E_{4}^{2}E_{6}\sum_{I=1}^{r}\frac{1}{E_{4}^{3}-p_{I}\Delta}\right)\!D+\frac{\alpha_{0}\alpha_{1}}{E_{4}^{2}}\frac{\prod_{I=1}^{r+1}(E_{4}^{3}-b_{4,I}\Delta)}{\prod_{I=1}^{r}(E_{4}^{3}-p_{I}\Delta)}\right)\chi(\tau)=0\]
Well-studied special cases are the MMS equation [1] which corresponds to \(\ell=0\):
\[\Big{(}D^{2}+\alpha_{0}\alpha_{1}\,E_{4}\Big{)}\chi(\tau)=0 \tag{21}\]
and the \(\ell=2\) equation studied in [27; 28]:
\[\left(D^{2}+\frac{E_{6}}{3E_{4}}D+\alpha_{0}\alpha_{1}\,E_{4}\right)\chi(\tau )=0 \tag{22}\]
As we see, these equations have no movable poles.
Returning now to the general case, although \(p_{I}\) are generically complex, they are subject to constraints arising from the fact that \(c\) and the degeneracies are rational. As we will show below, the _symmetric polynomials_ in the \(p_{I}\) must all be real and rational. This generalises the statement in [27] that a single pole must be real and rational. Let us mention here that for a real pole, \(\tau_{p}\) lies in the subspace of moduli space for which \(j(\tau_{p})\) is real, namely \(\{\text{Re}(\tau)=0\}\cup\{\text{Re}(\tau)=\frac{1}{2}\}\cup\{|\tau|=1\}\).
So far we have only considered the indicial equation about \(q=0\). However, the characters also need to have appropriate behaviour near the cusps \(\tau=\rho,i\) in order to be single-valued. We label the exponents around \(\tau=\rho,i\) as \(\alpha^{(\rho)},\alpha^{(i)}\) respectively to avoid confusion with the exponents \(\alpha\) around \(\tau=i\infty\). Near \(\tau=\rho\) we introduce a new coordinate:
\[z=(\tau-\rho)^{3} \tag{23}\]
When \(\tau\) circles \(\rho\) by \(e^{2\pi i/3}\), we return to the same point in moduli space. The above change of variables converts this to a regular circle \(z\to e^{2\pi i}z\), so \(z\) is a good coordinate at the cusp. In this coordinate, \(E_{4}\sim z^{\frac{1}{3}}\) and \(j\sim z\) as \(z\to 0\). The indices at \(\tau=\rho\) are found by inserting the trial solution
\[\chi(z)\sim z^{\alpha^{(\rho)}} \tag{24}\]
Regularity imposes the requirement that \(\alpha^{(\rho)}\) is a non-negative multiple of \(\frac{1}{3}\). A similar analysis at \(\tau=i\) tells us that \(\alpha^{(i)}\) is a non-negative multiple of \(\frac{1}{2}\).
Now expanding out the MLDE Eq. (20) we get:
\[\left(-\frac{1}{4\pi^{2}}\partial_{\tau}^{2}-\frac{1}{12\pi i}E_{2}(\tau) \partial_{\tau}+\frac{1}{2\pi i}\mu_{2}\phi_{2}(\tau)\partial_{\tau}+\alpha_{0 }\alpha_{1}\phi_{4}(\tau)\right)\chi=0 \tag{25}\]
As we are working near \(\tau=\rho\) where \(E_{4}\) and \(j\) vanish while \(E_{6}\) and \(\Delta\) tend to finite values, we can replace \(\mu_{2}\phi_{2}\) by \(\frac{u}{6}\frac{E_{6}}{E_{4}}\). This is because, by our genericity assumption, the Wronskian has a fractional zero of order \(\frac{u}{6}\) at \(\tau=\rho\) (where \(\ell=6r+u\)), so \(\mu_{2}\phi_{2}=-D\log W_{n}\) behaves as \(\frac{u}{6}\frac{E_{6}}{E_{4}}\) there. Meanwhile \(\phi_{4}\) given in Eq. (18) reduces near \(\tau=\rho\) to:
\[\begin{split}\ell=6r\colon&\phi_{4}\simeq 0\\ \ell=6r+2\colon&\phi_{4}\simeq 0\\ \ell=6r+4\colon&\phi_{4}\simeq-\frac{b_{4,r+1} \Delta}{E_{4}^{2}}\prod_{I=1}^{r}\frac{b_{4,I}}{p_{I}}\end{split} \tag{26}\]
From the definition of the \(j\)-invariant we have:
\[\frac{E_{6}}{E_{4}}=-\frac{1}{2\pi i}\frac{\partial}{\partial\tau}(\log j) \tag{27}\]
Since \(j\sim(\tau-\rho)^{3}\), it follows that:
\[\frac{E_{6}}{E_{4}}\simeq-\frac{3}{2\pi i(\tau-\rho)}\quad\text{near }\tau=\rho \tag{28}\]
From this we can also deduce the behaviour:
\[\frac{\Delta}{E_{4}^{2}}\simeq-\frac{1}{1728}\left(\frac{E_{6}}{E_{4}}\right) ^{2}=\frac{1}{768\pi^{2}(\tau-\rho)^{2}} \tag{29}\]
Now we change variables in the MLDE using Eq. (23) and insert Eq. (24). In the case \(u=0\) we find the indicial equation:
\[3\big{(}\alpha^{(\rho)}\big{)}^{2}-\alpha^{(\rho)}=0 \tag{30}\]
The solutions are \(\alpha^{(\rho)}=0,\frac{1}{3}\). Thus both solutions exhibit regular behaviour as functions of \(\tau\). While this has been a good consistency check, it does not tell us anything new about the parameters in the MLDE.
Next, for \(u=2\), the MLDE near \(\tau=\rho\) becomes:
\[\left(-\frac{1}{4\pi^{2}}\partial_{\tau}^{2}-\frac{1}{12\pi i}E_{2}\partial_{ \tau}+\frac{1}{2\pi i}\frac{E_{6}}{3E_{4}}\partial_{\tau}\right)\chi=0 \tag{31}\]
Going now to the \(z\)-coordinate and inserting \(\chi(z)\sim z^{\alpha^{(\rho)}}\), the indicial equation is:
\[3\big{(}\alpha^{(\rho)}\big{)}^{2}-2\alpha^{(\rho)}=0 \tag{32}\]
whose solutions are \(\alpha^{(\rho)}=0,\frac{2}{3}\). Again the solution is consistent but does not provide new information.
The situation is different for the last case, \(u=4\). Eq. (26) tells us we have non-trivial behaviour for both \(\phi_{2}\) and \(\phi_{4}\). This indicial equation now becomes:
\[\alpha^{(\rho)}(\alpha^{(\rho)}-1)+\frac{\gamma}{1728}=0 \tag{33}\]
where:
\[\gamma\equiv\alpha_{0}\alpha_{1}b_{4,r+1}\,\prod_{I=1}^{r}\frac{b_{4,I}}{p_{I}} \tag{34}\]
From this we learn that \(\alpha_{0}^{(\rho)}+\alpha_{1}^{(\rho)}=1\). As we have seen, these exponents are non-negative multiples of \(\frac{1}{3}\), which leads to the unique solution \(\alpha_{0}^{(\rho)}=\frac{1}{3},\alpha_{1}^{(\rho)}=\frac{2}{3}\). It now follows from Eq. (33) that:
\[\gamma=384 \tag{35}\]
This was previously noted in [6] for the case \(\ell=4\). Here we see that it is true for all \(\ell=6r+4\) as long as there are precisely two poles (of \(\frac{1}{3}\)-order each) at the cusp \(\tau=\rho\) and the rest are at generic values away from the cusp. We can think of this result as determining \(b_{4,r+1}\) in Eq. (34) in terms of the other \(b_{4,I}\). Then the (so far) independent coefficients are \(b_{4,I},I=1,2,\cdots,r\) and \(p_{I}\). Thus, despite the appearance of an apparent additional parameter \(b_{4,r+1}\), the case \(\ell=6r+4\) is actually similar to the cases \(\ell=6r,6r+2\) in that all of them have precisely \(2r\) parameters of which \(r\) correspond to poles \(p_{I}\) of the coefficient functions and the other \(r\) are the \(b_{4,I}\). As indicated above, these are the accessory parameters familiar from Fuchsian differential equations.
In standard treatments of the MLDE, starting with [1], one now solves the equation order by order using the Frobenius method, and imposes admissibility at each successive order, in particular non-negative integrality of the Fourier coefficients. We will do this eventually, but here we pause to rewrite the MLDE treating \(j(\tau)\), rather than \(\tau\), as the independent parameter. This makes it somewhat easier and more intuitive to write out general MLDEs and impose their single-valuedness around poles of the coefficient functions. Of course the Fourier coefficients in the \(j\) variable have no integrality restrictions, so we will need to return to the \(\tau\)-coordinate in order to check integrality of the coefficients in the \(q\)-expansion and thereby determine admissibility of solutions.
## 3 MLDEs in \(j\)-space
### Generic \((n,\ell)\) MLDE in \(j\)-space
In this Section we return to the general case of \(n\) characters and study MLDEs in a formalism where the independent variable is the Klein invariant \(j(\tau)\) rather than \(\tau\) (this was explored for special cases in [26; 27]). As a warm-up exercise, let us consider the one-character case. First we fix \(\ell=6r\). Then the most general allowed character is:
\[\chi(j)=\prod_{I=1}^{r}(j-p_{I}) \tag{3.1}\]
where \(p_{I}\) are a set of \(r\) complex numbers that describe the zeroes of the character (which is the same as the Wronskian in this case). The MLDE satisfied by this character is trivially seen to be:
\[\big{(}\partial_{j}+\psi_{2}(j)\big{)}\chi(j)=0 \tag{3.2}\]
where:
\[\begin{split}\psi_{2}(j)&=-\partial_{j}\log\chi(j)\\ &=-\sum_{I=1}^{r}\frac{1}{j-p_{I}}\end{split} \tag{3.3}\]
Here we have labelled the first non-trivial coefficient as \(\psi_{2}(j)\) in keeping with the convention used for the MLDE in \(\tau\), though here it does not reflect the modular weight, since everything is modular invariant (up to possible phases). Also note that the coefficients \(\mu_{2s}\) are now absorbed into the normalisation of the \(\psi_{2s}\).
The generalisation of the above to the case of \(\ell=6r+u\) is straightforward: the character acquires an extra multiplicative factor of \(j^{\frac{1}{3}}\) for each zero at \(\tau=\rho\) and a factor \((j-1728)^{\frac{1}{2}}\) for a zero at \(\tau=i\) (we are not requiring admissibility at this stage, which would have ruled out the latter). Then the coefficient function \(\psi_{2}(j)\) acquires an additive term:
\[-\frac{1}{3j},\ -\frac{1}{2(j-1728)} \tag{3.4}\]
for each zero at \(\rho,i\) respectively.
Moving on to the \(n\)-character case, the MLDE in terms of the independent variable \(j\) can be written:
\[\Big{(}\partial_{j}^{n}+\sum_{s=1}^{n}\psi_{2s}(j)\,\partial_{j}^{n-s}\Big{)}\chi(j)=0 \tag{3.5}\]
The modular invariants \(\psi_{2s}\) can have poles at the special points \(j=0,1728\) as well as at generic points \(j=p_{I},I=1,2,\cdots r\). The solutions can be expanded as follows around the special points:
\[\chi_{i}(j) =j^{\alpha_{i}^{(\rho)}}\sum_{k=0}^{\infty}a_{i,k}^{(\rho)}\,j^{\frac{k}{2}}, \tag{3.6}\] \[\chi_{i}(j) =(j-1728)^{\alpha_{i}^{(i)}}\sum_{k=0}^{\infty}a_{i,k}^{(i)}\,(j-1728)^{\frac{k}{2}}. \tag{3.7}\]
Similarly around each of the generic poles \(j=p_{I}\), we parametrise the solutions as:
\[\chi_{i}(j)=(j-p_{I})^{\alpha_{i}^{(I)}}\sum_{k=0}^{\infty}a_{i,k}^{(I)}\,(j-p _{I})^{k} \tag{3.8}\]
One should keep in mind that the \(a_{i,k}\) with superscripts \((\rho),(i),(I)\) have no particular integrality property.
The relevant Wronskians are defined similarly to Eq. (2.6) (see footnote 4):
Footnote 4: We denote them \(W_{r}(j)\), though they are different from \(W_{r}(\tau)\) so this is really abuse of notation - which hopefully will not cause confusion.
\[W_{r}(j)\equiv\begin{vmatrix}\chi_{0}(j)&\chi_{1}(j)&\cdots&\chi_{n-1}(j)\\ \vdots&\vdots&\vdots&\vdots\\ \partial_{j}^{r-1}\chi_{0}(j)&\partial_{j}^{r-1}\chi_{1}(j)&\cdots&\partial_{j}^{r-1}\chi_{n-1}(j)\\ \partial_{j}^{r+1}\chi_{0}(j)&\partial_{j}^{r+1}\chi_{1}(j)&\cdots&\partial_{j}^{r+1}\chi_{n-1}(j)\\ \vdots&\vdots&\vdots&\vdots\\ \partial_{j}^{n}\chi_{0}(j)&\partial_{j}^{n}\chi_{1}(j)&\cdots&\partial_{j}^{n}\chi_{n-1}(j)\end{vmatrix} \tag{3.9}\]
and:
\[\psi_{2s}=(-1)^{s}\frac{W_{n-s}(j)}{W_{n}(j)} \tag{3.10}\]
As noted in [27], the \(W_{r}(j)\) necessarily have poles, unlike the \(W_{r}(\tau)\). The poles are introduced by the powers of \(\frac{dj}{d\tau}\) that relate the two sets of Wronskians. For example, the relations between \(W_{n}(j),W_{0}(j)\) and \(W_{n}(\tau),W_{0}(\tau)\) are as follows (similar but more complicated relations can be found for all the \(W_{r}\)):
\[W_{n}(j) =\left(\tfrac{dj}{d\tau}\right)^{-\frac{n(n-1)}{2}}W_{n}(\tau) \tag{3.11}\] \[W_{0}(j) =\left(\tfrac{dj}{d\tau}\right)^{-\frac{n(n+1)}{2}}W_{0}(\tau) \tag{3.12}\]
Using Eq. (2.27), we see that:
\[\frac{dj}{d\tau}=-2\pi i\frac{E_{6}}{E_{4}}\,j=-2\pi i\frac{E_{6}E_{4}^{2}}{\Delta} \tag{3.13}\]
Thus \(\frac{dj}{d\tau}\) has two zeroes of order \(\frac{1}{3}\) at \(\tau=\rho\) and one of order \(\frac{1}{2}\) at \(\tau=i\), so \(W(j)\) acquires \(\frac{n(n-1)}{3}\) poles at \(\rho\) and \(\frac{n(n-1)}{4}\) poles at \(i\). It also has the zeroes of \(W_{n}(\tau)\). So we could define a new Wronskian index \(\ell^{j}\) such that \(\frac{\ell^{j}}{6}\) gives the total number of zeroes of \(W(j)\):
\[\ell^{j}=-\frac{7n(n-1)}{2}+\ell \tag{3.14}\]
We see that for \(n=1\), \(\ell^{j}=\ell\), while for \(n=2\), \(\ell^{j}=-7+\ell\) as one can read off from Page 434 of [27]. Actually this relation is slightly misleading as it does not contain all the information: the extra poles contained in the first term on the RHS are necessarily at the points \(\tau=\rho,i\) and are not free to move. So it is better to break up \(\ell^{j}\) into contributions from zeroes/poles at \(\rho,i\) and generic positions (as we did before for \(\ell\)):
\[\ell^{j}=\ell_{\rho}^{j}+\ell_{i}^{j}+\ell_{\tau}^{j} \tag{3.15}\]
Now, positive values of these quantities denote zeroes while negative values denote poles. Recall that in the \(\tau\)-space case, the terms are individually \(\geq 0\) and \(\ell_{\rho},\ell_{i}\) and \(\ell_{\tau}\) are multiples of 2,3,6 respectively. Then, taking account of the new poles introduced by the change of variables, we get:
\[\begin{split}\ell_{\rho}^{j}&=-2n(n-1)+\ell_{\rho} \\ \ell_{i}^{j}&=-\frac{3n(n-1)}{2}+\ell_{i}\\ \ell_{\tau}^{j}&=\ell_{\tau}\end{split} \tag{3.16}\]
Positivity of \(\ell_{\rho},\ell_{i},\ell_{\tau}\) then induces obvious lower bounds on \(\ell_{\rho}^{j},\ell_{i}^{j},\ell_{\tau}^{j}\). Notice that it is possible for \(\ell_{\rho}^{j},\ell_{i}^{j}\) to vanish due to cancellations between poles induced by the change of variables to \(j\) and zeroes of the original Wronskian at the special points \(\rho,i\).
From the above considerations, we can readily fix the first coefficient function \(\psi_{2}(j)\) in Eq. (3.5), which is given by:
\[\psi_{2}(j)=-\frac{W_{n-1}(j)}{W_{n}(j)}=-\partial_{j}\log W_{n}(j) \tag{3.17}\]
The result is:
\[\begin{split}\psi_{2}(j)&=-\frac{\ell_{\rho}^{j}}{ 6j}-\frac{\ell_{i}^{j}}{6(j-1728)}-\sum_{I=1}^{\frac{\ell_{\tau}^{j}}{6}}\frac {1}{j-p_{I}}\\ &=\frac{n(n-1)}{3j}+\frac{n(n-1)}{4(j-1728)}-\frac{\ell_{\rho}}{ 6j}-\frac{\ell_{i}}{6(j-1728)}-\sum_{I=1}^{\frac{\ell_{\tau}}{6}}\frac{1}{j-p _{I}}\end{split} \tag{3.18}\]
### \((2,\ell)\) MLDE in \(j\)-space
We now again specialise to the case of two characters, keeping the Wronskian index arbitrary. The first step is to determine the remaining coefficient function \(\psi_{4}(j)\) in the MLDE for this case. From the definition we have:
\[\psi_{4}(j)=\frac{W_{0}(j)}{W_{2}(j)} \tag{3.19}\]
Now,
\[W_{0}(j)=\begin{vmatrix}\partial_{j}\chi_{0}&\partial_{j}\chi_{1}\\ \partial_{j}^{2}\chi_{0}&\partial_{j}^{2}\chi_{1}\end{vmatrix},\qquad W_{2}(j) =\begin{vmatrix}\chi_{0}&\chi_{1}\\ \partial_{j}\chi_{0}&\partial_{j}\chi_{1}\end{vmatrix}, \tag{3.20}\]
Inserting the behaviour \(\chi_{i}\sim j^{\alpha_{i}^{(\rho)}}\) near \(j\sim 0\) (\(\tau=\rho\)) we find:
\[\begin{split} W_{0}(j)&\sim\alpha_{0}^{(\rho)}\alpha_{1}^{( \rho)}\Big{(}\alpha_{1}^{(\rho)}-\alpha_{0}^{(\rho)}\Big{)}j^{\alpha_{0}^{( \rho)}+\alpha_{1}^{(\rho)}-3}+\mathcal{O}\Big{(}j^{\alpha_{0}^{(\rho)}+\alpha _{1}^{(\rho)}-2}\Big{)}\\ W_{2}(j)&\sim\Big{(}\alpha_{1}^{(\rho)}-\alpha_{0}^{(\rho)} \Big{)}j^{\alpha_{0}^{(\rho)}+\alpha_{1}^{(\rho)}-1}\end{split} \tag{3.21}\]
The reason to write the first correction to \(W_{0}\) is that the leading term can vanish, if \(\alpha_{0}^{(\rho)}\) or \(\alpha_{1}^{(\rho)}\) vanishes. However since the two exponents must be distinct, the leading term of \(W_{2}\) cannot vanish. Thus we have \(\psi_{4}(j)\sim j^{-2}\) unless one of the exponents vanishes, in which case \(\psi_{4}(j)\sim j^{-1}\).
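The leading behaviour in Eq. (3.21) is elementary to confirm symbolically; the following sketch (ours, with `a0`, `a1` standing for the exponents \(\alpha_{0}^{(\rho)},\alpha_{1}^{(\rho)}\)) inserts \(\chi_{i}\sim j^{a_{i}}\) into the two determinants:

```python
from sympy import symbols, diff, powsimp, simplify

j, a0, a1 = symbols('j a0 a1')
chi0, chi1 = j**a0, j**a1              # leading behaviour of the two solutions near j = 0

W0 = diff(chi0, j)*diff(chi1, j, 2) - diff(chi1, j)*diff(chi0, j, 2)
W2 = chi0*diff(chi1, j) - chi1*diff(chi0, j)

print(simplify(powsimp(W0/j**(a0 + a1 - 3), force=True)))   # equals a0*a1*(a1 - a0)
print(simplify(powsimp(W2/j**(a0 + a1 - 1), force=True)))   # equals a1 - a0
```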
The exponents \(\alpha_{i}^{(\rho)}\) satisfy (see Eq. (A.2)):
\[\alpha_{0}^{(\rho)}+\alpha_{1}^{(\rho)}-\frac{1}{3}=\frac{\ell_{\rho}}{6} \tag{3.22}\]
Writing \(\ell=6r+u\), we have \(u=\ell_{\rho}=0,2,4\) respectively for the cases \(\ell=6r,6r+2,6r+4\). It follows that the exponents are as follows:
\[\begin{split}\ell=6r\!:&\alpha_{0}^{(\rho)}+\alpha_ {1}^{(\rho)}=\tfrac{1}{3}\implies(\alpha_{0}^{(\rho)},\alpha_{1}^{(\rho)})=(0,\tfrac{1}{3})\\ \ell=6r+2\!:&\alpha_{0}^{(\rho)}+\alpha_{1}^{(\rho)} =\tfrac{2}{3}\implies(\alpha_{0}^{(\rho)},\alpha_{1}^{(\rho)})=(0,\tfrac{2}{3} )\\ \ell=6r+4\!:&\alpha_{0}^{(\rho)}+\alpha_{1}^{(\rho)} =1\implies(\alpha_{0}^{(\rho)},\alpha_{1}^{(\rho)})=(\tfrac{1}{3},\tfrac{2}{3} )\end{split} \tag{3.23}\]
(these facts have already been derived in terms of the \(\tau\) coordinate in Sub-section 2.3, but here our goal is to derive everything independently in the \(j\) coordinate). Thus in the first two cases the leading term in \(W_{0}\) indeed vanishes and the subleading term has to be used. We see that the behaviour of \(\psi_{4}(j)\) in the three cases is \(\sim j^{-1},\sim j^{-1},\sim j^{-2}\) respectively.
Next we consider the behaviour near \(j=1728\) (\(\tau=i\)). Similar arguments tell us that \(\psi_{4}(j)\sim\alpha_{0}^{(i)}\alpha_{1}^{(i)}(j-1728)^{-2}+\mathcal{O}\big{(} (j-1728)^{-1}\big{)}\). This time we have (see Eq. (A.4)):
\[\alpha_{0}^{(i)}+\alpha_{1}^{(i)}-\frac{1}{2}=\frac{\ell_{i}}{6}=0\implies( \alpha_{0}^{(i)},\alpha_{1}^{(i)})=\Big{(}0,\frac{1}{2}\Big{)}, \tag{3.24}\]
in every case, so the leading term always vanishes and we have a simple pole in \(j-1728\).
From the \(r\) generic zeroes of \(W_{2}\) at \(j=p_{I}\), we get a simple pole at each of these points. Finally, the \(\tau\to i\infty\) behaviour requires that the overall power of \(j\) as \(j\to\infty\) is \(-2\). Hence \(\psi_{4}(j)\) must contain, in the numerator, a generic polynomial in \(j\) of degree \(r\) for \(\ell=6r,6r+2\) and of degree \(r+1\) for \(\ell=6r+4\). Thus finally we get:
\(\ell=6r\) :
\[\partial_{j}^{2}\chi(j)+\left[\frac{1}{2(j-1728)}+\frac{2}{3j}-\sum_{I=1}^{r} \frac{1}{j-p_{I}}\right]\partial_{j}\chi(j)+\frac{\alpha_{0}\alpha_{1}}{j(j-1 728)}\frac{\prod\limits_{I=1}^{r}(j-b_{4,I})}{\prod\limits_{I=1}^{r}(j-p_{I}) }\chi(j)=0.\]
\(\ell=6r+2\) :
\[\partial_{j}^{2}\chi(j)+\left[\frac{1}{2(j-1728)}+\frac{1}{3j}-\sum_{I=1}^{r} \frac{1}{j-p_{I}}\right]\partial_{j}\chi(j)+\frac{\alpha_{0}\alpha_{1}}{j(j-1 728)}\frac{\prod\limits_{I=1}^{r}(j-b_{4,I})}{\prod\limits_{I=1}^{r}(j-p_{I}) }\chi(j)=0.\]
\(\ell=6r+4\) :
\[\partial_{j}^{2}\chi(j)+\left[\frac{1}{2(j-1728)}-\sum_{I=1}^{r}\frac{1}{j-p_ {I}}\right]\partial_{j}\chi(j)+\frac{\alpha_{0}\alpha_{1}}{j^{2}(j-1728)}\frac {\prod\limits_{I=1}^{r+1}(j-b_{4,I})}{\prod\limits_{I=1}^{r}(j-p_{I})}\chi(j) =0. \tag{3.25}\]
These expressions can easily be confirmed by explicitly changing variables from \(\tau\) to \(j\) in Eqs. (2.12), (2.18). However, the methods we have used to arrive at them are useful in the general case (higher than second-order) and one does not need to invoke the MLDE in \(\tau\) to write the equations in \(j\)-space.
By considering the indicial equation around \(j=0\) (\(\tau=\rho\)), we will again find, in the \(\ell=6r+4\) case, that it is possible to fix \(b_{4,r+1}\) in terms of the remaining coefficients. As a result, once we impose the indicial equations there are \(2r\) independent coefficients in every case, namely the \(p_{I}\) and \(b_{4,I}\) with \(I=1,2,\cdots,r\). Note that the MLDE is totally symmetric under permutations of the \(p_{I}\) and also under permutations of the accessory parameters \(b_{4,I}\).
The differential equations in the \(j\) plane that were discussed above are examples of Fuchsian differential equations (FDE) with regular singular points. However they have some special features. A general FDE with regular singular points is of the form:
\[\frac{d^{n}f}{dx^{n}}+\sum_{i=1}^{n}\alpha_{i}(x)\frac{d^{n-i}f}{dx^{n-i}}=0 \tag{3.26}\]
where the coefficient functions \(\alpha_{i}(x)\) have at most poles of order \(i\) at the regular singular points. However due to our genericity assumption, the Wronskian has only simple zeroes at generic points \(p_{I}\). Hence in our case, all the coefficient functions have only simple poles (with the exception of poles at \(\tau=\rho\), which are double poles whenever the Wronskian index is equal to 4 mod 6). This means that ab initio they span a more restricted set than general Fuchsian differential equations with regular singular points. We will revisit this issue later on when we move away from the genericity assumption by allowing movable poles to coalesce. Meanwhile, as already noted above, our \(b_{4,I}\) correspond in the language of FDE to what are called "accessory parameters".
### Reduction of \(\ell=6r+4\) to \(\ell=6r\)
Let us note an important general lesson that is exemplified by Eq. (3.23). In the third line, the lower of the two exponents is \(\frac{1}{3}\). This means we can take any solution of the \(\ell=6r+4\) MLDE and write it as:
\[\chi(j)=j^{\frac{1}{3}}\zeta(j) \tag{3.27}\]
where \(\zeta(j)\) has an expansion about \(j=0\) in positive powers of \(j\). Then, as is easily verified, \(\zeta(j)\) solves an MLDE with \((n,\ell)=(2,6r)\). This means that, in terms of having a well-defined power-series expansion about all the singular points of the MLDE, every \(\ell=6r+4\) solution factorises into the \(E_{8,1}\) character \(j^{\frac{1}{3}}\) times a solution of the \((2,6r)\) equation. It is not, however, necessarily the case that both factors are admissible. In particular, it is possible for \(\zeta(j)\) to be a non-admissible character while \(j^{\frac{1}{3}}\zeta(j)\) is admissible.
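For orientation, the factor \(j^{\frac{1}{3}}\) extracted here is itself a perfectly admissible object: a quick numerical sketch (ours; standard \(q\)-expansions assumed) shows that its \(q\)-expansion \(q^{-\frac{1}{3}}(1+248q+4124q^{2}+\cdots)\) has non-negative integer coefficients, as befits the \(E_{8,1}\) character:

```python
from sympy import symbols, series, Rational
from sympy.ntheory import divisor_sigma

q = symbols('q')
N = 6
E4 = 1 + 240*sum(divisor_sigma(n, 3)*q**n for n in range(1, N))
E6 = 1 - 504*sum(divisor_sigma(n, 5)*q**n for n in range(1, N))
Delta = (E4**3 - E6**2)/1728

jq = series(q*E4**3/Delta, q, 0, 4).removeO()        # q*j = 1 + 744 q + 196884 q^2 + ...
print(series(jq**Rational(1, 3), q, 0, 4).removeO())
# 1 + 248 q + 4124 q^2 + 34752 q^3 : the expansion of q^{1/3} j^{1/3},
# a non-negative, integral series (the E_{8,1} character)
```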
A very striking example, noted in sub-section 5.1 of [6], is that the characters of the \(c=33\) CFT of [41], which has Wronskian index \(\ell=4\), can be written as the product of \(j^{\frac{1}{3}}\) times a solution with \(c=25\) and \(\ell=0\). However, the \(c=25\) solution has some negative coefficients in its \(q\)-series and therefore does not count as admissible (as we will see below, it is actually a quasi-character). Yet, after multiplying it by \(j^{\frac{1}{3}}\) it becomes admissible and in fact a genuine CFT. But this CFT, despite the factorisation described above, is by no means a tensor product of two other CFTs.
The factorisation of solutions described above for \(\ell=4\) is easily seen to persist for all \(\ell=6r+4\). Hence we no longer need to discuss MLDEs for the case \(\ell=6r+4\), even though we have formulated them above. All we need to remember is that admissible solutions in these cases are found by considering all integral (not only admissible) solutions of the \(\ell=6r\) equation, multiplying each one by \(j^{\frac{1}{3}}\) and then testing for admissibility.
On the other hand, in the first two lines of Eq. (3.23), the lower of the two exponents is 0. This tells us that we cannot extract a positive power from the character and still hope to find a positive power-series expansion in \(j\). Moreover this fact persists for \(\ell=6r,6r+2\) as
long as the genericity assumption is obeyed. So even relaxing admissibility, the characters in these cases do not factorise.
### Determining accessory parameters: the \((2,6)\) case
Having dealt with the indicial equations about \(j=0,1728\), the next step is to study the indicial equations around \(j=p_{i}\). This will determine all the accessory parameters \(b_{4,i}\) in \(\psi_{4}(j)\). Let us start with a particular case, the \((2,6)\) MLDE which from Eq. (3.25) has the form:
\[\partial_{j}^{2}\chi(j)+\left[\frac{1}{2(j-1728)}+\frac{2}{3j}- \frac{1}{j-p_{1}}\right]\partial_{j}\chi(j)+\frac{\alpha_{0}\alpha_{1}(j-b_{4,1})}{j(j-1728)(j-p_{1})}\chi(j)=0, \tag{3.28}\]
A priori it has 2 parameters, \(b_{4,1}\) and \(p_{1}\). In this case we have \(\ell_{\rho}=\ell_{i}=0\) and \(\ell_{\tau}=6\).
Let us examine the leading behaviour of the characters about \(j=p_{1}\). Since \(p_{1}\) is not a special point in moduli space (by the genericity assumption), the critical exponents around it must be integers. We substitute the expansion Eq. (3.8) in Eq. (3.28) and look at the solution at order \((j-p_{1})^{\alpha_{i}^{(1)}-1}\) to get the indicial equation:
\[\alpha_{i}^{(1)}(\alpha_{i}^{(1)}-2)=0 \tag{3.29}\]
so the exponents are \(\left(\alpha_{0}^{(1)},\alpha_{1}^{(1)}\right)=(0,2)\). Notice that in this process we have identified the solution \(\chi_{0}(j)\) with the exponent 0, and \(\chi_{1}(j)\) with exponent 2 (see footnote 5).
Footnote 5: It is important not to identify these two solutions with the two characters \(\chi_{0}(\tau)\) and \(\chi_{1}(\tau)\) that form the two independent CFT characters with integral expansions in \(q\). The reason is that here we are expanding around a point inside moduli space instead of the point \(\tau\to i\infty\). Hence each pair is in general a linear combination of the other pair.
When the exponents differ by an integer there is potentially a problem with single-valuedness of the solution. In fact the solution with \(\alpha^{(1)}=2\) always exists, but the solution with \(\alpha^{(1)}=0\) in general has a logarithmic term. If present, this term would render the corresponding character multivalued in \(j\) and therefore unphysical [27; 42]. To analyse this situation we start by inserting the expansion Eq. (3.8) in Eq. (3.28). At order \((j-p_{1})^{\alpha_{i}^{(1)}}\) we find:
\[a_{i,1}^{(1)}=\frac{\alpha_{i}^{(1)}(\frac{7}{6}p_{1}-1152)+ \alpha_{0}\alpha_{1}(p_{1}-b_{4,1})}{p_{1}(p_{1}-1728)\Big{(}1-\left(\alpha_{ i}^{(1)}\right)^{2}\Big{)}} \tag{3.30}\]
Inserting \(\alpha_{1}^{(1)}=2\) we determine \(a_{1,1}^{(1)}\) for this solution, and continuing in this way we are guaranteed to determine the subsequent coefficients. If we insert the other value \(\alpha_{0}^{(1)}=0\), we find:
\[a_{0,1}^{(1)}=\frac{\alpha_{0}\alpha_{1}(p_{1}-b_{4,1})}{p_{1}(p _{1}-1728)} \tag{3.31}\]
Now the recursion relation that should have determined the next coefficient \(a_{0,2}^{(1)}\) does not contain that variable. This is a consequence of the integral difference in indices that we noted above. Instead, it gives us a constraint on \(b_{4,1}\):
\[\alpha_{0}\alpha_{1}+\left(576-\frac{5p_{1}}{6}+\alpha_{0}\alpha_{1}(p_{1}-b_{4,1})\right)a_{0,1}^{(1)}=0, \tag{3.32}\]
Thus the above constraint is the condition that the second solution does not have a logarithmic piece. If we do not implement the constraint, one of the characters becomes multi-valued around a zero of the Wronskian and has to be rejected on physical grounds (see footnote 6).
Footnote 6: Such objects are called “weak VVMF” in [26].
Substituting Eq. (3.31) into Eq. (3.32) we get:
\[\alpha_{0}\alpha_{1}(p_{1}-b_{4,1})^{2}+\left(576-\frac{5}{6}p_{1}\right)(p_{1 }-b_{4,1})+p_{1}(p_{1}-1728)=0 \tag{3.33}\]
After multiplying by all the denominators, this becomes a quadratic curve in \(p_{1},(p_{1}-b_{4,1})\) with discriminant:
\[\frac{25}{36}-4\alpha_{0}\alpha_{1} \tag{3.34}\]
Using \(\alpha_{0}=-\frac{c}{24}\) and \(\alpha_{1}=\frac{c}{24}-\frac{5}{6}\) (the latter follows from Eq. (2.16)), we see that this is positive for all \(c\neq 10\), and the quadratic is a hyperbola. At \(c=10\) the curve degenerates to a parabola. From Eq. (3.33), notice that when \(b_{4,1}=p_{1}\) then we have \(p_{1}=0\) or \(p_{1}=1728\), in other words the pole has to be at one of the cusps \(\tau=\rho,\tau=i\) of moduli space. The curve Eq. (3.33) determines the accessory parameter in terms of the pole \(p_{1}\).
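The chain of steps leading from Eq. (3.30) to Eq. (3.33) can be automated. The following sympy sketch (ours, not the paper's code; `A` denotes \(\alpha_{0}\alpha_{1}\) and `b41` denotes \(b_{4,1}\)) inserts the exponent-zero Frobenius ansatz into Eq. (3.28), reads off \(a_{0,1}^{(1)}\) at leading order, and confirms that the first-order obstruction is exactly the curve Eq. (3.33):

```python
from sympy import symbols, Rational, solve, simplify, cancel, expand

t, p1, b1, A, c1, c2 = symbols('t p1 b41 A c1 c2')
j = p1 + t                                    # local coordinate around the movable pole

chi  = 1 + c1*t + c2*t**2                     # exponent-zero branch of Eq. (3.8)
psi2 = Rational(1, 2)/(j - 1728) + Rational(2, 3)/j - 1/t
psi4 = A*(j - b1)/(j*(j - 1728)*t)            # A stands for alpha0*alpha1

# multiply the MLDE by t to clear the simple pole, then expand to first order in t
L = (t*(chi.diff(t, 2) + psi2*chi.diff(t) + psi4*chi)).series(t, 0, 2).removeO()

c1_val = solve(L.coeff(t, 0), c1)[0]          # reproduces a_{0,1}^{(1)}, cf. Eq. (3.31)
obstruction = L.coeff(t, 1)                   # c2 has dropped out: this is Eq. (3.32)

curve  = cancel(obstruction.subs(c1, c1_val) * (p1*(p1 - 1728))**2 / A)
target = expand(A*(p1 - b1)**2 + (576 - Rational(5, 6)*p1)*(p1 - b1) + p1*(p1 - 1728))
print(simplify(curve - target))               # 0: the obstruction is exactly Eq. (3.33)
```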
It is useful to consider the asymptotic region of the curve Eq. (3.33) as \(p_{1}\to\infty\). In this limit, the Wronskian no longer has a zero in the finite region of moduli space, hence now we should find solutions with \(\ell=0\) and this is indeed what happens as we will see later in several examples. Defining:
\[x_{1}=1-\frac{b_{4,1}}{p_{1}} \tag{3.35}\]
we see that at large \(p_{1}\), Eq. (3.33) becomes:
\[\alpha_{0}\alpha_{1}x_{1}^{2}-\frac{5}{6}x_{1}+1=0 \tag{3.36}\]
with the solutions:
\[x_{1}=\frac{\frac{5}{6}\pm\sqrt{\frac{25}{36}-4\alpha_{0}\alpha_{1}}}{2\alpha _{0}\alpha_{1}} \tag{3.37}\]
Since \(\ell=6\), we have from Eq. (2.16) that \(\alpha_{0}+\alpha_{1}=-\frac{5}{6}\). This allows us to simplify the above equation to:
\[x_{1}=\Bigg{\{}-\frac{1}{\alpha_{0}},-\frac{1}{\alpha_{1}}\Bigg{\}} \tag{3.38}\]
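A one-line check (ours) that Eq. (3.38) indeed solves Eq. (3.36) once the valence formula \(\alpha_{0}+\alpha_{1}=-\tfrac{5}{6}\) is imposed:

```python
from sympy import symbols, Rational, simplify

a0, x = symbols('alpha0 x')
a1 = -Rational(5, 6) - a0                        # valence formula for ell = 6
quadratic = a0*a1*x**2 - Rational(5, 6)*x + 1    # Eq. (3.36)

print(simplify(quadratic.subs(x, -1/a0)))        # 0
print(simplify(quadratic.subs(x, -1/a1)))        # 0
```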
Next we consider generic values of \(\ell\) and show that the accessory parameters are determined similarly. As we will see, this allows the complete determination of \(\alpha_{0},\alpha_{1}\) for all \(\ell\) in terms of those for \(\ell<6\) which are already known.
### Determining accessory parameters: the general case
In the most general case with two characters and arbitrary \(\ell\), as long as the singularities are well-separated the phenomenon is very similar. For each singularity \(p_{I}\) we get one constraint on the combined set of \(b_{4,I}\) and \(p_{I}\) by trying to calculate the second-order coefficient \(a_{0,2}^{(I)}\). This provides us with a set of simultaneous equations for the \(b_{4,I}\). In the cases \(\ell=6r,6r+2\) this is sufficient to determine all the \(b_{4,i}\) in terms of the \(p_{I}\). We now consider each family in more detail.
#### \(\ell=6r\) case, \(r\geq 0\)
In this case, we have \(r\) full zeroes that are well-separated from each other and from the special points \(j=0,1728\). Substituting Eq. (3.8) into the \(\ell=6r\) case of Eq. (3.25) and equating the coefficient of \((j-p_{I})^{\alpha_{i}^{(I)}-1}\) to zero, we get the indicial equation:
\[\alpha_{i}^{(I)}\left(\alpha_{i}^{(I)}-2\right)=0, \tag{3.39}\]
and hence \((\alpha_{0}^{(I)},\alpha_{1}^{(I)})=(0,2),\ 1\leq I\leq r\). At the next order in the expansion, setting to zero the coefficient of \((j-p_{I})^{\alpha_{i}^{(I)}}\) we get:
\[a_{i,1}^{(I)}=\frac{1}{p_{I}(p_{I}-1728)\Big{(}1-\big{(}\alpha_{i}^{(I)}\big{)}^{2}\Big{)}}\left(\alpha_{0}\alpha_{1}\,\frac{\prod\limits_{J=1}^{r}(p_{I}-b_{4,J})}{\prod\limits_{\begin{subarray}{c}J=1\\ J\neq I\end{subarray}}^{r}(p_{I}-p_{J})}+\alpha_{i}^{(I)}\left(\frac{7p_{I}}{6}-1152\right)\right) \tag{3.40}\]
which is manifestly a generalisation of Eq. (3.30). Choosing \(\alpha_{0}^{(I)}=0\) in the above, we get:
\[a_{0,1}^{(I)}=\frac{\alpha_{0}\alpha_{1}\prod\limits_{J=1}^{r}(p_{I}-b_{4,J}) }{p_{I}(p_{I}-1728)\prod\limits_{J\neq I}^{r}(p_{I}-p_{J})}, \tag{3.41}\]
At the next order, we set to zero the coefficient of \((j-p_{I})^{\alpha_{i}^{(1)}+1}\). First let us examine the term that contains \(a_{0,2}^{(I)}\):
\[a_{0,2}^{(I)}\,p_{I}(p_{I}-1728)\Bigg{(}\prod\limits_{J=1\atop J\neq I}^{r}(p _{I}-p_{J})\,\Bigg{)}\alpha_{i}^{(I)}\left(\alpha_{i}^{(I)}-2\right) \tag{3.42}\]
Since this is proportional to the indicial equation, the above expression is identically zero and hence the dependence on \(a_{0,2}^{(I)}\) drops out. In its place, we find a set of constraint equations on the parameters of the MLDE (assuming the \(p_{I}\) are distinct from the accessory parameters \(b_{4,J}\)):
\[\begin{split}& a_{0,1}^{(I)}\left(\frac{1}{p_{I}(p_{I}-1728)}\left(576-\frac{5p_{I}}{6}\right)-2\sum_{\begin{subarray}{c}J=1\\ J\neq I\end{subarray}}^{r}\frac{1}{p_{I}-p_{J}}+\frac{\alpha_{0}\alpha_{1}}{p_{I}(p_{I}-1728)}\frac{\prod\limits_{J=1}^{r}(p_{I}-b_{4,J})}{\prod\limits_{\begin{subarray}{c}J=1\\ J\neq I\end{subarray}}^{r}(p_{I}-p_{J})}\right)\\ &\qquad\qquad+\ \frac{\alpha_{0}\alpha_{1}}{p_{I}(p_{I}-1728)}\frac{\prod\limits_{J=1}^{r}(p_{I}-b_{4,J})}{\prod\limits_{\begin{subarray}{c}J=1\\ J\neq I\end{subarray}}^{r}(p_{I}-p_{J})}\left(\sum_{J=1}^{r}\frac{1}{p_{I}-b_{4,J}}\right)=0.\end{split} \tag{3.43}\]
Now substituting the value of \(a_{0,1}^{(I)}\) from Eq. (3.41) in Eq. (3.43) we get:
\[\frac{1}{p_{I}(p_{I}-1728)}\left(576-\frac{5p_{I}}{6}+\alpha_{0}\alpha_{1}\frac{\prod\limits_{J=1}^{r}(p_{I}-b_{4,J})}{\prod\limits_{\begin{subarray}{c}J=1\\ J\neq I\end{subarray}}^{r}(p_{I}-p_{J})}\right)-2\sum_{\begin{subarray}{c}J=1\\ J\neq I\end{subarray}}^{r}\frac{1}{p_{I}-p_{J}}+\sum_{J=1}^{r}\frac{1}{p_{I}-b_{4,J}}=0. \tag{3.44}\]
Thus we get a set of \(r\) coupled equations for the accessory parameters \(b_{4,J}\) which can be solved in principle to determine them as functions of the \(p_{I}\).
We can think of these equations as defining a sub-manifold or algebraic variety in the \(2r\)-dimensional parameter space of the \(p_{I}\) and \(b_{4,I}\). We will call them "accessory equations". If we multiply out all the denominators, these become a set of \(r\) polynomials of degree \(2r\). The special case in the last section, Eq. (3.33), corresponds to \(r=1\), hence a single quadratic equation, namely a hyperbola.
We see that for general \(r\), each equation is separately invariant under a permutation of the \(b_{4,I}\), while the equations are permuted among themselves if we permute the \(p_{I}\). These facts suggest the use of symmetric polynomials in the \(p_{I}\) as well as the \(b_{4,I}\), which will be introduced in Section 5. Also, the equations become singular if any two of the \(p_{I}\) coincide with each other or with an accessory parameter \(b_{4,J}\). This is as expected, since both such coincidences change the nature of the original equation - the first violates the genericity assumption, while the second cancels a pole in the last term of the MLDE.
Now let us consider the asymptotic region of the sub-manifold defined by Eq. (3.44), by taking \(p_{r}\to\infty\) together with \(b_{4,r}\) while keeping \(x_{r}=1-\frac{b_{4,r}}{p_{r}}\) fixed. Then the equations for
\(I=1,2,\cdots,r-1\) become:
\[\begin{split}\frac{1}{p_{I}(p_{I}-1728)}&\left(\,576- \frac{5p_{I}}{6}+\alpha_{0}\alpha_{1}\,(1-x_{r})\frac{\prod\limits_{J=1}^{r-1}( p_{I}-b_{4,J})}{\prod\limits_{J=1}^{r-1}(p_{I}-p_{J})}\,\,\right)-2\sum \limits_{\genfrac{}{}{0.0pt}{}{J=1}{J\neq I}}^{r-1}\frac{1}{p_{I}-p_{J}}\\ &+\sum\limits_{J=1}^{r-1}\frac{1}{p_{I}-b_{4,J}}=0,\qquad 1\leq I \leq r-1\end{split} \tag{3.45}\]
while the equation for \(I=r\) becomes:
\[\alpha_{0}\alpha_{1}\,x_{r}^{2}+\left(\frac{1-\ell}{6}\right)x_{r}+1=0. \tag{3.46}\]
The second equation determines \(x_{r}\) in terms of the product of exponents \(\alpha_{0}\alpha_{1}\):
\[x_{r}=\frac{1}{2\alpha_{0}\alpha_{1}}\!\left(\frac{\ell-1}{6}\pm\sqrt{\frac{( \ell-1)^{2}}{36}-4\alpha_{0}\alpha_{1}}\right) \tag{3.47}\]
We can simplify this using the valence formula Eq. (2.16) which tells us that \(\alpha_{0}+\alpha_{1}=\frac{1-\ell}{6}\). Then:
\[\begin{split} x_{r}&=\frac{1}{2\alpha_{0}\alpha_{1}}\Big{(}-(\alpha_{0}+\alpha_{1})\pm(\alpha_{0}-\alpha_{1})\Big{)}\\ &=\Bigg{\{}-\frac{1}{\alpha_{0}},-\frac{1}{\alpha_{1}}\Bigg{\}}\\ &=\Bigg{\{}\frac{24}{c},\frac{24}{c-24h}\Bigg{\}}\end{split} \tag{3.48}\]
Meanwhile, the first set of equations is precisely the one for the MLDE with \(r\) replaced by \(r-1\), i.e. Wronskian index \(\ell\) replaced by \(\ell-6\), with the replacement:
\[\begin{split}(\alpha_{0}\alpha_{1})^{(\ell-6)}&=( 1-x_{r})(\alpha_{0}\alpha_{1})^{(\ell)}\\ &=\Bigg{\{}\frac{\alpha_{0}^{(\ell)}+1}{\alpha_{0}^{(\ell)}}, \frac{\alpha_{1}^{(\ell)}+1}{\alpha_{1}^{(\ell)}}\Bigg{\}}(\alpha_{0}\alpha_ {1})^{(\ell)}\\ &=\Bigg{\{}\big{(}(\alpha_{0}+1)\alpha_{1}\big{)}^{(\ell)}, \big{(}\alpha_{0}(\alpha_{1}+1)\big{)}^{(\ell)}\Bigg{\}}\end{split} \tag{3.49}\]
Applying the procedure recursively, this equation determines \(\alpha_{0},\alpha_{1}\) (up to exchange of characters) for all \(\ell=6r\) given their values for \(\ell=0\), which are known from [1].
#### \(\ell=6r+2\) case, \(r\geq 0\)
For the case of \(\ell=6r+2\), we again have \(r\) distinct full zeros at \(p_{I}\), and also a zero of order \(\frac{1}{3}\) at \(\tau=\rho\left(j=0\right)\). The genericity assumption also says that \(p_{I}\neq 0,1728\). This case is very similar to the \(\ell=6r\) case and we again find \((\alpha_{0}^{(I)},\alpha_{1}^{(I)})=(0,2),\ 1\leq I\leq r\). At the next order we have:
\[a_{i,1}^{(I)}=\frac{1}{p_{I}(p_{I}-1728)\Big{(}1-\big{(}\alpha_{i}^{(I)}\big{)}^{2}\Big{)}}\left(\alpha_{0}\alpha_{1}\,\frac{\prod\limits_{J=1}^{r}(p_{I}-b_{4,J})}{\prod\limits_{\begin{subarray}{c}J=1\\ J\neq I\end{subarray}}^{r}(p_{I}-p_{J})}+\alpha_{i}^{(I)}\left(\frac{5p_{I}}{6}-576\right)\right) \tag{3.50}\]
Now choosing \(\alpha_{0}^{(I)}=0\) in the above we get:
\[a_{0,1}^{(I)}=\frac{\alpha_{0}\alpha_{1}\prod\limits_{J=1}^{r}(p_{I}-b_{4,J})}{p_{I}(p_{I}-1728)\prod\limits_{\begin{subarray}{c}J=1\\ J\neq I\end{subarray}}^{r}(p_{I}-p_{J})}, \tag{3.51}\]
At the next order we find the constraint:
\[\begin{split}& a_{0,1}^{(I)}\left(\frac{1}{p_{I}(p_{I}-1728)}\left(1152-\frac{7p_{I}}{6}\right)-2\sum_{\begin{subarray}{c}J=1\\ J\neq I\end{subarray}}^{r}\frac{1}{p_{I}-p_{J}}+\frac{\alpha_{0}\alpha_{1}}{p_{I}(p_{I}-1728)}\frac{\prod\limits_{J=1}^{r}(p_{I}-b_{4,J})}{\prod\limits_{\begin{subarray}{c}J=1\\ J\neq I\end{subarray}}^{r}(p_{I}-p_{J})}\right)\\ &\qquad\qquad+\ \frac{\alpha_{0}\alpha_{1}}{p_{I}(p_{I}-1728)}\frac{\prod\limits_{J=1}^{r}(p_{I}-b_{4,J})}{\prod\limits_{\begin{subarray}{c}J=1\\ J\neq I\end{subarray}}^{r}(p_{I}-p_{J})}\left(\sum_{J=1}^{r}\frac{1}{p_{I}-b_{4,J}}\right)=0.\end{split} \tag{3.52}\]
Now substituting the value of \(a_{0,1}^{(I)}\) from Eq. (3.51) in Eq. (3.52) we get,
\[\frac{1}{p_{I}(p_{I}-1728)}\left(1152-\frac{7p_{I}}{6}+\alpha_{0}\alpha_{1}\frac{\prod\limits_{J=1}^{r}(p_{I}-b_{4,J})}{\prod\limits_{\begin{subarray}{c}J=1\\ J\neq I\end{subarray}}^{r}(p_{I}-p_{J})}\right)-2\sum_{\begin{subarray}{c}J=1\\ J\neq I\end{subarray}}^{r}\frac{1}{p_{I}-p_{J}}+\sum_{J=1}^{r}\frac{1}{p_{I}-b_{4,J}}=0. \tag{3.53}\]
Thus, once more we get coupled equations for the \(b_{4,J}\) which define a sub-manifold of the original space and determine the accessory parameters as functions of the \(p_{I}\). The analysis of the asymptotic behaviour is precisely the same as for \(\ell=6r\), and we again end up with Eq. (3.46) where now \(\ell=6r+2\), as well as a version of Eq. (3.45) where \(-\frac{5}{6}p_{I}\) is replaced by \(-\frac{7}{6}p_{I}\) and \(576\) is replaced by \(1152\). Furthermore, Eq. (3.49) remains unchanged in this case.
Since we have argued above that the \(\ell=6r+4\) case can always be reduced to \(\ell=6r\) by extracting a factor \(j^{\frac{1}{3}}\), we do not need to find the accessory equations separately for that case. Hence at this stage our analysis of accessory equations is complete.
To summarise, what we have learned from the asymptotic analysis is that Eq. (3.49) is true for all \(\ell=6r+u\) where \(u=0,2\). Although there are two choices in this equation, it is clear that they are related by an exchange of characters. We can invert Eq. (3.49) and iterate it \(r\) times (where \(\ell=6r+u\)) to get:
\[\begin{split}\alpha_{0}^{(\ell=6r+u)}&=\alpha_{0 }^{(\ell=u)}-r\\ \alpha_{1}^{(\ell=6r+u)}&=(\alpha_{1})^{(\ell=u)} \end{split} \tag{3.54}\]
Here we have chosen \(\alpha_{0}=-\frac{c}{24}\) and \(\alpha_{1}=-\frac{c}{24}+h\). So the above equations tell us that:
\[\begin{split} c^{(\ell=6r+u)}&=c^{(\ell=u)}+24r\\ h^{(\ell=6r+u)}&=h^{(\ell=u)}+r\end{split} \tag{3.55}\]
Thus, using only the MLDE for generic \(\ell\), we have demonstrated that the central charge and conformal dimension of a solution for any \(\ell=6r,6r+2\) are related as above to those of an MLDE solution with \(\ell=0,2\) (with a corresponding result for \(\ell=6r+4\) following from factorisation of the solutions in that case). As we show below, this perfectly agrees with the analysis from quasi-characters [6].
### Admissible range of central charges for \((2,\ell)\) solutions
In this sub-section, we study the admissible range of central charges for \((2,\ell)\) solutions, based on the asymptotic analysis and knowledge of the admissibility range for \((2,0)\) and \((2,2)\) solutions. Then we will present the results for \(\ell=6,8,12,14\), which will be used in upcoming sections.
We first note that Eq. (3.49) can be solved for \(c^{(\ell)}\) in terms of \(c^{(\ell-6)}\) by using the valence formula and then replacing everything in terms of central charges. There are two possibilities for the product of exponents in this equation, each of which translates into two possibilities for the relation between central charges. Thus we get:
\[\begin{split} c^{(\ell)}=c^{(\ell-6)}+24&\text{or} \quad c^{(\ell)}=4(\ell-1)-c^{(\ell-6)}\\ c^{(\ell)}=c^{(\ell-6)}&\text{or}\quad c^{(\ell)}=4 (\ell-7)-c^{(\ell-6)}\end{split} \tag{3.56}\]
respectively. Imposing unitarity via \(h^{(\ell)}>0\), we also get the lower bounds:
\[h^{(\ell)}=\frac{c-2(\ell-1)}{12}>0\implies c^{(\ell)}>2(\ell-1) \tag{3.57}\]
One of the four possibilities in Eq. (3.56) can be ruled out, namely \(c^{(\ell)}=4(\ell-7)-c^{(\ell-6)}\). To see this, let us suppose it is allowed. Then the unitarity bound Eq. (3.57) for \(c^{(\ell)}\) gives \(4(\ell-7)-c^{(\ell-6)}>2(\ell-1)\) implying \(c^{(\ell-6)}<2(\ell-13)\). However, the unitarity bound directly implies that \(c^{(\ell-6)}>2(\ell-7)\). Thus we have a contradiction and the above possibility is ruled out.
It follows that at each step we can only have the following three possibilities:
\[c^{(\ell)}=c^{(\ell-6)},\quad c^{(\ell)}=c^{(\ell-6)}+24,\quad c^{(\ell)}=4( \ell-1)-c^{(\ell-6)} \tag{3.58}\]
We can now recursively work out the ranges for any given \(\ell=6r,6r+2\) starting from the known ranges for \(\ell=0,2\)[1, 25]:
\[\begin{split}\ell=0:&\quad c^{(\ell=0)}\in(0,8) \\ \ell=2:&\quad c^{(\ell=2)}\in(16,24)\end{split} \tag{3.59}\]
and applying all the possibilities above subject to the constraint Eq. (3.57). We find (see footnote 7):
Footnote 7: For \(\ell=6\), we rule out the case: \(c^{(\ell=6)}=20-c^{(\ell=0)}\), which in turn implies \(c^{(\ell=6)}\in(12,20)\), by requiring the admissibility of \(m_{k}^{(6)}\), for higher orders in \(k\sim 2000\).
\[\begin{split}\ell=6:&\quad c^{(\ell=6)}\in(24,32) \\ \ell=8:&\quad c^{(\ell=8)}\in(16,24)\ \cup\ (40,48)\\ \ell=12:&\quad c^{(\ell=12)}\in(24,32)\ \cup\ (48,56) \\ \ell=14:&\quad c^{(\ell=14)}\in(28,36)\ \cup\ (40,48)\ \cup\ (64,72) \end{split} \tag{3.60}\]
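The recursion just described is easy to mechanise. The sketch below (ours; open intervals are tracked by their endpoints, and the helper `next_ranges` is purely illustrative) applies Eq. (3.58) together with the bound Eq. (3.57) and reproduces the ranges in Eq. (3.60), including the \(\ell=6\) branch \((12,20)\) that is subsequently removed by the admissibility check of footnote 7:

```python
def next_ranges(prev, ell):
    """One step of Eq. (3.58): c -> c, c + 24 or 4(ell-1) - c, kept only if c > 2(ell-1)."""
    out = set()
    for lo, hi in prev:
        for a, b in [(lo, hi), (lo + 24, hi + 24), (4*(ell - 1) - hi, 4*(ell - 1) - lo)]:
            if b > 2*(ell - 1):               # unitarity bound Eq. (3.57); intervals are open
                out.add((max(a, 2*(ell - 1)), b))
    return sorted(out)

print(next_ranges([(0, 8)], 6))               # [(12, 20), (24, 32)]; (12, 20) is the
                                              # branch excluded by footnote 7
print(next_ranges([(16, 24)], 8))             # [(16, 24), (40, 48)]
print(next_ranges([(24, 32)], 12))            # [(24, 32), (48, 56)]
print(next_ranges([(16, 24), (40, 48)], 14))  # [(28, 36), (40, 48), (64, 72)]
```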
Using Eq. (3.58) for \(\ell=6,8,12,14\), we get the following admissible sets for the above \(\ell\) values:
\[\begin{split} c^{(\ell=6)}\in&\left\{\frac{122}{5}, 25,26,\frac{134}{5},28,\frac{146}{5},30,31,\frac{158}{5}\right\}\\ c^{(\ell=8)}\in&\left\{\frac{82}{5},17,18,\frac{94} {5},20,\frac{106}{5},22,23,\frac{118}{5}\right\}\ \cup\ \left\{\frac{202}{5},41,42,\frac{214}{5},44,\frac{226}{5},46,47,\frac{238}{5 }\right\}\\ c^{(\ell=12)}\in&\left\{\frac{122}{5},25,26,\frac{1 34}{5},28,\frac{146}{5},30,31,\frac{158}{5}\right\}\ \cup\ \left\{\frac{242}{5},49,50,\frac{254}{5},52,\frac{266}{5},54,55,\frac{278}{5 }\right\}\\ c^{(\ell=14)}\in&\left\{\frac{142}{5},29,30,\frac{ 154}{5},32,\frac{166}{5},34,35,\frac{178}{5}\right\}\ \cup\ \left\{\frac{202}{5},41,42,\frac{214}{5},44,\frac{226}{5},46,47,\frac{238}{5 }\right\}\\ &\quad\cup\ \left\{\frac{322}{5},65,66,\frac{334}{5},68,\frac{346}{5 },70,71,\frac{358}{5}\right\}\end{split} \tag{3.61}\]
Let us digress a bit and conclude this sub-section with an observation regarding the modular data for 2-character admissible solutions. Using Eqs. (3.56), and the admissibility
range for \((2,0)\) and \((2,2)\) solutions, we note that for any \(\ell=6r\), or \(6r+2\) we have \(c^{(\ell)}=\left\{n+c^{(\ell=0)},m+c^{(\ell=2)}\right\}\), where \(n,m\) are non-negative integers (as \(\ell\) is a non-negative integer). Since \(5c^{(\ell=0)}\) and \(5c^{(\ell=2)}\) are known to be integers, the above observation implies that \(5c^{(\ell)}\) is also an integer. For \(\ell=6r+4\), we already know that \(\chi^{(\ell=6r+4)}=j^{\frac{1}{3}}\chi^{(\ell=6r)}\) and hence \(5c^{(\ell=6r+4)}\) is also an integer. It follows that \(5c\) is an integer for any admissible \((2,\ell)\) solution. This fact was first noted in [43] where the result was derived using representation theory of \(\mathrm{PSL}(2,\mathbb{Z})\). Here we have derived it using just the MLDE approach. Similar results about the modular data for \(n\)-character admissible solutions with \(n=3,4,5\) have been obtained in [13]. It is worth exploring if those results can also be derived within the MLDE approach.
## 4 Detailed solution for the case of one movable pole
With the understanding of this system that we have described above, it is relatively straightforward to directly find the most general admissible solutions of the MLDE in the case \(\ell=6\), where there is one movable pole \(p_{1}\) and one accessory parameter. We first present the solution and then examine its relation to the quasi-character approach.
We will study the \((2,6)\) case in detail, with some formulae reserved for Appendix C.2, while a similar analysis for the \((2,8)\) case can be found in Appendix D. For future use, a review of the Frobenius solution for the \((2,0)\) and \((2,2)\) MLDEs is presented in Appendix C.1. This contains formulae that will be needed below.
### Solving the MLDE with one movable pole
In this sub-section we will adapt the various elements of the theory of MLDEs developed so far into an organised method to solve them. This method involves incorporation of the accessory equation into the solution from the outset. It allows us, as we will see, to solve the MLDE in the present case completely and thereby derive features of the solution that were suggested by quasi-character theory [6]. The \((2,6)\) MLDE in the \(\tau\)-plane is:
\[\left(D^{2}+\frac{E_{4}^{2}\,E_{6}}{E_{4}^{3}-p_{1}\,\Delta}\,D+\frac{\alpha_{0}\,\alpha_{1}\,E_{4}(E_{4}^{3}-b_{4,1}\,\Delta)}{E_{4}^{3}-p_{1}\,\Delta}\right)\chi(\tau)=0. \tag{4.1}\]
In this form, we have three parameters, the rigid parameter \(\alpha_{0}\alpha_{1}\) and two non-rigid parameters, the movable pole \(p_{1}\) and the accessory parameter \(b_{4,1}\). We use the first three orders of the Frobenius solution as applied to the identity character. At leading order, we have the indicial equation which determines the rigid parameter in terms of the central charge, and at
the second and third order we have the following:
\[\alpha_{0}\alpha_{1} = \frac{c(20-c)}{576}, \tag{4.2}\] \[m_{1}^{(6)} = f_{1}(c,p_{1},b_{4,1}) \tag{4.3}\] \[m_{2}^{(6)} = f_{2}(c,p_{1},b_{4,1}) \tag{4.4}\]
Here \(m_{1}^{(6)}\) and \(m_{2}^{(6)}\) are the Fourier coefficients of the identity character; the superscript (6) indicates that they belong to the \((2,6)\) solution. The explicit forms of \(f_{1}(c,p_{1},b_{4,1})\) and \(f_{2}(c,p_{1},b_{4,1})\) are given in Appendix C.2.
In the next step, one solves for the three parameters of the MLDE in terms of objects associated to the identity character, namely the central charge \(c\) and the Fourier coefficients \(m_{1}^{(6)}\) and \(m_{2}^{(6)}\). This has already been done for \(\alpha_{0}\alpha_{1}\) in Eq. (4.2). For the remaining parameters we obtain:
\[p_{1} = f_{3}(c,m_{1}^{(6)},m_{2}^{(6)}) \tag{4.5}\] \[b_{4,1} = f_{4}(c,m_{1}^{(6)},m_{2}^{(6)}) \tag{4.6}\]
The explicit expressions for the right hand sides can be found in Appendix C.2. We note that both \(p_{1}\) and \(b_{4,1}\) are rational functions of \(m_{1}^{(6)}\) and \(m_{2}^{(6)}\) with coefficients being rational functions of \(c\). In particular, we see that the movable pole in the \((2,6)\) solution is rational, as already noted in [27]. Later we will discuss the general version of this statement.
The next step is to invoke the accessory equation Eq. (3.33), insert the values of \(p_{1}\) and \(b_{4,1}\), previously determined in Eq. (4.5) and Eq. (4.6), and solve for \(m_{2}^{(6)}\) in terms of \(m_{1}^{(6)}\) and \(c\). Remarkably, we get the following linear equation in \(m_{1}^{(6)}\):
\[m_{2}^{(6)} = A_{2}(c)+B_{2}(c)\ m_{1}^{(6)} \tag{4.7}\]
where \(A_{2}(c)\) and \(B_{2}(c)\) are given in Appendix C.2. Consulting the Frobenius solution of the \((2,0)\) MLDE reviewed in Appendix C.1, we immediately find a relation between the coefficient \(B_{2}(c)\) above and the degeneracy for the \((2,0)\) MLDE solution at a central charge \(c-24\):
\[B_{2}(c)=m_{1}^{(0)}(c-24) \tag{4.8}\]
Some additional calculation shows that:
\[A_{2}(c)=m_{2}^{(0)}(c)-m_{1}^{(0)}(c-24)\,m_{1}^{(0)}(c) \tag{4.9}\]
Thus Eq. (4.7) is the same as:
\[m_{2}^{(6)}=m_{2}^{(0)}(c)+m_{1}^{(0)}(c-24)\ \ (m_{1}^{(6)}-m_{1}^{(0)}(c)). \tag{4.10}\]
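As a quick consistency check of Eq. (4.10), one can use the \((2,0)\) Frobenius data that appear again in Section 5.2: at \(c=25\) the \((2,0)\) solution has \(m_{1}^{(0)}(25)=-245\) and \(m_{2}^{(0)}(25)=142640\), while at \(c-24=1\) (the \(A_{1,1}\) character) one has \(m_{1}^{(0)}(1)=3\). Eq. (4.10) then gives
\[m_{2}^{(6)}=142640+3\,\big{(}m_{1}^{(6)}+245\big{)}=143375+3\,m_{1}^{(6)},\]
which is exactly the \(c=25\) family listed in Section 4.3 below.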
At the next stage, we insert Eq. (4.10) in Eq. (4.5) and Eq. (4.6) to obtain:
\[p_{1} = f_{5}(c,m_{1}^{(6)}) \tag{4.11}\] \[b_{4,1} = f_{6}(c,m_{1}^{(6)}) \tag{4.12}\]
The explicit expressions for the right hand sides are given in Eq. (C.16) and Eq. (C.17). These equations now have a nice geometrical interpretation. The space of MLDE parameters is three dimensional, co-ordinatized by \(\alpha_{0}\alpha_{1}\) (or, via Eq. (4.2), the central charge \(c\)), \(p_{1}\) and \(b_{4,1}\). For a fixed central charge \(c\), we have the \(p_{1}-b_{4,1}\) plane. The algebraic variety defined by the accessory equation Eq. (3.33) is a hyperbola in this plane and the equations Eq. (4.11) and Eq. (4.12) are its parametric equations with \(m_{1}^{(6)}\) serving as a parameter on the curve.
We carry on solving the MLDE to higher order. At the next order, after using Eq. (4.10) we obtain the following:
\[m_{3}^{(6)} =A_{3}(c)+B_{3}(c)\ m_{1}^{(6)} \tag{4.13}\]
where \(A_{3}(c)\) and \(B_{3}(c)\) are given in Eq. (C.18). It is again remarkable that \(m_{3}^{(6)}\) has a linear dependence on \(m_{1}^{(6)}\). In the same way as was done above, one shows that \(m_{3}^{(6)}\) can be written in terms of the \((2,0)\) solution as follows:
\[m_{3}^{(6)}=m_{3}^{(0)}(c)+m_{2}^{(0)}(c-24)\ (m_{1}^{(6)}-m_{1}^{(0)}(c)), \tag{4.14}\]
which is very similar to the form of \(m_{2}^{(6)}\) in Eq. (4.10).
This motivates us to propose the relation:
\[m_{k}^{(6)}=m_{k}^{(0)}(c)+m_{k-1}^{(0)}(c-24)\ (m_{1}^{(6)}-m_{1}^{(0)}(c)). \tag{4.15}\]
We have performed a computer check of this phenomenon to order 8. We expect it to hold for all \(k\geq 2\) and hope to provide a proof in future work.
Notice that we can extend Eq. (4.15) to include \(k=1\). When we plug in \(k=1\) in Eq. (4.15) we get \(m_{1}^{(6)}=m_{1}^{(0)}(c)+m_{0}^{(0)}(c-24)\,(m_{1}^{(6)}-m_{1}^{(0)}(c))\) which is an identity after noting \(m_{0}^{(0)}(c-24)=1\).
Now Eq. (4.15) can be converted into an equation relating the identity characters at \(c\) and \(c-24\). We then compute the non-identity character of the \((2,6)\) MLDE and find that it satisfies the same equation, leading to:
\[\chi_{i}^{(6)}=\chi_{i}^{(0)}(c)+(m_{1}^{(6)}-m_{1}^{(0)}(c))\,\chi_{i}^{(0)}( c-24),\quad i=0,1 \tag{4.16}\]
We should emphasize that Eq. (4.16) holds for all Frobenius solutions of the \((2,6)\) MLDE without any qualifiers such as admissibility, integrality etc.: every Frobenius solution of the \((2,6)\) MLDE can be written as a sum of two Frobenius solutions of the \((2,0)\) MLDE.
Now we impose admissibility. For this, we impose integrality of the \(m_{k}^{(6)}\); each of the relations Eq. (4.15), for \(k\geq 2\), then leads to a Diophantine equation after defining \(\mathsf{N}=5c\). The first two are
\[\mathsf{N}^{4}+(2m_{1}^{(6)}-427)\,\mathsf{N}^{3}+(-656m_{1}^{(6)}+ 2m_{2}^{(6)}+41140)\,\mathsf{N}^{2}\] \[\qquad+(71480m_{1}^{(6)}-560m_{2}^{(6)}+1124700)\,\mathsf{N}+37400 m_{2}^{(6)}-2587200m_{1}^{(6)}=0 \tag{4.17}\] \[2\,\mathsf{N}^{6}+(3m_{1}^{(6)}-1308)\,\mathsf{N}^{5}+(274648-16 65m_{1}^{(6)})\,\mathsf{N}^{4}+(369774m_{1}^{(6)}-6m_{3}^{(6)}-18801040)\, \mathsf{N}^{3}\] \[\qquad+(-41075340m_{1}^{(6)}+3060m_{3}^{(6)}+453302400)\,\mathsf{N }^{2}+(2282045400m_{1}^{(6)}-498600m_{3}^{(6)}+22315264000)\,\mathsf{N}\] \[\qquad\qquad+25806000m_{3}^{(6)}-50725224000m_{1}^{(6)}=0 \tag{4.18}\]
In particular this shows directly that \(\mathsf{N}=5c\) is an integer (it is a root of a monic polynomial with integer coefficients). Now, inserting the admissible set of central charges (3.61) into the above equations, we obtain all the possible admissible solutions. We also verify integrality of the non-identity character up to the same order. The resulting solutions can then be computed to very high orders (\(q^{2000}\) in this case) and verified to be admissible. We find an infinite family of admissible solutions for each of the following central charges:
\[c=\frac{122}{5},25,26,\frac{134}{5},28,\frac{146}{5},30,31,\frac{158}{5} \tag{4.19}\]
labelled by the free integer \(m_{1}^{(6)}\geq 0\).
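These constraints are straightforward to use in practice. As an illustration, here is a minimal Python sketch that solves Eq. (4.17) (which is linear in \(m_{2}^{(6)}\)) for each admissible value of \(\mathsf{N}=5c\) in Eq. (4.19); at \(c=25\), for example, it returns \(m_{2}^{(6)}=143375+3m_{1}^{(6)}\), in agreement with the family quoted in Section 4.3. Only this first constraint is implemented here; the higher ones, and integrality of the non-identity character, must still be imposed as described above.

```python
from fractions import Fraction

def m2_from_eq_4_17(N, m1):
    """Solve the first Diophantine constraint, Eq. (4.17), for m2 (it enters linearly)."""
    rest = (N**4 + (2*m1 - 427)*N**3 + (-656*m1 + 41140)*N**2
            + (71480*m1 + 1124700)*N - 2587200*m1)
    return Fraction(-rest, 2*N**2 - 560*N + 37400)

# admissible values of N = 5c from Eq. (4.19)
for N in (122, 125, 130, 134, 140, 146, 150, 155, 158):
    const = m2_from_eq_4_17(N, 0)
    slope = m2_from_eq_4_17(N, 1) - const
    print(f"c = {Fraction(N, 5)}:  m2 = {const} + {slope} * m1")
```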
We now study the other MLDE with a single movable pole, the \((2,8)\) MLDE:
\[\left(D^{2}+\frac{4}{3}\,\frac{E_{6}\left(E_{4}^{3}-\frac{p_{1}}{4}\,\Delta \right)}{E_{4}\left(E_{4}^{3}-p_{1}\,\Delta\right)}D+\frac{\alpha_{0}\,\alpha _{1}\,E_{4}^{2}\left(E_{4}^{3}-b_{4,1}\,\Delta\right)}{E_{4}\left(E_{4}^{3}-p _{1}\,\Delta\right)}\right)\chi(\tau)=0 \tag{4.20}\]
In this form, we have three parameters, the rigid parameter \(\alpha_{0}\alpha_{1}\) and two non-rigid parameters, the movable pole \(p_{1}\) and the accessory parameter \(b_{4,1}\). We use the first three orders of the Frobenius solution as applied to the identity character. At leading order, we have the indicial equation which determines the rigid parameter in terms of the central charge, and at the second and third order we have the following :
\[\alpha_{0}\alpha_{1} = \frac{c(28-c)}{576}, \tag{4.21}\] \[m_{1}^{(8)} = \widetilde{f}_{1}(c,p_{1},b_{4,1}) \tag{4.22}\] \[m_{2}^{(8)} = \widetilde{f}_{2}(c,p_{1},b_{4,1}) \tag{4.23}\]
Here \(m_{1}^{(8)}\) and \(m_{2}^{(8)}\) are the Fourier coefficients of the identity character; the superscript (8) indicates that they belong to the \((2,8)\) solution. The explicit forms of \(\widetilde{f}_{1}(c,p_{1},b_{4,1})\) and \(\widetilde{f}_{2}(c,p_{1},b_{4,1})\) are given in Eq. (C.19) and Eq. (C.20).
In the next step, one solves for the three parameters of the MLDE in terms of objects associated to the identity character, namely the central charge \(c\) and the Fourier coefficients \(m_{1}^{(8)}\) and \(m_{2}^{(8)}\). This has already been done for \(\alpha_{0}\alpha_{1}\) in Eq. (4.21). For the remaining parameters we obtain:
\[p_{1} = \widetilde{f}_{3}(c,m_{1}^{(8)},m_{2}^{(8)}) \tag{4.24}\] \[b_{4,1} = \widetilde{f}_{4}(c,m_{1}^{(8)},m_{2}^{(8)}) \tag{4.25}\]
The explicit expressions for the right hand sides can be found in the Appendix. We note that both \(p_{1}\) and \(b_{4,1}\) are rational functions of \(m_{1}^{(8)}\) and \(m_{2}^{(8)}\) with coefficients being rational functions of \(c\). In particular, we see that the movable pole in the \((2,8)\) solution is rational.
The next step is to invoke the accessory equation Eq. (3.53), insert the values of \(p_{1}\) and \(b_{4,1}\), previously determined in Eq. (4.24) and Eq. (4.25), and solve for \(m_{2}^{(8)}\) in terms of \(m_{1}^{(8)}\) and \(c\). Similar to the \((2,6)\) computation, remarkably, we get the following linear equation in \(m_{1}^{(8)}\):
\[m_{2}^{(8)}=m_{2}^{(2)}(c)+m_{1}^{(2)}(c-24)\ \ (m_{1}^{(8)}-m_{1}^{(2)}(c)). \tag{4.26}\]
At the next stage, we insert Eq. (4.26) in Eq. (4.24) and Eq. (4.25) to obtain:
\[p_{1} = \widetilde{f}_{5}(c,m_{1}^{(8)}) \tag{4.27}\] \[b_{4,1} = \widetilde{f}_{6}(c,m_{1}^{(8)}) \tag{4.28}\]
The explicit expressions for the right hand sides are given in the Appendix. These equations now have a geometrical interpretation, similar to the \((2,6)\) case.
We carry on solving the MLDE to higher order and obtain:
\[m_{k}^{(8)}=m_{k}^{(2)}(c)+m_{k-1}^{(2)}(c-24)\ (m_{1}^{(8)}-m_{1}^{(2)}(c)). \tag{4.29}\]
We have performed a computer check of this phenomenon to order 8. We expect it to hold for all \(k\geq 2\) and hope to provide a proof in future work.
We can extend Eq. (4.29) to include \(k=1\). When we plug in \(k=1\) in Eq. (4.29) we get \(m_{1}^{(8)}=m_{1}^{(2)}(c)+m_{0}^{(2)}(c-24)\,(m_{1}^{(8)}-m_{1}^{(2)}(c))\) which is an identity after noting \(m_{0}^{(2)}(c-24)=1\).
Now Eq. (4.29) can be converted into an equation relating the identity characters at \(c\) and \(c-24\). We then compute the non-identity character of the \((2,8)\) MLDE and find that it satisfies the same equation, leading to:
\[\chi_{i}^{(8)}=\chi_{i}^{(2)}(c)+(m_{1}^{(8)}-m_{1}^{(2)}(c))\,\chi_{i}^{(2)}(c-24),\quad i=0,1 \tag{4.30}\]
We should emphasize that Eq. (4.30) holds for all Frobenius solutions of the \((2,8)\) MLDE without any qualifiers such as admissibility, integrality etc.: every Frobenius solution of the \((2,8)\) MLDE can be written as a sum of two Frobenius solutions of the \((2,2)\) MLDE.
Now we impose admissibility. For this, we impose integrality of the \(m_{k}^{(8)}\); each of the relations Eq. (4.29), for \(k\geq 2\), then leads to a Diophantine equation after defining \(\mathsf{N}=5c\). The first two are
\[\mathsf{N}^{4}+(2m_{1}^{(8)}-755)\mathsf{N}^{3}+(2m_{2}^{(8)}-102 4m_{1}^{(8)}+190108)\mathsf{N}^{2}\] \[\qquad\qquad+(162200m_{1}^{(8)}-640m_{2}^{(8)}-15965940)\mathsf{N }-8174400m_{1}^{(8)}+49400m_{2}^{(8)}=0 \tag{115}\] \[2\mathsf{N}^{6}+(3m_{1}^{(8)}-2292)\mathsf{N}^{5}-(2709m_{1}^{( 8)}-983128)\mathsf{N}^{4}+(904050m_{1}^{(8)}-6m_{3}^{(8)}-193838000)\mathsf{N}^ {3}\] \[-(144191100m_{1}^{(8)}-3420m_{3}^{(8)}-17557104000)\mathsf{N}^{2}+ (11171925000m_{1}^{(8)}-628200m_{3}^{(8)}-686724352000)\mathsf{N}\] \[\qquad\qquad-339388920000m_{1}^{(8)}+37050000m_{3}^{(8)}=0 \tag{116}\]
In particular this shows directly that \(\mathsf{N}=5c\) is an integer. Now, inserting the admissible set of central charges (3.61) into the above equations, we obtain all the possible admissible solutions. We also verify integrality of the non-identity character up to the same order. The resulting solutions can then be computed to very high orders (\(q^{2000}\) in this case) and verified to be admissible. We find an infinite family of admissible solutions for each of the following central charges:
\[c=\frac{82}{5},17,18,\frac{94}{5},20,\frac{106}{5},22,23,\frac{118}{5} \tag{4.33}\]
labelled by the free integer \(m_{1}^{(8)}\geq 0\).
### Brief review of quasi-characters
A construction of admissible characters for all two-character CFTs was presented in [6] (see footnote 8). This proposal did not use the MLDEs with movable poles (i.e. \(\ell\geq 6\)) that we are using here; rather, it only made use of solutions to the MMS equation, which has \(\ell=0\), and a similar equation with \(\ell=2\). Now we are in a position to compare our results, obtained from the \(\ell=6\) MLDE, with this approach. For this we first briefly review the quasi-character approach and its application to the \((2,6)\) case (for a detailed exposition with references, see [6]). Then we will compare the results of the present paper with it.
Footnote 8: There is an earlier construction of VVMF due to Bantay and Gannon [33, 34], however, that requires advance knowledge of the possible modular data while here we do not make this assumption. Also here, as part of admissibility, we always impose the requirement that the leading term of the identity character is unity.
Ref. [6] started from the observation that although the \((2,0)\) MLDE - the MMS equation - has only finitely many admissible solutions, it has infinitely many more solutions having
all integral Fourier coefficients of which some are _negative_. Thus these are special, although not admissible, solutions. They occur at specific values of the central charge \(c\) (see footnote 9). There are families of such solutions with the following central charges, parametrised by an integer \(n\):
Footnote 9: Although these solutions do not describe CFT, they can still be assigned a value of \(c\) by writing their leading critical exponent \(\alpha_{0}\) as \(-\frac{c}{24}\).
\[\begin{split}\text{Lee-Yang family:}& c=\frac{2(6n+1)}{5},\ n\neq 4\ \text{mod}\ 5\\ A_{1}\text{ family:}& c=6n+1\\ A_{2}\text{ family:}& c=4n+2,\ n\neq 2\ \text{mod}\ 3\\ D_{4}\text{ family:}& c=12n+4\end{split} \tag{4.34}\]
Of these, the central charges
\[c=\frac{2}{5},1,2,\frac{14}{5},4,\frac{26}{5},6,7,\frac{38}{5} \tag{4.35}\]
correspond to admissible characters (see footnote 10) with \(\ell=0\). Together with a \(c=8\) solution, which corresponds to a one-character solution with a spurious second character and hence is not in the above set, these make up the so-called "MMS series" [1].
Footnote 10: These all correspond to CFTs, except for the first and last cases that are “Intermediate Vertex Operator Algebras” [44].
While all such quasi-characters have \(\ell=0\) and solve the MMS equation (\((2,0)\) MLDE), they do so for different values of the parameter in the MLDE. Thus their linear combinations do not solve the same equation, and in general they would not be closed under modular transformations. However if we take linear combinations of \(r+1\) quasi-characters such that successive terms differ in central charge by 24 (they automatically then belong to the same family in the list above), it can be shown that the modular transformations of each term are the same, and that the linear combination satisfies an MLDE for which the Wronskian index is \(6r\). It was argued in [6] that this process generates all \((2,\ell)\) admissible characters for every \(\ell=6r\).
Quasi-characters with \(\ell=2\), relevant to the \(\ell=6r+2\) case, have also been constructed [6] and we review them in Appendix D. On the other hand the ones with \(\ell=4\), relevant to \(\ell=6r+4\) are simply \(j^{\frac{1}{3}}\) times the \(\ell=0\) quasi-characters listed above. Thus all possible values of \(\ell\) have been covered.
Now we return to the case \(\ell=6\). Here one must add precisely two \(\ell=0\) quasi-characters differing in central charge by 24. We take one of these to be any of the MMS solutions, denoted \(\chi_{i}^{A}\) (where \(A\) stands for "admissible"), whose central charge lies in the MMS list Eq. (104), and the other to be the quasi-character \(\chi_{i}^{Q}\) with central charge 24 higher. We
denote the latter central charge by \(c\) and the former by \(c-24\). Thus we form the sum:
\[\chi_{i}^{Q}(q)+N_{1}\,\chi_{i}^{A}(q) \tag{4.36}\]
This sum has the following properties: (i) it has central charge \(c\) and satisfies Eq. (10) with Wronskian index \(6\), (ii) the negative degeneracies of the quasi-character in the sum are potentially cancelled by the positive terms in the admissible character, depending on the value of \(N_{1}\). Thus the sum is admissible for \(N_{1}\) greater than some lower bound, which varies from case to case.
In view of completeness of the above approach, one therefore predicts that all \((2,6)\) admissible characters (and hence all \((2,6)\) CFT) have central charges:
\[c=\frac{122}{5},25,26,\frac{134}{5},28,\frac{146}{5},30,31,\frac{158}{5} \tag{4.37}\]
This precisely coincides with Eq. (104) except for the two end-points. As already noted below that equation, those correspond to one-character theories that show up as two-character MLDE solutions with one spurious character, which we ignore.
Thus we have found perfect agreement between the central charges arising in the direct solution of the \((2,6)\) MLDE for admissible characters in Sub-section 4.1, and the central charges found from the quasi-character construction of the same admissible set that does not use the \((2,6)\) MLDE at all. We now go on to make a more detailed comparison of the results of the two approaches.
### Comparison of quasi-character and MLDE results
In this sub-section we confront the explicit admissible MLDE solutions described above with the quasi-character approach. The former approach has one free parameter, which we can take to be \(p_{1}\) describing the location of the zero of the Wronskian, or the Fourier coefficient \(m_{1}\) representing the degeneracy of the first excited state in the identity module. The two are related by Eq. (4.11). The latter approach has a free parameter \(N_{1}\), that also determines the first excited state degeneracy \(m_{1}\). Thus \(p_{1}\), the location of the movable pole, must be a function of the integer \(N_{1}\). We see that admissibility quantises the location of the movable pole and also that the quasi-character parameter \(N_{1}\) is the natural integer in terms of which this quantisation can be expressed. We now exhibit these relations in all the cases. We will see that all solutions lie on one of the two branches of the hyperbola defined by the accessory equation Eq. (3.33), while the other branch actually corresponds to negative values of \(m_{1}\).
#### Admissible Solutions (i)
\[c=\frac{122}{5},\quad m_{1}\geq 0,\quad m_{2}=169885+m_{1},\quad m_{3}=19870140+m_{1} \tag{4.38}\]

For this central charge, Eq. (4.11) and Eq. (4.12) become:

\[p_{1} = -\frac{(m_{1}-3538)(m_{1}-658)}{6(m_{1}+244)} \tag{4.39}\] \[b_{4,1} = -\frac{(m_{1}-354898)(m_{1}-3538)}{366(m_{1}+244)} \tag{4.40}\]

This is also the solution obtained by the quasi-character method

\[\chi^{LY}_{n=10}+N_{1}\,\chi^{LY}_{n=0} \tag{4.41}\]
with \(m_{1}=N_{1}-244\).
#### Admissible Solutions (ii)
\[c=25,\quad m_{1}\geq 0,\quad m_{2}=143375+3m_{1},\quad m_{3}=18616375+4m_{1} \tag{4.42}\]

In this case we have:

\[p_{1} = -\frac{(m_{1}-2875)(m_{1}-571)}{5(m_{1}+245)} \tag{4.43}\] \[b_{4,1} = -\frac{(m_{1}-118075)(m_{1}-2875)}{125(m_{1}+245)} \tag{4.44}\]

This is also the solution obtained by the quasi-character method

\[\chi^{A_{1}}_{n=4}+N_{1}\,\chi^{A_{1}}_{n=0} \tag{4.45}\]
with \(m_{1}=N_{1}-245\).
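The statement that these families sit on the accessory-equation hyperbola can be checked directly. Below is a small sketch for the \(c=25\) family, using the parametrisation Eqs. (4.43)-(4.44) and the form Eq. (4.74) of the accessory equation given below, with the rigid parameter \(\alpha_{0}\alpha_{1}=c(20-c)/576=-125/576\); the sampled values of \(m_{1}\) are arbitrary, and the analogous check can be run for the other families.

```python
from fractions import Fraction as F

c = F(25)
a0a1 = c * (20 - c) / 576                      # rigid parameter at c = 25: -125/576

def point(m1):
    """Point (p1, b41) of the c = 25 family, Eqs. (4.43)-(4.44)."""
    p1 = F(-(m1 - 2875) * (m1 - 571), 5 * (m1 + 245))
    b41 = F(-(m1 - 118075) * (m1 - 2875), 125 * (m1 + 245))
    return p1, b41

def accessory_lhs(p1, b41):
    """Left-hand side of the accessory equation in the form Eq. (4.74)."""
    return (p1**2 + a0a1 * (p1 - b41)**2 - F(5, 6) * p1 * (p1 - b41)
            - 1728 * p1 + 576 * (p1 - b41))

assert all(accessory_lhs(*point(m1)) == 0 for m1 in (0, 1, 571, 2875, 10**6))
```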
#### Admissible Solutions (iii)
\[c=26,\quad m_{1}\geq 0,\quad m_{2}=118105+8m_{1},\quad m_{3}=18305456+17m_{1} \tag{4.46}\]

In this case we have:

\[p_{1} = -\frac{(m_{1}-2210)(m_{1}-482)}{4(m_{1}+247)} \tag{4.47}\] \[b_{4,1} = -\frac{(m_{1}-47138)(m_{1}-2210)}{52(m_{1}+247)} \tag{4.48}\]

This is also the solution obtained by the quasi-character method:

\[\chi^{A_{2}}_{n=6}+N_{1}\,\chi^{A_{2}}_{n=0} \tag{4.49}\]
with \(m_{1}=N_{1}-247\).
#### Admissible Solutions (iv)
\[c=\frac{134}{5},\quad m_{1}\geq 0,\quad m_{2}=106731+14\,m_{1},\quad m_{3}=19112822+42\,m_{1} \tag{4.50}\]

In this case we have:

\[p_{1} = -\frac{2(m_{1}-1876)(m_{1}-436)}{7m_{1}+1742} \tag{4.51}\] \[b_{4,1} = -\frac{2(m_{1}-1876)(7m_{1}-206092)}{67(7m_{1}+1742)} \tag{4.52}\]

This is also the solution obtained by the quasi-character method:

\[\frac{1}{7}\big{(}\chi_{n=11}^{LY}+N_{1}\,\chi_{n=1}^{LY}\big{)} \tag{4.53}\]
This is a curious case, already remarked upon in Section 5.2 of [6]. What happens here is that \(\chi_{n=11}^{LY}\) has an integral \(q\)-expansion only if the first term of the identity character is normalised to \(7\), rather than \(1\). This is the normalisation chosen above. The first excited state "degeneracy" of this quasi-character is \(-1742\) while all others are positive. Since the identity character of the sum will be considered admissible only when its leading term is \(1\), we must divide the sum by \(7\) as shown above. As a result the degeneracy of the first excited state is \(m_{1}=\frac{N_{1}-1742}{7}\). This can be any integer, as long as we choose \(N_{1}\) to be \(1742\) plus a multiple of \(7\). With this choice, the sum in Eq. (117) has all integral coefficients even after dividing by \(7\), which is a miracle of sorts since it means all the infinitely many coefficients become multiples of \(7\) even though neither of the terms in the sum has this property.
#### Admissible Solutions (v)
\[c=28,\quad m_{1}\geq 0,\quad m_{2}=97930+28m_{1},\quad m_{3}=21891520+134m_{1} \tag{4.54}\]

In this case we have:

\[p_{1} = -\frac{(m_{1}-1540)(m_{1}-388)}{3(m_{1}+252)} \tag{4.55}\] \[b_{4,1} = -\frac{(m_{1}-17668)(m_{1}-1540)}{21(m_{1}+252)} \tag{4.56}\]

This is also the solution obtained by the quasi-character method

\[\chi_{n=2}^{D_{4}}+N_{1}\,\chi_{n=0}^{D_{4}} \tag{4.57}\]
with \(m_{1}=N_{1}-252\).
#### Admissible Solutions (vi)
\[c=\frac{146}{5},\quad m_{1}\geq 0,\quad m_{2}=96433+52\,m_{1},\quad m_{3}=27102272+377\,m_{1} \tag{4.58}\]

\[p_{1} = -\frac{3(m_{1}-1314)(m_{1}-354)}{4(2m_{1}+511)} \tag{4.59}\] \[b_{4,1} = -\frac{3(m_{1}-1314)(13m_{1}-157242)}{292(2m_{1}+511)} \tag{4.60}\]

This is also the solution obtained by the quasi-character method

\[\frac{1}{2}\Big{(}\chi_{n=12}^{LY}+N_{1}\,\chi_{n=2}^{LY}\Big{)} \tag{4.61}\]
with \(m_{1}=\frac{1}{2}(N_{1}-511)\). Here the quasi-character, when normalised to be integral, starts with \(2\). For \(m_{1}\) to be integral, we must choose \(N_{1}\) to be an odd integer.
#### Admissible Solutions (vii)
\[c=30,\quad m_{1}\geq 0,\quad m_{2}=99675+78m_{1},\quad m_{3}=32782900+729m_{1} \tag{4.62}\]

\[p_{1} = -\frac{2(m_{1}-1200)(m_{1}-336)}{5(m_{1}+258)} \tag{4.63}\] \[b_{4,1} = -\frac{2(m_{1}-9840)(m_{1}-1200)}{25(m_{1}+258)} \tag{4.64}\]

This is also the solution obtained by the quasi-character method

\[\chi_{n=7}^{A_{2}}+N_{1}\,\chi_{n=1}^{A_{2}} \tag{4.65}\]
with \(m_{1}=N_{1}-258\).
#### Admissible Solutions (viii)
\[c=31,\quad m_{1}\geq 0,\quad m_{2}=110980+133m_{1},\quad m_{3}=44696513+1673m_{1} \tag{4.66}\]

\[p_{1} = -\frac{3(m_{1}-1085)(m_{1}-317)}{7m_{1}+1829} \tag{4.67}\] \[b_{4,1} = -\frac{3(m_{1}-1085)(7m_{1}-55211)}{31(7m_{1}+1829)} \tag{4.68}\]

This is also the solution obtained by the quasi-character method

\[\chi_{n=5}^{A_{1}}+N_{1}\,\chi_{n=1}^{A_{1}} \tag{4.69}\]
with \(m_{1}=\frac{1}{7}(N_{1}-1829)\) and \(N_{1}\) must be taken to be 1829 plus a multiple of \(7\).
#### Admissible Solutions (ix)
\[c=\frac{158}{5},\quad m_{1}\geq 0,\quad m_{2}=124741+190\,m_{1},\quad m_{3}=56937196+2831\,m_{1} \tag{4.70}\]

\[p_{1} = -\frac{4(m_{1}-1027)(m_{1}-307)}{9m_{1}+2370} \tag{4.71}\] \[b_{4,1} = -\frac{4(m_{1}-1027)(19m_{1}-133273)}{237(3m_{1}+790)} \tag{4.72}\]

This is also the solution obtained by the quasi-character method

\[\chi^{LY}_{n=13}+N_{1}\,\chi^{LY}_{n=3} \tag{4.73}\]
with \(m_{1}=\frac{1}{3}(N_{1}-790)\) and \(N_{1}\) has to be chosen to be \(790\) plus a multiple of \(3\).
### Analysis of the accessory equation
Let us now analyse the accessory equation Eq. (3.33) in the case of \(\ell=6\) in some more detail. This equation can be re-written as a quadratic:
\[p_{1}^{2}+\alpha_{0}\alpha_{1}(p_{1}-b_{4,1})^{2}-\frac{5}{6}p_{1}(p_{1}-b_{4,1})-1728p_{1}+576(p_{1}-b_{4,1})=0 \tag{4.74}\]
As noted below Eq. (3.33), this is a hyperbola for all values of \(\alpha_{0}\alpha_{1}\) except when \(\alpha_{0}\alpha_{1}=\frac{25}{144}\), corresponding to \(c=10\), when it degenerates to a parabola. Remaining away from \(c=10\), we now analyse the hyperbola in some detail. We will see, among other things, that all \((2,6)\) solutions with \(N_{1}>0\) lie on one branch of the hyperbola, with the other branch corresponding to negative values of \(N_{1}\).
To illustrate this, we pick an example. Consider the \((2,6)\) solution with \(c=25\). In this case, we have: \(N_{1}=m_{1}+245\) (see previous section). For this, the hyperbola is given below.
In this case, from Eq. (4.43) and Eq. (4.44), we get,
\[\begin{split}& b_{4,1}=-\frac{(N_{1}-3120)(N_{1}-118320)}{125N_{1}},\\ & p_{1}=-\frac{(N_{1}-816)(N_{1}-3120)}{5N_{1}}.\end{split} \tag{4.75}\]
This gives, \(\frac{b_{4,1}}{p_{1}}=\frac{N_{1}-118320}{25(N_{1}-816)}\). The asymptotes to the above hyperbola are: \(b_{4,1}=-\frac{89856}{25}+\frac{29}{5}p_{1}\) (drawn in purple) and \(b_{4,1}=\frac{117504}{125}+\frac{1}{25}p_{1}\) (drawn in pink). Note that, the origin lies on the lower branch. This can be seen from the fact that when \(N_{1}\to 0\), we have \(\frac{b_{4,1}}{p_{1}}\to\frac{29}{5}\) (whose slope is equal to the purple asymptote), which intersects the lower branch at the origin. This means that the point \(N_{1}\to 0\) lies on the bottom end of the lower branch. Also, note that when \(N_{1}\to\infty\), we have \(\frac{b_{4,1}}{p_{1}}\to\frac{1}{25}\), whose slope is equal to the pink asymptote. This means that the point \(N_{1}\to\infty\) lies on the left end of the lower branch.
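The asymptote data quoted above follow directly from the parametrisation Eq. (4.75); a short symbolic sketch (using sympy) recovers the slopes and intercepts:

```python
import sympy as sp

N1 = sp.symbols('N1', positive=True)
b41 = -(N1 - 3120) * (N1 - 118320) / (125 * N1)   # Eq. (4.75)
p1 = -(N1 - 816) * (N1 - 3120) / (5 * N1)

# asymptote approached as N1 -> infinity
s_inf = sp.limit(b41 / p1, N1, sp.oo)
c_inf = sp.limit(b41 - s_inf * p1, N1, sp.oo)
# asymptote approached as N1 -> 0
s_0 = sp.limit(b41 / p1, N1, 0)
c_0 = sp.limit(b41 - s_0 * p1, N1, 0)
print(s_inf, c_inf)   # 1/25, 117504/125
print(s_0, c_0)       # 29/5, -89856/25
```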
Figure 1: Hyperbola in the \(b_{4,1}\) vs \(p_{1}\) plane corresponding to the \((2,6)\) solution with \(c=25\).

The lower branch of the hyperbola corresponds to characters with \(N_{1}>0\) and the upper branch to characters with \(N_{1}<0\). To see this, note that by Eq. (4.75) both \(b_{4,1}\) and \(p_{1}\) can be positive (which happens on the upper branch) only if \(N_{1}<0\); conversely, if \(N_{1}>0\) they can never both be positive, so such points lie on the lower branch.
\(N_{1}\) increases as we trace the lower branch from below. The red dot on the lower branch is where \(m_{1}=0\). Starting from this red dot and tracing towards the left end of the lower branch we obtain all the admissible solutions. The green dot on the lower branch corresponds to the point where \(m_{1}=571\) implying \(N_{1}=816\). This in turn implies \(p_{1}=0\) and \(b_{4,1}\neq 0\). This is a factorised solution of the form \(j^{\frac{1}{3}}\chi^{(2,2)}\) where \(\chi^{(2,2)}\) is a \((2,2)\) CFT with \(c=17\) (see [25]).
In the next plot, we have all the hyperbolas corresponding to each of the \((2,6)\) solutions with central charges in the admissibility range: \(24<c^{(\ell=6)}<32\).
As explained in the \(c=25\) example, for each hyperbola, we have the characters with \(N_{1}>0\) on the lower branches and \(N_{1}<0\) on the upper branches.
Figure 2: All hyperbolas in the \(b_{4,1}\) vs \(p_{1}\) plane corresponding to \((2,6)\) solutions with central charges: \(24<c^{(\ell=6)}<32\).
Now let us consider \((2,8)\) solutions. The accessory equation now becomes (see Eq. (3.53)),
\[p_{1}^{2}+\alpha_{0}\alpha_{1}(p_{1}-b_{4,1})^{2}-\frac{7}{6}p_{1}(p_{1}-b_{4,1} )-1728p_{1}+1152(p_{1}-b_{4,1})=0 \tag{4.76}\]
Let us consider the solution with \(c=23\). In this case we have \(N_{1}=\frac{m_{1}-69}{5}\) (see Appendix D, case (viii), with \(c=23\)). For this, the hyperbola is given below.
In this case, from Eq. (D.24) and Eq. (D.25), we get,
\[\begin{split} b_{4,1}&=-\frac{(5N_{1}-48944)(5N_{ 1}+4048)}{345N_{1}},\\ p_{1}&=\frac{(5N_{1}-560)(5N_{1}+4048)}{15N_{1}}. \end{split} \tag{4.77}\]
This gives, \(\frac{b_{4,1}}{p_{1}}=-\frac{5N_{1}-48944}{23(5N_{1}-560)}\). The asymptotes to the above hyperbola are: \(b_{4,1}=-\frac{16128}{23}-\frac{1}{23}p_{1}\) (drawn in purple) and \(b_{4,1}=\frac{25344}{5}-\frac{19}{5}p_{1}\) (drawn in pink).
Figure 3: Hyperbola in the \(b_{4,1}\) vs \(p_{1}\) plane corresponding to the \((2,8)\) solution with \(c=23\).

The red dot on the lower branch is where \(m_{1}=0\), implying \(N_{1}=-\frac{69}{5}\). Starting from this red dot, tracing towards the bottom end of the lower branch and then continuing from the top end of the upper branch, we obtain all the admissible solutions. The green dot on the upper branch corresponds to the point where \(m_{1}=629\), implying \(N_{1}=112\). This in turn implies \(p_{1}=0\) and \(b_{4,1}\neq 0\). This is a factorised solution of the form \(j^{\frac{2}{3}}\chi^{(2,0)}\) where \(\chi^{(2,0)}\) is a \((2,0)\) MMS CFT with \(c=7\).
So we see that in the \((2,8)\) case admissible solutions lie on both branches, while in the \((2,6)\) case admissible solutions appear only on the lower branch.
In the next plot, we have all the hyperbolas corresponding to each of the \((2,8)\) solutions with central charges in the admissibility range: \(16<c^{(\ell=8)}<24\).

Figure 4: All hyperbolas in the \(b_{4,1}\) vs \(p_{1}\) plane corresponding to \((2,8)\) solutions with central charges: \(16<c^{(\ell=8)}<24\).

## 5 Discussion of the case of two movable poles

We now turn to the case of \(\ell=12\). This is important because there are two independent poles \(p_{1},p_{2}\) and correspondingly two accessory parameters. A number of novel features will emerge in this setting that were not visible for \(\ell<12\).
### The \((2,12)\) MLDE and constraints on accessory parameters
The \((2,12)\) MLDE in the \(\tau\)-plane is given by
\[\left(D^{2}+E_{4}^{2}E_{6}\left(\frac{1}{E_{4}^{3}-p_{1}\Delta}+\frac{1}{E_{4}^{ 3}-p_{2}\Delta}\right)D+\frac{\alpha_{0}\alpha_{1}\,E_{4}\,(E_{4}^{3}-b_{4,1} \,\Delta)(E_{4}^{3}-b_{4,2}\,\Delta)}{(E_{4}^{3}-p_{1}\,\Delta)(E_{4}^{3}-p_{2 }\,\Delta)}\right)\chi(\tau)=0 \tag{109}\]
In the \(j\)-coordinate, the same MLDE is given by:
\[\left(\partial_{j}^{2}+\left(\frac{2}{3j}+\frac{1}{2(j-1728)}-\frac{1}{(j-p_{ 1})}-\frac{1}{(j-p_{2})}\right)\partial_{j}+\frac{\alpha_{0}\alpha_{1}(j-b_{4, 1})(j-b_{4,2})}{j(j-1728)(j-p_{1})(j-p_{2})}\right)\chi(j)=0 \tag{110}\]
The accessory equations for \(\ell=12\) can be read off from the general expression Eq. (3.44) and after some rationalisation of denominators they reduce to:
\[576-\frac{5\,p_{1}}{6}+\frac{\alpha_{0}\,\alpha_{1}\left(p_{1}- b_{4,1}\right)\left(p_{1}-b_{4,2}\right)}{p_{1}-p_{2}}-p_{1}\left(p_{1}-1728 \right)\left(\frac{2}{p_{1}-p_{2}}-\frac{1}{p_{1}-b_{4,1}}-\frac{1}{p_{1}-b_{4,2}}\right)=0 \tag{111}\] \[576-\frac{5\,p_{2}}{6}+\frac{\alpha_{0}\,\alpha_{1}\left(p_{2}- b_{4,1}\right)\left(p_{2}-b_{4,2}\right)}{p_{2}-p_{1}}-p_{2}\left(p_{2}-1728 \right)\left(\frac{2}{p_{2}-p_{1}}-\frac{1}{p_{2}-b_{4,1}}-\frac{1}{p_{2}-b_{4,2}}\right)=0 \tag{112}\]
When one examines the coefficient functions of the MLDEs, both in the \(\tau\)-space and the \(j\)-space, one finds that the poles and the accessory parameters appear in symmetric combinations. Hence, we work with the symmetric parameters:
\[\begin{split} P_{1}\equiv p_{1}+p_{2},& P_{2}=p_{1}p_ {2}\\ B_{1}\equiv b_{4,1}+b_{4,2},& B_{2}=b_{4,1}b_{4,2} \end{split} \tag{113}\]
More generally, for \(\ell=6r\), we would have \(P_{k},B_{k}\) with \(k=1,2,\cdots r\), where \(P_{k}\) denotes the \(k\)th symmetric polynomial in the movable poles and \(B_{k}\) denotes the \(k\)th symmetric polynomial in the accessory parameters. We will see below that these symmetric parameters always turn out to be rational for admissible character solutions, while the individual poles and accessory parameters need not be.
Now, the sum and difference of the two accessory equations above can be written in terms of the symmetric parameters:
\[\frac{(P_{1}-B_{1})(2P_{2}+2B_{2}-P_{1}B_{1})}{P_{2}^{2}+P_{2}(B_{1}^ {2}-P_{1}B_{1}-2B_{2})+B_{2}(P_{1}^{2}-P_{1}B_{1}+B_{2})}\] \[\qquad\qquad+\alpha_{0}\alpha_{1}\left(\frac{1728B_{2}+B_{1}P_{2 }-1728P_{2}-B_{2}P_{1}}{P_{2}(P_{2}-1728P_{1}+1728^{2})}\right)\] \[\qquad\qquad+\left(\frac{576(P_{1}^{2}-2P_{2})-\frac{5}{6}P_{1}P_ {2}-1728\cdot 576P_{1}+2880P_{2}}{P_{2}(P_{2}-1728P_{1}+1728^{2})}\right)=0\]
\[\alpha_{0}\alpha_{1}\left(\frac{2P_{2}^{2}+2\cdot 1728P_{2}B_{1}+B_ {2}P_{1}^{2}-1728P_{1}(P_{2}+B_{2})-P_{2}(P_{1}B_{1}+2B_{2})}{P_{2}(P_{2}-1728P _{1}+1728^{2})}\right)\] \[\qquad\qquad+(P_{1}^{2}-4P_{2})\left(\frac{2P_{2}-P_{1}B_{1}+B_{1 }^{2}-2B_{2}}{P_{2}B_{1}(P_{1}-B_{1})+2P_{2}B_{2}-P_{2}^{2}-B_{2}(P_{1}^{2}-P_ {1}B_{1}+B_{2})}\right)\] \[\qquad\qquad+(P_{1}^{2}-4P_{2})\left(\frac{576\cdot 1728-576P_{1}+ \frac{5}{6}P_{2}}{P_{2}(P_{2}-1728P_{1}-1728^{2})}\right)=4 \tag{102}\]
The accessory equations for the more general case for \(\ell=6r\) (Eq. (3.44)) can also similarly be recast in terms of the \(P_{k}\)'s and the \(B_{k}\)'s. Although these look more complicated than Eq. (5.3), we will soon see that the \(P_{k}\) and \(B_{k}\) are real while the same does not hold for the \(p_{I},b_{4,I}\).
Now, we will discuss in general terms how one solves the \((2,12)\) MLDE. We start out as we did for the \((2,6)\) MLDE. The first step is to obtain the first few orders of the Frobenius solution for the identity character. Here we have four parameters and hence we need four orders beyond the indicial equation. Thus there are five equations, the analogues of Eq. (4.2)-Eq. (4.4): the first one is simply \(\alpha_{0}\alpha_{1}=\frac{c(44-c)}{576}\) and four others for the Fourier coefficients of the identity character \(m_{1},m_{2},m_{3},m_{4}\) (to be consistent with our earlier notation these should have a superscript (12) to denote the \(\ell=12\) case, but we drop it to simplify the notation). These are four linear equations for the parameters \(P_{1},P_{2},B_{1},B_{2}\) and we can solve them and obtain the analogues of Eq. (4.5)-Eq. (4.6). Each of the symmetric parameters is a rational function of \(m_{1},m_{2},m_{3},m_{4}\) with coefficients being rational functions of \(c\). Hence we have shown, as promised, that the symmetric parameters are rational. This is the correct generalisation of the observation in [27] that a single movable pole is rational.
Thus there are three possibilities for the movable poles: (i) they are both rational, (ii) they are both real and irrational and lie in a quadratic field extension of the rationals, (iii) they are complex conjugates of each other. The accessory parameters follow the same pattern. Notice that the possibility of complex or real irrational poles/accessory parameters occurs for the first time at \(\ell=12\).
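For example, once rational values of \(P_{1}=p_{1}+p_{2}\) and \(P_{2}=p_{1}p_{2}\) are in hand, deciding which of the three possibilities occurs amounts to inspecting the discriminant \(P_{1}^{2}-4P_{2}\); a small Python helper sketch (the inputs in the comments are illustrative only):

```python
from fractions import Fraction as F
import math

def classify_poles(P1: F, P2: F):
    """Given rational symmetric parameters P1 = p1 + p2 and P2 = p1*p2, decide whether
    the individual movable poles are rational, real quadratic-irrational, or complex."""
    disc = P1 * P1 - 4 * P2
    if disc < 0:
        return "complex conjugate pair"
    # disc = num/den has a rational square root iff num*den is a perfect square
    n = disc.numerator * disc.denominator
    r = math.isqrt(n)
    return "both rational" if r * r == n else f"real, in Q(sqrt({n}))"

# classify_poles(F(5), F(6))   -> 'both rational'        (poles 2 and 3)
# classify_poles(F(2), F(2))   -> 'complex conjugate pair'
```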
For the general case of \(\ell=6r\) the analogs of Eq. (4.2)-Eq. (4.4) would be \(2r+1\) equations \(\alpha_{0}\alpha_{1}=\frac{c(4(6r-1)-c)}{576}\) and \(2r\) more for the Fourier coefficients \(m_{1},\ldots m_{2r}\). Similar to the \(\ell=6\) and \(\ell=12\) cases, we would solve these equations for the \(2r\) variables \(P_{k},B_{k}\) and solve for each of them as a rational function of \(m_{1},\ldots m_{2r}\) with coefficients being rational functions of \(c\). Again, we can conclude that the \(P_{k}\) and the \(B_{k}\) are rational numbers. Thus, any given movable pole is either complex (and occurs with its conjugate) or is real irrational (and occurs with its Galois conjugates) or is rational, and the same for any accessory parameter.
The next step is to bring in the accessory equations Eq. (5.5). There are two of them and we substitute into them the symmetric parameters in terms of their rational expressions (of \(m_{1},m_{2},m_{3},m_{4}\)), leading to two equations that contain only \(c\) and the \(m_{1},m_{2},m_{3},m_{4}\). We then expect to solve for \(m_{3},m_{4}\) in terms of \(m_{1},m_{2}\) and \(c\). From the previous example, our expectation is that the dependence will be linear in \(m_{1}\) and \(m_{2}\), leading to an analogue of Eq. (4.7). We would then solve for higher Fourier coefficients \(m_{k},k\geq 5\) and expect to obtain a linear dependence on \(m_{1}\) and \(m_{2}\), thus making contact with the quasi-character theory. Unfortunately this procedure becomes extremely tedious, so we employ an alternate route below.
### \((2,12)\) admissible characters
We will use quasi-character theory [6] to obtain \((2,12)\) solutions and make contact with the above analysis. In general terms, quasi-character theory informs us that \((2,6r)\) admissible character solutions can be found by taking \(r+1\) summands, each of which is a \((2,0)\) quasi-character. The summation will contain \(r\) quasi-character parameters, which are non-negative and subject to further restrictions. The precise details and systematics of this procedure have never been worked out for \(r\geq 2\), and will be addressed in [45]. Here we will content ourselves with working out one example in full detail.
#### \((2,12)\) solution with \(c=25,h=\frac{1}{4}\)
According to quasi-character theory, we can pick three \((2,0)\) quasi-characters in the \(A_{1}\) class and sum them to obtain a \((2,12)\) solution: \(c=25,h=\frac{1}{4}\) in the following way :
\[\chi=\chi_{n=4}^{A_{1}}+N_{1}\,\chi_{n=0}^{A_{1}}+N_{2}\,\chi_{n=-4}^{A_{1}} \tag{5.6}\]
where the \(n=4\) and \(n=-4\) terms are quasi-characters (integral but not positive) while the \(n=0\) term is an admissible character (for the \(A_{1,1}\) WZW model). The leading behaviour in the \(q\)-series expansion of the identity character corresponds to \(c=25\) and that of the non-identity character gives \(h=\frac{1}{4}\). These numbers ensure that \(\ell=12\). But it is not yet
clear that these are admissible characters. For that we examine the \(q\)-series. For the identity character, we have
\[\chi^{A_{1}}_{n=4;0}+N_{1}\,\chi^{A_{1}}_{n=0;0}+N_{2}\,\chi^{A_{1}}_ {n=-4;0}=q^{-\frac{25}{24}}\left(1+(-245+N_{1})q+(142640+3\,N_{1}+26752\,N_{2})q ^{2}\right.\] \[\left.+(18615395+4\,N_{1}+1734016\,N_{2})q^{3}+(837384535+7\,N_{1}+ 46091264\,N_{2})q^{4}+\ldots\right) \tag{111}\]
and for the non-identity character, we have
\[\chi^{A_{1}}_{n=4;1}+N_{1}\,\chi^{A_{1}}_{n=0;1}+N_{2}\,\chi^{A_{1 }}_{n=-4;1}=q^{-\frac{19}{24}}\left(N_{2}+(2\,N_{1}-247\,N_{2})q+(565760+2\,N_{ 1}-86241\,N_{2})q^{2}\right.\] \[\left.+(51745280+6\,N_{1}-4182736\,N_{2})q^{3}+(1965207040+8\,N_{1 }-96220123\,N_{2})q^{4}+\ldots\right) \tag{112}\]
Requiring admissibility of the above \(q\)-series up to order \(q^{5}\), we find the following restrictions on the quasi-character parameters \(N_{1}\) and \(N_{2}\).
\[N_{1}=245+m_{1},\,N_{2}\leq\frac{490+2m_{1}}{247},\quad m_{1}\in\mathbb{Z}^{ \geq 0} \tag{113}\]
(we have denoted the integer by \(m_{1}\) anticipating that it will be the degeneracy of the first excited state in the identity character).
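These restrictions can be reproduced mechanically from the coefficients displayed above. A minimal Python sketch follows; it only checks the orders shown, so the bound on \(N_{2}\) it finds is the one quoted above and, as discussed next, is refined at higher orders.

```python
from fractions import Fraction

def coeffs(N1, N2):
    """Coefficients displayed above: q^1..q^4 of the identity character and
    q^0..q^4 of the non-identity character of the c = 25 sum."""
    return [-245 + N1,
            142640 + 3*N1 + 26752*N2,
            18615395 + 4*N1 + 1734016*N2,
            837384535 + 7*N1 + 46091264*N2,
            N2,
            2*N1 - 247*N2,
            565760 + 2*N1 - 86241*N2,
            51745280 + 6*N1 - 4182736*N2,
            1965207040 + 8*N1 - 96220123*N2]

for m1 in (0, 1, 2, 3):
    N1 = 245 + m1                      # first restriction: N1 = 245 + m1
    max_N2 = max(n2 for n2 in range(100) if all(x >= 0 for x in coeffs(N1, n2)))
    # max_N2 equals the integer part of the quoted bound (490 + 2 m1)/247 at these orders
    print(m1, max_N2, Fraction(490 + 2*m1, 247))
```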
Let us consider the relations given in Eq. (113). The first expression relates the quasi-character parameter \(N_{1}\) to \(m_{1}\) which is the dimension of the Kac-Moody algebra (if any) of the final theory. On the other hand, the second relation serves as a restriction on \(N_{2}\) for any fixed \(N_{1}\). This restriction will be modified at every order in \(q\). So, to ascertain admissibility of \(\chi\) in Eq. (111), we need to look at the asymptotic growth of the coefficients in the \(q\)-series of this character. This is done by considering the Rademacher expansion (see [46] and appendix A of [6]). This assures us that the asymptotic growth of the negative-type quasi-character, \(\chi^{A_{1}}_{n=-4;1}\) is sub-leading as compared to that of the positive-type quasi-characters \(\chi^{A_{1}}_{n=4;1}\) and \(\chi^{A_{1}}_{n=0;1}\). Thus, Eq. (111) will be an admissible character for all \(N_{1}\) satisfying Eq. (113), i.e. \(N_{1}\geq 245\), and finitely many \(N_{2}\) satisfying some upper bound (not necessarily the one in Eq. (113)).
Quasi-character theory claims that the two non-negative integral \(q\)-series above with \(c=25,h=\frac{1}{4}\) are in fact \((2,12)\) admissible characters. We will check this claim by showing that they solve the \((2,12)\) MLDE. In particular, first we will compute the point in parameter space where the solution Eq. (5.6) lives, in other words, we will determine the poles and accessory parameters as functions of the quasi-character parameters \(N_{1}\) and \(N_{2}\). Then we will show that this point satisfies the two accessory equations.
For this, we substitute Eq. (5.6) into the \((2,12)\) MLDE. The two-derivative terms
simplify on using the fact that the summands in Eq. (5.6) are solutions to the \((2,0)\) MLDE:
\[D^{2}\chi_{n=4}^{A_{1}} = \frac{725}{576}\,E_{4}\,\chi_{n=4}^{A_{1}},\] \[D^{2}\chi_{n=0}^{A_{1}} = \frac{5}{576}\,E_{4}\,\chi_{n=0}^{A_{1}},\] \[D^{2}\chi_{n=-4}^{A_{1}} = \frac{437}{576}\,E_{4}\,\chi_{n=-4}^{A_{1}}. \tag{102}\]
The \(q\)-series of the MLDE, at leading order for both the identity and non-identity character, determines the rigid parameter of the \((2,12)\) MLDE to be the expected \(\alpha_{0}\alpha_{1}=\frac{475}{576}\). To obtain the poles and accessory parameters we consider the first and second order terms in the \(q\)-series for the identity and non-identity characters. This gives us four equations for the four variables (the two poles and the two accessory parameters). For the identity character this results in:
\[898560+432\,N_{1}-125\,P_{1}-475\,B_{1}=0 \tag{103}\]
and
\[-155450880+51984\,N_{1}+53932032\,N_{2}+\left(572905+451N_{1} \right)P_{1}+\left(469775-475N_{1}\right)B_{1}\] \[+725\,P_{2}-125\,P_{1}^{2}=0 \tag{104}\]
For the non-identity character we find instead:
\[1440N_{1}+612864N_{2}+19N_{2}\,P_{1}-475N_{2}\,B_{1}=0 \tag{105}\]
and
\[1466449920-468576N_{1}-499484160N_{2}+\left(1190N_{1}+287603N_{2} \right)P_{1}+\left(-950N_{1}+470725N_{2}\right)B_{1}\] \[+437N_{2}\,P_{2}+475N_{2}\,B_{2}+19N_{2}\,P_{1}^{2}-475N_{2}\,P_{1 }\,B_{1}=0 \tag{106}\]
Solving Eq. (103), Eq. (104), Eq. (105), Eq. (106), we obtain solutions for the symmetric parameters in terms of the quasi-character parameters:
\[P_{1} = \frac{1984N_{2}+3N_{1}\,N_{2}-10N_{1}}{N_{2}},\qquad P_{2}=\frac{8 (816-N_{1}+88N_{2})(3120-N_{1}-1064N_{2})}{N_{2}}\] \[B_{1} = \frac{650560\,N_{2}+57N_{1}\,N_{2}+1250N_{1}}{475\,N_{2}}\quad B _{2}=\frac{8(N_{1}+1064N_{2}-3120)(-5N_{1}+38456N_{2}+591600)}{475N_{2}}\]
We see right away that the symmetric parameters are all rational.
The equations above give the point in the MLDE parameter space where the quasi-character sum Eq. (5.6) lives. Now a necessary condition for this to be an admissible character
is that this point solves the two accessory equations. We have checked that this is indeed the case. Now, we know the exact \((2,12)\) MLDE that the quasi-character sum Eq. (5.6) is expected to solve, namely the MLDE above with the parameters given by the expressions just obtained and \(\alpha_{0}\alpha_{1}=\frac{475}{576}\). We then just substitute the \(q\)-series expansions of the two characters and verify. We have done so to high enough order to convince us that the quasi-character sum Eq. (5.6) is indeed a \((2,12)\) admissible character for all values of \(N_{1},N_{2}\) satisfying the admissibility restrictions above.
Now we can solve for the poles and accessory parameters in terms of the symmetric parameters, to find:
\[p_{1} = \frac{1984\,N_{2}+N_{1}\,(\,3\,N_{2}-10\,)\pm\sqrt{A}}{2\,N_{2}},\qquad p_{2}=\frac{1984\,N_{2}+N_{1}\,(\,3\,N_{2}-10\,)\mp\sqrt{A}}{2\,N_{2}}\] \[b_{4,1} = \frac{650560\,N_{2}+N_{1}\,(57\,N_{2}+1250)\pm\sqrt{B}}{950\,N_{ 2}},b_{4,2}=\frac{650560\,N_{2}+N_{1}\,(57\,N_{2}+1250)\mp\sqrt{B}}{950\,N_{2}}\]
where
\[A = (N_{2}-2)\left(4096\,N_{1}\,N_{2}+N_{1}^{2}(\,9\,N_{2}-50\,)+512 \,N_{2}\,(\,1463\,N_{2}+19890)\right)\] \[B = (19\,N_{2}+250)\left(-2723840\,N_{1}\,N_{2}+N_{1}^{2}\,(\,171\,N_ {2}+6250)-243200\,N_{2}\,(\,33649\,N_{2}-115362)\right).\]
This exemplifies our claim about the nature of movable poles and accessory parameters. When \(A>0\) and is a perfect square, both the movable poles are rational. When \(A>0\) and not a perfect square the poles are both real and irrational and lie in the field extension \({\bf Q}[\sqrt{A}]\). When \(A<0\), the movable poles are complex conjugates. Also when \(A=0\) (which happens for \(N_{2}=2\)) both poles coincide and we will discuss this in the next section.
### \((2,12)\) solution with \(c=31,h=\frac{3}{4}\)
Following the discussion of the previous subsection, we can add three \((2,0)\) quasi-characters in the \(A_{1}\) class to obtain a \((2,12)\) solution with \(c=31,h=\frac{3}{4}\) in the following way:
\[\chi=\frac{1}{7}\left(\chi_{n=5}^{A_{1}}+N_{1}\,\chi_{n=1}^{A_{1}}+N_{2}\, \chi_{n=-3}^{A_{1}}\right) \tag{106}\]
where the \(n=5\) and \(n=-3\) terms are quasi-characters while the \(n=1\) term is an admissible character (for the \(E_{7,1}\) WZW model). Now let us consider the leading behaviour in the \(q\)-series expansions. The identity character corresponds to \(c=31\) and that of the non-identity character gives \(h=\frac{3}{4}\). Using the Riemann-Roch relation Eq. (16), we see that these numbers correspond to \(\ell=12\). However, at this stage, it is not yet clear that the \(q\)-series expansions are
those of admissible characters. For that we examine the \(q\)-series. For the identity character, we have
\[\tfrac{1}{7}\left(\chi_{n=5;0}^{A_{1}}+N_{1}\,\chi_{n=1;0}^{A_{1}}+N _{2}\,\chi_{n=-3;0}^{A_{1}}\right)=\tfrac{1}{7}q^{-\frac{31}{24}}\left(7+(N_{1}- 1829)q+(533603+133\,N_{1}+39\,N_{2})q^{2}\right.\] \[\left.+(309815674+1673\,N_{1}+1547\,N_{2})q^{3}+\ldots\right) \tag{111}\]
and for the non-identity character, we have
\[\tfrac{1}{7}\left(\chi_{n=5;1}^{A_{1}}+N_{1}\,\chi_{n=1;1}^{A_{1}} +N_{2}\,\chi_{n=-3;1}^{A_{1}}\right)=\tfrac{1}{7}q^{-\frac{13}{24}}\left(N_{2}+ (56\,N_{1}-377\,N_{2})q+(40641+968\,N_{1}-22126\,N_{2})q^{2}\right.\] \[\left.+(4836279+7504\,N_{1}-422123\,N_{2})q^{3}+\ldots\right) \tag{112}\]
Requiring admissibility of the above \(q\)-series up to order \(q^{5}\), we find the following relations:
\[N_{1}=7\,m_{1}+1829,\quad N_{2}\leq\frac{7678672813+1363376m_{1}}{41490618}, \quad m_{1}\in\mathbb{Z}^{\geq 0} \tag{113}\]
(we have denoted the integer by \(m_{1}\) anticipating that it will be the degeneracy of the first excited state in the identity character).
Proceeding as before, let us note that the first of the relations in Eq. (113) relates the quasi-character parameter \(N_{1}\) to the dimension \(m_{1}\) of the Kac-Moody algebra (if any) of the final theory. However the second relation should be viewed as a restriction on \(N_{2}\) for any fixed \(N_{1}\). This restriction will be modified at higher orders in \(q\). As in the previous sub-section, the Rademacher expansion (see [46] and appendix A of [6]) assures us that the asymptotic growth of the negative-type quasi-character, \(\chi_{n=-3;1}^{A_{1}}\) is sub-leading as compared to that of the positive-type quasi-characters \(\chi_{n=5;1}^{A_{1}}\) and \(\chi_{n=1;1}^{A_{1}}\). Thus, the quasi-character sum above will be an admissible character for all \(N_{1}\) satisfying Eq. (113), that is \(N_{1}\geq 1829\), and finitely many \(N_{2}\) satisfying some upper bound.
Now proceeding as before, we can express the symmetric parameters of the MLDE, namely, \(P_{1},P_{2},B_{1},B_{2}\) in terms of the quasi-character parameters only. However, in this case these expressions are quite lengthy and hence we do not report them here.
## 6 Beyond the genericity assumption: merging of poles
In this section we consider what happens when poles in the original MLDE merge. This first happens when \(p_{1}\to 0\) or \(1728\) in the \((2,6)\) case. Once we reach the value \(\ell=12\), we can also have two movable poles \(p_{1},p_{2}\) merging. We will analyse some special cases below and then draw general conclusions at the end.
### \((2,6)\) solutions as \(p_{1}\to 0\)
We want to investigate what happens to the \((2,6)\) solutions if we set \(p_{1}=0\), corresponding to the point \(\tau=\rho\). This is our first example of a case violating the genericity assumption. Let us take this limit directly in Eq. (3.28). It becomes:
\[\partial_{j}^{2}\chi(j)+\bigg{(}\frac{1}{2(j-1728)}-\frac{1}{3j}\bigg{)} \partial_{j}\chi(j)+\frac{\alpha_{0}\alpha_{1}(j-b_{4,1})}{j^{2}(j-1728)}\chi (j)=0. \tag{6.1}\]
Now we insert the expansion:
\[\chi_{i}(j)=j^{\alpha_{i}^{(\rho)}}\sum_{k=0}^{\infty}a_{i,k}^{(\rho)}\,j^{k} \tag{6.2}\]
We find the indicial equation:
\[\alpha_{i}^{(\rho)}(\alpha_{i}^{(\rho)}-1)-\frac{1}{3}\alpha_{i}^{(\rho)}+ \frac{\alpha_{0}\alpha_{1}b_{4,1}}{1728}=0 \tag{6.3}\]
Since the last term in general contributes to the indicial equation, we do not immediately get the values of the exponents. Instead the equation tells us that
\[\alpha_{0}^{(\rho)}+\alpha_{1}^{(\rho)}=\frac{4}{3} \tag{6.4}\]
This in fact already follows from Eq. (A.2), since after taking \(p_{1}\to 0\) we have \(\ell_{\rho}=6\). Since the \(\alpha^{(\rho)}\) must be distinct non-negative multiples of \(\frac{1}{3}\), the possible solutions to the above equation are \((0,\frac{4}{3})\) or \((\frac{1}{3},1)\).
The first choice of exponents leads to either \(\alpha_{0}\alpha_{1}=0\) or \(b_{4,1}=0\). First let us look at the case when \(\alpha_{0}\alpha_{1}=0\). From Eq. (6.1) we notice that in this case the non-derivative term (the last term) vanishes. So \(\partial_{j}\chi(j)\) solves a first order MLDE. We can now integrate this to get a first-order MLDE for \(\chi(j)\) itself. This means the solution space becomes 1-dimensional and thereby we rule this case out (see footnote 11).
Footnote 11: Note that this argument is independent of the exact form of the MLDE or even its order, hence it can be readily generalised to an \(n^{th}\) order MLDE where it implies that the solution space is \((n-1)\)-dimensional when \(\alpha_{0}\alpha_{1}=0\).
The other choice, \(b_{4,1}=0\), is a possibility and in this case indeed we have the exponents \((0,\frac{4}{3})\). As noted at the end of sub-section 3.2, when the lower of the two exponents is 0, it means we cannot extract some positive power of \(j\) from the solution and still get a sensible expansion in powers of \(j\). Thus such solutions, if they exist, would be non-factorisable.
In fact we have already shown that they do exist. In Section 4.3, we have listed the values of \(p_{1}\) and \(b_{4,1}\) for all admissible solutions of the \((2,6)\) MLDE as functions of the degeneracy \(m_{1}\) of the first excited state in the identity character. In Eqs. (4.39, 4.40) we see that there is a common value \(m_{1}=3538\) such that \(p_{1}\) and \(b_{4,1}\) both vanish keeping their ratio fixed. This
is the unique solution with indices \((\alpha_{0}^{(\rho)},\alpha_{1}^{(\rho)})=\left(0,\frac{4}{3}\right)\) within this family. A similar situation holds for each one of the subsequent pairs, Eqs (4.43, 4.44), (4.47, 4.48), (4.51, 4.52), (4.55, 4.56), (4.59, 4.60), (4.63, 4.64), (4.67, 4.68), (4.71, 4.72) - for each case, there is a unique value of \(m_{1}\) that makes both \(p_{1}\) and \(b_{4,1}\) vanish together. This, then, is the full list of admissible solutions with indices \(\left(0,\frac{4}{3}\right)\).
The other alternative is that the exponents are \((\frac{1}{3},1)\). These arise in the same solutions listed in the previous paragraph, by choosing \(m_{1}\) to be the value that makes \(p_{1}\) vanish but \(b_{4,1}\neq 0\). For example in Eqs (4.39, 4.40) this value is 658. Each of the other cases is similar.
Inserting this in Eq. (6.3) leads to the constraint:
\[b_{4,1}=\frac{576}{\alpha_{0}\alpha_{1}} \tag{6.5}\]
The reader may verify that this equation agrees with the values obtained as in the previous paragraph, by making \(p_{1}\) vanish with \(b_{4,1}\neq 0\) in each of our explicit solutions.
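For instance, in the first family (\(c=\frac{122}{5}\), Eqs. (4.39)-(4.40)) the value \(m_{1}=658\) gives \(p_{1}=0\) and
\[b_{4,1}=-\frac{(658-354898)(658-3538)}{366\,(658+244)}=-\frac{2073600}{671},\]
which indeed equals \(\frac{576}{\alpha_{0}\alpha_{1}}\) with \(\alpha_{0}\alpha_{1}=\frac{c(20-c)}{576}=-\frac{671}{3600}\) at \(c=\frac{122}{5}\).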
Now we come to a key point. Since the lower of the two exponents has shifted from \(0\) (when \(p_{1}\) was a generic point) to \(\frac{1}{3}\) (after \(p_{1}\) goes to \(0\)) we can make the change of variable:
\[\chi_{i}(j)=j^{\frac{1}{3}}\zeta_{i}(j) \tag{6.6}\]
where the function \(\zeta(j)\) has a sensible expansion in power of \(j\). Indeed, we find that \(\zeta\) satisfies the \((2,2)\) MLDE, the middle equation of Eq. (3.25) with \(r=0\). Recall that the parameters in that equation are \(\alpha_{0},\alpha_{1}\), the exponents around \(\tau\to\infty\) (not to be confused with the \(\alpha_{i}^{(\rho)}\) above!). We find the relation:
\[(\alpha_{0}\alpha_{1})^{\ell=6}=(\alpha_{0}\alpha_{1})^{\ell=2}+\frac{1}{6} \tag{6.7}\]
In fact from Eq. (6.6) we already know the exponents of the solution \(\zeta(j)\) must be:
\[\alpha_{i}^{\ell=2}=\alpha_{i}^{\ell=6}+\frac{1}{3} \tag{6.8}\]
and using the valence formula on both sides, it is easy to check that Eq. (6.7) agrees with this.
In this case we can come to the same conclusion by solving the accessory equation Eq. (3.33) as \(p_{1}\to 0\). One solution is \(b_{4,1}=0\) and the other is:
\[b_{4,1}=\frac{576}{\alpha_{0}\alpha_{1}} \tag{6.9}\]
Inserting this into the MLDE in terms of \(\tau\):
\[\left(D^{2}+\frac{1}{3}\frac{E_{6}}{E_{4}}D+\frac{(\alpha_{0}\alpha_{1}-\frac {1}{6})E_{4}^{3}+(576-\alpha_{0}\alpha_{1}\,b_{4,1})\Delta}{E_{4}^{2}}\right) \zeta(\tau)=0 \tag{6.10}\]
and performing the change of dependent variable:
\[\chi(\tau)=j^{\frac{1}{3}}\,\zeta(\tau) \tag{6.11}\]
we get the \((2,2)\) MLDE:
\[\left(D^{2}+\frac{1}{3}\frac{E_{6}}{E_{4}}D+\left(\alpha_{0}\alpha_{1}-\frac{1}{6}\right)E_{4}\right)\zeta(\tau)=0 \tag{6.12}\]
In this equation we see the relation \(\left(\alpha_{0}\alpha_{1}\right)^{\ell=2}=\left(\alpha_{0}\alpha_{1}\right)^ {\ell=6}-\frac{1}{6}\).
Thus we learn that, in the cases where the lower of the critical exponents is nonzero, sending the movable pole to the point \(p_{1}=0\) causes the solution to factorise into a product of solutions of an MLDE with lower value of \(\ell\) (in this case a pair of characters with \(\ell=2\)) times a single meromorphic character \(j^{\frac{1}{3}}\) which also has \(\ell=2\). A priori this may not seem like a "merger" of poles since there was no pole at \(p_{1}=0\) to begin with. However it does count as a merger because a single pole at \(\tau=\rho\) is three times the minimum allowed pole at that point.
As we will see later, the reason we could simply take the limit in the accessory equation like Eq. (103) is that this limit does not create any new constraint, which in turn is because the new exponents do not differ from each other by an integer. Below we will see examples where merging of poles leads to new exponents that differ by an integer and consequently a novel constraint equation arises. In such cases, merging the poles in the original constraint equation can give incorrect results. Instead one has to start afresh from the MLDE where the poles have merged.
Let us now consider factorised characters of the form \(\chi^{(\ell=6)}=j^{\frac{1}{3}}\chi^{(\ell=2)}\). Using the \(\ell=6\) and \(\ell=2\) valence formula Eq. (16) on the left and right of this relation respectively, and replacing everything in terms of the central charges, we get the following possibilities:
\[c^{(\ell=6)}=12-c^{(\ell=2)},\qquad\qquad c^{(\ell=6)}=c^{(\ell=2)}+8 \tag{6.13}\]
We know from [28] that admissible solutions for the \((2,2)\) case lie in the range \(16<c^{(\ell=2)}<24\). Thus, admissibility of \((2,6)\) solutions rules out the first case in Eq. (6.13). Then from the second equation above, we have \(24<c^{(\ell=6)}<32\). We already know the allowed central charges for admissible \((2,6)\) solutions from Sec. 3.6. Thus we see that tensor-product \((2,6)\) CFTs span exactly the same range and not a subset of it. We will encounter examples later where the product theories occupy a smaller range of central charges.
We will now look at some more examples that present different features, and then turn to the general case.
### \((2,6)\) solutions as \(p_{1}\to 1728\)
Let us now investigate what happens to the \((2,6)\) solutions if we set \(p_{1}=1728\). This corresponds to the point \(\tau=i\) in the upper half plane. Taking this limit in Eq. (3.28) we get:
\[\partial_{j}^{2}\chi(j)+\bigg{(}\frac{2}{3j}-\frac{1}{2(j-1728)}\bigg{)} \partial_{j}\chi(j)+\frac{\alpha_{0}\alpha_{1}(j-b_{4,1})}{j(j-1728)^{2}}\chi (j)=0. \tag{6.14}\]
Let us now insert the following expansion in the MLDE Eq. (6.14):
\[\chi_{l}(j)=(j-1728)^{\alpha_{l}^{(i)}}\sum_{k=0}^{\infty}a_{l,k}^{(i)}\,j^{k} \tag{6.15}\]
We find the indicial equation to be,
\[\alpha_{l}^{(i)}(\alpha_{l}^{(i)}-1)-\frac{1}{2}\alpha_{l}^{(i)}+\alpha_{0} \alpha_{1}\left(1-\frac{b_{4,1}}{1728}\right)=0 \tag{6.16}\]
The indicial equation tells us that,
\[\alpha_{0}^{(i)}+\alpha_{1}^{(i)}=\frac{3}{2} \tag{6.17}\]
One could have already deduced this fact from Eq. (A.4), since after taking \(p_{1}\to 1728\) we have \(\ell_{i}=6\). Now the \(\alpha^{(i)}\)s must be distinct non-negative multiples of \(\frac{1}{2}\). Hence, the possible solutions to the above equation are \((0,\frac{3}{2})\) or \((\frac{1}{2},1)\).
The exponents \((0,\frac{3}{2})\) correspond to either \(\alpha_{0}\alpha_{1}=0\) or \(b_{4,1}=1728\). The former can be ruled out since in this case the solution space is 1-dimensional, as argued in the previous sub-section. However \(b_{4,1}=1728\) is a possibility and in this case indeed we have the exponents \((0,\frac{3}{2})\). Solutions with these exponents, if they exist, would be non-factorisable. One can see this by following similar arguments of regularity of the solution around \(\tau=i\) as described in the previous sub-section.
The other alternative is that the exponents are \((\frac{1}{2},1)\). Inserting this in Eq. (6.16) leads to the constraint:
\[b_{4,1}=\frac{864\,(2\alpha_{0}\alpha_{1}-1)}{\alpha_{0}\alpha_{1}} \tag{6.18}\]
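Indeed, inserting \(\alpha_{i}^{(i)}=\frac{1}{2}\) into Eq. (6.16) gives

\[\frac{1}{2}\Big(\frac{1}{2}-1\Big)-\frac{1}{2}\cdot\frac{1}{2}+\alpha_{0}\alpha_{1}\Big(1-\frac{b_{4,1}}{1728}\Big)=0\quad\Longrightarrow\quad b_{4,1}=1728\Big(1-\frac{1}{2\alpha_{0}\alpha_{1}}\Big)=\frac{864\,(2\alpha_{0}\alpha_{1}-1)}{\alpha_{0}\alpha_{1}}.\]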
As noted in the previous sub-section, since the lower of the two exponents has shifted from 0 (when \(p_{1}\) was a generic point) to \(\frac{1}{2}\) (after \(p_{1}\) goes to 1728) we can make the following change of variable:
\[\chi(j)=(j-1728)^{\frac{1}{2}}\zeta(j) \tag{6.19}\]
where the function \(\zeta(j)\) is regular around \(\tau=i\). Furthermore, we find that \(\zeta\) satisfies the \((2,0)\) MLDE and this yields the following relation,
\[(\alpha_{0}\alpha_{1})^{\ell=6}=(\alpha_{0}\alpha_{1})^{\ell=0}+\frac{1}{6} \tag{6.20}\]
In fact from Eq. (6.19) we already know the exponents of the solution \(\zeta(j)\) must be:
\[\alpha_{i}^{\ell=0}=\alpha_{i}^{\ell=6}+\frac{1}{2} \tag{6.21}\]
and using the valence formula on both sides, it is easy to check that Eq. (6.20) agrees with this.
Thus, as seen before in the previous sub-section, in the cases where the lower of the critical exponents is nonzero, sending the movable pole to the point \(p_{1}=1728\) causes the solution to factorise into a product of solutions of an MLDE with lower value of \(\ell\) (in this case a pair of characters with \(\ell=0\)) times \((j-1728)^{\frac{1}{2}}\) which is a solution to the first order MLDE with \(\ell=3\).
In this case also, we can come to the same conclusions as above by solving the accessory equation Eq. (3.33) as \(p_{1}\to 1728\). This is because, as before, taking \(p_{1}\to 1728\) in Eq. (3.33) doesn't create any new constraint since the new exponents about \(\tau=i\) do not differ by an integer.
Now we make a comment about the factorised case: \(\chi^{(\ell=6)}=(j-1728)^{\frac{1}{2}}\chi^{(\ell=0)}\), analogous to the corresponding discussion in sub-section 6.1. Using the valence formula in Eq. (6.20) and replacing every exponent in terms of central charges we get:
\[c^{(\ell=6)}=8-c^{(\ell=0)},\qquad c^{(\ell=6)}=12+c^{(\ell=0)} \tag{6.22}\]
Each of these conditions, together with the known range for admissible \(c^{(\ell=0)}\) solutions, implies that \(c^{(\ell=6)}<20\) for such a factorised solution. However admissible \((2,6)\) solutions have \(c^{(\ell=6)}>24\), therefore there are none with this factorised form.
### \((2,12)\) solutions as \(p_{1}\to p_{2}\)
Next we consider the case where two movable poles coalesce but remain away from the points \(0\) and \(1728\). This possibility arises for the first time for the \((2,12)\) MLDE. So we put \(p_{2}=p_{1}\) in this equation, to get:
\[\partial_{j}^{2}\chi+\left(\frac{1}{2(j-1728)}+\frac{2}{3j}-\frac{2}{j-p_{1}}\right)\partial_{j}\chi+\frac{\alpha_{0}\alpha_{1}}{j(j-1728)}\frac{(j-b_{4,1})(j-b_{4,2})}{(j-p_{1})^{2}}\,\chi=0. \tag{6.23}\]
Now we expand the characters about \(j=p_{1}\):
\[\chi_{i}(j)=(j-p_{1})^{\alpha_{i}^{(1)}}\sum_{k=0}^{\infty}a_{i,k}^{(1)}(j-p_{1})^{k}, \tag{6.24}\]
Due to the double pole, the last term in Eq. (6.23) contributes to the indicial equation, which becomes:
\[\alpha_{i}^{(1)}(\alpha_{i}^{(1)}-3)+\frac{\alpha_{0}\alpha_{1}(p_{1}-b_{4,1})(p_{1}-b_{4,2})}{p_{1}(p_{1}-1728)}=0 \tag{6.25}\]
Again we cannot read off the exponents directly, but from the above equation we have:
\[\alpha_{0}^{(1)}+\alpha_{1}^{(1)}=3, \tag{6.26}\]
This in fact already follows from Eq. (A.6), since after taking \(p_{2}\to p_{1}\) we have \(\ell_{\tau}=12\). Since \(p_{1}\) is a regular point in moduli space, \(\alpha_{0}^{(1)},\alpha_{1}^{(1)}\) must be distinct non-negative integers, so the only possibilities are \((0,3)\) and \((1,2)\). The former leads to either \(\alpha_{0}\alpha_{1}=0\) or \(p_{1}=b_{4,1}\) or \(p_{1}=b_{4,2}\).
With \(\alpha_{0}\alpha_{1}=0\), as noted before, we get the solution space to be 1-dimensional. Thus, we can only have the exponents \((0,3)\) if \(p_{1}=p_{2}=b_{4,1}\) or \(p_{1}=p_{2}=b_{4,2}\). At such points, one factor cancels from the numerator and denominator of the \((2,12)\) MLDE. Thus we get the equation:
\[\partial_{j}^{2}\chi(j)+\bigg{(}\frac{1}{2(j-1728)}+\frac{2}{3j}-\frac{2}{j-p_{1}}\bigg{)}\partial_{j}\chi(j)+\frac{\alpha_{0}\alpha_{1}(j-b_{4,2})}{j(j-1728)(j-p_{1})}\chi(j)=0 \tag{6.27}\]
In this case there is a constraint equation at third order, which is left as an exercise for the reader.
Now we return to the other possible set of exponents, namely \((\alpha_{0}^{(1)},\alpha_{1}^{(1)})=(1,2)\). From Eq. (6.25) we then immediately find the condition:
\[\frac{\alpha_{0}\alpha_{1}(p_{1}-b_{4,1})(p_{1}-b_{4,2})}{p_{1}(p_{1}-1728)}=2 \tag{6.28}\]
Because the indices now differ by 1 (rather than 2 in the generic case), there is a potential logarithmic singularity in the character \(\chi_{0}(j)\) manifested by a constraint arising at first order beyond the indicial equation (as against second order in the generic case). The mechanism has been discussed before - at this order the coefficient \(a_{0,1}\) will not appear and instead we will get a constraint.
From the MLDE Eq. (6.23), this constraint is found to be:
\[\frac{p_{1}}{2}+\frac{2(p_{1}-1728)}{3}-2(2p_{1}-1728)+\alpha_{0}\alpha_{1}(2p_{1}-B_{1})=0, \tag{6.29}\]
which is written in terms of the symmetric polynomials \(B_{1}=b_{4,1}+b_{4,2}\) and \(B_{2}=b_{4,1}b_{4,2}\).
In this basis, we can write Eq. (6.28) as:
\[\frac{\alpha_{0}\alpha_{1}}{p_{1}(p_{1}-1728)}(p_{1}^{2}-B_{1}p_{1}+B_{2})=2, \tag{6.30}\]
Now we can solve for \(B_{1}\) and \(B_{2}\) using Eq. (6.29) and Eq. (6.30) to get:
\[B_{1} =\frac{13824-17p_{1}+12\alpha_{0}\alpha_{1}\,p_{1}}{6\alpha_{0} \alpha_{1}}, \tag{6.31}\] \[B_{2} =\frac{p_{1}(6\alpha_{0}\alpha_{1}\,p_{1}-5p_{1}-6912)}{6\alpha_{ 0}\alpha_{1}}, \tag{6.32}\]
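As a quick consistency check, substituting these expressions back into Eq. (6.30) one finds

\[\alpha_{0}\alpha_{1}\big(p_{1}^{2}-B_{1}p_{1}+B_{2}\big)=\alpha_{0}\alpha_{1}p_{1}^{2}-\frac{p_{1}\big(13824-17p_{1}+12\alpha_{0}\alpha_{1}p_{1}\big)}{6}+\frac{p_{1}\big(6\alpha_{0}\alpha_{1}p_{1}-5p_{1}-6912\big)}{6}=2p_{1}(p_{1}-1728),\]

so Eq. (6.30) is indeed reproduced; an identical substitution verifies Eq. (6.29).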
We now show that the solution is factorised, with one factor being a meromorphic character and the other being a solution (not necessarily admissible) of the \((2,0)\) MLDE. Since the lower exponent is \(1\), we substitute:
\[\chi(j)=(j-p_{1})\zeta(j) \tag{6.33}\]
in Eq. (6.23) to get:
\[(j-p_{1}) \Bigg{[}\,\partial_{j}^{2}\zeta+\bigg{(}\frac{1}{2(j-1728)}+\frac {2}{3j}\bigg{)}\partial_{j}\zeta+\bigg{(}\frac{1}{2(j-1728)(j-p_{1})} \tag{6.34}\] \[\qquad+\frac{2}{3j(j-p_{1})}-\frac{2}{j(j-p_{1})^{2}}+\frac{ \alpha_{0}\alpha_{1}}{j(j-1728)}\frac{j^{2}-B_{1}j+B_{2}}{(j-p_{1})^{2}} \bigg{)}\zeta\,\Bigg{]}=0.\]
On inserting the values of \(B_{1}\) and \(B_{2}\) from Eq. (6.31) and Eq. (6.32), Eq. (6.34) simplifies to:
\[(j-p_{1})\Bigg{[}\partial_{j}^{2}\zeta+\bigg{(}\frac{1}{2(j-1728)}+\frac{2}{3j }\bigg{)}\partial_{j}\zeta+\frac{1}{j(j-1728)}\bigg{(}\alpha_{0}\alpha_{1}- \frac{5}{6}\bigg{)}\zeta\Bigg{]}=0 \tag{6.35}\]
which means \(\zeta(j)\) solves the \((2,0)\) MLDE if we identify:
\[(\alpha_{0}\alpha_{1})^{\ell=12}=(\alpha_{0}\alpha_{1})^{\ell=0}+\frac{5}{6}. \tag{6.36}\]
The above equation holds for factorised \((2,12)\) solutions of the form: \(\chi^{(\ell=12)}=(j-p_{1})\chi^{(\ell=0)}\). Now let us comment on tensor-product \((2,12)\) CFTs of the above factorised form. Using the valence formula in Eq. (6.36) and writing everything in terms of central charges, we get: \(c^{(12)}=20-c^{(0)}\) or \(c^{(12)}=c^{(0)}+24\). Since the admissible central charge range of \((2,0)\) solutions is: \(0<c^{(\ell=0)}<8\) and unitarity implies \(c^{(12)}>22\), the first possibility gets ruled out. So, we must have: \(c^{(12)}=c^{(0)}+24\) implying \(24<c^{(\ell=12)}<32\), for tensor-product \((2,12)\) CFTs. So, again we find an example where the central charge range for tensor-product solutions lie in a smaller range compared to the full admissible range.
In this sub-section, we have shown that when two movable poles \(p_{1},p_{2}\) coincide with each other (but not with an accessory parameter), the solutions of the MLDE factorise into a product of a meromorphic character and a pair of characters \(\zeta_{i}(j)\) satisfying an MLDE with \(\ell=0\). As we already discussed in sub-section 3.2, this factorisation can mean one of two
things for an admissible character \(\chi_{i}(j)\): either \(\zeta_{i}(j)\) is itself an admissible character, or \(\zeta_{i}(j)\) is not admissible but becomes admissible upon multiplying by \((j-p_{1})\).
To exemplify the above considerations, consider the example of \((2,12)\) characters studied in Section 5, Eq. (102). The conditions on the parameters \(N_{1},N_{2}\) are in Eq. (103). Now when \(N_{2}=2\), we see that the parameter \(A\) in Eq. (104) vanishes and the poles \(p_{1},p_{2}\) merge. Now the equations Eq. (105) and Eq. (106) become:
\[\chi_{n=4;0}^{A_{1}}+N_{1}\,\chi_{n=0;0}^{A_{1}}+2\,\chi_{n=-4;0}^{ A_{1}}\] \[= q^{-\frac{25}{24}}\left(1+(-245+N_{1})q+(196144+3\,N_{1})q^{2}+( 22083427+4\,N_{1})q^{3}+(929567063+7\,N_{1})q^{4}+\ldots\right)\] \[\chi_{n=4;1}^{A_{1}}+N_{1}\,\chi_{n=0;1}^{A_{1}}+2\,\chi_{n=-4;1} ^{A_{1}}\] \[= q^{-\frac{19}{24}}\left(2+2(-247+N_{1})q+2(196639+N_{1})q^{2}+6 (7229968+N_{1})q^{3}+(1772766794+8\,N_{1})q^{4}+\ldots\right)\]
It is easily verified that the above two equations are in the form:
\[\chi_{n=4}^{A_{1}}+N_{1}\,\chi_{n=0}^{A_{1}}+2\,\chi_{n=-4}^{A_{1}}=(j+N_{1}- 992)\,\chi_{n=0}^{A_{1}}, \tag{108}\]
thus they are indeed of the factorised form \(\chi_{i}=(j-p_{1})\,\zeta_{i}\) discussed above.
### Analogous considerations for \((2,8)\) and \((2,14)\) cases
In this sub-section, we shall first take \(p_{1}\to 0\) and \(p_{1}\to 1728\) in the \((2,8)\) case, and then take \(p_{2}\to p_{1}\) in the \((2,14)\) case. We shall see that, as a result of this procedure, the equations and expressions that come out will be very similar to the \((2,6)\) and \((2,12)\) cases discussed in the previous sub-sections. So we shall focus on the main results and shall be very brief about the intermediate steps.
The \((2,8)\) MLDE is the middle equation of Eq. (3.25) with \(r=1\). The indicial equation obtained after taking \(p_{1}\to 0\) in this equation is the same as in the \((2,6)\) case, with the second term being \(\frac{2}{3}\) instead of \(\frac{1}{3}\). Thus the sum of roots now becomes \(\alpha_{0}^{(\rho)}+\alpha_{1}^{(\rho)}=\frac{5}{3}\). So the possible choices of exponents are:
\[(\alpha_{0}^{(\rho)},\alpha_{1}^{(\rho)})=\Big{(}0,\frac{5}{3}\Big{)},\ \Big{(}\frac{1}{3},\frac{4}{3}\Big{)},\ \Big{(}\frac{2}{3},1\Big{)} \tag{109}\]
The first of these cases corresponds to, as before, \(b_{4,1}=0\) and non-factorised characters. Regarding the second case, we notice that the difference between the two critical exponents is an integer. This solution is ruled out, as already observed in [27], because the monodromy about \(\tau=\rho\) would become reducible. The third case allows us to write \(\chi(j)=j^{\frac{2}{3}}\zeta(j)\) with \(\zeta(j)\) having \(\ell=0\), and in this case, \(\alpha_{0}\alpha_{1}b_{4,1}=1152\). One can show that \(\zeta(j)\) satisfies a \((2,0)\)
MLDE and this in turn gives the following relation,
\[\left(\alpha_{0}\alpha_{1}\right)^{\ell=8}=\left(\alpha_{0}\alpha_{1} \right)^{(\ell=0)}+\frac{1}{3}. \tag{108}\]
Eq. (108) is true for factorised \((2,8)\) solutions of the form: \(\chi^{(\ell=8)}=j^{\frac{2}{3}}\chi^{\ell=0}\). Using the valence formula in Eq. (108) and expressing everything in terms of central charges, we obtain the following two possibilities,
\[c^{(\ell=8)}=12-c^{(\ell=0)},\qquad c^{(\ell=8)}=16+c^{(\ell=0)} \tag{109}\]
Now unitarity implies \(h^{(\ell=8)}=\frac{c^{(\ell=8)}-14}{12}>0\). Thus \(c^{(\ell=8)}>14\) and this rules out the first possibility. Hence, for such factorised solutions, we have \(c^{(\ell=8)}=16+c^{(\ell=0)}\). Since the admissibility range of \((2,0)\) solutions is \(0<c^{(\ell=0)}<8\), we conclude that tensor-product \((2,8)\) CFTs of the form \(\chi^{(\ell=8)}=j^{\frac{2}{3}}\chi^{\ell=0}\) lie in the range \(16<c^{(\ell=8)}<24\). Recall from Eq. (3.60) that the admissibility range for \((2,8)\) solutions is \(16<c^{(\ell=8)}<24\) and \(40<c^{(\ell=8)}<48\). So this is an example where the central charge range for tensor-product solutions lies in a smaller range compared to the full admissible range. This is in contrast to the tensor-product \((2,6)\) CFT case.
Next we consider \(p_{1}\to 1728\) in the \((2,8)\) MLDE. The analysis parallels that of the \((2,6)\) case with very slight modifications. In this case, the indicial equation remains the same and hence the choice of exponents also remains the same: \((0,\frac{3}{2})\) or \((\frac{1}{2},1)\). The first choice leads to \(b_{4,1}=1728\) and to non-factorised solutions, while the second choice leads to solutions of the form: \(\chi=(j-1728)^{\frac{1}{2}}\zeta(j)\). \(\zeta(j)\) solves the \((2,2)\) MLDE which in turn leads to the following relation,
\[\left(\alpha_{0}\alpha_{1}\right)^{\ell=8}=\left(\alpha_{0} \alpha_{1}\right)^{\ell=2}+\frac{1}{3}, \tag{110}\]
Let us make a comment about the factorised case: \(\chi^{(\ell=8)}=(j-1728)^{\frac{1}{2}}\chi^{(\ell=2)}\). Using the valence formula in Eq. (110) and replacing every exponent in terms of central charges we get:
\[c^{(\ell=8)}=16-c^{(\ell=2)},\qquad c^{(\ell=8)}=12+c^{(\ell=2)}. \tag{111}\]
If we consider admissible \((2,8)\) solutions, for which \(c^{(\ell=8)}>14\), then the first case is ruled out. The second case implies \(28<c^{(\ell=8)}<36\). However, from the central charge range of \((2,8)\) admissible solutions, Eq. (3.60), we know there are no admissible solutions in the above range. Hence we conclude that there are no admissible solutions of the above factorised form.
Now we consider coalescing of poles \(p_{2}\to p_{1}\) in the \((2,14)\) MLDE. This analysis parallels the \(p_{2}\to p_{1}\) case in the \((2,12)\) MLDE with slight modifications. The \((2,14)\) MLDE obtained
after taking \(p_{2}\to p_{1}\) is similar to Eq. (6.23), with the only difference being in the second term inside the coefficient of the \(\partial_{j}\chi\) term: it is now \(\frac{1}{3j}\) instead of \(\frac{2}{3j}\).
The indicial equation obtained from this is the same as in the \((2,12)\) case and hence the choice of exponents remains the same: \((0,3)\) or \((1,2)\). The first choice leads to \(p_{1}=p_{2}=b_{4,1}\) or \(p_{1}=p_{2}=b_{4,2}\) and to non-factorised solutions. Considering the second choice of exponents we recover the condition Eq. (6.28). As explained before, since the indices now differ by \(1\) we get a constraint equation at first order beyond the indicial equation. Using the symmetric parameters \(B_{1}\) and \(B_{2}\) defined earlier, this constraint equation becomes,
\[\frac{p_{1}}{2}+\frac{p_{1}-1728}{3}-2(2p_{1}-1728)+\alpha_{0}\alpha_{1}(2p_{1 }-B_{1})=0, \tag{112}\]
Using Eq. (6.28) and Eq. (112), we can solve for \(B_{1}\) and \(B_{2}\), as before, in terms of \(\alpha_{0}\alpha_{1}\) and the movable pole \(p_{1}\).
Note that, since the lower exponent, at \(j=p_{1}\), is \(1\) we can have the following substitution:
\[\chi(j)=(j-p_{1})\zeta(j) \tag{113}\]
which we can plug in the \((2,14)\) MLDE (with \(p_{2}\to p_{1}\)) to get,
\[\begin{split}(j-p_{1})&\Bigg{[}\,\partial_{j}^{2} \zeta+\bigg{(}\frac{1}{2(j-1728)}+\frac{1}{3j}\bigg{)}\partial_{j}\zeta+ \bigg{(}\frac{1}{2(j-1728)(j-p_{1})}\\ &\qquad+\frac{1}{3j(j-p_{1})}-\frac{2}{j(j-p_{1})^{2}}+\frac{ \alpha_{0}\alpha_{1}}{j(j-1728)}\frac{j^{2}-B_{1}j+B_{2}}{(j-p_{1})^{2}} \bigg{)}\zeta\,\Bigg{]}=0.\end{split} \tag{114}\]
On inserting the values of \(B_{1}\) and \(B_{2}\) obtained above, Eq. (114) simplifies to:
\[(j-p_{1})\Bigg{[}\partial_{j}^{2}\zeta+\bigg{(}\frac{1}{2(j-1728)}+\frac{1}{3 j}\bigg{)}\partial_{j}\zeta+\frac{1}{j(j-1728)}\bigg{(}\alpha_{0}\alpha_{1}- \frac{7}{6}\bigg{)}\zeta\Bigg{]}=0 \tag{115}\]
which means \(\zeta(j)\) solves the \((2,2)\) MLDE if we identify:
\[(\alpha_{0}\alpha_{1})^{\ell=14}=(\alpha_{0}\alpha_{1})^{\ell=2}+\frac{7}{6}. \tag{116}\]
The above equation holds for factorised \((2,14)\) solutions of the form: \(\chi^{(\ell=14)}=(j-p_{1})\chi^{(\ell=2)}\). Now let us make a comment on tensor-product \((2,14)\) CFTs of the above factorised form. Using the valence formula in Eq. (116) and writing everything in terms of central charges, we get: \(c^{(14)}=28-c^{(2)}\) or \(c^{(14)}=c^{(2)}+24\). Since the admissible central charge range of \((2,2)\) solutions is: \(16<c^{(\ell=2)}<24\) and unitarity implies \(c^{(14)}>26\), the first possibility gets ruled out. So, we must have: \(c^{(14)}=c^{(2)}+24\) implying \(40<c^{(\ell=14)}<48\), for tensor-product \((2,14)\) CFTs. Here again we find an example where the central charge range for tensor-product solutions lie in a smaller range compared to the full admissible range.
### Merging of movable poles in the general case
Here we examine the general case, namely \(\ell=6r,6r+2,6r+4\) for arbitrarily large values of \(r\), the number of movable poles. Now from the poles \(p_{1},p_{2},\cdots p_{r}\), we send \(p_{r}\to 0\), or \(p_{r}\to 1728\), or \(p_{r}\to p_{r-1}\). In each of these situations, the remaining poles \(p_{1},p_{2},\cdots,p_{r-1}\) are held fixed.
The exponents around the special points \(p_{r}=0,1728\) and around \(p_{r}=p_{r-1}\) can now be derived by inspection of the general MLDEs Eq. (3.2). But in fact there is a simpler way to find the same results. The behaviour at special points for \(\ell=0,2\) and at a generic isolated pole has long been understood (starting with [27]) and we have listed the possible exponents in Eqs. (3.23, 3.24, 3.29). Now when a pole merges with any of the above, we need to add 1 to either of the exponents, keeping the lower one distinct from the upper and also not allowing an integral difference between the exponents in the case of the points \((0,1728)\). Moreover, at these two points the added exponent of 1 can be broken up into multiples of \(\frac{1}{3}\) or \(\frac{1}{2}\) respectively. The result is as follows:
| \(\ell\) | Limit | Exponents |
|---|---|---|
| \(6r\) | \(p_{r}\to 0\) | \((0,\frac{4}{3}),(\frac{1}{3},1)\) |
| | \(p_{r}\to 1728\) | \((0,\frac{3}{2}),(\frac{1}{2},1)\) |
| | \(p_{r}\to p_{r-1}\) | \((0,3),(1,2)\) |
| \(6r+2\) | \(p_{r}\to 0\) | \((0,\frac{5}{3}),(\frac{2}{3},1)\) |
| | \(p_{r}\to 1728\) | \((0,\frac{3}{2}),(\frac{1}{2},1)\) |
| | \(p_{r}\to p_{r-1}\) | \((0,3),(1,2)\) |
| \(6r+4\) | \(p_{r}\to 0\) | \((\frac{1}{3},\frac{5}{3}),(\frac{2}{3},\frac{4}{3})\) |
| | \(p_{r}\to 1728\) | \((0,\frac{3}{2}),(\frac{1}{2},1)\) |
| | \(p_{r}\to p_{r-1}\) | \((0,3),(1,2)\) |
From the exponents we learn whether, and what, we can factorise from the solution. If the lower exponent is 0 we have a non-factorisable solution, while if it is \(\frac{1}{3},\frac{2}{3},1\) we can factorise \(j^{\frac{1}{3}},j^{\frac{2}{3}},(j-p_{r-1})\) respectively. In particular this means that if \(p_{r}\to p_{r-1}\) then the character \(\zeta\) after extracting \((j-p_{r-1})\) loses its dependence on the merged pole entirely.
## 7 Illustrative examples of genuine CFTs
The main focus of this paper has been on finding admissible solutions to MLDE with \(\ell\geq 6\). In general one expects that some, though not all, of the admissible solutions will be actual CFTs. Completely classifying these is a major project, perhaps an unachievable one, but one
may check whether at least some illustrative CFTs can be found for each of the classes we have considered. We do this here.
There are two cases for which CFTs are already known: the \((2,6)\) and \((2,8)\) MLDEs with the movable pole away from the boundary of moduli space. For the \((2,6)\) case, this has been done in [30] making use of a relation derived in Eq. (3.6) of [25] that relates three Wronskian indices: \(\mathcal{L}\) for a meromorphic theory which is the numerator in a coset relation, \(\ell\) for the denominator theory and \(\tilde{\ell}\) for the coset theory (see footnote 12). We write the equation for the case of two characters:
Footnote 12: The equation as written in [25] involves \(N\equiv\frac{c}{24}\) where \(c\) is the central charge of the meromorphic theory, but it is easily verified that \(6N=\frac{c}{4}=\mathcal{L}\).
\[\tilde{\ell}=2(\mathcal{L}+1)-6n-\ell \tag{7.1}\]
Here \(n\geq 1\) is an integer labelling the sum of dimensions of the non-trivial primary for the denominator and coset theories. One finds \(n=2\) whenever the coset is of the type where a simple factor of a Kac-Moody algebra is deleted by a corresponding denominator, and \(n=1\) for non-trivial embeddings of Kac-Moody algebras in the numerator. Now, [30] considers \(\mathcal{L}=8\) and \(\ell=0\), corresponding to cosets of a \(c=32\) meromorphic theory, where the embedding is of the "deletion" type with \(n=2\). This results in \(\tilde{\ell}=6\). Nearly 150 CFTs of this form are listed in Appendix A of [30]. These belong to a class with complete Kac-Moody algebras, which means the stress tensor is pure Sugawara with no additional contribution.
For \((2,8)\), a number of CFTs can be found in [18]. Although Wronskian indices are not the main focus of this paper, some of the cosets considered are of meromorphic theories with \(c=24\) (and hence \(\mathcal{L}=6\)) with a non-trivial embedding of the Kac-Moody algebra of a denominator with \(\ell=0\). Thus \(n=1\) and Eq. (7.1) gives \(\tilde{\ell}=8\). Several theories of this type are included in Table 1 of that paper. It may be mentioned that the method used in that work only reproduces theories in the range \(c<25\). However for the \((2,8)\) case, Eq. (3.60) also allows the range \(c\in(40,48)\). This can potentially be realised with \(\mathcal{L}=12\) and \(n=3\) in Eq. (7.1), but it is not clear if \(n=3\) is allowed for two-character meromorphic cosets, so we leave this question for the future.
Next we move on to the case of \((2,12)\) with generic poles. Eq. (7.1) tells us that this can be realised by a non-trivial embedding of Kac-Moody algebras (with \(n=1\)) in a \(c=32\) meromorphic CFT (for which \(\mathcal{L}=8\)). Such embeddings have not been completely classified, even for the complete KM algebra case, but one can start with the meromorphic CFTs corresponding to the 132 even, unimodular Kervaire lattices [47] and take a coset by non-trivially embedding \(A_{1,1}\) into any of the simple factors of the numerator. This will result in the desired \((2,12)\) CFT and one expects that most of them will satisfy MLDEs with generic
(non-coincident) poles. As an example, take the Kervaire lattice with root system \(A_{9,1}^{2}E_{7,1}^{2}\) and quotient by \(A_{1,1}\), embedding it in \(E_{7,1}\). The quotient theory has central charge \(31\) and algebra \(A_{9,1}^{2}D_{6,1}E_{7,1}\) with \(m_{1}=397\).
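The quoted data can be checked directly: the coset removes one unit of central charge, and \(m_{1}\) is the dimension of the resulting Kac-Moody algebra,

\[c=32-1=31,\qquad m_{1}=\dim\big(A_{9,1}^{2}\,D_{6,1}\,E_{7,1}\big)=2\cdot 99+66+133=397.\]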
Similarly, the coset of a \(c=48\) meromorphic theory by \(A_{1,1}\) with a trivial embedding of the KM algebra (\(n=2\)) will give \(\ell=14\). We are not aware of a classification of even, unimodular lattices of dimension \(48\) even under the restriction of having complete root systems, but one possibility is \(A_{1,1}^{48}\) and the trivial embedding deletes one of the \(48\) factors leaving an \(\ell=14\) CFT with Kac-Moody algebra \(A_{1,1}^{47}\). There will surely be many more examples, most of which should have generic poles.
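For the record, the Wronskian indices quoted in this section all follow from Eq. (7.1):

\[\begin{aligned}(2,6):&\quad\tilde{\ell}=2(8+1)-6\cdot 2-0=6,\\ (2,8):&\quad\tilde{\ell}=2(6+1)-6\cdot 1-0=8,\\ (2,12):&\quad\tilde{\ell}=2(8+1)-6\cdot 1-0=12,\\ (2,14):&\quad\tilde{\ell}=2(12+1)-6\cdot 2-0=14.\end{aligned}\]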
Now we turn to a CFT example for \((2,6)\) with \(p_{1}\to 0\), a limit studied above that effectively corresponds to coincident poles. As we saw in Sub-section 6.1, in this limit there are two possible sets of exponents at \(0\), namely \((0,\frac{4}{3})\) and \((\frac{1}{3},1)\). As explained there, these correspond respectively to characters of non-factorisable and factorisable type.
Within the factorisable or \((\frac{1}{3},1)\) type, we can easily find a set of tensor-product \((2,6)\) CFTs with characters of the form in Eq. (6.6), namely \(\chi_{i}(j)=j^{\frac{1}{3}}\zeta_{i}(j)\) where \(\zeta_{i}(j)\) are the characters of a \((2,2)\) CFT. The latter have central charges \(c\in(16,24)\) and are enumerated in [25]. Since multiplication by \(j^{\frac{1}{3}}\) increases \(c\) by \(8\), the result has central charge \(c\in(24,32)\) consistent with the expected range from Eq. (3.60). The question now is whether we can get a \((2,6)\) CFT with factorised characters where \(\zeta_{i}(j)\) does _not_ represent a CFT, though \(\chi_{i}(j)\) does. This is easily answered. For \(\chi_{i}(j)\) to be admissible, \(\zeta_{i}(j)\) must at least be a quasi-character, as then it is possible for multiplication by \(j^{\frac{1}{3}}\) to turn the negative coefficients positive. However, the central charge associated to \(\zeta\) must still lie in the range \(c\in(16,24)\). But in this range there are no quasi-characters as we can see from Eq. (D.7). We conclude that there are no factorised \((2,6)\) characters other than those of admissible CFT (this is in contrast to the case of \((2,4)\) discussed in sub-section 3.3).
We turn now to the non-factorisable case, where the indices are \((0,\frac{4}{3})\) and the CFT is irreducible. As shown in sub-section 6.1, this case arises when \(p_{1}\) and \(b_{4,1}\) vanish together. As an example, in Eqs. (4.43, 4.44) this happens when \(m_{1}=2875\). Similarly there is a unique \(m_{1}\) that achieves this for all the other cases in sub-section 4.3. Now all non-factorisable examples that arise as cosets of \(32\)d lattices having complete Kac-Moody algebras were listed in Appendix A of [30]. The relation between \(\mathcal{N}\) of that Appendix and the above \(m_{1}\), which we temporarily denote \(m_{1}^{\rm coset}\), is easily seen to be \(m_{1}^{\rm coset}=\mathcal{N}-m_{1}^{\rm denom}\) where \(m_{1}^{\rm denom}\) is the dimension of the KM algebra of the denominator in the coset. With this one finds that in none of the cases can the character with indices \((0,\frac{4}{3})\) be associated with a coset CFT. We do not know of a deep reason why this should be the case.
We move on to \((2,8)\) with \(p_{1}\to 0\). Here the possible indices are \((0,\frac{5}{3})\) and \((\frac{2}{3},1)\). We
again see that factorised solutions corresponding to the latter case are trivially possible and they lie in the sub-range \(c\in(16,24)\). To populate the other sub-range in Eq. (3.60), namely \(c\in(40,48)\), we now have a possibility: consider quasi-characters for \(\ell=0\) in the range \(24<c<32\), for example the \(c=25\) quasi-character in the \(A_{1}\) series. While this is not admissible, multiplying it by \(j^{\frac{2}{3}}\) makes it admissible and it has \(c=41\). In this case, the result is a tensor product of \(E_{8,1}\) times the exotic \(c=33\) theory of [9]. But more general non-tensor-product theories could well exist.
Finally we look at the case of \((2,12)\). In the coincident limit \(p_{2}\to p_{1}\), one can look for factorisable as well as non-factorisable characters. In the factorisable case one simply has \((j-p_{1})\zeta_{i}^{(\ell=0)}\) where \(\zeta_{i}\) is an MMS character and hence lies in the range \(c\in(0,8)\). The result is in the range \(c\in(24,32)\) and will have indices \((1,2)\). Searching for the non-factorisable case with indices \((0,3)\) is less trivial and we leave it for the future.
## 8 Discussion and conclusions
Since this has been a lengthy discussion, let us review the chain of arguments that led to our understanding of MLDEs with movable poles. The first step is to suitably parametrise the MLDE. This was done in Section 2.17 in the \(\tau\) parameter and Section 3 in the \(j\) parameter. The next several steps use the latter form of the equation. Assuming a generic location of poles, we examined the behaviour of solutions around each movable pole and thereby derived a constraint equation (the "accessory equation") relating the accessory parameters to the poles (sub-sections 3.4, 3.5). The accessory equations describe an \(r\)-dimensional sub-manifold of the \(2r\)-dimensional space of poles and accessory parameters (it is an algebraic variety if we rationalise all denominators to make the equations polynomial, though the form of the equations in the general case is simpler without doing so). We then took one of the poles to infinity and were thereby led to the boundary of this sub-manifold. The original accessory equations now reduced to two types of equations: accessory equations for the case with one less movable pole, but with a modified accessory parameter, and an equation that determines the ratio of the accessory parameter and the pole on the boundary. Together, these determined the critical exponents (hence \(c,h\)) of solutions with \(r\) movable poles in terms of solutions with \(r-1\) movable poles. Applying this recursively gave us the allowed central charges for any number of movable poles, displayed in Section 3.6.
Next, in Section 4 we returned to the single-pole MLDE as a function of \(\tau\) and computed the Frobenius solution. Inserting the known allowed values of \(c\) then determined both the characters completely up to an arbitrary integer. One cannot reduce further since it is known [3; 6] that there is exactly one free integer parameter for each movable pole. We were
also able to relate our results precisely with the quasi-character construction of [6] which constructs admissible characters for generic Wronskian index without any use of the corresponding MLDE, and precise agreement was found. An analogous discussion in Section 5 considered the case of two movable poles. In Section 6 we considered what happens when one violates the genericity assumption by merging poles. The equation and solutions remain well-defined when one pair of poles is merged, though they become singular if we simultaneously merge more than two poles. Finally in Section 7 we gave just a few examples of CFTs for the various cases we considered, showing that they are populated by genuine CFTs, and leaving a more detailed analysis for the future.
Our analysis makes it clear that whenever there are movable poles, there is an equal number of free integer parameters in the admissible solutions. This fact has previously been noted in [6], but here we have re-obtained it directly from MLDE. This means there is an infinite set of admissible characters for every \(\ell\geq 6\). However it can be argued that the number of CFTs for a given \(\ell\geq 6\) is finite. For example, a result of [18] implies that every \((2,6)\) CFT with \(c<32\) is a coset of a \(c=32\) meromorphic CFT. At the same time, the allowed central charge range found in Section 3.6 makes it clear that there are no \((2,6)\) admissible characters (hence no CFT) for \(c\geq 32\). So in fact, all \((2,6)\) CFTs are cosets of \(c=32\) meromorphic theories. It is expected that the latter are finite in number (though the number is enormous) which implies that the former number is also finite.
We conclude with a discussion of some open questions. For any number \(n\) of characters, movable poles are present for \(\ell\geq 6\) but the corresponding MLDEs do not appear to have been studied at all. Even for low values like \(n=3,4,5\), the classifications in the literature [2, 10, 13, 14, 15, 22, 25, 26] are all at \(\ell=0\). Things are only slightly better using the alternate approaches of Hecke operators and quasi-characters. Ref. [3] constructed some admissible characters for the \((3,6)\) case as Hecke images of the Ising model characters. Several sets of quasi-characters solving the \((3,0)\) MLDE were found in [8] and their linear combinations were shown to provide admissible characters with \(\ell=6\). A concrete example of a \((3,6)\) CFT was also provided in this reference. Beyond this, the space of admissible characters and CFT for \(n\geq 3\) characters arising from MLDE with movable poles, is essentially unexplored. It should certainly be possible to gain some insights into at least the \((3,6)\) case from the MLDE following the methods used here.
Another space of MLDEs that is largely unexplored is the \((n,0)\) class - with \(n\) characters but no poles. Like the \((2,\ell)\) case studied in the present paper, \((n,0)\) also involves a proliferation of parameters for \(n\geq 6\), however clearly these do not correspond to poles or accessory parameters and one has to find a useful interpretation. Moreover the number of exponents is \(n\), if this is large the analysis may be quite difficult. Nevertheless, as the existing literature
shows, a lot can be learned about RCFT by exploring modular differential equations, and we hope to report on more of the open problems in the future.
## Acknowledgements
AD would like to thank Jishu Das and Naveen Balaji Umasankar for useful discussions on modular forms and MLDEs. He would also like to thank Sigma Samhita for helpful discussions regarding SageMath. CNG thanks Iosif Bena and gratefully acknowledges the hospitality of CEA Saclay where some part of this work was done. CNG also thanks Bobby Acharya, Paolo Creminelli, Atish Dabholkar and gratefully acknowledges the hospitality of the High-Energy Section of the ICTP where some part of this work was done. SM would like to thank the Department of Theoretical Physics at CERN, Geneva for its warm hospitality. JS would like to thank Suresh Govindarajan for valuable discussions. He gratefully acknowledges the hospitality of the School of Physical Sciences at NISER, Bhubaneswar. He would also like to acknowledge support from the Institute Postdoctoral fund of IIT Madras.
## Appendix A Critical indices at the poles
In this Appendix we review the leading behaviour of the character \(\chi(j)\), about various points in the upper half plane, following [27].
About \(\tau=\rho\) we have \(j\to 0\) and the leading behaviour of the characters is parametrised as:
\[\chi_{0} \sim j^{\alpha_{0}^{(\rho)}}\] \[\chi_{1} \sim j^{\alpha_{1}^{(\rho)}}\]
\(\alpha_{0}^{(\rho)}\) and \(\alpha_{1}^{(\rho)}\) must be non-negative multiples of \(\frac{1}{3}\) to ensure regularity of the characters around \(\tau=\rho\). Now let us compute the leading behaviour of the Wronskian \(W(j)\) about \(\tau=\rho\):
\[W(j) \sim j^{\alpha_{0}^{(\rho)}}\left(-j\frac{E_{6}}{E_{4}}\right)\partial_{j}(j^{\alpha_{1}^{(\rho)}})\] \[\sim j^{\alpha_{0}^{(\rho)}+\alpha_{1}^{(\rho)}}\left(\frac{E_{6}}{E_{4}}\right)\] \[\sim j^{\alpha_{0}^{(\rho)}+\alpha_{1}^{(\rho)}-\frac{1}{3}}=j^{\frac{\ell_{\rho}}{6}} \tag{A.1}\]
where we used the fact that \(E_{4}=j^{\frac{1}{3}}\Delta^{\frac{1}{3}}\) and both \(\Delta\) and \(E_{6}\) are non-vanishing at \(\tau=\rho\). Thus, we get:
\[\alpha_{0}^{(\rho)}+\alpha_{1}^{(\rho)}-\frac{1}{3}=\frac{\ell_{\rho}}{6}. \tag{A.2}\]
About \(\tau=i\) we have \(j\to 1728\) and the leading behaviour of characters is parametrised as:
\[\chi_{0} \sim(j-1728)^{\alpha_{0}^{(i)}}\] \[\chi_{1} \sim(j-1728)^{\alpha_{1}^{(i)}}\]
with \(\alpha_{0}^{(i)}\) and \(\alpha_{1}^{(i)}\) being non-negative multiples of \(\frac{1}{2}\). This is to ensure regularity of characters around \(\tau=i\). Now let us compute the leading behaviour of the Wronskian \(W(j)\) about \(\tau=i\):
\[W(j) \sim(j-1728)^{\alpha_{0}^{(i)}}\left(-j\frac{E_{6}}{E_{4}}\right)\partial_{j}(j-1728)^{\alpha_{1}^{(i)}}\] \[\sim(j-1728)^{\alpha_{0}^{(i)}}(j-1728+1728)(j-1728)^{\alpha_{1}^{(i)}-1}\left(\frac{E_{6}}{E_{4}}\right)\] \[\sim(j-1728)^{\alpha_{0}^{(i)}}\left[(j-1728)(j-1728)^{\alpha_{1}^{(i)}-1}+1728(j-1728)^{\alpha_{1}^{(i)}-1}\right]\left(\frac{E_{6}}{E_{4}}\right)\] \[\sim(j-1728)^{\alpha_{0}^{(i)}+\alpha_{1}^{(i)}}\left[1+1728(j-1728)^{-1}\right]\left(\frac{E_{6}}{E_{4}}\right)\]
Noting that \(E_{6}=(j-1728)^{\frac{1}{2}}\Delta^{\frac{1}{2}}\), from this we get:
\[W(j)\sim(j-1728)^{\alpha_{0}^{(i)}+\alpha_{1}^{(i)}-\frac{1}{2}}(E_{4}^{-1}\Delta^{1/2})\sim(j-1728)^{\alpha_{0}^{(i)}+\alpha_{1}^{(i)}-\frac{1}{2}}\sim(j-1728)^{\frac{\ell_{i}}{6}} \tag{A.3}\]
where we used the fact that \(E_{4}\) and \(\Delta\) are finite at \(\tau=i\). Then we have:
\[\alpha_{0}^{(i)}+\alpha_{1}^{(i)}-\frac{1}{2}=\frac{\ell_{i}}{6}. \tag{A.4}\]
Next let us study the leading behaviour of the Wronskian about a movable pole say, \(j=p_{1}\). We parametrise this by:
\[\chi_{0} \sim(j-p_{1})^{\alpha_{0}^{(1)}}\] \[\chi_{1} \sim(j-p_{1})^{\alpha_{1}^{(1)}}\]
with \(\alpha_{0}^{(1)}\) and \(\alpha_{1}^{(1)}\) being non-negative integers to ensure regularity of characters around \(j=p_{1}\). Now the leading behaviour of the Wronskian \(W(j)\) about \(j=p_{1}\) is:
\[W(j)\sim(j-p_{1})^{\alpha_{0}^{(1)}+\alpha_{1}^{(1)}-1}\sim(j-p_{1})^{\frac{\ell_{\tau}}{6}} \tag{A.5}\]
and we find:
\[\alpha_{0}^{(1)}+\alpha_{1}^{(1)}-1=\frac{\ell_{\tau}}{6}. \tag{A.6}\]
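As an illustration of how this relation is used in the main text: in the merged-pole \((2,12)\) analysis of Section 6.3 one has \(\ell_{\tau}=12\), so Eq. (A.6) gives

\[\alpha_{0}^{(1)}+\alpha_{1}^{(1)}=1+\frac{12}{6}=3,\]

which is Eq. (6.26).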
## Appendix B \(\ell\) is even for \(2\)-character solutions
In this appendix we will show that for \(2\)-character solutions, \(\ell\) is always even. This result was first obtained in [27] using monodromy arguments for solutions around \(\tau=i\). It was shown that if \(\ell\) is odd then the monodromy is reducible, implying the solution space becomes one-dimensional and hence is not allowed. Here we will approach the problem in a slightly different way but will arrive at the same conclusion.
Following the general parametrisation of Section 3, we write the \((2,\ell)\) MLDE in the \(j\)-plane for \(\ell=6r+1,6r+3\) and \(6r+5\).
\[\ell=6r+1\colon\ \partial_{j}^{2}\chi(j)+\left[-\sum_{I=1}^{r-1}\frac{1}{j- p_{I}}\right]\partial_{j}\chi(j)+\frac{\alpha_{0}\alpha_{1}}{j^{2}(j-1728)} \frac{\prod\limits_{I=1}^{r}(j-b_{4,I})}{\prod\limits_{I=1}^{r-1}(j-p_{I})} \chi(j)=0.\]
\[\ell=6r+3\colon\ \partial_{j}^{2}\chi(j)+\left[\frac{2}{3j}-\sum_{I=1}^{r} \frac{1}{j-p_{I}}\right]\partial_{j}\chi(j)+\frac{\alpha_{0}\alpha_{1}}{j(j- 1728)}\frac{\prod\limits_{I=1}^{r}(j-b_{4,I})}{\prod\limits_{I=1}^{r}(j-p_{I}) }\chi(j)=0.\]
\[\ell=6r+5\colon\ \partial_{j}^{2}\chi(j)+\left[\frac{1}{3j}-\sum_{I=1}^{r}\frac{1}{j-p_{I}}\right]\partial_{j}\chi(j)+\frac{\alpha_{0}\alpha_{1}}{j(j-1728)}\frac{\prod\limits_{I=1}^{r}(j-b_{4,I})}{\prod\limits_{I=1}^{r}(j-p_{I})}\chi(j)=0 \tag{B.1}\]
Comparing the above MLDEs to the ones given in Eq. (3.25), we notice a striking difference, namely the term \(\frac{1}{2(j-1728)}\) is missing in the first-derivative term. We shall see that the absence of this term is crucial to ruling out odd \(\ell\) values.
Suppose we expand the characters around \(\tau=i\) as given in Eq. (3.7). The indicial equation in all the above cases gives \((\alpha_{0}^{(i)},\alpha_{1}^{(i)})=(0,1)\). In fact we could have already found these values from Eq. (A.4). Now the next order is interesting. Due to the absence of \(\frac{1}{2(j-1728)}\) in the linear derivative term, we get no contribution from this term at this order. So for the character with exponent \(\alpha_{0}^{(i)}=0\), we find at this order:
\[\alpha_{0}\alpha_{1}\prod_{I=1}^{r}(1728-b_{4,I})=0, \tag{B.2}\]
The above implies either \(\alpha_{0}\alpha_{1}=0\) or \(b_{4,I}=1728\) for some \(1\leq I\leq r\). The second choice is ruled out since it leads to removal of the pole at \(j=1728\) in the last term. Since \(\frac{1}{2(j-1728)}\) is also absent in the middle term, the MLDE has now no poles about \(j=1728\) and thus the expansion Eq. (3.7) does not make sense. So we must rule out this possibility.
Now let us move on to the other possibility \(\alpha_{0}\alpha_{1}=0\). From the footnote in section 6.1 we know that whenever this happens, the solution space becomes 1-dimensional. So this too is ruled out.
Thus, the above considerations rule out all odd \(\ell\) values for two-character solutions.
## Appendix C Frobenius solutions of MLDEs
### \((2,0)\) and \((2,2)\) MLDE
Here we review the well-established method for recursively solving the MLDE in the cases \(\ell=0,2\). The \((2,0)\) MLDE in the \(\tau\)-plane is given in Eq. (2.21). It is a one-parameter MLDE, the only parameter is the rigid parameter: \(\alpha_{0}\alpha_{1}=-\frac{c(c+4)}{576}\). Its solutions are :
\[\chi_{0}(q)=q^{-\frac{c}{24}}\,\sum_{k=0}^{\infty}m_{k}^{(0)}(c)\,q^{k},\qquad\qquad\chi_{1}(q)=q^{\frac{c+4}{24}}\,\mathsf{D}\,\sum_{k=0}^{\infty}m_{k}^{(0)}(-c-4)\,q^{k}.\] (C.1)
Here \(\mathsf{D}\) is the apparent degeneracy of the non-identity character. The \(m_{k}^{(0)}(c)\)'s are rational functions of the central charge \(c\); the superscript indicates the fact that these belong to the \(l=0\) solution. We give here the first few: we have \(m_{0}^{(0)}(c)=1\) and then for \(k\geq 1\):
\[m_{k}^{(0)}(c)\equiv(-1)^{k}\frac{N_{k}^{(0)}(c)}{D_{k}^{(0)}(c)}.\] (C.2)
The \(D_{k}^{(0)}(c)\) are the denominator polynomials:
\[D_{k}^{(0)}(c)=k!\,\,\Pi_{l=0}^{k-1}(c-10-12l)\] (C.3)
and the \(N_{k}^{(0)}(c)\)'s are the numerator polynomials, of which the first few are:
\[N_{1}^{(0)}(c) = 5c^{2}+22c\] \[N_{2}^{(0)}(c) = 25c^{4}+175c^{3}+508c^{2}+804c\] \[N_{3}^{(0)}(c) = 125c^{6}+975c^{5}+10330c^{4}+68308c^{3}+148872c^{2}+33344c\] \[N_{4}^{(0)}(c) = 625c^{8}+4250c^{7}+136475c^{6}+1359450c^{5}+6793624c^{4}+22169872 c^{3}\] \[+38327216c^{2}+18775968c\] \[N_{5}^{(0)}(c) = 3125c^{10}+12500c^{9}+1464375c^{8}+16026500c^{7}+1629216204c^{6} +1246732800c^{5}\] \[+5241174800c^{4}+12353480000c^{3}+14698399680c^{2}+2755008000c\] \[N_{6}^{(0)}(c) = 15625c^{12}-9375c^{11}+13815625c^{10}+132866875c^{9}+2911676350c^ {8}+32677746940c^{7}\] (C.4) \[+238017546040c^{6}+1317574464400c^{5}+4550303524000c^{4}+800220275 6160c^{3}\] \[+6057775308160c^{2}+2846891980800c\]
We note that \(D_{k}^{(0)}(c)\) is a polynomial of degree \(k\), \(N_{k}^{(0)}(c)\) is a polynomial of degree \(2k\) with the leading coefficient being \(5^{k}\) and vanishing constant term. Examples that will be relevant to the main text are:
\[m_{1}^{(0)} =-\frac{5c^{2}+22c}{c-10}\] \[m_{2}^{(0)} =\frac{25c^{4}+175c^{3}+508c^{2}+804c}{2(c-10)(c-22)} \tag{112}\] \[m_{3}^{(0)} =-\frac{125c^{6}+975c^{5}+10330c^{4}+68308c^{3}+148872c^{2}+33344c} {6(c-10)(c-22)(c-34)}\]
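As a quick sanity check of these expressions, at \(c=4\) they give

\[m_{1}^{(0)}(4)=-\frac{5\cdot 16+22\cdot 4}{4-10}=28,\qquad m_{2}^{(0)}(4)=\frac{25\cdot 256+175\cdot 64+508\cdot 16+804\cdot 4}{2(4-10)(4-22)}=134,\]

matching the first coefficients of the \(D_{4,1}\) vacuum character \(q^{-1/6}(1+28q+134q^{2}+\cdots)\).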
The \((2,2)\) MLDE in the \(\tau\)-plane is given in Eq. (2.22). It is a one-parameter MLDE; the only parameter is the rigid parameter: \(\alpha_{0}\alpha_{1}=-\frac{c(c-4)}{576}\). Its solutions are:
\[\chi_{0}(q)=q^{-\frac{c}{24}}\,\sum_{k=0}^{\infty}m_{k}^{(2)}(c)\,q^{k},\qquad\qquad\chi_{1}(q)=q^{\frac{c-4}{24}}\,\mathsf{D}\,\sum_{k=0}^{\infty}m_{k}^{(2)}(-c+4)\,q^{k}. \tag{113}\]
Here, as before, \(\mathsf{D}\) is the apparent degeneracy of the non-identity character. The \(m_{k}^{(2)}(c)\) are rational functions of the central charge \(c\) and the superscript indicates the fact that these belong to the \(l=2\) solution. We give here the first few: again we start with \(m_{0}^{(2)}(c)=1\) and then for \(k\geq 1\) we get:
\[m_{k}^{(2)}(c)\equiv(-1)^{k}\frac{N_{k}^{(2)}(c)}{D_{k}^{(2)}(c)}. \tag{114}\]
\(D_{k}^{(2)}(c)\)'s are the denominator polynomials
\[D_{k}^{(2)}(c)=k!\,\Pi_{l=0}^{k-1}(c-14-12l) \tag{115}\]
and the \(N_{k}^{(2)}(c)\)'s are the numerator polynomials, the first few are:
\[N_{1}^{(2)}(c) = 5c^{2}-142c\] \[N_{2}^{(2)}(c) = 25c^{4}-1465c^{3}+8980c^{2}-45420c\] \[N_{3}^{(2)}(c) = 125c^{6}-11325c^{5}+159550c^{4}-1931740c^{3}+7672440c^{2}-196035 20c\] \[N_{4}^{(2)}(c) = 625c^{8}-77750c^{7}+1850075c^{6}-37721430c^{5}+336005080c^{4}-229 1330800c^{3}\] \[+7121862320c^{2}-12830855520c\] \[N_{5}^{(2)}(c) = 3125c^{10}-500000c^{9}+17589375c^{8}-523745000c^{7}+7785543020c^ {6}-93188748960c^{5}\] \[+632315675600c^{4}-3135002595200c^{3}+8096425231680c^{2}-11243426 250240c\] \[N_{6}^{(2)}(c) = 15625c^{12}-3084375c^{11}+148590625c^{10}-5971718125c^{9}+130232 057350c^{8} \tag{116}\] \[-2331666656740c^{7}+26060719253080c^{6}-228682548002800c^{5}\] \[+1273810283284000c^{4}-5062375605466560c^{3}+11295987759233920c^{2}\] \[-13145822789068800c\]
Examples that will be relevant to the main text are:
\[\begin{split} m_{1}^{(2)}&=-\frac{5c^{2}-142c}{c-14} \\ m_{2}^{(2)}&=\frac{25c^{4}-1465c^{3}+8980c^{2}-45420c} {2(c-14)(c-26)}\\ m_{3}^{(2)}&=-\frac{125c^{6}-11325c^{5}+159550c^{4}- 1931740c^{3}+7672440c^{2}-19603520c}{6(c-14)(c-26)(c-38)}\end{split} \tag{105}\]
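Similarly, a quick evaluation at \(c=17\) gives

\[m_{1}^{(2)}(17)=-\frac{5\cdot 289-142\cdot 17}{17-14}=323,\qquad m_{2}^{(2)}(17)=\frac{25\cdot 17^{4}-1465\cdot 17^{3}+8980\cdot 17^{2}-45420\cdot 17}{2(17-14)(17-26)}=60860,\]

which are exactly the values appearing at \(m=0\) in Admissible Solution (ii) of Appendix D.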
We will now use the above results to understand the case of MLDEs with movable poles, specifically \((2,6)\) and \((2,8)\).
### \((2,6)\) and \((2,8)\) MLDEs
Now we exhibit the Frobenius solution of MLDEs with one movable pole. This sub-section contains formulae that will be referred to in the main text of the paper. In section 4.1, we solved the \((2,6)\) MLDE. In the first step, one computes the first three orders of the Frobenius solution for the identity character and obtains Eq. (4.3) and Eq. (4.4) where the functions \(f_{1}(c,p_{1},b_{4,1})\) and \(f_{2}(c,p_{1},b_{4,1})\) are given by :
\[f_{1}(c,p_{1},b_{4,1}) = -\frac{1}{48(c-22)}(240c(c-94)+c(c+4)\,p_{1}-c(c-20)\,b_{4,1}) \tag{106}\] \[f_{2}(c,p_{1},b_{4,1}) = \frac{1}{96(c-34)}\left(-720c(243c-17294)-240\left(c^{2}+50c-1392 \right)m_{1}^{(6)}+\right.\] \[\left.+\left(24\,c(c+8)-(c-20)(c-24)m_{1}^{(6)}\right)p_{1}+c(c- 20)(216+m_{1}^{(6)})b_{4,1}\right).\]
In the next step, we solved for three parameters in terms of objects associated to the identity character viz. the central charge \(c\), the Fourier coefficients \(m_{1}^{(6)}\) and \(m_{2}^{(6)}\). For the non-rigid parameters, we obtained the equations Eq. (4.5) and Eq. (4.6) where \(f_{3}(c,m_{1}^{(6)},m_{2}^{(6)})\) and \(f_{4}(c,m_{1}^{(6)},m_{2}^{(6)})\) are given by:
\[f_{3}(c,m_{1}^{(6)},m_{2}^{(6)}) = \frac{1}{c(5c+22)+(c-10)m_{1}^{(6)}}\Big{(}285c(9c-554)+24(21c-92 )m_{1}^{(6)} \tag{107}\] \[-(c-22)(m_{1}^{(6)})^{2}+2\,(c-34)m_{2}^{(6)}\Big{)}\] \[f_{4}(c,m_{1}^{(6)},m_{2}^{(6)}) = \frac{1}{c(c-20)(c(5c+22)+(c-10)m_{1}^{(6)})}\Big{(}15c^{2}\left( 251c^{2}-17010c-75192\right)\] \[+24c(41\,c^{2}-1224\,c+8064)m_{1}^{(6)}+2\,c(c+4)(c-34)m_{2}^{(6)}\] \[-(c-20)(c-22)(c-24)(m_{1}^{(6)})^{2}\Big{)}\]
We then used the accessory equation and obtained a relation between \(m_{2}^{(6)}\), \(m_{1}^{(6)}\) and \(c\) in Eq. (110) where the \(A_{2}(c)\) and \(B_{2}(c)\) are:
\[A_{2}(c) =-\frac{25c^{4}-2135c^{3}+41140c^{2}+224940c}{2(c-22)(c-34)},\quad B _{2}(c)=-\frac{(c-24)(5c-98)}{c-34} \tag{121}\]
Next we rewrote the pole and accessory parameters only in terms of \(c\) and \(m_{1}^{(6)}\), after substituting this relation in Eq. (4.5) and Eq. (4.6). We thus obtained the expressions in which \(f_{5}(c,m_{1}^{(6)})\) and \(f_{6}(c,m_{1}^{(6)})\) are given by:
\[f_{5}(c,m_{1}^{(6)}) =-\frac{[c(5c-94)+(c-22)m_{1}^{(6)}][5c^{2}-470c+6912+(c-22)m_{1}^ {(6)}]}{(c-22)[c(5c+22)+(c-10)m_{1}^{(6)}]}, \tag{122}\] \[f_{6}(c,m_{1}^{(6)}) =-\frac{[c(5c-94)+(c-22)m_{1}^{(6)}][c\left(5c^{2}-590c-2544\right) +(c-22)(c-24)m_{1}^{(6)}]}{c(c-22)[c(5c+22)+(c-10)m_{1}^{(6)}]}, \tag{123}\]
In the next step, we obtained the third Fourier coefficient of the identity character in Eq. (109) where \(A_{3}(c)\) and \(B_{3}(c)\) are given by:
\[A_{3}(c) =\frac{250c^{6}-32700c^{5}+1373240c^{4}-18801040c^{3}+90660480c^{ 2}+892610560c}{6(c-46)(c-34)(c-22)},\] \[B_{3}(c) =\frac{(c-24)\left(25c^{3}-1625c^{2}+35308c-256188\right)}{2(c-4 6)(c-34)}.\]
We also obtained the fourth Fourier coefficient of the identity character in Eq. (108) (with \(k=4\)), where \(A_{4}(c)\) and \(B_{4}(c)\) are given by:
\[A_{4}(c) = \frac{-1875c^{8}+333750c^{7}-21989925c^{6}+680543850c^{5}-122601 07560c^{4}+97916677200c^{3}}{24(c-58)(c-46)(c-34)(c-22)},\] \[+\frac{-87462415440c^{2}-3618704872800c}{24(c-58)(c-46)(c-34)(c-2 2)}\] \[B_{4}(c) = -\frac{(c-24)\left(125c^{5}-14025c^{4}+636730c^{3}-14585852c^{2}+ 168166728c-778842496\right)}{6(c-58)(c-46)(c-34)}.\]
We now give formulae that will be referred to in the main text of the paper, for the \((2,8)\) MLDE. In the first step, one computes the first three orders of the Frobenius solution for the identity character and obtains Eq. (111) and Eq. (112) where the functions \(\widetilde{f}_{1}(c,p_{1},b_{4,1})\) and
\(\widetilde{f}_{2}(c,p_{1},b_{4,1})\) are given by :
\[\widetilde{f}_{1}(c,p_{1},b_{4,1}) = -\frac{1}{48(c-26)}(48c(5c-634)+c(c-4)\,p_{1}-c(c-28)\,b_{4,1}) \tag{111}\] \[\widetilde{f}_{2}(c,p_{1},b_{4,1}) = \frac{1}{96(c-38)}\left(-144c(1615c-167794)-48\left(5c^{2}+326c-1 3104\right)m_{1}^{(8)}\right.\] \[\left.-\left(24c(9c+208)+(c-24)(c-28)m_{1}^{(8)}\right)p_{1}+c(c- 28)(456+m_{1}^{(8)})b_{4,1}\right).\]
In the next step, we solved for three parameters in terms of objects associated to the identity character viz. the central charge \(c\), the Fourier coefficients \(m_{1}^{(8)}\) and \(m_{2}^{(8)}\). For the non-rigid parameters, we obtained the equations Eq. (4.24) and Eq. (4.25) where \(\widetilde{f}_{3}(c,m_{1}^{(8)},m_{2}^{(8)})\) and \(\widetilde{f}_{4}(c,m_{1}^{(8)},m_{2}^{(8)})\) are given by:
\[\widetilde{f}_{3}(c,m_{1}^{(8)},m_{2}^{(8)}) =\frac{1}{c(5c-142)+(c-14)m_{1}^{(8)}}\left(3c(855c-71426)+24(21c- 52)m_{1}^{(8)}\right.\] \[\left.-(c-26)(m_{1}^{(8)})^{2}+2\,(c-38)m_{2}^{(8)}\right) \tag{112}\] \[\widetilde{f}_{4}(c,m_{1}^{(8)},m_{2}^{(8)}) =\frac{1}{c(c-28)(c(5c-142)+(c-14)m_{1}^{(8)})}\left(3c^{2}\left(1 255c^{2}-136926c+1726152\right)\right.\] \[\left.+24c\left(41c^{2}-2088c+25344\right)m_{1}^{(8)}+2c(c-38)(c -4)m_{2}^{(8)}-((c-28)(c-26)(c-24))(m_{1}^{(8)})^{2}\right)\]
Next we rewrote the pole and accessory parameters only in terms of \(c\) and \(m_{1}^{(8)}\), after substituting Eq. (4.26) in Eq. (4.24) and Eq. (4.25). We obtained Eq. (4.27) and Eq. (4.28) where \(\widetilde{f}_{5}(c,m_{1}^{(8)})\) and \(\widetilde{f}_{6}(c,m_{1}^{(8)})\) are given by:
\[\widetilde{f}_{5}^{(8)}(c,m_{1}^{(8)}) =-\frac{[c(5c-634)+(c-26)m_{1}^{(8)}][5c^{2}-634c+13824+(c-26)m_{1 }^{(8)}]}{(c-26)[c(5c-142)+(c-14)m_{1}^{(8)}]}, \tag{113}\] \[\widetilde{f}_{6}^{(8)}(c,m_{1}^{(8)}) =-\frac{[c(5c-634)+(c-26)m_{1}^{(8)}][c\left(5c^{2}-754c+8304 \right)+(c-26)(c-24)m_{1}^{(8)}]}{c(c-26)[c(5c-142)+(c-14)m_{1}^{(8)}]},\]
## Appendix D Some (2, 8) MLDE solutions and quasi-characters
Here we analyse the \((n,\ell)=(2,8)\) MLDE in the same way as was done for \((2,6)\) in Section 4. The MLDE in the \(j\)-coordinate is given by:
\[\left(\partial_{j}^{2}+\left(\frac{1}{3j}+\frac{1}{2(j-1728)}-\frac{1}{(j-p_{1})}\right)\partial_{j}+\frac{\alpha_{0}\alpha_{1}(j-b_{4,1})}{j(j-1728)(j-p_{1})}\right)\chi(j)=0 \tag{D.1}\]
Using the series expansion \(\chi_{i}=\sum\limits_{k=0}^{\infty}a_{i,k}^{(1)}\,(j-p_{1})^{k+\alpha_{i}^{(1)}}\), \(a_{i,0}^{(1)}\neq 0\), the indicial equation around \(j=p_{1}\) is:
\[\alpha_{i}^{(1)}\,(\alpha_{i}^{(1)}-2)=0 \tag{D.2}\]
At first subleading order, for the solution \(\alpha_{0}^{(1)}=0\) we get:
\[a_{0,1}^{(1)}=\frac{\alpha_{0}\alpha_{1}(p_{1}-b_{4,1})}{p_{1}(p_{1}-1728)} \tag{D.3}\]
At second order beyond this, we find (as expected) that the \(a_{0,2}^{(1)}\) terms cancel resulting in a constraint equation:
\[\alpha_{0}\alpha_{1}(p_{1}-b_{4,1})^{2}+\left(1152-\frac{7p_{1}}{6}\right)(p_{1}-b_{4,1})+p_{1}(p_{1}-1728)=0 \tag{D.4}\]
Now one analyses the Frobenius solution by going back to the \((2,8)\) MLDE in the \(\tau\)-plane:
\[\left(D^{2}+\left(\frac{E_{6}}{3E_{4}}+\frac{E_{4}^{2}E_{6}}{E_{4}^{3}-p_{1}\Delta}\right)\!D+\frac{\alpha_{0}\alpha_{1}\,E_{4}\left(E_{4}^{3}-b_{4,1}\,\Delta\right)}{\left(E_{4}^{3}-p_{1}\,\Delta\right)}\right)\chi(\tau)=0 \tag{D.5}\]
and using the methods explained in Section 4, we obtain:
\[p_{1} = \frac{-738720\alpha_{0}^{2}-12\alpha_{0}\left(\left(m_{1}-504\right)m_{1}-2m_{2}+214278\right)-13m_{1}^{2}+624m_{1}+38m_{2}}{\left(12\alpha_{0}+7\right)m_{1}-24\alpha_{0}\left(60\alpha_{0}+71\right)}\] \[b_{4,1} = \frac{1}{\alpha_{0}\left(6\alpha_{0}+7\right)\left(1440\alpha_{0}^{2}+1704\alpha_{0}-12\alpha_{0}m_{1}-7m_{1}\right)}\bigg{(}6505920\alpha_{0}^{4}+29576016\alpha_{0}^{3} \tag{D.6}\] \[+15535368\alpha_{0}^{2}+72\alpha_{0}^{3}m_{1}^{2}-70848\alpha_{0}^{3}m_{1}-144\alpha_{0}^{3}m_{2}+234\alpha_{0}^{2}m_{1}^{2}-150336\alpha_{0}^{2}m_{1}\] \[-252\alpha_{0}^{2}m_{2}+253\alpha_{0}m_{1}^{2}-76032\alpha_{0}m_{1}-38\alpha_{0}m_{2}+91m_{1}^{2}\bigg{)}\]
Next we exhibit the quasi-characters for \(\ell=6r+2\) cases, for which the initial quasi-characters solve the \(\ell=2\) MLDE. These solutions exist for the following values of \(c\):
\[\begin{aligned}\text{dual Lee-Yang family:}&\quad c=\frac{2(6n-1)}{5},\ n\neq 1\ \mathrm{mod}\ 5\\ \text{dual }A_{1}\text{ family:}&\quad c=6n-1\\ \text{dual }A_{2}\text{ family:}&\quad c=4n-2,\ n\neq 1\ \mathrm{mod}\ 3\\ \text{dual }D_{4}\text{ family:}&\quad c=12n-4\end{aligned} \tag{D.7}\]
Of these, the central charges:
\[c=\frac{82}{5},17,16,\frac{94}{5},20,\frac{106}{5},22,23,\frac{118}{5} \tag{D.8}\]
correspond to admissible characters (see footnote 13) with \(\ell=2\) [25]. As before, linear combinations of these quasi-characters make up admissible characters with increasing values of \(\ell\), this time in the family \(\ell=6r+2\), and all such characters are generated.
Footnote 13: Again these all correspond to CFTs, except for the first and last cases that are Intermediate Vertex Operator Algebras [44]. A new feature here is that a single set of admissible characters corresponds to more than one CFT.
We can now list the admissible \((2,8)\) solutions and express them in terms of quasi-characters.
#### Admissible Solutions (i)
\[c=\frac{82}{5},\quad m_{1}=410+87\,m,\quad m_{2}=64739+5510\,m,\quad m_{3}=2089 934+95323\,m,\quad 0<m\leq 2\]
For this case,
\[p_{1} = \frac{4(m_{1}-497)(m_{1}+943)}{m_{1}-410} \tag{D.9}\] \[b_{4,1} = -\frac{4(m_{1}+943)(19m_{1}-11603)}{41(m_{1}-410)} \tag{D.10}\]
The equations Eq. (D.9) and Eq. (D.10) satisfy Eq. (D.4). This solution is equal to the following sum of quasi-characters:
\[\chi^{\ell=8}=\chi_{n=7}^{\widetilde{LY}}+N_{1}\,\chi_{n=-3}^{\widetilde{LY}}\]
with the identification \(m=N_{1}\).
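As an aside, identities of this kind are easy to verify symbolically. The sketch below is ours (using SymPy rather than the SageMath code mentioned in the acknowledgements) and checks that the above \(p_{1}\) and \(b_{4,1}\) satisfy the constraint Eq. (D.4); it uses \(\alpha_{0}\alpha_{1}=-\frac{c(c-28)}{576}\), which follows from \(h^{(\ell=8)}=\frac{c-14}{12}\) quoted in Section 6.4.

```python
# Symbolic check (our sketch, not the paper's code) that the p_1 and b_{4,1}
# of Admissible Solution (i) satisfy the constraint Eq. (D.4).
import sympy as sp

m1 = sp.symbols('m1')
c = sp.Rational(82, 5)                      # central charge of solution (i)
a0a1 = -c*(c - 28)/576                      # alpha_0*alpha_1 for l=8, from h=(c-14)/12

p1 = 4*(m1 - 497)*(m1 + 943)/(m1 - 410)                 # Eq. (D.9)
b41 = -4*(m1 + 943)*(19*m1 - 11603)/(41*(m1 - 410))     # Eq. (D.10)

# Left hand side of the constraint Eq. (D.4)
constraint = a0a1*(p1 - b41)**2 + (sp.Integer(1152) - sp.Rational(7, 6)*p1)*(p1 - b41) + p1*(p1 - 1728)
print(sp.simplify(constraint))              # expected output: 0
```

The same few lines, with the central charge and the two rational functions replaced, can be reused for each of the solutions (ii)-(ix) below.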
#### Admissible Solutions (ii)
\[c=17,\quad m_{1}=323+11\,m,\quad m_{2}=60860+649\,m,\quad m_{3}=2158575+10480 \,m,\quad 0<m\leq 40\]
For this case,
\[p_{1} = \frac{3(m_{1}-499)(m_{1}+1037)}{m_{1}-323} \tag{D.11}\] \[b_{4,1} = -\frac{3(m_{1}+1037)(7m_{1}-5797)}{17(m_{1}-323)} \tag{D.12}\]
The equations Eq. (D.11) and Eq. (D.12) satisfy Eq. (D.4). This solution is equal to the following sum of quasi-characters:
\[\chi^{\ell=8}=\chi_{n=3}^{\tilde{A}_{1}}+N_{1}\,\chi_{n=-1}^{\tilde{A}_{1}}\]
with the identification, \(m=N_{1}\). This solution appears in [18].
#### Admissible Solutions (iii)
\[c=18,\quad m_{1}=234+5\,m,\quad m_{2}=59805+258\,m,\quad m_{3}=2482242+3690\,m, \quad 0<m\leq 171\]
For this case,
\[p_{1} = \frac{2(m_{1}-504)(m_{1}+1224)}{m_{1}-234} \tag{D.13}\] \[b_{4,1} = -\frac{2(m_{1}-1368)(m_{1}+1224)}{3(m_{1}-234)} \tag{D.14}\]
The equations Eq. (D.13) and Eq. (D.14) satisfy Eq. (D.4). This solution is equal to the following sum of quasi-characters:
\[\chi^{(8)}=\chi_{n=5}^{\tilde{A}_{2}}+N_{1}\chi_{n=-1}^{\tilde{A}_{2}}\]
with the identification \(m=N_{1}\).
#### Admissible Solutions (iv)
\[c=\frac{94}{5},\quad m_{1}=188+46\,m,\quad m_{2}=62087+2093\,m,\quad m_{3}=292 3494+27002\,m,\quad 0<m\leq 26\]
For this case,
\[p_{1} = \frac{3(m_{1}-510)(m_{1}+1410)}{2(m_{1}-188)} \tag{D.15}\] \[b_{4,1} = -\frac{3(m_{1}+1410)(13m_{1}-26790)}{94(m_{1}-188)} \tag{D.16}\]
The equations Eq. (D.15) and Eq. (D.16) satisfy Eq. (D.4). This solution is equal to the following sum of quasi-characters,
\[\chi^{\ell=8}=\chi_{n=8}^{\tilde{LY}}+N_{1}\,\chi_{n=-2}^{\tilde{LY}}\]
with the identification \(m=N_{1}\). This solution appears in [18].
#### Admissible Solutions (v)
\[c=20,\quad m_{1}=140+m,\quad m_{2}=69950+36\,m,\quad m_{3}=3983800+394\,m, \quad 0<m\leq 1807\]
For this case,
\[p_{1} = \frac{(m_{1}-524)(m_{1}+1780)}{(m_{1}-140)} \tag{117}\] \[b_{4,1} = -\frac{(m_{1}-3980)(m_{1}+1780)}{5(m_{1}-140)} \tag{118}\]
The equations Eq. (117) and Eq. (118) satisfy Eq. (175). The quasi-character sum for this solution is:
\[\chi^{(8)}=\chi_{n=2}^{\tilde{D}_{4}}+N_{1}\chi_{n=0}^{\tilde{D}_{4}}, \tag{178}\]
with the identification \(m=N_{1}\). This solution appears in [27] with \(m=960\).
#### Admissible Solutions (vi)
\[c=\frac{106}{5},\quad m_{1}=106+17\,m,\quad m_{2}=84429+442\,m, \quad m_{3}=5825442+4063\,m,\quad 0<m\leq 155\]
For this case,
\[p_{1} = \frac{2(m_{1}-548)(m_{1}+2332)}{3(m_{1}-106)} \tag{179}\] \[b_{4,1} = -\frac{2(m_{1}+2332)(7m_{1}-59996)}{159(m_{1}-106)} \tag{180}\]
The equations Eq. (179) and Eq. (180) satisfy Eq. (175). This is equal to the following sum of quasi-characters:
\[\chi^{\ell=8}=\chi_{n=9}^{\widetilde{LY}}+N_{1}\,\chi_{n=-1}^{ \widetilde{LY}}\]
with the identification \(m=N_{1}\). This solution appears in [18].
#### Admissible Solutions (vii)
\[c=22,\quad m_{1}=88+m,\quad m_{2}=99935+19\,m,\quad m_{3}=7846300 +155\,m,\quad 0<m\leq 3436\]
For this case,
\[p_{1} = \frac{(m_{1}-574)(m_{1}+2882)}{2(m_{1}-88)} \tag{181}\] \[b_{4,1} = -\frac{(m_{1}-16126)(m_{1}+2882)}{22(m_{1}-88)} \tag{182}\]
The equations Eq. (181) and Eq. (182) satisfy Eq. (175). This is equal to the following sum of quasi-characters:
\[\chi^{(8)}=\chi_{n=6}^{\tilde{A}_{2}}+N_{1}\,\chi_{n=0}^{\tilde{ A}_{2}}\]
with the identification, \(m=N_{1}\). This solution appears in [27] with \(m=1782\).
#### Admissible Solutions (viii)
\[c=23,\quad m_{1}=69+5\,m,\quad m_{2}=131905+49\,m,\quad m_{3}=12195106+345\,m, \quad 0<m\leq 996\]
For this case,
\[p_{1} = \frac{(m_{1}-629)(m_{1}+3979)}{3(m_{1}-69)} \tag{104}\] \[b_{4,1} = -\frac{(m_{1}-49013)(m_{1}+3979)}{69(m_{1}-69)} \tag{105}\]
The equations Eq. (104) and Eq. (105) satisfy Eq. (104). This solution is equal to the following sum of quasi-characters:
\[\chi^{(8)}=\chi_{n=4}^{\tilde{A}_{1}}+N_{1}\,\chi_{n=0}^{\tilde{A}_{1}}\]
with the identification, \(m=N_{1}\). This solution appears in [18].
#### Admissible Solutions (ix)
\[c=\frac{118}{5},\quad m_{1}=59+11\,m,\quad m_{2}=164315+44\,m,\quad m_{3}=1677 8125+285\,m,\quad 0\leq m\leq 591\]
For this case,
\[p_{1} = \frac{(m_{1}-686)(m_{1}+5074)}{4(m_{1}-59)} \tag{106}\] \[b_{4,1} = -\frac{(m_{1}-164846)(m_{1}+5074)}{236(m_{1}-59)} \tag{107}\]
The equations Eq. (106) and Eq. (107) satisfy Eq. (104). This solution is equal to the following sum of quasi-characters:
\[\chi^{\ell=8}=\chi_{n=10}^{\widetilde{LY}}+N_{1}\,\chi_{n=0}^{\widetilde{LY}}\]
with the identification \(m=N_{1}\).
|
2309.09402 | PSMA PET/CT as a predictive tool for sub-regional importance estimates
in the parotid gland | Xerostomia and radiation-induced salivary gland dysfunction remain a common
side effect for head-and-neck radiotherapy patients, and attempts have been
made to quantify the heterogeneous dose response within parotid glands. Here
several models of parotid gland subregional importance are compared with
prostate specific membrane antigen (PSMA) positron emission tomography (PET)
uptake. PSMA ligands show high concentrations in salivary glands, whose uptake
has been previously found to relate to gland functionality. We develop a
predictive model for relative importance estimates using PSMA PET and CT
radiomic features, and demonstrate a methodology for predicting
patient-specific importance deviations from the population. Intra-parotid gland
uptake was compared with four regional importance models using 30 [18F]DCFPyL
PSMA PET images. A radiomics-based predictive model of population importance
was developed using a double cross-validation methodology. Population
importance estimates were supplemented using patient-specific radiomic
features. Anticorrelative relationships were found to exist between PSMA PET
uptake and four independent models of subregional parotid gland importance from
the literature. Kernel Ridge Regression with principal component analysis
feature selection performed best over test sets (MAE = 0.08), with GLCM
features being particularly important. Deblurring PSMA PET images strengthened
correlations and improved model performance. This study suggests that regions
of relatively low PSMA PET concentration in parotid glands may exhibit
relatively high dose-sensitivity. We've demonstrated the utility of PSMA PET
radiomic features for predicting relative importance within the parotid glands.
PSMA PET appears promising for analyzing salivary gland functionality. | Caleb Sample, Arman Rahmim, François Bénard, Jonn Wu, Haley Clark | 2023-09-17T23:55:51Z | http://arxiv.org/abs/2309.09402v2 | # PSMA PET as a predictive tool for sub-regional importance estimates in the parotid gland
###### Abstract
_Objective_: Xerostomia (subjective sensation of oral dryness) and radiation-induced salivary gland dysfunction remain a common side effect for head-and-neck radiotherapy patients, and attempts have been made to quantify the variation of the dose response within parotid glands. Here, we aim to compare several models of parotid gland regional importance with prostate specific membrane antigen (PSMA) positron emission tomography (PET), which has high concentrations of uptake in salivary glands and has been previously suggested to relate to gland functionality. Furthermore, we develop a predictive model of Clark et al.'s relative importance using radiomic features, and demonstrate a methodology for predicting patient-specific importance deviations from the population. _Approach_: Intra-parotid uptake was compared with four regional importance models using [18F]DCFPyL PSMA PET images. The correlation of uptake and importance was ascertained when numerous non-overlapping sub-regions were defined, while a paired t-test was used when binary regions were defined. Radiomic PSMA PET/CT features for Clark et al.'s sub-regions were used to develop a predictive model of population importance using a double cross-validation methodology. We demonstrate a method for supplementing population importance estimates using individual patient features. _Main Results_: Clark et al.'s relative importance regions were significantly (\(p<0.02\)) anti-correlated with PSMA PET uptake. Van Luijk et al.'s critical regions had significantly lower (\(p<0.01\)) uptake than in non-critical regions. Kernel Ridge Regression with principal component analysis feature selection performed best over test sets (Mean Absolute Error = 0.08), with gray level co-occurrence matrix (GLCM) features being particularly important. Deblurring PSMA PET images with neural blind deconvolution strengthened correlations and improved model performance. _Significance_: This study suggests that regions of relatively low PSMA PET concentration in parotid glands may exhibit relatively high dose-sensitivity. We've demonstrated the ability of PSMA PET radiomic features for predicting relative importance within the parotid glands.
Introduction:
Intensity Modulated Radiotherapy (IMRT) allows for the creation of treatment plans with high dose conformity in cancerous regions while minimizing dose to healthy tissue [1]. However, high dose levels in cancerous tissue inevitably tail off into healthy tissue, so treatment planners and oncologists must prioritize which healthy regions to spare. Treatment planning in the head-and-neck region is particularly challenging, as there are many organs in close proximity which often abut or overlap with tumour volumes [2]. Dose levels in the salivary glands are of particular concern, as xerostomia (subjective sensation of oral dryness) remains a common side effect for head-and-neck cancer patients [3]. Dose to the largest salivary glands, the parotids, is the greatest risk factor for post-treatment xerostomia [4].
The current standard of care is to minimize the whole-mean dose to parotid glands [5], which were previously considered to have a uniform dose response [6]. However, there have been numerous attempts in recent years to quantify the relative importance of various parotid gland sub-regions for predicting post-treatment complications [7, 8, 9, 10].
Prostate specific membrane antigen (PSMA) positron emission tomography (PET) has high ligand accumulation in the parotid glands [11, 12, 13], and has been suggested to relate to whole-gland functionality [14, 15, 16]. Furthermore, uptake within parotid glands has been found to be non-uniform, with high uptake regions tending towards lateral, posterior, and superior regions [17]. We hypothesize that intra-parotid gland uptake variability of PSMA PET is predictive of functional importance.
The purpose of this study is two-fold. First, we use a data set of 30 PSMA PET images to compare intra-parotid PSMA PET uptake trends with several regional importance estimates from the literature [7, 8, 9, 10]. Second, we develop a population-level model of Clark et al.'s [8] regional importance using radiomic features from PSMA PET and Computed Tomography (CT). We also demonstrate how such a model can be used to predict a patient's deviation from population-derived importance estimates, creating a single metric with population-derived, and patient-specific components.
## 2 Methods:
### Dataset
This study was approved by an institutional review board. The data set included identified [18F]DCFPyL PSMA PET/CT images for 30 previous prostate cancer patients (Mean Age 68, Age Range 45-81; mean weight: 90 kg, weight range 52 - 128 kg). Scans were acquired two hours following intravenous injection, from the thighs to the top of the skull on a GE Discovery MI (DMI) scanner. PET images were reconstructed using VPFXS (OSEM with time-of-flight and point spread function corrections) (pixel spacing: 2.73 - 3.16 mm, slice thickness: 2.8 - 3.02 mm). Helical CT scans were acquired on the same scanner (kVP: 120, pixel spacing: 0.98 mm, slice thickness: 3.75 mm). Images were scaled to standard uptake values normalized by lean body mass (\(SUV_{bm}\)). Registered
CT images were used for delineating parotid and submandibular glands. Limbus AI [18] was used for preliminary auto-segmentation of the glands, which were then manually refined by a single senior radiation oncologist, Jonn Wu.
**2.2 Correction of Partial Volume Effects**
One weakness of PET as an imaging modality is its intrinsically low spatial resolution [19]. The burden of partial volume effects is less pronounced when analyzing large geometric regions with homogeneous uptake, but cannot be ignored when attempting to compare heterogeneous uptake in small regions-of-interest (ROIs), such as sub-regions of the parotid glands. Recently, a method has been developed for simultaneous deblurring and super-sampling of PSMA PET images using neural blind deconvolution [20], which we employ in this work for pre-processing PSMA PET images. This model has been shown to illuminate fine uptake trends within small regions of parotid glands. We performed all calculations with both "enhanced" images, and unmodified original images.
**2.3 Comparison of PSMA PET with Parotid Gland Importance Models**
PSMA PET uptake trends were compared with four models of intra-parotid gland importance found in the literature, detailed in the following sub-sections. Uptake metrics included the mean, median, and maximum, calculated in ROIs defined according to each specific model. As importance models were all population-level estimates, uptake metrics were averaged over the 60 parotid glands from the 30 patients.
**Clark et al.'s Model**
Clark et al.'s model [8] estimates the relative importance of 18 equal-volume sub-regions of the contralateral parotid gland for predicting salivary dysfunction following radiotherapy. Stimulated saliva measurements were collected for 332 patients before and at one year after radiotherapy. The relative differences were predicted with conditional inference trees using radiotherapeutic dose levels in parotid gland sub-regions. Parotid glands were sub-segmented using nested planar segmentation (planes: 2 axial, 1 coronal, 2 sagittal). Regions of high relative importance tended towards caudal-anterior regions, as shown in Fig 1. We sub-segmented parotid glands according to the same regimen, and voxels within each of the 18 sub-regions were used to calculate uptake statistics. To test whether intra-parotid PSMA PET uptake is related to Clark et al.'s importance estimates, Spearman's rank correlation coefficient, \(r_{s}\), was computed between uptake in the 18 sub-regions along with their corresponding importance estimates.
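A minimal sketch of this correlation test (not the study code; the arrays below are random placeholders standing in for the population-averaged uptake values and Clark et al.'s importance estimates, listed in the same sub-region order):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
mean_uptake = rng.uniform(3.0, 9.0, size=18)   # placeholder SUV_lbm per sub-region
importance = rng.uniform(0.0, 1.0, size=18)    # placeholder relative importance

# Spearman's rank correlation between sub-region uptake and importance
r_s, p_value = spearmanr(mean_uptake, importance)
print(f"r_s = {r_s:.2f}, p = {p_value:.3f}")
```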
**Han et al.'s model**
Han et al. [7] assess the relative importance of 9 parotid gland sub-regions for predicting injury (\(\geq\) grade 2 xerostomia at 6 months post-radiotherapy) and recovery (\(\geq\) grade 2 xerostomia at 6 months post-radiotherapy, followed by \(<\) grade 2 xerostomia at 18 months post-radiotherapy). Sub-regions were defined by first applying a 3 mm margin to whole parotid glands, then dividing glands into three radial sectors (anterior, medial, posterior), then further dividing these sectors along the inferior-superior axis into 3 equal-length regions. Voxels within a parotid gland corresponding to these importance regions are shown in Fig 2. Han et al. [7] determined the relative importance of 9 dose-volume statistics in 10% volume increments from D10 (Minimum dose to 10% volume) to D90 (Minimum dose to 90% volume) in each sub-region. For our purposes, the mean importance computed over all dose statistics was used as a single relative importance estimate for each sub-region. Spearman's rank correlation coefficient, \(r_{s}\), was calculated between uptake and relative importance for predicting injury, and recovery.
Figure 1: 3D rendering of voxels within a parotid gland corresponding to Clark et al’s relative importance sub-regions are shown from two angles.
**Van Luijk et al.'s Model**
Van Luijk et al. [9] used stimulated saliva measurements and radiotherapeutic dose levels to locate "critical" regions within parotid glands which are most predictive of salivary outcome at one year post radiotherapy. The study did not specify a well-defined location of this critical region over the population; however, it is stated to be in close proximity to Stensen's duct, adjacent to the dorsal side of the mandible. For our purposes, we approximated the critical region by applying a 9 mm margin to the mandible, which was intersected with the top half of the parotid gland (Fig 3). The 9 mm margin was found to consistently intersect a region of the parotid gland approximately corresponding to Van Luijk et al.'s [9] critical regions. Uptake statistics were compared within expanded critical and non-critical regions using a paired t-test.
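A corresponding sketch for the paired comparison (placeholder per-gland values only; the real analysis uses the 60 segmented parotid glands):

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(1)
critical = rng.uniform(2.0, 5.0, size=60)                  # placeholder mean uptake, critical region
non_critical = critical * rng.uniform(1.5, 2.5, size=60)   # placeholder, non-critical region

# Paired t-test between critical and non-critical uptake of each gland
t_stat, p_value = ttest_rel(critical, non_critical)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3g}")
```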
**Buettner et al.'s Model**
Buettner et al. [10] evaluated the predictive ability of various dose "moments" in a regression model for post-treatment xerostomia in 63 head-and-neck cancer patients, treated with either IMRT or conventional radiotherapy. Important variables included mean dose to the superficial lobe, skewness of dose in the cranial-caudal direction within
Figure 2: 3D rendering of voxels within a parotid gland corresponding to Han et al.’s relative importance sub-regions are shown from two angles.
the deep and superficial lobe, and relative concentration of dose in the caudal-medial region of the deep lobe. While the parotid glands were only segmented into superficial and deep lobes, dose moments calculated within these regions evaluated the spatial variance of the dose response.
### Development of a predictive Model for parotid gland relative importance using PSMA PET and CT
To demonstrate the predictive ability of PSMA PET for parotid gland regional functionality, we develop a model for predicting Clark et al.'s [8] relative importance estimates using radiomic features extracted from both PSMA PET and CT images. We then demonstrate how such a model can be used for predicting patient-specific perturbations away from population-level importance estimates.
Figure 3: The approximate location of Van Luijk et al.’s critical region of the parotid gland used for computing uptake statistics is shown
**Feature Extraction**
The standardized pyradiomics library [21] was used for computing radiomic features of PSMA PET and CT within all 18 of Clark's sub-regions. The full set of Gray Level Co-occurrence Matrix (GLCM), Gray Level Run Length Matrix (GLRLM), Gray Level Size Zone Matrix (GLSZM), Gray Level Dependence Matrix (GLDM), and first order features, computed with original, square, square root, and wavelet image types, was extracted for a total of 1060 features. For gray level discretization, a fixed bin width was chosen over a fixed bin count, as a fixed bin width has been shown to have better reproducibility [22], especially when chosen to yield a bin count between 16-128 [23]. We therefore set the bin width to 0.2 for original, 0.1 for square root, and 1 for square images. For the wavelet features, a fixed bin count of 100 was used, due to uncertainty in the expected range.
Radiomic features were calculated for individual patients and then averaged over all patients, separately for each parotid gland and each of the 18 sub-regions. This yielded a population-level design matrix of shape (36, 1060) prior to feature selection. Features were calculated for both enhanced and original PSMA PET images.
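A minimal pyradiomics configuration consistent with the settings described above (a sketch only: the per-image-type overrides via `customArgs`, the file names, and the exact feature classes enabled are assumptions rather than the study's actual extraction script):

```python
from radiomics import featureextractor

# Global bin width of 0.2 applies to the original image; the other image
# types override it, and the wavelet filter uses a fixed bin count instead.
extractor = featureextractor.RadiomicsFeatureExtractor(binWidth=0.2)
extractor.disableAllImageTypes()
extractor.enableImageTypeByName('Original')
extractor.enableImageTypeByName('SquareRoot', customArgs={'binWidth': 0.1})
extractor.enableImageTypeByName('Square', customArgs={'binWidth': 1})
extractor.enableImageTypeByName('Wavelet', customArgs={'binCount': 100})
extractor.disableAllFeatures()
for feature_class in ('firstorder', 'glcm', 'glrlm', 'glszm', 'gldm'):
    extractor.enableFeatureClassByName(feature_class)

# 'pet.nii.gz' and 'subregion_mask.nii.gz' are placeholder file names for one
# PSMA PET volume and one of the 18 sub-region masks.
features = extractor.execute('pet.nii.gz', 'subregion_mask.nii.gz')
```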
**Double Cross Validation**
The small size of our data set made it inappropriate to define a single test set for final performance evaluation, and we therefore used double cross validation, or sometimes called nested cross validation, where our test set was rotated through 9 outside folds, each having its own inner cross validation loop for tuning the feature selection algorithm, model, and hyper-parameters (Fig 4).
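A skeleton of this double (nested) cross validation, shown for a single candidate pipeline (PCA followed by kernel ridge) with random placeholder data; the real search also iterates over the other models, feature selection algorithms, and hyper-parameters of Table 1:

```python
import numpy as np
from sklearn.model_selection import KFold, GridSearchCV
from sklearn.metrics import mean_absolute_error
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(36, 1060))        # placeholder radiomic design matrix
y = rng.uniform(0.0, 1.0, size=36)     # placeholder importance targets

param_grid = {"pca__n_components": [5, 10, 20],
              "krr__alpha": [0.1, 0.5, 1.0],
              "krr__kernel": ["linear", "poly", "rbf"]}

outer_scores = []
for train_idx, test_idx in KFold(n_splits=9, shuffle=True, random_state=0).split(X):
    pipe = Pipeline([("pca", PCA()), ("krr", KernelRidge())])
    # Inner 8-fold loop tunes the pipeline on the outer training split only.
    inner = GridSearchCV(pipe, param_grid, cv=8, scoring="neg_mean_absolute_error")
    inner.fit(X[train_idx], y[train_idx])
    # The rotated outer test fold only scores the tuned winner.
    pred = inner.predict(X[test_idx])
    outer_scores.append(mean_absolute_error(y[test_idx], pred))

print("test MAE per outer fold:", np.round(outer_scores, 3))
```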
**Feature Selection and Models**
To avoid overfitting, the large number of features extracted must be pruned using feature selection methods prior to model training. For this purpose, we include 3 feature selection algorithms within the cross validation loops: a linear combination filter, a pairwise correlation filter, and principal component analysis, as used by Delzell et al. [24] to predict lung cancer using radiomic features. The linear combination (lincom) filter uses QR decomposition to iteratively remove features which are linear combinations of others. The pairwise correlation filter tests the correlation between features and removes those that are correlated above a specified cutoff. Principal component analysis changes the basis of the feature space to capture a large portion of the variance using a smaller number of feature vectors. For more information, refer to the work by Delzell et al. [24].
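Minimal sketches of the two filter-style selectors (one plausible reading of the descriptions above, not Delzell et al.'s implementation; the thresholds and the greedy column ordering are illustrative):

```python
import numpy as np
from scipy.linalg import qr

def pairwise_correlation_filter(X, cutoff=0.9):
    """Greedily keep columns whose absolute correlation with every
    already-kept column stays at or below `cutoff`."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= cutoff for k in kept):
            kept.append(j)
    return kept

def linear_combination_filter(X, tol=1e-8):
    """Rank-revealing QR with column pivoting: keep a linearly independent
    subset of columns; discarded columns are (numerically) linear
    combinations of the kept ones."""
    _, r, piv = qr(X, mode='economic', pivoting=True)
    diag = np.abs(np.diag(r))
    rank = int(np.sum(diag > tol * diag[0]))
    return sorted(piv[:rank])

rng = np.random.default_rng(0)
X_demo = rng.normal(size=(36, 50))     # placeholder feature matrix
print(len(pairwise_correlation_filter(X_demo)), len(linear_combination_filter(X_demo)))
```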
Five different regression model types were included within cross validation: linear regression, support vector machine, random forest, conditional inference tree, and kernel ridge models. Model performance is highly dependent on a variety of hyper-parameters which can be tuned for the different models and feature selection algorithms. A set of different hyper-parameters for each model was iterated over within cross validation. All models, feature selection algorithms, and their corresponding hyper-parameters tested are listed in Table 1. Models were scored according to the mean absolute error (MAE).
**Error Analysis**
For estimating the uncertainty of model predictions, we employ the methodology described by Cawley et al. [25] for kernel ridge models. This involves computing the
Figure 4: For model testing and validation, a double cross validation scheme was employed, where the outside test set is rotated through a 9-fold cross validation loop, each including its own 8-fold inner cross validation loop for parameter tuning.
leave-one-out absolute error for each sample in the data set, and then training a second kernel ridge model for predicting the absolute error of predictions based on the same input features and using this as an estimate of prediction variance.
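A sketch of this two-stage uncertainty estimate (placeholder data; whether the error model reuses exactly the same kernel and hyper-parameters as the importance model is an assumption here):

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.normal(size=(36, 20))          # placeholder (already feature-selected) inputs
y = rng.uniform(0.0, 1.0, size=36)     # placeholder importance targets

model = KernelRidge(kernel="poly", degree=2, alpha=0.1, coef0=1)

# Leave-one-out absolute errors of the importance model ...
loo_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
abs_err = np.abs(y - loo_pred)

# ... become the targets of a second kernel ridge model, whose output is
# used as the expected prediction error (an uncertainty estimate).
error_model = KernelRidge(kernel="poly", degree=2, alpha=0.1, coef0=1).fit(X, abs_err)
predicted_uncertainty = error_model.predict(X)
```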
**Demonstrating a method for predicting patient-specific importance perturbations**
Finally, we demonstrate how patient-specific deviations from population-level importance estimates can be obtained, to create a single, combined importance estimate including both population-level and patient-specific components. This is obtained by first computing and processing a patient's radiomic features according to the feature selection algorithm employed by the final model. As the model has been trained to understand the relationship between specific features and relative importance estimates, inputting patient specific features into the model for all 18 parotid gland sub-regions provides an estimate of relative importance for said patient. A single combined importance estimate is created by taking the population level sub-region importance estimates, \(I_{j}^{P}\), \(j\in\mathbb{Z},1\leq j\leq 18\) and the difference in patient specific and population estimates, \(\Delta_{j}\), \(j\in\mathbb{Z},1\leq j\leq 18\) and computing
\[I=\begin{cases}I_{j}^{P}&\Delta_{j}<0\\ \frac{2I_{j}^{P}}{1+e^{-2\Delta_{j}}}&\Delta_{j}>0\end{cases} \tag{1}\]
This defines a minimum importance estimate using the population-level estimates, and increases estimates in regions of high patient-specific importance, levelling off as the patient-specific estimate approaches about 3x the maximum population-level importance estimate. Using this approach, patient-specific predictions can be used to supplement or perturb population-level estimates in regions predicted to be of high radiotherapeutic importance. Combined importance estimates for sub-regions are never lower than population-level estimates, to avoid potential negative impacts associated with underestimating importance.
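A direct implementation of Eq. (1) (a sketch; `patient_importance` is assumed to be the model's sub-region prediction from the patient's own radiomic features, so that \(\Delta_{j}\) is its difference from the population estimate):

```python
import numpy as np

def combined_importance(pop_importance, patient_importance):
    """Combine population-level sub-region importance I_j^P with a
    patient-specific estimate following Eq. (1): estimates are never
    lowered below the population value, and are scaled up through a
    logistic factor where the patient-specific estimate is higher."""
    pop = np.asarray(pop_importance, dtype=float)
    delta = np.asarray(patient_importance, dtype=float) - pop
    boosted = 2.0 * pop / (1.0 + np.exp(-2.0 * delta))
    return np.where(delta > 0, boosted, pop)

# Example with two hypothetical sub-regions: only the second is boosted.
print(combined_importance([0.2, 0.1], [0.1, 0.6]))
```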
\begin{table}
\begin{tabular}{c c} \hline
**Model** & **Hyper-parameters** \\ \hline Support Vector Machine & \(\epsilon=[0.01,0.05,0.1]\) \\ & Kernel = [Linear, radial basis function, sigmoid, poly] \\ & Logate = [2.3] \\ & \(\gamma=[\text{scale, auto}]\) \\ & coef = [-0.85, -0.2, 0.1, 0] \\ Random Forest & number of estimators = \([3,5,7,10]\) \\ & max depth = \([3,5,8,8,\text{med}]\) \\ & Criterion = [Absolute Error, Squared Error] \\
Conditional Inference Trees & Max Depth = [5, 10, 15, 20, 25, 20] \\ & Criterion = [Absolute Error, Squared Error] \\ Kernel Ridge & \(\alpha=[0.1,0.5,1.5,1.0]\) \\ & Kernel = [Linear, Radial Basis Function, Spindind] \\
Linear Regression & N/A \\ \hline \hline Pairwise Correlation Filter & feature count = [1,2,3,4,5,6], cutoff = [0.85, 0.88, 0.9, 0.92] \\
PCA & feature count = [1,2,3,4,5,6,8,10,80,80,20,30] \\ Linear Combination Filter & correlation cutoff = [0.05, 0.1, 0.2, 0.3] \\ \hline \end{tabular}
\end{table}
Table 1: Models and feature selection (F.S) algorithms, along with their corresponding hyper-parameters tested in cross-validation, are shown.
Results:
**3.1 Comparison of PSMA PET with Importance Models**
Overall, uptake of PSMA PET was found to be inversely proportional to sub-region importance estimates from the literature. These trends appeared stronger when enhanced images were used for uptake calculations.
Clark et al.'s [8] importance predictions in the 18 equal volume regions were significantly anti-correlated with mean and median uptake (Table 2). A scatter plot of importance vs mean uptake is shown in Fig 5.
\begin{table}
\begin{tabular}{l c c c} \hline \hline & Mean & Median & Maximum \\ \hline Enhanced & \(r_{s}=-0.56,p=0.015\) & \(r_{s}=-0.55,p=0.016\) & \(r_{s}=-0.25,p=0.31\) \\ Original & \(r_{s}=-0.50,p=0.03\) & \(r_{s}=-0.51,p=0.03\) & \(r_{s}=-0.30,p=0.22\) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Spearman’s rank correlation coefficients for PSMA PET uptake with Clark’s relative importance estimates. Correlations are calculated for mean, median, and maximum uptake, normalized by lean body mass. Results are calculated using enhanced (de-blurred and super-sampled) and original PSMA PET images.
Han et al.'s [7] model predictions for relative importance in 9 regions of unequal volume were not significantly correlated with uptake metrics. There did, however, exist weak trends of anti-correlation of uptake with importance for predicting injury, and of direct correlation for predicting recovery (Table 3). Uptake levels were found to be approximately two times higher in Van Luijk et al.'s [9] non-critical region than in the critical region (\(p<0.01\)) (Table 4).
Figure 5: Clark’s relative importance vs mean PSMA PET uptake in 18 equal-volume parotid gland sub-regions, averaged over 30 patients. Relative importance was found to have a significant (\(p=0.015\)) anti-correlation with regional PSMA PET uptake. Calculations were performed with de-blurred PSMA PET images. A best fit line is shown in red.
**3.2 Model Performance for Predicting Parotid Gland Relative Importance**
For each of the 9 test-sets of the outer cross validation loop, the M.A.E, along with the best model and feature selection algorithm as determined via inner cross validation, are shown in Table 6. The average M.A.E was 0.08 using enhanced images, and 0.15 with original images. Overall, the best performing model and feature selection algorithm was the kernel ridge regressor with principal component analysis using 20 features. A performance comparison between all models and feature selection algorithms is shown in Fig 6. The most important features for principal component analysis (determined by projecting principal components scaled by their singular values onto original feature axes) are shown in Table 7. The overall-best hyper-parameters were found to be a polynomial kernel of degree 2, with \(\alpha=0.1\), and coef0\(=1\).
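For reference, the cross-validated winner corresponds to the following pipeline (a sketch; any additional preprocessing, such as feature standardization before PCA, is not specified here and would need to match the original feature selection step):

```python
from sklearn.pipeline import Pipeline
from sklearn.decomposition import PCA
from sklearn.kernel_ridge import KernelRidge

# Best configuration found in nested cross validation: PCA retaining 20
# components feeding a quadratic-kernel ridge regressor.
final_model = Pipeline([
    ("pca", PCA(n_components=20)),
    ("krr", KernelRidge(kernel="poly", degree=2, alpha=0.1, coef0=1)),
])
# final_model.fit(X, y)   # X: (36, 1060) design matrix, y: Clark et al.'s importance
```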
\begin{table}
\begin{tabular}{c c c c c c c} \hline \hline & \multicolumn{3}{c}{Enhanced Images} & \multicolumn{3}{c}{Original Images} \\ \cline{2-7} Fold & M.A.E & Model & F.S Algorithm & M.A.E & Model & F.S Algorithm \\
1 & 0.08 & K.R & P.C.A & 0.31 & C.I.T & P.C.A \\
2 & 0.10 & K.R & P.C.A & 0.29 & R.F & P.W.C \\
3 & 0.06 & K.R & P.C.A & 0.08 & K.R & P.C.A \\
4 & 0.04 & K.R & P.C.A & 0.11 & K.R & P.C.A \\
5 & 0.06 & C.I.T & P.C.A & 0.24 & R.F & P.C.A \\
6 & 0.09 & K.R & P.C.A & 0.03 & C.I.T & P.C.A \\
7 & 0.07 & K.R & P.C.A & 0.02 & C.I.T & P.C.A \\
8 & 0.13 & C.I.T & P.C.A & 0.04 & K.R & P.C.A \\
9 & 0.08 & K.R & P.C.A & 0.20 & C.I.T & P.C.A \\ Average & 0.08 & & & 0.15 & & \\ \hline \hline \end{tabular}
\end{table}
Table 6: Mean absolute error for each test set of the outer cross validation is shown, along with the best performing model and feature selection algorithm determined during the inner cross validation. Results are shown for models created with both enhanced and original PSMA PET images.
\begin{table}
\begin{tabular}{c c c c c c c c c c} \hline \hline & \multicolumn{3}{c}{Sup} & \multicolumn{1}{c}{Deep} & \multicolumn{1}{c}{Sup Cranial} & \multicolumn{1}{c}{Sup Cranial} & \multicolumn{1}{c}{Deep Cranial} & \multicolumn{1}{c}{Deep Caudial} & \multicolumn{1}{c}{Deep Caudial-Medial} & \multicolumn{1}{c}{Deep Caudial-Lateral} \\ \hline Enhanced & Mean & 25.22.30 & 28.48 \(\pm\)2.7 & 7.22 \(\pm\)2.1 & 28.20 \(\pm\)2.0 & 25.22 & 4.22 \(\pm\)2.3 & 5.9\(\pm\)2.6 \\ & Medial & 7.62 \(\pm\)2.4 & 5.33 \(\pm\)2.3 & 8.8 \(\pm\)2.8 & 7.0 \(\pm\)2.2 & 5.8 \(\pm\)2.3 & 4.9\(\pm\)2.0 & 5.7\(\pm\)2.6 & 5.7\(\pm\)2.6 & 5.7\(\pm\)2.0 \\ & Maximum & **0.89 \(\pm\)0.0** & **16.3 \(\pm\)2.7** & **18.8 \(\pm\) 5.6** & **18.3 \(\pm\) 5.5** & **17.2 \(\pm\) 4.6** & **13.8 \(\pm\) 4.6** & **14.8 \(\pm\) 4.7** & 15.2 \(\pm\) 4.2 \\ Original & Mean & **0.6 \(\pm\)1.0** & **0.2 \(\pm\)1.7** & 8.0 \(\pm\) 2.5 & 6.3 \(\pm\)1.8 & **6.6 \(\pm\) 1.9** & **4.5 \(\pm\) 1.0** & 3.5\(\pm\) 1.9 & 5.4 \(\pm\) 2.3 \\ & Median & **0.5 \(\pm\)1.3** & **0.1 \(\pm\)1.0** & **0.4 \(\pm\)2.7** & **0.3 \(\pm\) 1.9** & **0.6 \(\pm\) 2.1** & **4.2 \(\pm\) 2.3** & **3.1 \(\pm\) 2.0** & **5.4 \(\pm\) 2.7** \\ & Maximum & **0.02 \(\pm\)0.0** & **0.02 \(\pm\)0.0** & **14.8 \(\pm\)4.0** & **14.1 \(\pm\)4.0** & **11.0 \(\pm\)3.2** & **10.5 \(\pm\)3.3** & **8.1 \(\pm\)3.6** & **10.2 \(\pm\)3.6** \\ \hline \hline \end{tabular}
\end{table}
Table 5: Buettner et al. [10] found that dose to the superficial lobe, relative concentration of dose in the caudal/cranial region of both superficial and deep lobes, and relative concentration in the caudal-medial region of the deep lobe, were predictive of post-treatment xerostomia for head-and-neck radiotherapy patients. It is unclear whether xerostomia is directly or inversely proportional to these metrics, so we simply report differences in PSMA uptake within corresponding regions. Correlations are calculated for mean, median, and maximum uptake, normalized by lean body mass, using enhanced (de-blurred and super-sampled) and original PSMA PET images.
All model predictions for importance in the 9 test sets are collected and plotted against Clark's estimates in Fig 7. Prediction error was estimated by a separate kernel ridge model which was trained to predict model estimation error, as previously described.
**Estimating perturbations of patient importance from the population**
The best performing model, feature selection algorithm, and hyper-parameters as determined via cross validation, were then used to train a final population-level model using the entire population-level data set. Importance estimates for parotid gland sub-regions of individual patients could then be obtained by inputting a given patient's radiomic features into the model. Examples of how individual predictions deviate from population-level estimates, along with prediction errors estimated with the kernel ridge error model, are demonstrated for six different patients in Fig 8.
Fig 9 illustrates an example of how an individual patient's parotid gland radiomic features can be used to supplement the population importance estimate, using the equation described in the methods.
\begin{table}
\begin{tabular}{c c c c} \hline \hline & Feature Name & Modality & Relative Importance \\ \hline
1. & Original GLCM - Inverse Difference & PSMA PET & 1.0 \\
2. & Square Root GLCM - Run Variance & PSMA PET & 0.98 \\
3. & Original GLCM - Inverse Difference Moment & PSMA PET & 0.97 \\
4. & Square Root GLCM Inverse Difference & PSMA PET & 0.97 \\
5. & Original GLCM Inverse Variance & PSMA PET & 0.96 \\
6. & Square Root GLCM Inverse Difference Moment & PSMA PET & 0.95 \\
7. & Square Root First Order RMS & CT & 0.95 \\
8. & Square GLSZM GLNUN & PSMA PET & 0.94 \\
9. & Original GLRLM Long Run Emphasis & PSMA PET & 0.94 \\
10. & Original GLDM Large Dependence & PSMA PET & 0.93 \\
11. & Original GLRLM Run Variance & PSMA PET & 0.91 \\
12. & Wavelet HHH GLCM Joint Average & CT & 0.91 \\
13. & Wavelet GLCM Sum Average & CT & 0.91 \\
14. & Wavelet HHH GLSZM High Gray Level Zone Emphasis & CT & 0.91 \\
15. & Square Root GLCM Inverse Variance & CT & 0.90 \\
16. & Wavelet HHH GLDM High Gray Level Emphasis & CT & 0.90 \\
17. & Wavelet HHH GLRLM High Gray Level Run Emphasis & CT & 0.90 \\
18. & Wavelet LLL GLCM Dependence Entropy & CT & 0.90 \\
19. & Wavelet HHH GLCM Autocorrelation & CT & 0.89 \\
20. & Square Root First Order Entropy & CT & 0.88 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Relative importance of radiomic PSMA PET / CT features determined via principal components analysis for modelling of parotid gland sub-region importance.
Figure 6: To compare performance across models and feature selection algorithms, the mean absolute error (MAE) of the top-performing model and feature selection algorithm from each test fold was computed and averaged over all folds. Overall, kernel ridge regression and principal component analysis using 20 features demonstrated the best performance. Error bars correspond to the standard deviation of prediction accuracy over various hyper-parameters.
Figure 7: Model-predicted relative importance estimates for all sub-regions are plotted along with Clark et al.’s (in this case, ground truth) importance estimates. Predictions are shown with their associated model predicted errors. Predictions were made using the best performing model and feature selection algorithm found with nested cross validation - a kernel ridge regressor model and principal component analysis feature selection using 20 features.
Figure 8: Importance estimates obtained for six individual patients using the population-level model, are shown. Population level estimates for the 18 sub-regions are shown as purple squares, with the patient specific estimates as gold circles. The population-level model captures the relationship between important radiomic features and importance estimates, and can be used for estimating approximate shifts in importance estimates for an individual patient’s parotid gland sub-regions. Error estimates were obtained with the kernel ridge error model.
Figure 9: We demonstrate how patient-specific parotid sub-regional importance estimates (left), can be used to supplement population-level importance estimates (middle) using the formula described in the methods, such that final estimates are never lower than population-level estimates, but further increased in regions where patient-specific estimates are high. Only positive perturbations to regional importance were made, to avoid negatively impacting patients in the case where importance estimates are to be used for designing dose constraints.
## 4 Discussion:
The results of this work demonstrate that intra-parotid PSMA PET uptake may be inversely related to regional importance. Clark et al.'s [8] and Han et al.'s [7] relative importance estimates were practically advantageous in that they define numerous non-overlapping sub-regions of parotid glands where correlations between relative importance and PSMA PET uptake could be assessed. Both models predict higher importance towards medial and caudal-middle regions of the gland, while Clark et al.'s predicts much higher importance in the anterior half of the gland. Clark et al.'s model was significantly (\(p<0.02\)) anti-correlated with PSMA PET uptake, while Han et al.'s model was also anti-correlated, but not significantly. It should be noted, however, that regions of high importance predicted by both models correspond to regions of lower than average uptake [17]. Han et al.'s sub-segmentation method yields sub-regions of unequal volume, which also creates problems when comparing uptake statistics. Due to the shape of the parotid glands, sub-regions within center regions will be larger than those at the top and especially the bottom, when sub-segmenting according to equal superior-inferior length. Note that both models (Clark et al. and Han et al.) were developed using salivary measurements. The anti-correlation relationship of importance with PSMA PET uptake is unexpected and suggests an underlying connection that could relate to salivary gland functionality.
We were able to approximate the location of Van Luijk et al.'s [9] critical regions and compute PSMA PET uptake statistics within and outside said regions. Uptake statistics within critical regions were significantly lower than in non-critical regions, supporting the anti-correlation trend between importance and uptake seen with Clark et al. and Han et al.'s sub-regions. Sparing dose in Van Luijk et al.'s [9] critical regions of parotid glands for radiotherapy patients was recently shown to have no significant impact on patient outcomes [26]. However, dose to critical regions was more predictive of salivary dysfunction than whole-gland dose. Comparing uptake with importance in regions defined by Buettner et al.'s [10] analysis also pointed towards an anti-correlative relation of importance with PSMA PET uptake.
Simultaneous deblurring and super-sampling of PSMA PET images prior to uptake calculations led to stronger correlations between uptake and importance estimates, and better model performance for predicting regional importance. Better performance using enhanced images was expected, as partial volume effects cause fine detail wash-out in small regions of PET images.
The PSMA ligand binds to the PSMA epitope of acinar and ductal cells in salivary glands [14], so uptake has been suggested to be directly proportional to functional importance [14, 15, 16]. In addition, irradiation of parotid glands has been shown to decrease PSMA PET uptake [27]. It is likely that whole-gland PSMA PET uptake is directly proportional to whole-gland importance, but the results of this work demonstrate that intra-parotid functional importance may tend towards relatively low-uptake regions. It is unclear which physiological mechanisms within the gland could result in this
relationship.
The relationship between PSMA PET and relative importance appears non-linear (Fig 5) and can be captured using radiomic features and non-linear modelling. Model development for predicting relative importance with PSMA PET and CT radiomic features was successful, yielding a relatively low absolute error for test predictions, considering the small size of the data set. Multilayer perceptrons were originally included in the cross validation but were found to perform poorly due to over-fitting. Kernel ridge regression with a quadratic kernel out-performed all other models tested. This suggests that the relationship between PSMA PET uptake and regional importance is not simply linear. This is further supported by the most important features determined by principal component analysis (Table 7). In particular, radiomic features of squared PSMA PET uptake were highly predictive of importance. The top 6 most important features were all GLCM features of PSMA PET images. Based on this finding, we recommend using them in future predictive models of functional importance for salivary glands.
In this work we demonstrated how PSMA PET and CT can be used together to predict a patient's deviation from population-level estimates of parotid gland regional importance. The purpose was to present a hypothetical method of extracting patient-specific parotid gland importance estimates for tailoring patient dose constraints for radiotherapy [28]. PSMA PET is not acquired as standard-of-care for head-and-neck patients, and is likely too costly to add to the standard-of-care, so these images would not be practically available in most clinical situations. However, we believe further PSMA PET studies could be critical in shedding light on importance trends and variability within the glands. The method presented here is not exclusive to PSMA PET models, and could be used to supplement other population-based models.
Our data set was small, comprising only 60 parotid glands from 30 patients. This necessitated the double cross validation methodology used for model development, where the test set was rotated through, along with inner validation sets for each, to determine model parameters. Outer test sets had no influence on model development and were used to independently test the model's predictive accuracy across all sub-regions. Clark et al.'s [8] importance regions are biased towards low numbers, with only a few regions of high importance. We believe double cross-validation is warranted in this case to help mitigate biased error estimates for different importance values.
## 5 Conclusion
In this work, we have compared four models of parotid gland sub-regional importance from the literature, with regional PSMA PET uptake. In general, an inverse proportionality between importance and PSMA PET uptake was observed. We demonstrated the ability of PSMA PET radiomic features to predict regional importance by training a predictive model of Clark et al.'s importance estimates (MAE = 0.08). Lastly, we demonstrated a methodology for supplementing population-level importance
estimates using patient-specific radiomic features.
## 6 Acknowledgements
This work was supported by the Canadian Institutes of Health Research (CIHR) Project Grant.
|
2309.06731 | Improving Deep Learning-based Defect Detection on Window Frames with
Image Processing Strategies | Detecting subtle defects in window frames, including dents and scratches, is
vital for upholding product integrity and sustaining a positive brand
perception. Conventional machine vision systems often struggle to identify
these defects in challenging environments like construction sites. In contrast,
modern vision systems leveraging machine and deep learning (DL) are emerging as
potent tools, particularly for cosmetic inspections. However, the promise of DL
is yet to be fully realized. A few manufacturers have established a clear
strategy for AI integration in quality inspection, hindered mainly by issues
like scarce clean datasets and environmental changes that compromise model
accuracy. Addressing these challenges, our study presents an innovative
approach that amplifies defect detection in DL models, even with constrained
data resources. The paper proposes a new defect detection pipeline called
InspectNet (IPT-enhanced UNET) that includes the best combination of image
enhancement and augmentation techniques for pre-processing the dataset and a
Unet model tuned for window frame defect detection and segmentation.
Experiments were carried out using a Spot Robot doing window frame inspections
. 16 variations of the dataset were constructed using different image
augmentation settings. Results of the experiments revealed that, on average,
across all proposed evaluation measures, Unet outperformed all other algorithms
when IPT-enhanced augmentations were applied. In particular, when using the
best dataset, the highest average Intersection over Union (IoU) was achieved by the
IPT-enhanced Unet, reaching an mIoU of 0.91. | Jorge Vasquez, Hemant K. Sharma, Tomotake Furuhata, Kenji Shimada | 2023-09-13T05:20:41Z | http://arxiv.org/abs/2309.06731v1 | # Improving Deep Learning-based Defect Detection on Window Frames with Image Processing Strategies
###### Abstract
Detecting subtle defects in window frames, such as dents, scratches, and bends, can be challenging and critical for ensuring product quality and reputation. Traditional machine vision systems may not accurately identify unexpected and random defects, particularly in complex environments with varied object orientations and lighting conditions. Moreover, machine learning methods may struggle with changing environmental lighting conditions and small datasets, limiting their effectiveness in defect detection. To overcome these limitations, we introduce InspectNet, a hybrid deep learning vision-based method that employs advanced image processing techniques to improve defect detection accuracy and reduce errors under adverse lighting conditions. InspectNet also reduces the need for labeled data using the correct image processing technique method for each detection, resulting in a more efficient and generalized inspection process. Our experiments show that InspectNet outperforms other machine learning pipelines, achieving an IoU 9.7% higher than the U-net method. These results demonstrate the potential impact of InspectNet on automating the window-frame inspection process on construction sites, providing a reliable and accurate alternative to manual inspection while reducing time and cost. InspectNet represents a significant step towards improving product quality inspection and protecting companies' reputations in the construction industry.
image processing techniques, image quality assessment, image enhancement, deep learning
## I Introduction
The construction industry is currently confronted with the formidable challenge of meeting the escalating demand for quality inspection while contending with a decreasing number of inspectors available. In order to tackle this issue effectively, there exists an urgent requirement for automated robotic inspection systems to target specific objects. Human vision, with its inherent limitations, can often overlook minor and subtle defects that necessitate a certain level of expertise or experience on the part of the inspector. By addressing this need, automated robotic inspection holds tremendous potential for improving the overall quality assessment process. One such object is aluminum window frames, susceptible to surface defects such as scratches, dents, and bends that can significantly impact product quality and safety. Detecting these defects during and after installation ensures product quality and prevents costly faults. However, these defects can be difficult to detect due to complex lighting conditions and variations in colors and materials. This situation highlights the need for more accurate defect detection methods to ensure quality window frame inspection.
In recent years, machine and deep learning methods have proven effective in detecting defects in many industrial cases [6, 35, 30, 30, 41]. However, these methods often struggle with small pre-processed datasets, interpreting ambiguous inspector or defect criteria, and handling extrinsic factors such as illumination. Specifically, complex lighting conditions can produce shadow occlusions, contrast variations, and color distortion, making it challenging to accurately identify and locate surface defects. Image Processing Techniques (IPTs) have been developed to enhance image attributes [23, 3]. However, these techniques have limitations when detecting unexpected and random defects, particularly outside laboratory environments [4, 2, 22]. Therefore, there is a need for further research and development to overcome these challenges and improve the accuracy of automated defect detection systems by combining deep learning-based methods with image-enhancement IPTs. By addressing these issues, manufacturers can ensure the quality of their products while reducing the costs associated with manual inspection methods.
In order to tackle the challenges arising from adverse lighting conditions in the detection of defects on window frames, this paper presents InspectNet. InspectNet is a hybrid deep learning approach that combines the power of deep learning with a carefully selected strategy of image processing techniques, as illustrated in Fig. 1. The aim of InspectNet is to enhance the accuracy and effectiveness of surface defect detection in window frames. |
2309.16522 | Quantum hobbit routing: Annealer implementation of generalized
Travelling Salesperson Problem | In this paper, we present an implementation of a Job Selection Problem (JSP)
-- a generalization of the well-known Travelling Salesperson Problem (TSP) --
of $N=9$ jobs on its Quadratic Unconstrained Binary Optimization (QUBO) form,
using $\mathcal{O}(N)$ qubits on DWave's Advantage$\_$system4.1 quantum
annealing device. The best known quantum algorithm for TSP to date uses
$\mathcal{O}(N^2)$ qubits. A solution is found using the quantum method.
However, since hardware is not yet able to compensate the increase in
search-space size, no present overall advantage is achieved when comparing the
quantum results with either exhaustive or equiprobably sampled classical
solutions of the problem. | Iñigo Perez Delgado, Beatriz García Markaida, Aitor Moreno Fdez. de Leceta, Jon Ander Ochoa Uriarte | 2023-09-28T15:28:50Z | http://arxiv.org/abs/2309.16522v1 | # Quantum hobbit routing: Annealer implementation of generalized Travelling Salesperson Problem
###### Abstract
In this paper, we present an implementation of a Job Selection Problem (JSP) -- a generalization of the well-known Travelling Salesperson Problem (TSP)-- of \(N=9\) jobs on its Quadratic Unconstrained Binary Optimization (QUBO) form, using \(\mathcal{O}(N)\) qubits on DWave's Advantage_system4.1 quantum annealing device. The best known quantum algorithm for TSP to date uses \(\mathcal{O}(N^{2})\) qubits. A solution is found using the quantum method. However, since hardware is not yet able to compensate the increase in search-space size, no present overall advantage is achieved when comparing the quantum results with either exhaustive or equiprobably sampled classical solutions of the problem.
Quantum annealing, Quantum optimization, Job Selection Problem, Travelling Salesperson Problem, Quadratic Unconstrained Binary Optimization.
## I Introduction
The usage of quantum devices as a means of information transfer beyond classical physics was born as a thought experiment by Wiesner [1] in 1968 [2, 3]. However, that paper was not published until the early 80s, after interest in the topic was reignited by the publications of Rabin [4], Dieks [5], and Wootters and Zurek [6]. In parallel, along with the demonstration of the Turing completeness of 'quantum states' by Benioff [7] --which would now be called 'qubits' [8]-- Manin [9] and Feynman [10] started to think of quantum devices as computational machines: if quantum systems are hard to simulate by classical computers, and analog quantum systems can be used to mimic the evolution of other quantum systems, then it is clear that quantum computers have capabilities beyond those of classical devices. It is only now, more than half a century after Wiesner's paper, that real quantum devices are large and stable enough to attempt a rich variety of tasks. The current state of quantum computing, referred to as the Noisy Intermediate-Scale Quantum (NISQ) era, is however still far from fault-tolerant systems, and thus noise heavily limits the size of the problems solvable by modern quantum devices. The chosen problem, called the Job Selection Problem (JSP), lies at this frontier: we are able to obtain the solution with the chosen quantum device --the Advantage_system4.1 DWave quantum annealer-- but obtain no present improvement when compared with classical solutions. However, we do improve on previous similar quantum algorithms, reducing the number of required qubits from \(\mathcal{O}(N^{2})\) to just \(\mathcal{O}(N)\) thanks to a specifically chosen formulation.
The JSP is a generalization of the widely known Travelling Salesperson Problem (TSP). In the JSP, the traveller needs not only to optimize the route through a number of nodes as in a TSP, but has to first select which of all the possible places they are going to visit, since only a limited amount of some resource is available for the travel and thus typically it is not possible to visit all nodes. In our problem this limiting factor is time, but it could be fuel, distance, or another resource. Then, instead of all \(N\) nodes being visited in \(N\) timesteps, only \(\xi\) nodes are visited out of all \(N\) possible ones, using only \(\xi\) timesteps.
Our JSP-focused formulation allows us to use only \(\mathcal{O}(\xi N)\) qubits instead of the \(\mathcal{O}(N^{3})\) of the native formulation of the TSP [11], the \(\mathcal{O}(N^{2}\log_{2}N)\) of MTZ formulation [12] or the \(\mathcal{O}(N^{2})\) of GPS formulation [13], all of them designed for the TSP. For our JSP formulation, in the \(\xi(N)=N\) case where we recover the TSP, \(\mathcal{O}(\xi N)=\mathcal{O}(N^{2})\), which matches the GPS formulation. However, there are several instances where \(\xi\) is not a function of \(N\). For example, a delivery drone could have 12 hours of battery independently of the number of packages waiting to be delivered. In those cases, \(\mathcal{O}(\xi N)\) is really \(\mathcal{O}(N)\). To the best of our knowledge no quantum solution of the JSP has been published yet, so this is the first time this formulation has been used in this context.
In this paper we will first illustrate the quantum process with the simple example of Sec. II, which will help the reader understand both where the advantage quantum annealing provides comes from and where the problems of the technique may be found. In Sec. III the exact instance of JSP will be explained, and some narrative context will be provided in order to improve the comprehensibility of the problem. In Sec. IV the details of the hamiltonian of the model will be explained, and in Sec. VI the selection of the values of its weights
will be justified. Once the hamiltonian has been presented, the problem will first be solved classically in Sec. VII, both random and exhaustively. As a sanity check, in VIII those results will be used to confirm the validity of the proposed hamiltonian. Lastly, in Sec. IX the results of the quantum processing will be presented, and the extent of the success of the method will be shown. The conclusions of the work appear in Sec. X.
We have chosen to set the problem in the world of Tolkien's The Lord of the Rings trilogy, in order to provide an easy-to-follow example in a fictional yet hopefully familiar world.
## II The value of quantum
The DWave annealing protocol initializes each qubit in the uniform superposition \((|0\rangle+|1\rangle)/\sqrt{2}\), which combined equals a uniform superposition of all of the \(2^{N}\) individual eigenstates of the \(N\)-qubit basis. Then, the state is left to cool down, which transfers the probability amplitude of high-energy states into low-energy ones. This means that, when measuring the system, it will collapse with higher probability to lower-energy states. If the hamiltonian of the system is constructed to inversely correlate the energy of the states and the quality of the solutions they represent, then in order to find the explicit form of the solution we will just need to perform the quantum experiment a number of runs inversely proportional to the probability of measuring the lowest-energy state (or states, in case of degeneracy).
In order to visualize the effect of the cooling down of the system, we have solved a dummy problem with the DWave annealer. In this problem, of \(N=20\) binary variables, the solutions are those states with five of the variables equal to 1 and the remaining fifteen equal to 0. Results of the experiment are shown in Fig. 1.
The increase in the measuring probability of the ground state is crucial, since in several kinds of problems the number of answers corresponding to high-energy states is far greater than the number of optimal answers. In our simple \(N=20\) example, only \(\binom{20}{5}=15504\) answers have the minimum energy from all \(2^{20}=1048576\) possible states, and it is easy to see how quickly this difference escalates with big values of \(N\).
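A minimal sketch of how such a toy problem can be posed as a QUBO and sampled (not the authors' code; dimod's slow reference annealer stands in locally for the hardware sampler, which would instead be accessed through `EmbeddingComposite(DWaveSampler())` and a Leap account):

```python
import dimod

# Penalty (sum_i x_i - 5)^2 for N = 20 binary variables: using x_i^2 = x_i it
# expands into linear terms (1 - 2*5) x_i, quadratic terms +2 x_i x_j (i < j)
# and a constant offset of 25, so states with exactly five ones have energy 0.
N, K = 20, 5
linear = {i: 1 - 2 * K for i in range(N)}
quadratic = {(i, j): 2 for i in range(N) for j in range(i + 1, N)}
bqm = dimod.BinaryQuadraticModel(linear, quadratic, K * K, 'BINARY')

sampleset = dimod.SimulatedAnnealingSampler().sample(bqm, num_reads=100)
best = sampleset.first
print(best.energy, sum(best.sample.values()))   # expect energy 0 with five ones
```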
## III Description of the problem
Years after the events of the trilogy of The Lord of the Rings, Samwise Gamgee decides he wants to go on another trip around the Middle earth. In total, his list is composed of \(N=9\) places, listed in Tab. I. However, he wants to be back home by the 1st of May, which gives him \(t_{MAX}=100\) days for the trip. This means he will need to select only some of the places of his list, leaving some others out.
He proceeds to assign a priority to each of the places he wants to visit in Tab. I. The second thing Sam needs to know in order to plan his trip is the distances between all these \(N\) places. He tabulates them in Tab. II. Lastly, he estimates how much time he will spend on each of the destinations, and adds them to Tab. I.
Lastly, an average travelling speed is needed to convert distances into time. Sam's pony moves at \(v=9.6\) leagues a day [14].
## IV The hamiltonian
Let us describe the algorithm Sam could use if he had access to a quantum annealing device. The core idea of quantum annealer problems is to write a hamiltonian whose state of minimum energy is the state that best fulfills our criteria. That state, in our algorithm, will be described by variables labeled as \(x_{i,s}\) and stored each in one qubit. If the \(i\)th location has been visited in the \(s\)th place, then the variable \(x_{i,s}\) will be equal to one. Else, \(x_{i,s}=0\), since we will write this problem in its Quadratic Unconstrained Binary Optimization (QUBO) form. As \(i\in\{1,N\}\) and \(s\in\{1,\xi\}\), the number of qubits needed is \(\xi N\). For problems where each place is only visited once, we have that \(N\geq\xi\). Moreover, it is also easy to find problems where \(N>>\xi\), so the usefulness of our algorithm is apparent. However, the reduction in the number of qubits comes with an increase of the number of problems to be solved: the method we propose is to solve the problem for \(2\Delta\) different values of \(\xi\) centered around the number of steps \(s_{cl}\) of some classically calculated solution. This solution does not need to be a very fine solution either, because it is only intended to give us a characteristic size for the problem. This increase in the number of steps is not an issue either, since \(\Delta<<N\) and thus solving \(2\Delta\) problems of \(N\left(s_{cl}\pm\Delta\right)\) variables is at worst equivalent to solving one problem of \(\mathcal{O}(N^{2})\) variables. As an example
\begin{table}
\begin{tabular}{|c|c|c|} \hline PLACE \(i\) & PRIORITY \(p_{i}\) & VISIT TIME \(t_{i}\) \\ \hline Bree & 15 & 3 days \\ Edoras & 150 & 5 days \\ Isengard & 35 & 4 days \\ Lorien & 75 & 4 days \\ Mins Thrith & 170 & 7 days \\ Pelargir & 50 & 3 days \\ Rivendel & 40 & 5 days \\ Tharbad & 5 & 2 days \\ Valle & 15 & 4 days \\ \hline \end{tabular}
\end{table}
Table I: The visiting priorities of each place and the expected duration of the visit. According to the table, for example, Sam would enjoy equally visiting Rivendel \((p_{tot}=40)\) and visiting both Isengard and Tharbad \((p_{tot}=35+5)\).
Figure 1: Results of the quantum annealer (in purple), albeit random in nature, show an increase in the measurement probability of low-energy answers when compared with a sample built by choosing each variable \(x_{i}\in\{0,1\}\) equiprobably (in orange).
of the relative size of those numbers, one realistic problem could have \(N=1000\), \(s_{cl}=20\) and \(\Delta=5\), which would mean solving 10 problems of sizes between 15,000 and 25,000 variables instead of one of size 1,000,000.
For clarity purposes, we will divide our hamiltonian into two parts: the 'natural' hamiltonian \(H^{0}\) and the 'restriction' hamiltonian \(H^{R}\). Then, the total hamiltonian will be a sum of both:
\[H=H^{0}+H^{R} \tag{1}\]
### _Natural hamiltonian_
With the 'natural' part of the hamiltonian we account for all natural causes of energy of our analog system: we want to maximize the total solved priority, so we will add a negative term \(H_{p}\) proportional to it. However, we want to do it in the minimum time possible, so another two positive terms will be added too, proportional to the times spent travelling to (\(H_{tt}\)) and visiting (\(H_{vt}\)) each place.
\[H^{0}\equiv H_{p}+H_{tt}+H_{vt}\;. \tag{2}\]
Remember that we want the best fitting state to be the ground state of the hamiltonian, that is, the state with the lowest energy, so if a term is negative it will actually represent a preferred state.
#### Ii-B1 Priority term
The first term of our hamiltonian will be a negative term that accounts for the total obtained priority:
\[H_{p}=-c_{p}\sum_{i,s}p_{i}x_{i,s}\;, \tag{3}\]
where \(c_{p}>0\) is a proportionality constant that will help us balance the contribution of each term of the hamiltonian. The more places we visit (the more variables \(x_{i,s}\in\{0,1\}\) are equal to one), the more priority terms \(p_{i}\) will be added, making the hamiltonian more negative.
#### Ii-B2 Travel time term
The second term will be a positive term that accounts for the travelling time:
\[\begin{split} H_{tt}&=c_{tt}\Bigg{[}\sum_{ij,s} \frac{d_{ij}}{v}x_{i,s}\,x_{j,s+1}\\ &\quad+\sum_{i}\frac{d_{0i}}{v}x_{i,1}+\sum_{i}\frac{d_{i0}}{v}x_ {i,\xi}\Bigg{]}\;,\end{split} \tag{4}\]
where \(c_{tt}>0\) is another proportionality constant. The first of the sums of \(H_{tt}\) accounts for the time spent travelling between visited places: if (and only if) a place \(i\) is visited at step \(s\) (that is, \(x_{i,s}=1\)) and then place \(j\) is visited at step \(s+1\) (\(x_{j,s+1}=1\)) then we add a positive energy term. This term is proportional to the time spent moving from place \(i\) to place \(j\) which, assuming a constant speed \(v\), is proportional to the distance \(d_{ij}\) that separates both places.
The second sum accounts for the time travelling from home, denoted as \(i=0\), to the place visited in the first step, and is proportional to the distance \(d_{0i}\). Similarly, the third sum counts the time spent travelling from the last visited place back home, and is proportional to the distance \(d_{i0}\). Note that in a symmetrically distanced problem such as this \(d_{ij}=d_{ji}\) and the distinction is just for notation clarity.
#### Ii-B3 Visit time term
We will add a third term, also positive, accounting for the visit time, with its own proportionality constant \(c_{vt}>0\):
\[H_{vt}=c_{vt}\sum_{i,s}t_{i}x_{i,s}\;. \tag{5}\]
If place \(i\) is visited at timestep \(s\), \(x_{i,s}=1\) and a term proportional to \(t_{i}\) will be added.
### _Restriction hamiltonian_
When using only the natural part of the hamiltonian, however, some problems arise. For example, as each place \(i\) is represented \(\xi\) times by variables {\(x_{i,1}\), \(x_{i,2}\), \(x_{i,...}\)}, nothing stops the hamiltonian from 'visiting' a high-priority place \(i_{0}\) multiple times, adding \(-c_{p}p_{i_{0}}\) at each of the timesteps and thus lowering the total energy to its minimum. In fact, nothing stops the hamiltonian from turning all \(x_{i,s}\) to one, visiting all the places all the time.
It would seem that a QUBO method, unconstrained by definition, should not be able to take such requirements into account. However, there is a simple way to implement them in the hamiltonian. These terms are called the 'restriction hamiltonian', \(H^{R}\).
#### Ii-B1 'One place per timestep' term
In order to have exactly one place visited on each step \(s\), we want all \(x_{i,s}\) to be zero \(\forall i\) except for one. The following equations will then be fulfilled:
\[\forall s,\quad\sum_{i}x_{i,s}=1\;. \tag{6}\]
If the criteria are not met, the energy should go up. Hence, to ensure our nodes are visited in a one-per-step fashion, we will add the following sum of terms to our hamiltonian:
\[H_{ops}=\lambda_{ops}\sum_{s}\left(\sum_{i}x_{i,s}-1\right)^{2}\;, \tag{7}\]
where \(\lambda_{ops}>0\) is a proportionality constant, similar in nature to the \(c\) constants of the terms of \(H^{0}\) but much bigger. \(\lambda\) constants are sometimes called Lagrange multipliers. If each of the Eqs. 6 is fulfilled, \(H_{ops}=0\) and no penalty is imposed. Else, \(H_{ops}>0\) and, as \(\lambda_{ops}\) is big enough, this translates into a penalty the system cannot afford, which means only solutions that fulfill Eqs. 6 will appear as feasible.
#### Ii-B2 'Each place is visited once at most' term
In a similar fashion to Sec. IV-B1, in order to ensure no place is visited more than once, a penalty term will be added. In this case we want to fulfill
\[\forall i,\quad\sum_{s}x_{i,s}\leq 1\;, \tag{8}\]
which is not a set of equations but of inequalities. Usually this means extra dummy variables are required, which means more qubits, but in this special case where only two values
are permitted (a place can be visited either zero times or one time) we can write the term \(H_{oam}\) ('_once at most_') as follows:
\[H_{oam}=\lambda_{oam}\sum_{i}\left(\sum_{s}x_{i,s}-0.5\right)^{2}. \tag{9}\]
If a place \(i\) is visited zero or one times the penalty is not zero, but it is the minimum value it can take: \(\lambda_{oam}(\pm 0.5)^{2}\). Visiting any other number of times {2,3,4,...} will make the penalty term higher than that \(\{\lambda_{oam}(1.5)^{2}\), \(\lambda_{oam}(2.5)^{2}\), \(\lambda_{oam}(3.5)^{2}\),...}.
## V Hamiltonian encoding
If we take a look at our hamiltonian, all the individual terms are either constant, in which case they do not affect the location of the minimum-energy state, or proportional to one or to two \(x_{i,s}\) variables. However, as we are dealing with a QUBO problem, \(x_{i,s}\in\{0,1\}\) by definition, and we have that \(x_{i,s}=(x_{i,s})^{2}\). Then, effectively, all the terms of the hamiltonian we need to take into account are proportional to some second-order term \(x_{i,s}\,x_{j,s^{\prime}}\), where diagonal \(x_{i,s}^{2}\) terms correspond to the original first-order \(x_{i,s}\) terms. Depending on the exact syntax of the specific annealer controller, we will store the coefficients of these terms either in an \(N\xi\times N\xi\) matrix or, as in our case, in a two-keyed dictionary with that same number of entries (\(N\xi\) for each key). In any case the storage item will be denoted as \(Q\).
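As an illustrative sketch of this encoding (the helper name and the bookkeeping conventions are assumed; only the term expansions of Secs. IV-A and IV-B are taken from the text), the dictionary \(Q\) can be assembled as follows, after expanding the squared penalties and discarding their constant offsets, which do not move the ground state:

```python
from collections import defaultdict

def build_qubo(p, t, d, v, xi, c_p, c_tt, c_vt, lam_ops, lam_oam):
    """Sketch: two-keyed dictionary Q for the hamiltonian of Sec. IV.

    p[i], t[i] : priority and visit time of place i (places are 1..N, 0 is home)
    d[i][j]    : distance table (index 0 is home), v : travelling speed
    xi         : number of steps of this sub-problem
    Constant offsets of the squared penalty terms are dropped.
    """
    N = len(p) - 1                      # p[0] unused; places are labelled 1..N
    Q = defaultdict(float)
    x = lambda i, s: (i, s)             # variable x_{i,s}, with s = 1..xi

    for s in range(1, xi + 1):
        for i in range(1, N + 1):
            # H_p and H_vt are linear, so they sit on the diagonal.
            Q[x(i, s), x(i, s)] += -c_p * p[i] + c_vt * t[i]
            # H_tt: travelling between consecutive steps.
            if s < xi:
                for j in range(1, N + 1):
                    Q[x(i, s), x(j, s + 1)] += c_tt * d[i][j] / v
        # 'One place per step': (sum_i x_{i,s} - 1)^2 gives -1 on the diagonal
        # and +2 on every same-step pair of places (constants dropped).
        for i in range(1, N + 1):
            Q[x(i, s), x(i, s)] += -lam_ops
            for j in range(i + 1, N + 1):
                Q[x(i, s), x(j, s)] += 2 * lam_ops

    for i in range(1, N + 1):
        # H_tt: trip from home to the first stop and from the last stop back home.
        Q[x(i, 1), x(i, 1)] += c_tt * d[0][i] / v
        Q[x(i, xi), x(i, xi)] += c_tt * d[i][0] / v
        # 'Once at most': (sum_s x_{i,s} - 0.5)^2; the linear part cancels and
        # every pair of steps of the same place gets +2 (constants dropped).
        for s1 in range(1, xi + 1):
            for s2 in range(s1 + 1, xi + 1):
                Q[x(i, s1), x(i, s2)] += 2 * lam_oam

    return dict(Q)
```

With the constants chosen as in Sec. VI, the proposed method then amounts to building such a \(Q\) for each \(\xi\in[s_{cl}-\Delta,\,s_{cl}+\Delta]\), sampling each of them, and keeping the best feasible answer found.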
## VI Selection of constants
As mentioned in Sec. IV, coefficients \(c_{p}\), \(c_{tt}\), \(c_{vt}\), \(\lambda_{ops}\) and \(\lambda_{oam}\) have to be chosen in a proportion such that the minimum of the energy corresponds to a maximum of the total solved priority. There are multiple combinations of values that satisfy that minimal requirement. However, some of them will give a higher probability of finding a solution than others. As explained in each case, approximate values for each of the constants have been calculated, and trial-and-error empirical adjustments have been performed in order to complement them.
These are the chosen constants:
* \(c_{p}=0.1\), so that the typical energies of the system are of the order of 10s and not 100s. This is mostly an aesthetic choice, but has been left intentionally in the manuscript to underline that only the proportion between constants is relevant.
* \(c_{tt}=c_{vt}=c_{p}(p_{GUESS}/t_{MAX})\). Here, \(p_{GUESS}\) is a rough estimate of the priority of our optimal answer, which gives the hamiltonian a feel of the characteristic size of the problem. In this paper we have used \(p_{GUESS}=500\), but \(p_{GUESS}=400\) or \(p_{GUESS}=600\) would have worked equally well. Since only the proportion between terms matters, we took priority terms as our reference. Time terms are always penalties, so it makes sense to decrease their coefficients when \(t_{MAX}\) is big and so time is a less scarce resource.
* \(\lambda_{ops}=300c_{p}\) and \(\lambda_{oam}=200c_{p}\), since penalties have to be big enough to be prohibitive: we impose that \(\forall i\ |\lambda_{ops}|,|\lambda_{oam}|>|c_{p}p_{i}|\). \(\lambda_{ops}\) is slightly bigger because, empirically, setting it at \(200c_{p}\) still did not avoid a significant number of violations of its intended restriction, due to the non-ideal nature of the hardware.
## VII Classical resolutions
### _Exhaustive resolution_
The simplest way of obtaining the optimal route among those which fulfil our requisites is to simply check every possible route for each \(\xi\), filtering out those that do not meet the criteria, and then looking for the best answer among all the remaining routes. The best answers will be those with a maximum total priority \(p_{tot}^{\xi}\), calculated as the sum of the priorities of each of the visited places. \(n_{o}^{\xi}\) is the number of separate answers that, for each \(\xi\), share the same \(p_{tot}^{\xi}\). Due to the distance symmetry of the problem, \(n_{o}^{\xi}\) has to be even, since forwards and backwards versions of the same route are counted as separate answers. We then obtain the \(p_{tot}^{G}\) priority of the global optimum or optima by choosing the largest of the \(p_{tot}^{\xi}\) priorities. \(n_{o}^{G}\) is the number of answers that have priority \(p_{tot}^{G}\).
Doing so for our problem, one finds out there are \(n_{o}^{G}=12\) equivalent globally optimal routes, all of them with \(\xi=6\), and with \(p_{tot}^{G}=495\). This is shown in Tab. III. The runtimes required to check all routes for each \(\xi\), denoted as \(T_{run}\), also appear on the table.
However, the runtime escalates quickly not only with the size of the problem, but with the number of steps, proportionally to the number \(f\) of possible routes
\[f(\xi)=\frac{N!}{(N-\xi)!} \tag{10}\]
which, when \(\xi<<N\), can be approximated by
\[f(\xi)\approx N^{\xi}. \tag{11}\]
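For reference, the exhaustive search can be sketched in a few lines (an illustrative implementation with assumed names, not the script used to produce Tab. III):

```python
from itertools import permutations

def exhaustive_best(places, p, t, d, v, t_max, xi):
    """Sketch: enumerate all N!/(N-xi)! ordered routes of length xi, keep the
    feasible ones (total time within t_max) and return the best total priority
    together with the number n_o of routes achieving it."""
    best_p, n_o = None, 0
    for route in permutations(places, xi):
        # Total time: home -> first stop -> ... -> last stop -> home, plus visits.
        legs = [(0, route[0])] + list(zip(route, route[1:])) + [(route[-1], 0)]
        total_t = sum(d[i][j] for i, j in legs) / v + sum(t[i] for i in route)
        if total_t > t_max:
            continue
        total_p = sum(p[i] for i in route)
        if best_p is None or total_p > best_p:
            best_p, n_o = total_p, 1
        elif total_p == best_p:
            n_o += 1
    return best_p, n_o
```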
\begin{table}
\begin{tabular}{|c|c c c c c c c c c c|} \hline & Bree & Edoras & Isengard & Hobbiton & Lórien & Minas Tirith & Pelargir & Rivendel & Tharbad & Valle \\ \hline Bree & - & 200 & 150 & 40 & 140 & 285 & 315 & 100 & 67 & 225 \\ Edoras & 200 & - & 48 & 225 & 100 & 102 & 117 & 172 & 133 & 235 \\ Isengard & 150 & 48 & - & 175 & 83 & 150 & 163 & 135 & 83 & 225 \\ Hobbiton & 40 & 225 & 175 & - & 183 & 321 & 342 & 167 & 90 & 270 \\ Lórien & 140 & 100 & 83 & 183 & - & 158 & 192 & 77 & 100 & 145 \\ M. Tirith & 285 & 102 & 150 & 321 & 158 & - & 43 & 200 & 229 & 245 \\ Pelargir & 315 & 117 & 163 & 342 & 192 & 43 & - & 243 & 252 & 290 \\ Rivendel & 100 & 172 & 135 & 167 & 77 & 200 & 243 & - & 100 & 125 \\ Tharbad & 67 & 133 & 83 & 90 & 100 & 229 & 252 & 100 & - & 220 \\ Valle & 225 & 225 & 225 & 270 & 145 & 245 & 290 & 125 & 220 & - \\ \hline \end{tabular}
\end{table}
Table II: The distances \(d_{ij}\) between different places of Middle-earth, in leagues [14, 15].
The comparison between the real runtime and the runtime calculated with Eq. 10 is shown in Fig. 2.
### _Random sample resolution_
For problems with \(N>>\xi\) the exhaustive method escalates quickly with both variables. Moreover, exhaustive search is not what quantum annealers do, since they work with a fixed number of probabilistic runs. Adopting that same approach with a classical computer allows us to drastically reduce the runtime of the search for optimal routes, at the expense of having to deal with the intrinsic uncertainty of random processes.
Let \(f(\xi)\) be the number of possible different routes for a given \(\xi\). Then, knowing from Tab. III the number of those routes which are optimal, we can calculate the \(P^{\xi}\) probability that, for one randomly chosen route of a certain \(\xi\), that route is optimal. Then, for \(r\) runs, we can find the expected number of found optimal answers to be
\[\langle n_{f}^{\xi}\rangle=rP^{\xi}=\frac{r\;n_{\circ}^{\xi}}{f(\xi)}=\frac{r \;n_{\circ}^{\xi}(N-\xi)!}{N!}\;. \tag{12}\]
Performing the experiment a number \(r=10000\) of runs for each \(\xi\), we obtain the results shown in Tab. IV.
Although \(T_{run}^{\xi}\) escalates much better in Tab. IV than in Tab. III, the use of a fixed number of \(r\) runs is excessive for low-\(\xi\) cases, where \(\langle n_{f}^{\xi}\rangle>n_{\circ}^{\xi}\), and not enough for high-\(\xi\) cases where \(\langle n_{f}^{\xi}\rangle<1\).
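The random-sample baseline admits an equally short sketch (again with assumed names); counting how many of the \(r\) uniformly drawn routes hit the known optimum can then be compared against the prediction of Eq. 12:

```python
import random

def count_optimal_hits(places, p, t, d, v, t_max, xi, p_opt, r=10000):
    """Sketch: draw r uniformly random ordered routes of length xi and count
    how many are feasible (within t_max) and reach the optimal priority p_opt."""
    hits = 0
    for _ in range(r):
        route = random.sample(places, xi)   # every ordered route is equally likely
        legs = [(0, route[0])] + list(zip(route, route[1:])) + [(route[-1], 0)]
        total_t = sum(d[i][j] for i, j in legs) / v + sum(t[i] for i in route)
        if total_t <= t_max and sum(p[i] for i in route) == p_opt:
            hits += 1
    return hits
```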
## VIII Hamiltonian confirmation
Once we know that the optimum answer is a route with \(\xi=6\) and \(p=495\), we can check whether the hamiltonian devised in Sec. IV has that answer as its lowest-energy state. This, like the classical resolutions of Sec. VII, is not part of the proposed method and is used here only for comparison and confirmation purposes. Plotting all the \(\xi=6\) routes as a function of their total priority \(p_{tot}\) and the time \(t_{tot}\) needed to complete them, along with the \(t=t_{MAX}\) vertical line and the \(H^{0}=0\) diagonal line where \(|H_{p}|=|H_{tt}+H_{vt}|\), gives us Figs. 3 and 4.
It is enough for them to be the lowest-energy routes within the exhaustive search, since any change that would decrease the energy by adding or repeating a high-priority place in the route is met with an even greater penalisation, because \(\forall i\;|\lambda_{ops}|,|\lambda_{oam}|>|c_{p}p_{i}|\).
The distribution of results by energy is shown in Fig. 5. As in Fig. 1, the classical distribution follows the bell-shaped distribution expected from a combinatorial problem like this.
## IX Results
Performing the experiment three times on the quantum annealer Advantage_system4.1, each with 10000 runs per value of \(\xi\in\{4,5,6,7\}\), we obtain the three results for the optimal route shown in Tab. V.
Only once in these three experiments, i.e. \(30000\) runs in total, do we obtain one of the \(p=495\) optimal solutions. With \(\frac{9!}{(9-6)!}=60480\) different \(\xi=6\) routes, 12 of which have \(p=495\), it looks like the quantum solution is similar to a classical random guess. However, that direct comparison is not entirely symmetrical, since the search space of the quantum device with \(N=9\) places and \(\xi=6\) steps has \(N\xi\) variables, which creates a space inhabited by \(2^{N\xi}\approx 1.8\times 10^{16}\) solutions. The distribution of the quantum histogram is shown in Fig. 6.
For a more solid statistical analysis more data should be gathered; however, only limited access to the device was available at the time of the experiments. Moreover, due to those limitations, \(10^{4}\) was the maximum number of runs we could ask for in each experiment. Since at least one of our \(30000\) results turned out to be an optimal answer, an increase of the number of runs by only an order of magnitude, up to \(10^{5}\), can be expected to provide sufficient reliability.
## X Conclusions
The present work manages to solve the proposed Job Selection Problem using the Advantage_system4.1 DWave quantum annealer. Although it does not achieve an advantage over classical methods, this is due to the difference in size between the search spaces of the two approaches. Improved physical devices which accumulate probability around low-energy answers with greater efficiency would make the method directly applicable.
Moreover, as shown in Sec. IV, our method only uses \(N\xi\) variables (which also means using \(N\xi\) qubits) to solve the problem, as opposed to the \(N^{2}\) variables needed by other methods devised with only the TSP in mind. This is not only a notable improvement in resource economy, but also a reduction of the answer space size, which we have shown is crucial to take advantage of the probability increase of low-energy states and reliably find the optimum answers.
Further lines of research coming out of this work include a search for a standardized form of coefficient selection that minimizes the relative energy of the ground state, since it is this energy that correlates with the probability of measuring that optimal answer. On the other hand, a standard benchmark of the quality of quantum devices could be constructed by evaluating the shape of probability distributions such as the ones shown in Figs. 1 and 6 for simple, controllable problems similar to the one explained in Sec. II. Finally, comparison with classical methods more refined than the exhaustive or
\begin{table}
\begin{tabular}{|c|c|c|c|} \hline ROUTE & PRIORITY & TRAVEL TIME & RUNTIME \\ \hline Hobbiton & & & \\ Valle & & & \\ Isengard & & & \\ Edoras & 495 & 99 days & 117.57s \\ Pelargir & & & \\ Minas Tirith & & & \\ Lorien & & & \\ Hobbiton & & & \\ \hline Hobbiton & & & \\ Lorien & & & \\ Pelargir & & & \\ Edoras & & & \\ Isengard & & & \\ Hobbiton & & & \\ \hline Hobbiton & & & \\ Bree & & & \\ Lorien & & & \\ Minas Tirith & 460 & 95 days & 131.05s \\ Pelargir & & & \\ Edoras & & & \\ Hobbiton & & & \\ \hline \end{tabular}
\end{table}
Table V: Optimal routes according to each of the 10000-run experiments performed. The possible number of steps (without counting the starting and finishing point, Hobbiton) was \(\xi\in\{4,5,6,7\}\), and some of the answers have 5 steps while others have 6. For each experiment, all results with a higher total priority but a total time above \(t_{MAX}=100\) days were discarded. The runtime shown represents the sum of both the classical and the quantum parts of the process.
Figure 5: Distribution of energies of all the possible \(\xi=6\) routes which visit each place once at most. The optimal \(p=495\) answers correspond to the lower-energy end of the histogram.
Figure 6: Distribution of the energies of the answers of the quantum resolution of the JSP problem. The first hill, in the negative energies, corresponds to the answers represented in Fig. 5. The rest of the histogram is composed of results with \(H^{R}>0\).
the equiprobable random sampling used in this paper would also be of interest.
## Acknowledgments
The research leading to this paper has received funding from the QUANTEK project (ELKARTEK program from the Basque Government, no. KK-2021/00070).
(c) 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
## Competing interests
The authors declare no competing interests. We acknowledge the use of the DWave system for this work. The views expressed are those of the authors and do not reflect the official policy or position of DWave or the DWave team.
|
2309.05238 | Generating Natural Language Queries for More Effective Systematic Review
Screening Prioritisation | Screening prioritisation in medical systematic reviews aims to rank the set
of documents retrieved by complex Boolean queries. Prioritising the most
important documents ensures that subsequent review steps can be carried out
more efficiently and effectively. The current state of the art uses the final
title of the review as a query to rank the documents using BERT-based neural
rankers. However, the final title is only formulated at the end of the review
process, which makes this approach impractical as it relies on ex post facto
information. At the time of screening, only a rough working title is available,
with which the BERT-based ranker performs significantly worse than with the
final title. In this paper, we explore alternative sources of queries for
prioritising screening, such as the Boolean query used to retrieve the
documents to be screened and queries generated by instruction-based generative
large-scale language models such as ChatGPT and Alpaca. Our best approach is
not only viable based on the information available at the time of screening,
but also has similar effectiveness to the final title. | Shuai Wang, Harrisen Scells, Martin Potthast, Bevan Koopman, Guido Zuccon | 2023-09-11T05:12:14Z | http://arxiv.org/abs/2309.05238v3 | # Generating Natural Language Queries for More Effective Systematic Review Screening Prioritisation
###### Abstract.
Screening prioritisation in medical systematic reviews aims to rank the set of documents retrieved by complex Boolean queries. Prioritising the most important documents ensures that subsequent review steps can be carried out more efficiently and effectively. The current state of the art uses the final title of the review as a query to rank the documents using BERT-based neural rankers. However, the final title is only formulated at the end of the review process, which makes this approach impractical as it relies on _ex post facto_ information. At the time of screening, only a rough working title is available, with which the BERT-based ranker performs significantly worse than with the final title. In this paper, we explore alternative sources of queries for prioritising screening, such as the Boolean query used to retrieve the documents to be screened and queries generated by instruction-based generative large-scale language models such as ChatGPT and Alpaca. Our best approach is not only viable based on the information available at the time of screening, but also has similar effectiveness to the final title.
Systematic review, Screening prioritisation, Query variations, LLM
Footnote 1: Please note, in our ACM published paper, the result of the working title in the Seed Collection was wrong due to a bug in data pre-processing; it is updated here, and the update of the result does not have any influence on the observations and conclusions made in this paper.
## 1. Introduction
Systematic reviews are a widely used type of literature review in evidence-based medicine to comprehensively identify, analyse and summarise all available research on a particular topic or question in an unbiased manner (Wang et al., 2018). They provide a rigorous and transparent pathway to medical decision-making tasks and minimise bias and errors that might otherwise result from an ad hoc literature search (Wang et al., 2018). Systematic reviews are usually conducted according to a protocol of established steps (Kal
straightforward. A Boolean query is complex, structured, and detailed; it is very different from the queries that are common in ad hoc retrieval (Bordes and McAllester, 2017). BERT-based methods for ranking may perform poorly on these queries. Therefore, we investigate the use of two instruction-based models, namely OpenAI's ChatGPT (Krishnan et al., 2017) and Stanford's Alpaca (Santford et al., 2017), to generate natural language queries from Boolean queries. These generated natural language queries are in turn used as input for our neural-ranker-based screening prioritisation methods. The bottom rows of Table 1 show that the most powerful variants of our method are able to generate queries that compete with the use of the final title.2 To guide our investigation, we have developed five research questions:
Footnote 2: Code: [https://github.com/ielab/SIGIR-AP-2023-Bolean2Natural45R](https://github.com/ielab/SIGIR-AP-2023-Bolean2Natural45R)
**RQ1**: How effective is screening prioritisation with Boolean queries compared to natural language queries generated from them?
**RQ2**: How do different generation models affect the effectiveness of natural language queries generated from Boolean queries?
**RQ3**: What impact do ranking methods have on the effectiveness of natural language queries derived from Boolean queries?
**RQ4**: Does generating multiple natural language queries from a single Boolean query improve effectiveness?
**RQ5**: How effective is screening prioritisation with natural language queries derived from Boolean queries compared to using the working titles of systematic reviews?
## 2. Related Work
In this section, we review the literature on screening prioritisation for systematic reviews and instruction-based large language models.
### Systematic Review Screening Prioritisation
Screening prioritisation has received considerable attention in technology-assisted systematic review generation. Various aspects were investigated, including the use of different input data sources (Sutton et al., 2017; Bordes and McAllester, 2017; Bordes and McAllester, 2017; Bordes and McAllester, 2017; Bordes and McAllester, 2017; Bordes and McAllester, 2017; Bordes et al., 2018; McAllester et al., 2018; McAllester et al., 2018; McAllester et al., 2018; McAllester et al., 2018; McAllester et al., 2018; McAllester et al., 2018), and active learning techniques that improve the efficiency of screening prioritisation through a human-in-the-loop approach (Bordes and McAllester, 2017; Bordes and McAllester, 2017; Bordes and McAllester, 2017; Bordes and McAllester, 2017; McAllester et al., 2018; McAllester et al., 2018; McAllester et al., 2018; McAllester et al., 2018; McAllester et al., 2018).
_Boolean-driven screening prioritisation_ uses a Boolean query to rank candidate documents directly. While few studies have examined using Boolean queries alone, most used them in conjunction with the final review title (Bordes and McAllester, 2017; Bordes and McAllester, 2017; Bordes and McAllester, 2017). Typically, keywords are extracted from the Boolean query and the review title to formulate a (bag of words) query, and then a lexical scoring function determines the relevance of a document to the query. However, these methods are impractical since the final title is not available at the time of screening. The coordination level fusion (CLF) approach proposed by Scells et al. (Scells et al., 2017) is the only existing method that examines screening prioritisation using Boolean queries only. It uses rank fusion to rank the documents retrieved by each clause of a Boolean query.
_Neural ranker-based screening prioritisation_ methods rely on pretrained models such as BERT and have achieved much higher effectiveness than traditional lexical rankers, on par with active learning methods that use relevance signals from the screened documents (Santford et al., 2017). Despite the improvements they have brought to screening prioritisation, there are still challenges in using them: for instance, the input token length limit imposed by most BERT-based models (Krishnan et al., 2017) is a critical constraint. It does not allow the model to process longer text inputs, such as the full text of candidate documents, extensive Boolean queries, or seed studies as a source of information. Previous approaches using neural rankers for screening prioritisation have focused only on using the review title as a query. We show that their effectiveness does not generalise when working titles are used instead (see Table 1).
### Instruction-based Large Language Models
Recent advances in instruction-based large language models (LLMs), such as ChatGPT, have shown that they are able to accurately follow user instructions to complete tasks (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017). These models typically contain tens of billions of parameters and are trained on diverse and extensive textual data so that they are able to generate relevant and coherent answers for a wide range of topics (Krishnan et al., 2017). Several studies have evaluated the effectiveness of ChatGPT on various tasks, often observing an increase in effectiveness compared to previous approaches, e.g., in question answering (Krishnan et al., 2017; Krishnan et al., 2017) or ranking (Krishnan et al., 2017; Krishnan et al., 2017). As part of a systematic review literature search, the use of ChatGPT to generate Boolean queries for systematic reviews has been investigated by Wang et al. (Wang et al., 2018). The results of this study showed that ChatGPT generates effective queries with appropriate prompting. In this paper we use ChatGPT and Alpaca (Santford et al., 2017). Alpaca was fine-tuned based on an LLM developed by Meta with seven billion parameters, known as LLaMA (Santford et al., 2017; Krishnan et al., 2017). Alpaca has been fine-tuned using 52K instruction-output pairs generated from ChatGPT via the self-instruct approach of Wang et al. (Wang et al., 2018), showing similar capabilities to ChatGPT in preliminary human evaluations (Santford et al., 2017).
For ranking tasks, instruction-based language models are integrated with ranking models to achieve more effective results (Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017; Krishnan et al., 2017). Specifically, there are two common ways to combine these models: _retrieval-then-generation_ and _generation-then-retrieval_. In the _retrieval-then-generation_ approach, the ranking model first retrieves a set of relevant results based on the user's query. Then, the instruction-based language model generates a response based on the retrieved documents. This method relies primarily on the ranking model's ability to understand the user and extract corresponding information from their query, while the LLM is used to summarise the retrieved evidence to provide the user with a credible and comprehensive answer (Krishnan et al., 2017; Krishnan et al., 2017). In the _generation-then-retrieval_ approach, the instruction-based LLM is first used to generate a response based on the user's query, and this response is then processed by a ranking model as a new query to retrieve documents that provide evidence for its statements (Krishnan et al., 2017; Krishnan et al., 2017).
\begin{table}
\begin{tabular}{l l l l l l l} \hline \hline
**Source** & **Query** & **MAP** & **LastRel** & **WSS95** & **WSS100** & **Ref.** \\ \hline \multirow{2}{*}{post hoc} & final review title & 0.295 & 634.975 & **0.609** & **0.597** & (Santford et al., 2017) \\ & best generated query & **0.310** & **620.025** & 0.589 & 0.569 & **ours** \\ \hline \multirow{2}{*}{practice} & working title & 0.171* & 801.050* & 0.465* & 0.450* & 0.50* & (Santford et al., 2017) \\ & generated queries & 0.249 & 714.500 & 0.541* & 0.521* & **ours** \\ \hline \hline \end{tabular}
\end{table}
Table 1. Our contribution at a glance: The post hoc effectiveness of Wang et al.’s (Wang et al., 2018) original approach can be achieved by generating queries from sources available in practice; * indicates statistically significant differences.
## 3. Methodology
Figure 1 gives an overview of our approach for screening prioritisation. One or more natural language queries are generated from a given Boolean query. Then the candidate documents to be screened are ranked based on the generated queries.
### Query Generation
Our task is to generate a natural language query from a Boolean query of a systematic review that best describes its information need. We evaluate two LLMs for this task: ChatGPT (Ghosh et al., 2017)3 and Alpaca (Alpaca et al., 2018).4 Figure 1 shows an example consisting of a Boolean query, carefully optimized prompts for each of the two models, and four alternative generated natural language queries. Preliminary studies have shown that Alpaca has problems with zero-shot generation for virtually all Boolean queries, often returning the original Boolean query itself. We addressed this problem by fine-tuning Alpaca using pairs of Boolean queries and natural language queries generated by ChatGPT as training examples.
Footnote 3: We used the OpenAI’s GPT-3.5-turbo API with a maximum of 4,097 tokens.
Footnote 4: The model was fine-tuned using the original setup of Stanford’s Alpaca.
To adjust the "creativity" of an LLM, these models often introduce a degree of stochasticity controlled by the so-called temperature parameter \(t\), where \(0\leq t\leq 1\)(Gao et al., 2018). Setting \(t=0\) causes the model to generate the same response over multiple inferences for a given prompt, whereas \(t=1\) causes the model's response to be randomly different each time. In other words, the lower the temperature value, the more deterministic a model's response. In our experiments, we investigate how the creativity of a model affects the effectiveness of screening prioritisation (RQ4). Therefore, we compare the generation of a single natural language query (_Single-Generation_) to generating multiple natural language queries (_Multi-Generations_) by adjusting the temperature accordingly.5
Footnote 5: As Hugging Face does not allow \(t=0\), we use \(t=0.0001\) instead.
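A minimal sketch of the Single- and Multi-Generations calls is given below (the prompt string is only a placeholder for the prompts of Figure 1, and the helper name is assumed; the calls follow standard OpenAI Python client usage):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Placeholder prompt; the actual prompts used are those shown in Figure 1.
PROMPT = ("Convert the following Boolean query of a systematic review into a "
          "natural language query describing its information need:\n\n{query}")

def generate_queries(boolean_query: str, n: int = 1, temperature: float = 0.0):
    """Single-Generation: n=1 with temperature 0 (deterministic output).
    Multi-Generations: n>1 with a higher temperature for diverse rewrites."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": PROMPT.format(query=boolean_query)}],
        temperature=temperature,
        n=n,
    )
    return [choice.message.content.strip() for choice in response.choices]
```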
### Document Ranking
To rank the documents, we follow the state-of-the-art screening prioritisation method developed by Wang et al. (Wang et al., 2019). Here, a cross-encoder-based neural ranker is used to calculate the relevance score of a query-document pair. Specifically, the query and the document are first concatenated with a \([SEP]\) token, and then fed into a cross-encoder model that calculates the relevance score for the concatenated pair. The relevance score is then computed from the output representation of the special classification token \([CLS]\) of the model (Beng et al., 2017). In the proposed pipeline, the query is obtained from a Boolean query as described above. However, in our experiments, we also explore the alternative of using the original Boolean query as input to the cross-encoder to address RQ1, and the alternative of using the working title of the review to address RQ5.
For fine-tuning, we first use a pre-trained BioBERT model, which has been shown to be effective in screening prioritisation when the title of the review is used as the query (Wang et al., 2019). Then, for each topic in the training set, we extract all relevant documents \(D^{+}\) and a number of non-relevant documents \(D^{-}\). For each pair of relevant and non-relevant documents \((d^{+},d^{-})\in D^{+}\times D^{-}\), we create training triples (query, \(d^{+},d^{-}\)) and then fine-tune the model using the localised contrastive loss proposed by Gao et al. (Gao et al., 2019).
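The scoring step can be sketched with the Hugging Face transformers library as follows (illustrative only: the checkpoint name and the single-logit head are assumptions, and the head must first be fine-tuned as described above before the scores are meaningful):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed base checkpoint; in practice the fine-tuned BioBERT cross-encoder is used.
MODEL_NAME = "dmis-lab/biobert-base-cased-v1.1"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=1)
model.eval()

def score(query: str, document: str) -> float:
    """Relevance score of a query-document pair: the tokenizer concatenates the
    two texts with [SEP] and the score is read from the classification head."""
    inputs = tokenizer(query, document, truncation=True, max_length=512,
                       return_tensors="pt")
    with torch.no_grad():
        return model(**inputs).logits.squeeze().item()

# Ranking a topic then amounts to scoring every candidate document against the
# (generated) query and sorting the candidates by descending score.
```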
To investigate RQ4, we generate multiple queries per Boolean query in the Multi-Generation setup, calculate a relevance score for each pair of query and candidate document, and then apply two strategies to derive a single relevance score from them for a candidate document: _Fusion_ and _Oracle_ selection. In the _Fusion_ strategy, the relevance scores of all natural language queries on the same topic that refer to a candidate document are summed to calculate the final relevance score of the document with respect to the topic of the systematic review. For the _Oracle_ strategy, we first evaluate the ranked lists from different natural language queries, after which the best-performing ranked list, as measured by the mean average precision (MAP), is selected. This strategy serves as an upper bound baseline.
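Both strategies reduce to simple post-processing of the per-query rankings: the Fusion strategy sums scores per document (see also the CombSUM sketch below), while the Oracle selection can be sketched as follows (function names assumed):

```python
def average_precision(ranked_rels):
    """AP of a single ranked list given as 0/1 relevance labels in rank order."""
    hits, precisions = 0, []
    for rank, rel in enumerate(ranked_rels, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / hits if hits else 0.0

def oracle_pick(ranked_lists_rels):
    """Oracle strategy: index of the generated query whose ranked list has the
    highest AP; an upper bound only, since it peeks at the relevance labels."""
    return max(range(len(ranked_lists_rels)),
               key=lambda k: average_precision(ranked_lists_rels[k]))
```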
Figure 1. Illustration and examples of our screening prioritisation approach: Given a Boolean query, an instruction-based LLM is prompted to generate one or more natural language queries. Then, given a generated query and a candidate document, a neural ranker is used to predict one or more relevance scores for the document. In the latter case, the scores are fused by addition. As a baseline for our experiments, the score that maximises effectiveness is selected by an oracle.
To investigate the effectiveness of combining the results of a generated natural language query with those of the original Boolean query, we also evaluate a setup that includes a fusion of their ranking results. For this purpose, we use the COMBSUM fusion technique to fuse the two ranked lists (Kipf and Welling, 2017): the relevance score of a document in the fused ranked list is the sum of the individual scores of the document in the two lists to be fused.
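Both the Multi-Generations Fusion strategy and the fusion of Boolean and generated-query runs reduce to summing scores per document, as in the following sketch (names assumed):

```python
from collections import defaultdict

def combsum(*runs):
    """CombSUM: fuse several runs given as {doc_id: score} dictionaries by
    summing, per document, its scores across the runs, then rank by the sum."""
    fused = defaultdict(float)
    for run in runs:
        for doc_id, score in run.items():
            fused[doc_id] += score
    return sorted(fused.items(), key=lambda item: item[1], reverse=True)

# e.g. fused_ranking = combsum(boolean_run, generated_query_run)
```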
## 4. Experimental Setup
In this section, we outline the datasets we use, the methods we apply, and how we evaluate them.
### Dataset
We use two collections in our experiments. The _CLEF TAR Collection_ comprises three datasets from 2017, 2018, and 2019. In 2017, the dataset includes 50 systematic review topics divided into 20 for training and 30 for testing (Kipf and Welling, 2017). In 2018, the dataset was expanded to include all 50 systematic review topics from 2017 as a training set, adding 30 new topics for testing (Kipf and Welling, 2017). The 2017 and 2018 datasets focus on Diagnostic Test Accuracy (DTA) systematic reviews. The 2019 dataset is divided into four categories of systematic reviews: the DTA category, which builds upon the 2018 dataset and uses it as the training set with eight new topics for testing; the Intervention category, containing 20 training topics and 20 testing topics; the Prognosis Review category and the Qualitative Review category, each featuring one topic (Kipf and Welling, 2017). In our experiments, we treat DTA and Intervention topics as two sub-collections, denoted as CLEF-2019-DTA and CLEF-2019-Intervention. Each topic in the CLEF TAR Collection provides the review title, the Boolean query used for document retrieval, the documents retrieved as a result of the Boolean query, and the relevance labels for the documents at both abstract and full-text levels (Kipf and Welling, 2017; Welling, 2017; Welling, 2017).
The _Seed Collection_ contains 40 systematic review topics without training or testing portions (Kipf and Welling, 2017). The dataset also contains the review title, the Boolean query used during retrieval, documents retrieved, and relevance labels. However, unlike the CLEF TAR Collection, where abstract-level relevance and full-text level relevance are both included, the Seed Collection only contains full-text level relevance judgements directly extracted from published reviews. One major difference between the Seed Collection and the CLEF TAR Collection is that the dataset also includes more details of the review. For example, it includes a temporary working title for each review, named 'search name' in the collection, and a set of seed studies used for Boolean query creation (Kipf and Welling, 2017).
Unlike previous studies that used only the training portions specified in each dataset, we re-split our training data to include distinct topics from all other datasets (CLEF TAR Collection and Seed Collection) that are not included in the test portion of the respective dataset. We chose this strategy due to the uneven allocation of training data across the datasets we use. For instance, the Seed Collection dataset contains no training topics, whereas CLEF-2017 comprises 20 training topics, and CLEF-2019-DTA holds 80 topics. By incorporating training data from a range of sources, we aim to establish a more balanced and comprehensive training environment for our fine-tuned models.
### Baseline Methods
In our experiment, we employ BM25 and the Query Likelihood Model (QLM) as baseline ranking models (Kipf and Welling, 2017; Welling, 2017). For query preprocessing, we begin by removing all field types in the query, leaving us with only the query terms. We then apply the matching algorithm to the candidate document, calculating a relevance score between the query and the document.
Similar to previous studies comparing neural rankers with traditional term-matching rankers, we utilise specific tools to implement our baseline models. For BM25, we employ the Gensim toolkit, an open-source library that offers robust implementations for a variety of information retrieval tasks (Kipf and Welling, 2017). For the QLM, we apply Jelinek-Mercer (JM) smoothing, a popular technique for query likelihood estimation (Kipf and Welling, 2017).
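For completeness, the JM-smoothed query likelihood score can be sketched as follows (a minimal implementation with assumed names; the smoothing weight shown is illustrative):

```python
import math
from collections import Counter

def qlm_jm(query_terms, doc_terms, collection_tf, collection_len, lam=0.5):
    """Jelinek-Mercer smoothed query likelihood:
    log P(q|d) = sum_t log[(1 - lam) * tf(t, d) / |d| + lam * cf(t) / |C|]."""
    doc_tf, doc_len = Counter(doc_terms), len(doc_terms)
    score = 0.0
    for term in query_terms:
        p_doc = doc_tf[term] / doc_len if doc_len else 0.0
        p_col = collection_tf.get(term, 0) / collection_len
        p = (1 - lam) * p_doc + lam * p_col
        score += math.log(p) if p > 0 else float("-inf")
    return score
```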
In addition to the traditional ranking models, we also benchmark our models against the best-performing methods from participant runs in each CLEF-TAR dataset. It is important to note that certain participant runs have utilised relevance signals from relevance assessments to actively re-rank the remaining documents. We have excluded these runs from our baseline comparison, as they do not align with the scope of our screening prioritisation task, making the comparison unfair. The following participant runs have been selected as baselines for our study: CLEF-2017: _sheffield.run4_(Kipf and Welling, 2017); CLEF-2018: _sheff-general_(Kipf and Welling, 2017); CLEF-2019-DTA: _Sheffield/DTA/DTA_sheffield-Odds.Ratio_(Kipf and Welling, 2017); CLEF-2019-Intervention: _Sheffield/DTA/DTA_sheffieldLog.Likelihood3_(Kipf and Welling, 2017).
Lastly, for the CLEF-2017 and 2018 datasets, we compare our method with the CLF approach proposed by Scells et al. (Scells et al., 2018). The CLF approach stands out as the only existing methodology that has explored the application of Boolean queries for systematic review screening prioritisation.
### Model Fine-tuning
In our experiments, we focus on fine-tuning two models: the Alpaca model for query generation and BioBERT for document ranking.
#### 4.3.1. Fine-tuning the Alpaca Model
To fine-tune the Alpaca model, our first step involves using Single-Generation to convert the Boolean query into a natural language query for the training portion of each dataset. We use ChatGPT for this conversion task, and consider its output as the gold standard for the Alpaca model to learn from. Following this, we use the prompts shown in the second column of Figure 1 to further fine-tune the Alpaca model to generate a natural language query using the Boolean query of a topic. As Boolean queries for systematic reviews are complex and require many tokens, we opted to simplify the prompt used for Boolean query conversion in ChatGPT. To ensure minimal loss of information from the Boolean query, we increased the input token limit of the Alpaca model from 512 to 768.6 Our fine-tuning process for each Alpaca model continues over three epochs, with batch size and gradient accumulation steps of one each, using three Nvidia 80GB A100 GPUs. For the remaining parameters, we adhered to those used in the original Alpaca work (Dai et al., 2019). During inference, we use the same prompt as fine-tuned to convert the Boolean query to a natural language query in each test dataset.
#### 4.3.2. Fine-tuning the neural ranker
In our experiments, we chose BioBERT as our pre-trained language model to fine-tune for ranking (Serban et al., 2017). Previous research has demonstrated that BioBERT shows higher effectiveness in the task of title-driven screening prioritisation (Zhu et al., 2019). As in previous work, we utilise the Reranker toolkit (Rasmal et al., 2018) to fine-tune our model across 100 epochs. The key distinction in the fine-tuning and inference pipeline lies in the maximum query length set for all models that utilise Boolean or natural language queries. Instead of the query limit of 64 that was set in previous work for the review title, we extend this to 256 to accommodate the naturally longer input derived from Boolean queries. This adjustment ensures that our models are capable of processing and learning from the full complexity of these queries, potentially enhancing their performance and the accuracy of their outputs.
### Evaluation
For the CLEF-TAR Collection, we rely on abstract-level relevance to ensure a fair comparison with the submitted runs. However, for the Seed Collection where no abstract-level labels were provided, we utilise full-text level relevance signals.
To demonstrate the effectiveness of document ranking on screening prioritisation, we compute various evaluation metrics as established at CLEF TAR. These metrics include Average Precision (AP), the rank of the last relevant document (Last_rel), Recall at several percentage cutoffs (1%, 5%, 10%, and 20%), and Work Saved over Sampling (WSS) at 95% and 100%. In accordance with the CLEF TAR tasks, we have used the same metrics for evaluating our work and used the tar-2018 evaluation script to evaluate our results (Zhu et al., 2019).
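A sketch of the rank-based definitions of Recall@\(x\%\) and WSS is given below (assumed names and the commonly used formulations; the figures reported in this paper are computed with the official tar-2018 evaluation script):

```python
def recall_at_pct(ranked_rels, pct):
    """ranked_rels: 0/1 relevance labels in ranked order.
    Recall obtained after screening the top pct% of the ranking."""
    n, total = len(ranked_rels), sum(ranked_rels)
    k = max(1, round(n * pct / 100))
    return sum(ranked_rels[:k]) / total if total else 0.0

def wss(ranked_rels, recall_level=0.95):
    """Work Saved over Sampling at recall level r (common definition):
    WSS@r = (n - k_r) / n - (1 - r), with k_r the number of documents that
    must be screened, in ranked order, to reach recall r."""
    n, total = len(ranked_rels), sum(ranked_rels)
    found, k_r = 0, n
    for rank, rel in enumerate(ranked_rels, start=1):
        found += rel
        if total and found >= recall_level * total:
            k_r = rank
            break
    return (n - k_r) / n - (1 - recall_level)
```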
## 5. Main Results
In this section, we outline and interpret the results from our experiments. Specifically, we delve into the results derived from _Single-Generation_ in Section 5.1, while Section 5.2 is devoted to examining _Multi-Generations_. Lastly, we perform an ablation study in Section 5.3 to further investigate the effectiveness of our method under various experimental configurations.
### Effectiveness of Single-Generation
To understand the effectiveness of _Single-Generation_, Table 27 compares the ranking effectiveness of the generated query to the original Boolean query, our baseline methods, and title-driven methods (where the working title is used to rank candidate documents). We also evaluate the differences in effectiveness of screening prioritisation between queries generated by various generation models.
Footnote 7: Please note, in our ACM published paper, the result of the working title in the Seed Collection was wrong due to a bug in data pre-processing; it is updated here, and the update of the result does not have any influence on the observations and conclusions made in this paper.
#### 5.1.1. Boolean vs. Generated Query
First, we explore the overall effectiveness of neural-ranker-based screening prioritisation using the original Boolean queries versus generated natural language queries. The results suggest that transforming a Boolean query into a natural language query enhances the effectiveness of systematic review screening prioritisation. The only exception to this improvement is seen in CLEF-2019-DTA,8 when using MAP. When evaluating using the recall and WSS measures, generating a natural language query for screening prioritisation is better at pushing non-relevant documents towards the bottom of the ranking, as denoted by higher values of Recall@5%, 10%, and 20%, and of WSS95 and WSS100, but generally a lower Recall@1%.
Footnote 8: Note that the CLEF-2019-DTA dataset is notably smaller, containing only eight topics (30 topics for other datasets on average), which may make it vulnerable to outliers.
We also find that fusing the ranking results of the generated query with those from the Boolean query further improves effectiveness. Fusion leads to a significantly better ranking than when using the Boolean query alone, particularly for CLEF-2017, 2018, and 2019-Intervention. This finding points to the potential benefits of using a fusion of converted natural language and Boolean queries to improve the ranking of systematic review screening.
#### 5.1.2. Neural vs. Baselines
When comparing neural-based rankers with lexical methods, we observe that the results from neural-based rankers significantly outperform those from BM25 or QLM when the Boolean query alone is used to rank candidate documents. There are only two exceptions: one in CLEF-2018 when comparing the rank of the last relevant document (Last_rel), and the other in CLEF-2019-DTA when comparing Recall@1%. Even in these instances, although higher effectiveness was achieved, it did not reach statistical significance. These results highlight the substantial potential of neural rankers for boolean-driven screening prioritisation.
When we compare our approach to the CLF method, which also only uses Boolean queries for screening prioritisation, we find that our methods exhibit statistically significantly higher effectiveness (except for WSS95 at CLEF-2017 and WSS100 at CLEF-2018). However, the margin is narrower than for methods like BM25 or QLM. Similarly, our methods consistently achieved higher effectiveness than the best participation runs from CLEF. However, similar to the previous comparison, the margin is narrower than when compared to BM25 or QLM, and the difference is not statistically significant in terms of MAP, except for the topics in the CLEF-2018 dataset. While our approach only used the Boolean query for screening prioritisation, the top CLEF entries typically utilised additional input sources, such as the final title of the review, which again shows that the neural method could be beneficial to screening prioritisation.
#### 5.1.3. Comparison with using the Working Title
We are able to compare Boolean-driven screening prioritisation with working-title-driven screening prioritisation exclusively through the Seed Collection, as it is the only collection that provides systematic review working titles. To make this comparison, we trained an additional BioBERT ranker that uses the title from the CLEF dataset to prioritise relevant documents, with the same fine-tuning parameters as previous work (Zhu et al., 2019). Our findings suggest that although past studies have demonstrated substantial improvements in screening effectiveness when using the final review title as a query, this improvement does not extend to working titles. Remarkably, using working titles results in significantly lower effectiveness than Boolean-driven screening prioritisation methods and even underperforms when compared to basic term-matching methods.
#### 5.1.4. ChatGPT vs. Alpaca
Comparing the effectiveness of natural language queries generated by ChatGPT and Alpaca, our results
indicate that ChatGPT often outperforms Alpaca in terms of MAP, with the sole exception being the Seed Collection. A significant discrepancy is also noted in CLEF-2019-Intervention, where the natural language query generated by the Alpaca model considerably underperforms compared to both the original Boolean query and the query generated by ChatGPT. This effectiveness drop could be attributed to the difference in systematic review type between this dataset and the other datasets (intervention versus DTA). Therefore, the Alpaca model, which has learned to generate queries based on DTA topics, may not be as effective for intervention topics.
Similar to the ChatGPT-generated queries, those from Alpaca also achieve higher effectiveness when fused with the results from
\begin{table}
\begin{tabular}{l l l l l l l l l l l} \hline \hline
**Dataset** & **Query** & **Ranker** & **MAP** & **Last\_Rel** & \multicolumn{3}{c}{**Recall@\(x\)**} & **WSS95** & **WSS100** \\ \cline{6-11} & & & & & \(x=1\%\) & \(x=5\%\) & \(x=10\%\) & \(x=20\%\) & & \\ \hline \multirow{6}{*}{CLEF-2017} & Boolean & BM25 & 0.114\({}^{*}\) & 3242.733\({}^{*}\) & 0.083\({}^{*}\) & 0.215\({}^{*}\) & 0.324\({}^{*}\) & 0.491\({}^{*}\) & 0.252\({}^{*}\) & 0.188\({}^{*}\) \\ & Boolean & QLM & 0.122\({}^{*}\) & 3223.400\({}^{*}\) & 0.073\({}^{*}\) & 0.209\({}^{*}\) & 0.325\({}^{*}\) & 0.476\({}^{*}\) & 0.243\({}^{*}\) & 0.195\({}^{*}\) \\ & Boolean & CLF & 0.217 & 3028.033\({}^{*}\) & 0.149 & 0.341\({}^{*}\) & 0.473\({}^{*}\) & 0.671\({}^{*}\) & 0.442\({}^{*}\) & 0.327\({}^{*}\) \\ & Best Participation Run & 0.218 & 2382.467\({}^{*}\) & 0.131 & 0.332\({}^{*}\) & 0.499\({}^{*}\) & 0.688\({}^{*}\) & 0.488\({}^{*}\) & 0.395\({}^{*}\) \\ \cline{2-11} & Boolean & BioBERT & 0.278 & 1790.867 & 0.166 & 0.488 & 0.656 & 0.812 & 0.600 & 0.536 \\ & ChaitGPT & BioBERT & 0.293 & 1991.167 & 0.150 & 0.476 & 0.643 & 0.801 & 0.590 & 0.501 \\ & Boolean/ChaitGPT & BioBERT & **0.300\({}^{*}\)** & 1843.133 & 0.170 & **0.499** & **0.664** & 0.823 & 0.610 & 0.532 \\ & Alpaca & BioBERT & 0.284 & 1866.000 & 0.165 & 0.435 & 0.607 & 0.789 & 0.591 & 0.502 \\ & Boolean/Alpaca & BioBERT & 0.295 & **1759.233** & **0.171** & 0.483 & 0.663 & **0.827\({}^{*}\)** & **0.615** & **0.539** \\ \hline \multirow{6}{*}{CLEF-2018} & Boolean & BM25 & 0.154\({}^{*}\) & 6033.067\({}^{*}\) & 0.082\({}^{*}\) & 0.242\({}^{*}\) & 0.391\({}^{*}\) & 0.563\({}^{*}\) & 0.361\({}^{*}\) & 0.264\({}^{*}\) \\ & Boolean & QLM & 0.157\({}^{*}\) & 6097.133\({}^{*}\) & 0.080\({}^{*}\) & 0.252\({}^{*}\) & 0.380\({}^{*}\) & 0.557\({}^{*}\) & 0.384\({}^{*}\) & 0.251\({}^{*}\) \\ & Boolean & CLF & 0.272\({}^{*}\) & 5743.267\({}^{*}\) & 0.152 & 0.393\({}^{*}\) & 0.546\({}^{*}\) & 0.729\({}^{*}\) & 0.552\({}^{*}\) & 0.411\({}^{*}\) \\ & Best Participation Run & 0.258\({}^{*}\) & 5519.200 & 0.129\({}^{*}\) & 0.383\({}^{*}\) & 0.545\({}^{*}\) & 0.729\({}^{*}\) & 0.552\({}^{*}\) & 0.431 \\ \cline{2-11} & Boolean & BioBERT & 0.353 & 4830.933 & 0.202 & 0.517 & 0.681 & 0.845 & 0.656 & 0.503 \\ & ChaitGPT & BioBERT & 0.381 & **4508.9333** & 0.247\({}^{*}\) & **0.555\({}^{*}\)** & **0.713\({}^{*}\)** & **0.865\({}^{*}\)** & **0.692\({}^{*}\)** & 0.528 \\ & Boolean/ChaitGPT & BioBERT & **0.386\({}^{*}\)** & 4603.767\({}^{*}\) & **0.247\({}^{*}\)** & 0.551\({}^{*}\) & 0.705\({}^{*}\) & 0.859\({}^{*}\) & 0.685\({}^{*}\) & **0.537\({}^{*}\)** \\ & Alpaca & BioBERT & 0.333 & 4957.233 & 0.191 & 0.493 & 0.662 & 0.827 & 0.640 & 0.485 \\ & Boolean/Alpaca & BioBERT & 0.365 & 4628.233 & 0.220 & 0.525 & 0.688 & 0.849 & 0.668 & 0.523 \\ \hline \multirow{6}{*}{CLEF-2019-DTA} & Boolean & BM25 & 0.125\({}^{*}\) & 2766.875\({}^{*}\) & 0.068 & 0.163\({}^{*}\) & 0.303\({}^{*}\) & 0.463\({}^{*}\) & 0.299\({}^{*}\) & 0.163\({}^{*}\) \\ & Boolean & QLM & 0.121\({}^{*}\) & 2614.750\({}^{*}\) & 0.042 & 0.185\({}^{*}\) & 0.278\({}^{*}\) & 0.432\({}^{*}\) & 0.271\({}^{*}\) & 0.180\({}^{*}\) \\ & Best Participation Run & 0.248 & 2183.500 & 0.168 & 0.439 & 0.594 & 0.742 & 0.490\({}^{*}\) & 0.347\({}^{*}\) \\ \cline{2-11} & Boolean & BioBERT & **0.272** & 1146.000 & 0.174 & 0.419 & 0.565 & 0.751 & 0.651 & 0.528 \\ \cline{2-11} & ChaitGPT & BioBERT & 0.247 & 1173.250 & 0.183 & **0.454** & **0.594** & **0.757** & 0.660 & 0.528 \\ \cline{2-11} & Boolean/ChaitGPT & BioBERT & 0.268 & **1134.375** & **0.183** & 0.446 & 0.584 & 0.755 & **0.665** & **0.545** \\ 
\cline{2-11} & Alpaca & BioBERT & 0.241 & 1217.875 & 0.170 & 0.483 & 0.622 & 0.784 & 0.666 & 0.520 \\ \cline{2-11} & Boolean/Alpaca & BioBERT & 0.251 & 1146.125 & 0.173 & 0.458 & 0.592 & 0.783 & 0.659 & 0.537 \\ \hline \multirow{6}{*}{CLEF-2019-Intervention} & Boolean & BM25 & 0.154\({}^{*}\) & 1479.450\({}^{*}\) & 0.070\({}^{*}\) & 0.181\({}^{*}\) & 0.264\({}^{*}\) & 0.417\({}^{*}\) & 0.289\({}^
the Boolean query. This approach leads to higher effectiveness across all datasets compared to using only the Boolean query. Notably, in the Seed Collection, the fusion of derived and Boolean queries achieves significantly higher effectiveness compared to the Boolean query alone. This provides further evidence that the Alpaca model can be trained to generate high-quality natural language queries from Boolean queries, equalling the effectiveness of ChatGPT. Moreover, Alpaca provides a degree of transparency in its process, unlike ChatGPT, making this comparison even more compelling.
### Variability and Impact of Multi-Generations
Figure 2 shows the results of multiple generations from both the ChatGPT and Alpaca models. Here, both the average effectiveness and the per-topic effectiveness are measured by the MAP metrics. Our findings reveal high variation in effectiveness when using converted queries from both models, with intervention queries appearing as the most unstable topics. We note that extreme variability in effectiveness has also been observed in previous work for both user-edited and system-generated queries, across a range of search domains (Srivastava et al., 2017; Srivastava et al., 2016; Srivastava et al., 2017; Srivastava et al., 2018; Srivastava et al., 2019; Srivastava et al., 2019; Srivastava et al., 2019; Srivastava et al., 2019; Srivastava et al., 2019).
In evaluating the variability of effectiveness across different generation models, we observe differences across topic types. In the case of DTA topics, the Alpaca model shows a higher degree of variability compared to the ChatGPT model. This is evidenced by a higher variance observed in the CLEF-2017, CLEF-2018, and CLEF-2019-DTA datasets, where the variance of the Alpaca model is 14.3%, 7.7%, and 166% greater than that of the ChatGPT model, respectively. On the other hand, for intervention topics, or topics in the Seed Collection that are not classified, the Alpaca model demonstrates more stability. Specifically, its variance is 39.1% and 28.6% lower than that of ChatGPT, respectively.
Upon examining the average effectiveness, the fusion of multiple generations generally outperforms Single-Generation. Exceptions occur in the CLEF-2017 and CLEF-2019-DTA datasets with ChatGPT queries, and in the CLEF-2019-Intervention dataset with Alpaca queries. Moreover, the fusion of Multi-Generations from the Alpaca model consistently performs better than Boolean queries.
Without a doubt, Multi-Generation Oracle queries consistently achieve the highest effectiveness, with a considerable margin over the other ranking methods. This suggests that, with a proper technique for selecting the best query among the Multi-Generations, the effectiveness of screening prioritisation could be significantly improved.
### Ablation Studies
To gain deeper insights into why generating a natural language query could yield higher effectiveness, and to understand the role of fusion, and the training process in the effectiveness of screening prioritisation, we conduct a series of ablation studies that investigates these factors.
#### 5.3.1. Generate query vs generate title
In our first ablation experiment, our underlying intuition for generating a natural language query instead of a systematic review title from the Boolean query is that a title may only cover a narrow aspect of the Boolean query. Therefore, if vital information is missing from the title, it could result in lower effectiveness. To test this assumption, we compare generating a systematic review title and a natural language query from a Boolean query for the task of screening prioritisation.
To accomplish this, we first train a cross-encoder BioBERT model using the training portions of each dataset to rank documents using the final review title. For generating the review title, we employ ChatGPT in a zero-shot fashion, as fine-tuning the model is not yet available. For Alpaca, we fine-tune the model using the review titles
Figure 2. Topic-by-topic variability graph for the effectiveness of the Multi-Generations setup, using a single generated natural language query to rank documents. The coloured horizontal lines indicate the average effectiveness of different methods (Boolean, Single-Generation, Multi-Generation Fusion, and Multi-Generation Oracle).
in the training portion of our dataset using the same parameters as described in Section 3.1, and then test on the testing portion.
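As a concrete illustration of this ranking step, the sketch below shows one way a cross-encoder over a BioBERT backbone could score query-document pairs; the checkpoint name, the single-logit head, and the `rank_documents` helper are illustrative assumptions rather than the exact configuration used here, and the classification head is assumed to have been fine-tuned as described above.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "dmis-lab/biobert-base-cased-v1.1"  # assumed BioBERT backbone checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
# num_labels=1 gives a single relevance logit; this head must be fine-tuned on
# (query, document, label) pairs before the scores are meaningful
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=1)
model.eval()

def rank_documents(query, docs):
    """Return candidate-document indices sorted by cross-encoder score, best first."""
    scores = []
    with torch.no_grad():
        for doc in docs:
            enc = tok(query, doc, truncation=True, max_length=512, return_tensors="pt")
            scores.append(model(**enc).logits.squeeze().item())
    return sorted(range(len(docs)), key=lambda i: scores[i], reverse=True)
```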
Our results, presented in Table 3, clearly demonstrate that generating titles almost always yields lower effectiveness than generating natural language queries, regardless of whether the generation is done using ChatGPT or Alpaca (the only exceptions being WSS95 on CLEF-2017 and WSS100 on CLEF-2018 for the Alpaca model). Moreover, generating titles using ChatGPT appears to yield considerably lower effectiveness than generating them with the Alpaca model, with most results showing statistical significance.
#### 5.3.2. Impact of Fusion
We further explore how the fusion of results from both Boolean and generated queries impacts the effectiveness of screening prioritisation. In Figure 3, we compare the effectiveness of using Boolean queries and generated queries separately versus using their fused results for screening prioritisation.
The results indicate that fusion, on average, consistently outperforms using the generated query alone, but it is not always more effective than using the Boolean query alone; the effectiveness of Boolean queries should not be overlooked. When comparing results across the two generation models, we observe that the effectiveness gains obtained over Boolean queries tend to be more stable when ChatGPT is used. Using ChatGPT for query generation may thus contribute to more consistent improvements when its results are combined with those from Boolean queries.
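For concreteness, a minimal sketch of CombSUM-style fusion of a Boolean-query run and a generated-query run is given below; the min-max normalisation of scores before summation is an assumption of this sketch rather than a detail specified here.

```python
def combsum(run_a, run_b):
    """Fuse two retrieval runs (doc_id -> score) by summing min-max normalised scores."""
    def norm(run):
        lo, hi = min(run.values()), max(run.values())
        return {d: (s - lo) / (hi - lo) if hi > lo else 0.0 for d, s in run.items()}
    a, b = norm(run_a), norm(run_b)
    return {d: a.get(d, 0.0) + b.get(d, 0.0) for d in set(a) | set(b)}

# Example usage: fuse the Boolean-query run with the generated-query run, then sort
# fused = combsum(boolean_run, generated_run)
# ranking = sorted(fused, key=fused.get, reverse=True)
```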
#### 5.3.3. Train Ranker using Single-Generation or Multi-Generations
In our experiments, we train our natural language query-based ranker using Single-Generation results from the generation models for reproducibility purposes, as the Multi-Generations setup does not yield deterministic results each time, even when given the same prompt. However, we are interested in understanding how using Single-Generation versus Multi-Generations impacts the final outcome of the trained ranking model. To explore this, we formulate four distinct training and inference strategies for our downstream ranking model, which we refer to as Single-Train, Multi-Train, Single-Inference, and Multi-Inference.
For Single-Train, we train our model using the Single-Generation result from each Boolean query. For Multi-Train, we incorporate all generations from the generation model for each Boolean query in our training data. For Single-Inference, we test our model using a Single-Generation result from each Boolean query. Lastly, for Multi-Inference, we test our model using all generated queries from each Boolean query, and fuse them together.
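A minimal sketch of the Multi-Inference setting is shown below; `score_documents` and `fuse` (e.g. the CombSUM sketch above) are assumed helpers, and Single-Inference corresponds to passing a single generated query.

```python
from functools import reduce

def multi_inference(generated_queries, candidate_docs, score_documents, fuse):
    """Rank the candidates with every generated query for a topic and fuse the runs."""
    runs = [score_documents(q, candidate_docs) for q in generated_queries]
    return reduce(fuse, runs)   # pairwise fusion of all per-query runs
```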
With the same training parameters applied, we present the resulting effectiveness of screening prioritisation from ChatGPT using a bar chart in Figure 4. From the results, it is apparent that training the neural ranker using multiple creative queries does not typically yield higher effectiveness compared to training on a single deterministic query. The sole exception to this observation is the CLEF-2019-DTA dataset. However, when it comes to
\begin{table}
\begin{tabular}{l l l l l l} \hline \hline
**Dataset** & **Model** & **Query** & **AP** & **WSS95** & **WSS100** \\ \hline \multirow{3}{*}{CLEF-2017} & ChatGPT & GQ & **0.293** & **0.590** & **0.501** \\ & ChatGPT & GT & 0.140\({}^{*}\) & 0.486\({}^{*}\) & 0.396\({}^{*}\) \\ \cline{2-6} & Alpaca & GQ & **0.284** & 0.591 & **0.502** \\ & Alpaca & GT & 0.270 & **0.595** & 0.502 \\ \hline \multirow{3}{*}{CLEF-2018} & ChatGPT & GQ & **0.381** & **0.692** & **0.528** \\ & ChatGPT & GT & 0.277\({}^{*}\) & 0.626\({}^{*}\) & 0.491 \\ \cline{2-6} & Alpaca & GQ & **0.333** & **0.640** & 0.485 \\ & Alpaca & GT & 0.307 & 0.637 & **0.501** \\ \hline \multirow{3}{*}{CLEF-2019-DTA} & ChatGPT & GQ & **0.247** & **0.660** & **0.528** \\ & ChatGPT & GT & 0.175 & 0.565\({}^{*}\) & 0.504 \\ \cline{1-1} \cline{2-6} & Alpaca & GQ & **0.241** & **0.665** & **0.521** \\ & Alpaca & GT & 0.164 & 0.544\({}^{*}\) & 0.458 \\ \hline \multirow{3}{*}{CLEF-2019-Intervention} & ChatGPT & GQ & **0.433** & **0.573** & **0.503** \\ & ChatGPT & GT & 0.164\({}^{*}\) & 0.443\({}^{*}\) & 0.404 \\ \cline{1-1} \cline{2-6} & Alpaca & GQ & **0.317** & **0.491** & **0.448** \\ \cline{1-1} & Alpaca & GT & 0.232 & 0.458 & 0.408 \\ \hline \multirow{3}{*}{Seed Collection} & ChatGPT & GQ & **0.217** & **0.530** & **0.505** \\ \cline{1-1} & ChatGPT & GT & 0.127\({}^{*}\) & 0.494 & 0.490 \\ \cline{1-1} \cline{2-6} & Alpaca & GQ & **0.221** & **0.529** & **0.500** \\ \cline{1-1} \cline{2-6} & Alpaca & GT & 0.164 & 0.432\({}^{*}\) & 0.439 \\ \hline \hline \end{tabular}
\end{table}
Table 3. Results comparing the effectiveness of generating a title (GT) versus generating a natural language query (GQ) from the Boolean query of a systematic review for screening prioritisation. Statistical significant differences (\(p<0.05\)) between the effectiveness of a generated title versus a generated natural language query are indicated by \(*\).
Figure 4. Effectiveness when different training and inference settings are used for ranking candidate documents using the generated natural language query from ChatGPT.
Figure 3. Differences in MAP from Boolean, Generated Query (GQ) to their fused effectiveness.
Multi-Inference, models generally exhibit improved effectiveness. This implies that the diversity introduced in the inference stage can positively impact the effectiveness of the ranking model, allowing it to generalise better and handle different query formulations. On the other hand, training using a diverse set of generated queries for the same topic may not significantly improve the effectiveness of the ranking model. This is likely due to the model being trained to generalise over multiple query formulations, which could lead to an averaging effect on the learned query-document relevance patterns.
## 6. Summary of Findings
Finally, we answer our research questions based on our results:
**RQ1:**_Comparison of original Boolean queries and generated natural language queries_. We find that generating natural language queries generally results in higher effectiveness than using Boolean queries. This is valid both when the Boolean query is used in the context of the SOTA neural rankers for screening prioritisation, and when used within the previously proposed CLF technique (Zhu et al., 2019), the only published technique for screening prioritisation that explicitly uses the Boolean query for ranking.
We also find that large gains can be obtained when the rankings obtained when using the original Boolean query and the generated natural language query are fused together. This result was obtained when using a simple rank fusion method, CombSUM: further improvements might be possible if using more sophisticated fusion methods (Zhu et al., 2019; Li et al., 2020). This result suggests that these queries have complementary characteristics that can benefit screening prioritisation.
**RQ2:**_Impact of generation models_. When the effectiveness of two generation models, ChatGPT and Alpaca, are compared, we observe that ChatGPT consistently generates natural language queries that are more effective in screening prioritisation. The gap in effectiveness is more pronounced for the CLEF-2019-Intervention dataset. This may be attributed to the training of Alpaca models on primarily DTA topics, with intervention topics only contained in the test portion. This could have affected Alpaca's ability to perform effectively on the intervention topics. Conversely, ChatGPT, used in a zero-shot fashion, is not specifically tailored towards any topic; thus, its effectiveness is not significantly influenced by different types of systematic reviews.
**RQ3:**_Impact of ranking methods_. We find that neural methods consistently outperform traditional term-matching methods; they also outperform runs submitted by the research teams that participated in the CLEF TAR shared tasks associated with the datasets we use. This finding highlights the robustness and effectiveness of neural ranking methodologies for the screening prioritisation task.
**RQ4:**_Effect of Multi-Generations_. We identify considerable variance in the effectiveness of multiple natural language queries generated from both ChatGPT and Alpaca when applied to screening prioritisation. Notably, the Alpaca model tends to generate more unstable queries. However, when the results derived from these diversified queries are integrated, they often outperform the strategy of generating and using just one deterministic query for ranking documents. This occurs in 52.3% of cases when using ChatGPT and in 61.7% of cases when using Alpaca. If we also consider instances where the effectiveness is tied, these percentages increase to 55.5% for ChatGPT and 70.3% for Alpaca.
This finding suggests that the creativity of generative LLMs can enhance the natural language query generation task. Importantly, our findings also indicate that if a method could be implemented to effectively select the highest-performing generated query, the effectiveness of the downstream screening prioritisation task can be significantly improved (Oracle results). This potential for query selection may open new avenues for improving systematic review processes, pointing to the value of research into query performance predictors for systematic reviews; research on query performance predictors has been substantial in general information retrieval (Beng et al., 2019), but very scarce in the context of systematic reviews, where common predictors have been shown to be mostly ineffective (Zhu et al., 2019).
**RQ5:**_Derived natural language queries vs. working titles_. We find that using a systematic review's working title as an input query for screening prioritisation generally results in lower effectiveness when compared to the use of our methods that use the Boolean query for the same review to derive a natural language query to rank the candidate documents. This is different from when the final titles of the review are used: a practice that is common when experimenting with automation methods for systematic reviews, but only possible in retrospective evaluation, and not in practice. This discrepancy between working title effectiveness and final title effectiveness may be due to the evolving nature of the review title throughout the research process for the systematic review.
## 7. Conclusion
Our approach to screening prioritisation advances the state of the art by combining the power of large language models, neural rankers, and relying only on information available at the time of screening during the production of a systematic review. Previous work relied instead on the final review title as the query for ranking candidate documents for screening, which is only available at the end of producing a systematic review. This led to overestimated effectiveness scores, as our experiments show. Using instruction-based LLMs to generate queries from the Boolean queries available at the time of screening is competitive with the state of the art using the final title. We also show that improvements in effectiveness can be achieved when rankings based on Boolean queries and generated natural language queries are combined with rank fusion.
Our results also show that while Alpaca, an open-source generation model, can match ChatGPT's effectiveness in some cases, ChatGPT generally produces better natural language queries, leading to more effective screening prioritisation. We also found that multiple generations of natural language queries, while leading to high variance in effectiveness, have the potential to yield a significant increase in effectiveness when effective query performance predictors are available to identify the best query variants, which leaves room for future work.
In summary, this paper has demonstrated the value of instruction-based models in generating and improving queries for screening prioritisation with neural rankers. Our future work involves investigating the potential of combining the query generation capability
of instruction-based models with the highly effective ranking capability of neural rankers. In short, we believe that end-to-end training of instruction and ranking models can lead to even higher effectiveness in ranking documents.
###### Acknowledgements.
Shuai Wang is supported by a UQ Earmarked PhD Scholarship. This research is funded by the Australian Research Council Discovery Projects programme ARC DP 210104043, and by the Universities Australia - DAAD Joint Research Co-operation Scheme. This work was partially funded by the European Commission under GA 101070014 (OpenWebSearchEU).
|
2309.03649 | Exploring kinase DFG loop conformational stability with AlphaFold2-RAVE | Kinases compose one of the largest fractions of the human proteome, and their
misfunction is implicated in many diseases, in particular cancers. The
ubiquitousness and structural similarities of kinases makes specific and
effective drug design difficult. In particular, conformational variability due
to the evolutionarily conserved DFG motif adopting in and out conformations and
the relative stabilities thereof are key in structure-based drug design for ATP
competitive drugs. These relative conformational stabilities are extremely
sensitive to small changes in sequence, and provide an important problem for
sampling method development. Since the invention of AlphaFold2, the world of
structure-based drug design has noticably changed. In spite of it being limited
to crystal-like structure prediction, several methods have also leveraged its
underlying architecture to improve dynamics and enhanced sampling of
conformational ensembles, including AlphaFold2-RAVE. Here, we extend
AlphaFold2-RAVE and apply it to a set of kinases: the wild type DDR1 sequence
and three mutants with single point mutations that are known to behave
drastically differently. We show that AlphaFold2-RAVE is able to efficiently
recover the changes in relative stability using transferable learnt order
parameters and potentials, thereby supplementing AlphaFold2 as a tool for
exploration of Boltzmann-weighted protein conformations. | Bodhi P. Vani, Akashnathan Aranganathan, Pratyush Tiwary | 2023-09-07T11:32:50Z | http://arxiv.org/abs/2309.03649v1 | # Exploring kinase DFG loop conformational stability with AlphaFold2-RAVE
###### Abstract
Kinases compose one of the largest fractions of the human proteome, and their misfunction is implicated in many diseases, in particular cancers. The ubiquitousness and structural similarities of kinases makes specific and effective drug design difficult. In particular, conformational variability due to the evolutionarily conserved DFG motif adopting in and out conformations and the relative stabilities thereof are key in structure-based drug design for ATP competitive drugs. These relative conformational stabilities are extremely sensitive to small changes in sequence, and provide an important problem for sampling method development. Since the invention of AlphaFold2, the world of structure-based drug design has noticably changed. In spite of it being limited to crystal-like structure prediction, several methods have also leveraged its underlying
architecture to improve dynamics and enhanced sampling of conformational ensembles, including AlphaFold2-RAVE. Here, we extend AlphaFold2-RAVE and apply it to a set of kinases: the wild type DDR1 sequence and three mutants with single point mutations that are known to behave drastically differently. We show that AlphaFold2-RAVE is able to efficiently recover the changes in relative stability using transferable learnt order parameters and potentials, thereby supplementing AlphaFold2 as a tool for exploration of Boltzmann-weighted protein conformations.
## 1 Introduction
The first step of a typical structure-based drug design pipeline [2] is target protein structure prediction [3]. Traditionally this has been done using experimental techniques like x-ray crystallography and NMR spectroscopy, or computational techniques like homology modeling [4, 5, 6]. These are all either time consuming, of limited accuracy, or reliant on adequate prior knowledge. With AlphaFold2 [7] (AF2), we saw a paradigm shift in protein structure prediction. However, protein function is not solely dependent on a single native-like structure; rather, it is only properly understood or characterized by the protein's structural ensemble, including potentially several metastable conformations. Moreover, it is not sufficient to have a sense of conformational diversity alone, as relative thermodynamic stabilities of protein conformations can be key in understanding activity, effects of mutation, and differences in behaviors of closely related proteins. In the short time since the development of AF2, several publications have discovered ways to bridge the gap between conformational variability and AF2 predictions. Many of these leverage AF2's internal architecture, and range over a spectrum from needing substantial input from physics-based simulation engines to not needing any physics at all. A common approach is to exploit the multiple sequence alignment featurization of the input to introduce stochasticity and deviations from the native structure into the AF2 prediction [8]. This includes our work [9] combining AF2 and the machine
learning-based enhanced sampling method Reweighted Autoencoded Variational Bayes for Enhanced Sampling (RAVE) [10] into a combined protocol which we call AF2-RAVE to go from sequence to conformations ranked as per their thermodynamic or Boltzmann weights.
A well-explored way to study the conformational diversity of a biomolecule is the computational method of molecular dynamics (MD), i.e., parametrizing intra- and intermolecular forces with a force field and integrating Newton's equations of motion [11, 12, 13]. However, there are two key challenges in MD. First is the difficulty in sampling biologically relevant timescales. Since the integration of the equations of motion is limited by the fastest degree of motion, which is on a femtosecond time scale, observing changes of interest that can occur on timescales of nanoseconds to hours is often prohibitively expensive and intractable with current computational capacities. This has given rise to a large body of work on enhanced sampling algorithms [14, 15, 16, 17] for difficult-to-sample distributions. These algorithms essentially attempt to sample a modified distribution and then reweight observables to obtain the correct statistics. They include a wide range of methods addressing different concerns in sampling, each with its own set of challenges and its own limitations. Broadly, these methods can be classified into at least two groups: those that attempt to change the underlying Hamiltonian of the system [18, 19, 20], and those which aim to statistically bias trajectories by splitting and resampling them [21, 22, 23].
The second challenge in MD, closely related to the first, is the so-called curse of dimensionality. Biomolecular systems usually contain roughly \(10^{3}\) to \(10^{7}\) atoms, leading to an unmanageably large number of degrees of freedom. We do not aim to sample the entire configuration space of these systems. However, it is commonly true that most biomolecules have a small number of low-lying degrees of freedom, or a low-lying manifold, that completely describes transitions of interest, or can separate conformational differences of biological relevance [24, 25]. This underlying manifold is rather confusingly referred to by many names; some commonly used ones are: reaction coordinate, alluding to the fact that the manifold traverses transitions between metastable states; collective variable, as it is usually a
function of multiple coordinates of the system; order parameter, as it is used to parametrize different metastable states. In this work, we will refer to these degrees of freedom as "collective variables" (CVs) when using them as inputs or a basis set, and "order parameter" (OP) when referring to the finally obtained variable that we will bias along. Solving this second challenge thus has implications for the first as well.
While some of the aforementioned enhanced sampling methods attempt a generalized increase in sampling across every degree of freedom for every atom, for instance by increasing temperature, most of them aim to increase sampling along a specific manifold in the configuration space of the molecule. Even for the more generalized methods, actually quantifying the increase in sampling for high-dimensional systems is difficult. Conversely, for methods that sample along predetermined manifolds, it is often the case that the wrong choices result in incomplete or incorrect configuration space sampling. Prior to the advent of machine learning in chemistry, these low-lying degrees of freedom were almost always chosen by careful inspection and prior biophysical knowledge [26, 27, 28, 29, 30, 31]. Since the popularization of manifold learning methods, the identification of collective variables using ML has been a large field of interest [32, 33, 34]. However, most methods in this field are still best suited to a very small set of problems, and each has its own particular limitations [15, 32]; in particular, the need for _a priori_ information remains a frequent bottleneck.
In this work, we use the AF2-RAVE [9] protocol, but with some significant refinements that make it more efficient, statistically robust, and transferable to mutations, suggesting it might be transferable within families of closely related proteins. We demonstrate this protocol on a kinase and its mutants.
Protein kinases, and in particular the DFG-in to DFG-out transition, have been extensively studied using MD along with several enhanced sampling methods [35, 36]. This family of enzymes is one of the most important classes of therapeutic targets for structure-based drug design, as its members are ubiquitous in the human proteome [37]. Their main role is to mediate cell signalling in a large range of biomolecular processes at the cell level, in particular replication,
hence implicating them in a majority of cancers. While there already exist several highly effective medicinal molecules for cancer therapy that function by targeting and inhibiting kinases,[38] one could argue that at best we have scratched the surface in terms of kinase based therapeutics.[39]
In their active state, protein kinases catalyze the phosphorylation of substrate proteins through the transfer of the \(\gamma\)-phosphate group from adenosine triphosphate (ATP) or guanosine triphosphate (GTP). Often, the substrate protein is another in a cascade of kinases required for cell signalling.[40, 41] While there are over 500 kinases in the human kinome, making them a challenging class to study, the protein kinome universally has some highly conserved structural motifs with a structurally well characterized active state. One key motif for this characterization is the Asp-Phe-Gly (DFG) motif in the activation loop. This motif has two structural conformers, one with the Asp pointing into the loop, the DFG-in or active conformation, and one with it pointing out into the solvent, the DFG-out or inactive conformation. ATP binding, and hence phosphorylation catalysis, can only occur in the DFG-in conformation. Most drugs targeting kinases are "ATP competitive", i.e. they bind so as to prevent ATP binding. These ATP-competitive drugs themselves are classified mainly into two types, either binding to the active site in the DFG-in conformation, hence inhibiting the binding of ATP, or binding to the DFG-out conformation and hence stabilizing the inactive state. However, the presence of kinases in many essential cellular functions and their homologous nature make specificity and efficacy particularly hard to achieve. Often, we are interested in drugs that bind preferentially to specific kinases _without_ affecting other kinases. Given this, the characterization of diverse inactive states is of particular importance, both in terms of a structural understanding of these states and in terms of knowledge of their thermodynamic stabilities relative to the active state. For instance, given a target kinase, finding a uniquely stable inactive state with a novel binding site could lead to a more specific, less promiscuous (and hence less toxic) drug. However, the most robust way to do this computationally, obtaining an MD trajectory that traverses the space of active and inactive states multiple times, is impossible. Additionally, the transition from DFG-in to DFG-out is highly non-local and a concerted combination of several long-range and large-scale motions, so that characterizing it and studying it, even with the aid of enhanced sampling, is difficult.
Another important covariate in studying kinase conformational ensembles is point mutations, which are often the cause of incorrect signalling leading to pathology. One key reason for this is that the balance of probability between active and inactive conformations is delicate and often flipped by single-residue changes, as shown by the system we have chosen in this work, DDR1.
Our AF2-RAVE based approach to this problem involves combining structural ensembles obtained from AF2 with a machine learning algorithm to learn order parameters for biasing. To demonstrate this, we use the discoidin domain receptor tyrosine kinase 1 (DDR1), which in the wild type is more stable in the inactive DFG-out conformation. However, for several single-site mutants, specifically D671N, Y755A and Y759A, the relative DFG conformational stabilities are flipped. One of the reasons we choose this set of systems is that a recent paper [42] provides extremely detailed and valuable work on their DFG stabilities, with atypically long unbiased MD trajectories. We show that our results agree with theirs qualitatively, in that we predict the flipping of the DFG conformational preference on mutation. While we sacrifice some quantitative accuracy, our method is faster by roughly 2-3 orders of magnitude (exact simulation lengths are described in Results), and has the potential of being reused without relearning a lot of the information.
We will begin by discussing the methods used within the protocol and outlining AF2-RAVE. Next, we discuss some molecular biology background for kinases, in particular those which may be important to the DFG-in to DFG-out transition. Finally, we will describe our protocol and the results we obtained.
## 2 Methods
In this section we will first describe the methods that compose our protocol: (i) AlphaFold2 and MSA depth modification, (ii) metadynamics, and (iii) the state predictive information bottleneck (SPIB [43], the most recent variant of RAVE [10]). Finally, we will list important parameters for our MD simulations in (iv) Simulation Details. Additionally, Fig. 1 shows a high-level flowchart of the method.
### AlphaFold2 and MSA depth modification
The search for a computational model to predict crystal structures or other native-like structures for proteins has been a central part of computational molecular biology. When AF2 was introduced in 2020 it was unprecedented in its speed and accuracy.[7] The internal architecture of AlphaFold2 uses three primary components: the alignment of multiple evolutionarily related sequences, an attention-based neural network, or transformer, and a black hole initialized attention based graph neural network structure module. The model is trained on
Figure 1: A high level schematic of the method, showing: (i) a typical input sequence, (ii) AF2 generated seed structures, (iii) regular space clustering and unbiased runs, (iv) SPIB to suggest OPs, (v) metadynamics runs.
the entire RCSB database protein database of experimentally derived structures. While transformative to the field, the model does not quite solve the protein folding problem, as proteins _in vivo_ are not defined by a single structure but by the structural ensemble.
The multiple sequence alignment (MSA) form of the input has been found to be a convenient place to introduce stochasticity. The simplest way to do this, which we employ in AF2-RAVE, is to decrease the depth of the MSA input into both channels of the model, and then run the model repeatedly with randomly chosen subsets of the full MSA. In some sense this process is a way to withhold data from the model to produce ostensibly incorrect outputs. However, AF2 has also been found to perform consistently badly in situations where a protein deviates from the norm of its evolutionarily related sequences, which is likely due to the MSA form of input featurization leading to significant bias. From this perspective, the above-described protocol should lead to some random sampling of the correct structure in these special cases. Moreover, in cases where the protein family has multiple metastable structures, with the natively stable structure differing between members of the family, this protocol could conceivably provide hints or "breadcrumbs" for the entire conformational space of interest.
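A minimal sketch of this MSA subsampling idea is given below; the parsing details and the assumption that the query sequence is the first row of the alignment are simplifications for illustration.

```python
import random

def subsample_msa(aligned_seqs, depth, seed):
    """Keep the query (first) sequence plus a random subset of the aligned sequences."""
    query, rest = aligned_seqs[0], aligned_seqs[1:]
    rng = random.Random(seed)
    keep = rng.sample(rest, k=min(depth - 1, len(rest)))
    return [query] + keep

# e.g. for each random seed, build shallow MSAs (say depth 16 or 32) and feed each
# of them to AF2 to obtain a set of structurally diverse predictions
```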
A recent study [44] has proposed that AF2 has indeed learned an energy-surface for protein folding in its transformer weights. They propose and provide significant evidence for the idea that the MSA pair representation matrix simply initializes close to the correct minimum while the transformer architecture performs an optimization step on this energy surface. This suggests that our MSA reduction protocol simply initializes the transformer closer to a different local minimum, possibly a non-native for the learned energy surface. This might be a biologically native-like structure from the previously described special cases, or a biologically relevant metastable structure.
In spite of this, this modified version of AF2 still leads to some highly unphysical structures, as we illustrated previously [9]. Worse still, the structures obtained, including those that are metastable, do not follow any physically reasonable probability distribution. Nor is there an
obvious way to directly obtain a distribution or free energy surface from them that could account for both enthalpy and entropy. Some direct notion of physics and thermodynamics is still required for this information to be usable.
### Metadynamics
Hamiltonian-based enhanced sampling algorithms rely on the approach of editing the underlying energetics of dynamical systems. For instance, umbrella sampling adds harmonic restraints at successive points of configuration space in replicate simulations and recombines them to reproduce the original energy surface. One of the most powerful methods of this family for exploring complex energy landscapes with high barriers is metadynamics [20].
For a predetermined order parameter (or parameters), metadynamics aims to learn a bias potential that is the negative of the true free energy. To achieve this, a history-dependent potential is added to the Hamiltonian of the dynamics. This potential is updated periodically by adding Gaussian functions, centered at the current values of the order parameters, to the bias function.
This potential acts as a driving force, pushing the system away from visited regions, forcing it to explore new areas of the configuration space. The specific version we employ is well-tempered metadynamics, wherein the height of the Gaussians is modulated with a time dependence to mimic a high temperature simulation and to prevent the bias potential from growing indefinitely. In this case the bias potential can be shown to converge to the true underlying free energy modulo a multiplicative constant [45].
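For illustration, a minimal sketch of the well-tempered bias update on a one-dimensional order parameter is given below; the Gaussian height, width and \(\Delta T\) values are assumed placeholders, and production runs would use an engine such as PLUMED rather than this toy loop.

```python
import numpy as np

kB = 0.008314                 # Boltzmann constant in kJ/mol/K
T, delta_T = 300.0, 2700.0    # temperature and well-tempering Delta T (assumed values)
w0, sigma = 1.2, 0.05         # initial Gaussian height (kJ/mol) and width (assumed values)

centers, heights = [], []     # history of deposited Gaussians

def bias(s):
    """Current bias V(s): the sum of all deposited Gaussians."""
    return sum(h * np.exp(-(s - c) ** 2 / (2.0 * sigma ** 2))
               for c, h in zip(centers, heights))

def deposit(s_t):
    """Well-tempered update: the Gaussian height decays with the bias already present."""
    h = w0 * np.exp(-bias(s_t) / (kB * delta_T))
    centers.append(s_t)
    heights.append(h)

# at convergence V(s) -> -(delta_T / (T + delta_T)) * F(s), up to an additive constant
```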
While clever and asymptotically accurate estimates for reweighting time-dependent biases exist [20], they are subject to normalization errors and some strong assumptions. Additionally, in spite of the well-tempering of this method, its unbounded nature can often result in sampling regions of configuration space that we are not interested in, or that are rare enough to be irrelevant. In this study we use metadynamics to learn a potential that we then freeze and use as a static Hamiltonian bias. By controlling the region explored by the initial metadynamics through a stop condition, we circumvent learning an unbounded bias potential, and compute our final statistical estimates with fewer errors. Next, we initialize independent walkers using the same static bias from both DFG-in and DFG-out structures, and run them until we see transitions followed by a stable trajectory in the basin that they were not initialized in. In general, since metadynamics relies on dynamically pushing the simulation to undiscovered regions, learning an effective static bias from it has historically been considered difficult, and hence this is not a common practice [45, 46]. Every single independently launched trajectory visits the DFG-in basin when launched from DFG-out, and the DFG-out basin when launched from DFG-in. We find this to be a computationally more efficient protocol for getting overlap in explored configuration space. We find that the same static bias can also occasionally lead to back-and-forth transitions in the same trajectory, but the approach used in this work is computationally much more attractive.
It is important to note, however, that metadynamics suffers from the common enhanced sampling limitation that it requires an _a priori_ notion of the approximate reaction coordinate or underlying low-dimensional manifold to use as order parameters for sampling. A now common approach, which we also employ in this work, is to use a machine learning method to learn OPs for biasing; specifically, here we use a time-lagged state predictive autoencoder, described below. Previous work has shown its suitability for learning metadynamics OPs for a range of systems, such as conformational changes and membrane permeation [47], as well as for use in this AF2-seeded approach [9]. This represents a crucial step towards making enhanced sampling methods usable on novel systems with limited _a priori_ biophysical understanding.
### State predictive information bottleneck
To solve the problem of the unknown underlying manifold, capturing the relevant slow degrees of freedom, that is required for biasing, we use a method based originally on the reweighted autoencoded variational Bayes for enhanced sampling algorithm (RAVE) [10]. We use its most recent form, the
state predictive information bottleneck [43].
In general a variational auto encoder (VAE) is a neural network framework that attempts to learn a low dimensional probabilistic function (the encoder) of the input and a function that is then able to reproduce the input (the decoder). This underlying low dimensional function is the information bottleneck, i.e. it minimizes input information while maximizing its ability to obtain the output. Here, since we aim to study a dynamical trajectory, we modify the basic VAE to incorporate a past future information bottleneck. Given a frame of the trajectory as an input, instead of reproducing this input, we reproduce a trajectory frame at a later time stamp, i.e. with some time lag. Additionally, we note that since proteins have several degrees of freedom that move at different time scales, for a specific time lag we are not attempting to reproduce the trajectory at every coordinate. Since we do not know which degrees of freedom correspond with which time scale, we instead choose for the output a notion of states, represented by one hot encoded vectors. These states are iteratively learnt between epochs of training the neural network. This protocol has been shown to be effective in several complex systems [47, 48, 49].
Since we aim to make our protocol generalizable, we start with a large set of input CVs. However, biasing using metadynamics on a function of a large set of CVs is hard to control and not always statistically stable. To alleviate this to some extent, we adopt a basis CV refinement step as a stand-in for regularization in our OP learning protocol, wherein we run SPIB three times, each time discarding features with weights lower than 0.25 of the maximum weight.
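A sketch of this refinement loop is given below; `train_spib` is an assumed helper that returns one weight per retained CV from the learnt information bottleneck, and the 0.25 cutoff and three rounds follow the description above.

```python
import numpy as np

def refine_cvs(trajectories, cv_names, train_spib, rounds=3, cutoff=0.25):
    """Iteratively drop input CVs whose SPIB weight falls below `cutoff` of the maximum."""
    keep = list(cv_names)
    for _ in range(rounds):
        weights = np.abs(train_spib(trajectories, keep))   # one weight per kept CV
        mask = weights >= cutoff * weights.max()
        keep = [cv for cv, m in zip(keep, mask) if m]
    return keep
```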
### Simulation details
The protein is represented by the AMBER03 force field [50]. The simulations are performed at 300 K with the BAOAB integrator [51] in OpenMM [52]; LINCS is used to constrain the lengths of bonds to hydrogen atoms [53]; Particle Mesh Ewald is used to calculate electrostatics [54]; the step size was 2 fs. The systems are solvated with TIP3P water models and equilibrated
under NVT and NPT for 200ps and 300ps respectively. To prevent melting during biasing, we also restrain the conserved \(\alpha\)C-helix of the N-lobe. Large scale motion for this motif is essential, both to see a transition and for drug unbinding pathways [55, 56]. However, biasing CVs that include distances from this helix often leads to irreversible disordering, and we find that applying torsional restraints on residues 65 to 81 is sufficient to allow for smooth upward motion. In Figure 2, we show the \(\alpha\)C-helix and its position with respect to the DFG loop.
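A minimal OpenMM setup consistent with these details might look as follows; the input file name, solvent padding, and the omitted \(\alpha\)C-helix torsional restraints are placeholders rather than the exact inputs used.

```python
from openmm import app, unit, LangevinMiddleIntegrator

pdb = app.PDBFile("ddr1_kinase_domain.pdb")          # hypothetical starting structure
ff = app.ForceField("amber03.xml", "tip3p.xml")      # AMBER03 protein + TIP3P water
model = app.Modeller(pdb.topology, pdb.positions)
model.addSolvent(ff, padding=1.0 * unit.nanometer)

system = ff.createSystem(model.topology,
                         nonbondedMethod=app.PME,    # Particle Mesh Ewald electrostatics
                         constraints=app.HBonds)     # constrain bonds to hydrogen atoms

# BAOAB-type ("middle") Langevin integrator at 300 K with a 2 fs step
integrator = LangevinMiddleIntegrator(300 * unit.kelvin,
                                      1.0 / unit.picosecond,
                                      0.002 * unit.picoseconds)
sim = app.Simulation(model.topology, system, integrator)
sim.context.setPositions(model.positions)
sim.minimizeEnergy()
sim.step(100_000)   # 200 ps of NVT equilibration; NPT, restraints and biasing follow
```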
## 3 Results and Discussion
Our overall protocol comprises the methods described in the previous section. We start by using reduced-MSA AlphaFold2 to generate a diverse set of initial conformations, and cluster them using regular space clustering. We choose this method of clustering because the reduced-MSA AF2 outputs tend to be quite sparse in regions of interest, and we want to prioritize the tails of the distribution over highly sampled regions. The centers of these
Figure 2: a) The structural anatomy of the DDR1 kinase molecule showing the activation loop (red), Gly rich P-loop(blue), \(\alpha\)C helix (purple) and the characteristic N lobe \(\beta\) sheets (green). These motifs are relevant to DFG-in to DFG-out transition. b) The conserved salt bridge between K57 in \(\beta_{3}\) and E74 in \(\alpha\)C helix that is crucial in ligand dissociation and for basic kinase functioning [55].
clusters are then used to seed unbiased MD trajectories, which are our input trajectories for RAVE. This learns a 2-dimensional order parameter expressed as an information bottleneck. Then, we run well-tempered metadynamics biasing along this 2-dimensional order parameter. Finally, we use the metadynamics-learnt bias for the wild-type protein as a static bias to sample distributions for the wild-type and mutant sequences.
We will discuss the results in two sections. First, we discuss the modified AF2 outputs for all four sequences, and the process of learning a biasing potential, and next, we discuss results from biased dynamics for DDR1 WT and mutants.
When we refer to the kinase DFG-in, -out, and -inter structures, we use the Dunbrack method[57] for classification using distance cutoffs. Representative structures for these states are shown in Fig. 3. When referring to dimensions or OPs that are learnt through RAVE, we label them as information bottlenecks (IB) and assign variables \(\sigma\).
Figure 3: Representative structures from reduced MSA AF2 for the DFG-in (purple), DFG-inter (blue) and DFG-out (green) shown in two views, superimposed on the same structure.
### Learning a bias potential from AF2-RAVE on WT DDR1
Our first step is to generate structures using the reduced-MSA version of AF2. We generate 1280 structures for each kinase: 640 each for MSAs of depth 16 and 32, with 128 random seeds generating 5 structures each. AlphaFold2, even with the reduced-MSA approach, is simply unable to distinguish between the conformational diversity expected for these 4 sequences, giving effectively identical results for all. This can be seen from Fig. 4a), where we show the populations of the active, inactive, and the known intermediate or transition ("DFG-up" or "DFG-inter") states after filtering out obviously unphysical structures (e.g. with broken bonds). These structures are classified using the Dunbrack classification described below. We note that while we see a significant DFG-out population, even for the wild type, which is known to have a higher inactive-state stability, AF2 predicts the more commonly found DFG-in structure. These results are in disagreement with the population densities implied by previous long MD simulations of the same kinase, in spite of thoroughly searching MSA hyperparameters to force increased structural diversity [42]. The transition state is currently commonly called "inter" [57], as it has been consistently found to be a necessary structure for observing the DFG-in to DFG-out trajectory. Previously, it was referred to as "DFG-up", for the upward-pointing position of the Asp residue sidechain in the traditional structural view (with the N-lobe above the C-lobe), while the previously named "DFG-down" position is referred to as unassigned, as it is a high-energy, physically unlikely structure. The fact that we see the transition state from reduced-MSA AlphaFold2 is extremely significant and useful, as we have previously attempted to use RAVE simply with crystal structures of DFG-in and -out structures and failed, as simulations tend to push towards and then get stuck in the "DFG-down" configuration. We show further analysis of the AF2 reduced-MSA outputs in the SI.
Next, we use SPIB on these structures to propose possible OPs for biasing. The active-inactive conformational transition is a highly delocalized one, and requires an understanding of several intramolecular interactions. In particular, the \(\alpha\) C-helix forms a conserved salt bridge with the \(\beta\)3 strand that plays a crucial role in the molecule's transition [58, 55]. Further,
globally, the opening and closing of the two lobes (N and C) define the active-inactive transition [59]. From a dynamics perspective, the prime motifs involved in this transition include the Activation loop (A-Loop), P-Loop and \(\alpha\)C-helix, which mainly belong to the N-lobe (Fig. S1). To this end, we include a number of other CVs, described below, including the distances used for the DFG classification above. These CVs roughly correspond to those used in several previous studies focusing on different parts of the kinase molecular structure [57, 60, 61, 62]. In Table S1, we list all the residues involved in distances that we consider, indicating our acronym, the conserved residue (if applicable), and the resid for DDR1 (numbered from 1), and a description of the motif it belongs to. In Table S2, we list all the distances used as initial inputs for SPIB, and indicate them visually in Figure S1.
In this nomenclature, the Dunbrack distances, which we use to project our final potentials of mean force (PMFs), are sbridgeK CB - ChelE CB and sbridgeK CB - DFGAsp CB. Referring to these as \(d_{1}\) and \(d_{2}\) (both in Å), the structures are classified as: DFG-in if \(d_{1}<11\) and \(d_{2}>14\), DFG-out if \(d_{1}>11\) and \(d_{2}<14\), DFG-inter if \(d_{1}<11\) and \(d_{2}<11\), and unassigned if \(d_{1}>11\) and \(d_{2}>11\).
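A small helper implementing this classification, with the cutoffs (in Å) exactly as quoted above and cases evaluated in the order listed, could look as follows:

```python
def dfg_state(d1, d2):
    """Classify a kinase conformation from the two Dunbrack-style distances (in angstrom)."""
    if d1 < 11 and d2 > 14:
        return "DFG-in"
    if d1 > 11 and d2 < 14:
        return "DFG-out"
    if d1 < 11 and d2 < 11:
        return "DFG-inter"
    return "unassigned"   # all remaining, physically unlikely combinations
```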
In Fig. S2 we show the distributions of AF2 outputs for all four sequences of DDR1 on the entire set of input CVs and find that they all look quite similar, in spite of known differences in these mutants. This is interesting, considering we expect AF2 to usually perform more confidently in evolutionarily faithful sequences. However, in this case, we suspect that the fact that DDR1 is unusual in its DFG-out stability contributes to the easy access it has to conformational diversity. Nevertheless, our AF2 outputs still predict higher stability for the DFG-in structures universally for the DDR1s.
To learn a biasing potential, we run multiple metadynamics trajectories in parallel that share bias potentials. This is done only for the wild-type sequence and the same bias is then used for further calculations for all sequences. These initial structures are chosen as in the original AF2-RAVE paper [9], as follows: first, we run AF2 using ColabFold with a manually set MSA depth of 8 and 16. Next, we run regular space clustering with the minimum distance
parameter of 9 on standardized CV values, using the set of CVs described above.
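A minimal sketch of regular space clustering on the standardized CVs is given below; a structure is accepted as a new cluster centre only if it lies farther than the minimum distance from every existing centre.

```python
import numpy as np

def regular_space_centers(X, d_min=9.0):
    """Return indices of cluster centres for standardized CV vectors X (n_frames x n_cvs)."""
    centers = []
    for i, x in enumerate(X):
        if all(np.linalg.norm(x - X[j]) > d_min for j in centers):
            centers.append(i)
    return centers   # these AF2 structures then seed the unbiased MD runs
```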
We use the following stopping conditions for the initial metadynamics runs: (1) If the walker started in DFG-in(out) structures, it must reach the DFG-out(in) structure, and (2) If the walker was not initially in one of the two main metastable states, it must reach one of them. We only stop if the transition is stable for 1 nanosecond during the biased simulation. These simulations are between 5ns and 20ns long.
In Fig. S4, we show the bias learnt by this process. In order to demonstrate the transferability and efficiency of our protocol, we learn an IB and a bias potential using only the wild type. This also provides a basis to propose learning universal biases for systems with transferable CVs, potentially allowing for more efficient sampling of homologous families of proteins.
### Biased dynamics on DDR1 mutants
In this study, our main result is the accurately predicted relative stabilities of the DFG-in and DFG-out structures for the wild type and mutants of the kinase DDR1. In Fig. S5, we show the predicted PMFs for these structures. We compute the DFG-in versus DFG-out relative thermodynamic stability by integrating probabilities over the Dunbrack definitions of the kinase structural states and calculating the \(\Delta G\) between these states: (i) WT: 0.5 \(k_{B}T\), (ii) D671N: -0.2 \(k_{B}T\), (iii) Y755A: -0.42 \(k_{B}T\), (iv) Y759A: -0.13 \(k_{B}T\). Each of these was computed from 5 trajectories, with standard deviations of (i) WT: 0.23 \(k_{B}T\), (ii) D671N: 0.12 \(k_{B}T\), (iii) Y755A: 0.29 \(k_{B}T\), (iv) Y759A: 0.11 \(k_{B}T\). This flipping in the Boltzmann ranking of the active and inactive states is in concurrence with previous findings [42], which were obtained using long unbiased MD simulations. We can also integrate over our reweighted data to obtain the Dunbrack populations to compare with those from AF2, shown in Fig. 4.
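As an illustration of this estimate, the sketch below sums reweighted probabilities over the DFG-in and DFG-out states and reports \(\Delta G = G_{\mathrm{in}} - G_{\mathrm{out}}\) in units of \(k_{B}T\); the sign convention and the form of the per-frame weights are assumptions of the sketch.

```python
import numpy as np

def delta_g_in_minus_out(states, weights):
    """Relative stability (in kBT) from per-frame state labels and unbiasing weights."""
    s = np.asarray(states)
    w = np.asarray(weights, dtype=float)
    p_in = w[s == "DFG-in"].sum()
    p_out = w[s == "DFG-out"].sum()
    return -np.log(p_in / p_out)   # positive when DFG-out is the more stable state
```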
In Fig. 5 we show snapshots of one example of our sampled trajectories. We also note some salient details of the transition we study. We find that in the process of the transition, the breakage and formation of the salt bridge is clear, and the large scale upward motion of
the \(\alpha\)C-helix is absolutely essential for sampling the dynamics, as noted previously.
Figure 4: Populations of the active, inactive, and the known transition (“DFG-up” or “DFG-inter”) states for the wild type and the mutants D671N, Y755A, and Y759A, a) through reduced-MSA AF2 (MSA depths 8 and 16 combined) and b) using our AF2-RAVE protocol. We clearly see that AF2 by itself is unable to distinguish between the wild type and the mutants, and in particular gives us the wrong order of stability for the mutants. On the other hand, AF2-RAVE is able to find the reversal in stability on point mutation, and gives us more thermodynamically representative populations for these states, in excellent agreement with benchmark calculations performed in Hanson et al. [42] using unbiased MD simulations.
Figure 5: An example trajectory for the DFG-out to DFG-in transition obtained using AF2-RAVE. See Code Availability for details of an example video showing the transition.
## Conclusion
In this work, we have extended our previous protocol AF2-RAVE to obtain Boltzmann-ranked conformational diversity in protein kinases, specifically from the perspective of the pharmaceutically relevant and evolutionarily conserved DFG loop. We have previously shown that the protocol is able to capture a versatile range of transitions: rotameric metastability, large-scale helical motions, and partial disordering [9]. Here, we study the DFG loop conformational change, which is of utmost therapeutic importance. Moreover, we choose DDR1 since there exists an unusually thorough sampling with unbiased trajectories by Hanson et al. [42], which provides a more robust comparison than is usually afforded to enhanced sampling-based work on large biomolecules. With AF2-RAVE we obtain the same conformational ranking and similar thermodynamic stabilities as found by Hanson et al. [42] for the DFG-in versus DFG-out conformations of DDR1. However, the total MD simulation time in our work is around 2-3 orders of magnitude shorter.
It is important to note that our trajectories rely on collective variables that allow us to sample the DFG-inter transition state. Without AF2 structures to seed our initial unbiased trajectories, the collective variable learnt usually leads to a DFG-down structure resulting in an unsuccessful transition trajectory. Another significant factor of our work is the use of a static learnt bias, which is usually considered difficult and avoided. However, we find that the restriction to transitions of importance is necessary for this measurement in terms of both speed and replication.
This entire protocol can be repeated for other kinases in a few different ways. The first is to learn a bias using a wild-type and then sample mutants that are known to be pathological or disease causing due to changes in activity. The second is to learn a bias using a single kinase and sample other closely related kinases using the same potential. Finally, we hope that eventually a generalized set of collective variables and biases can be learnt that could sample across the human kinome. This protocol can be used to sample novel states that can then be used as relatively high-throughput inputs for cryptic pocket prediction algorithms [63],
and we demonstrate some examples in the SI.
**Acknowledgments**
P.T. and B.V. were supported by the National Institute of General Medical Sciences of the National Institutes of Health under Award Number R35GM142719. The content is solely the responsibility of the authors and does not represent the official views of the National Institutes of Health. A.A. was supported by NCI-UMD Partnership for Integrative Cancer Research. We are grateful to NSF ACCESS Bridges2 (project CHE180053) and University of Maryland Zaratan High-Performance Computing cluster for enabling the work performed here. We thank Drs. Eric Beyerle, Xinyu Gu, Zack Smith, and Dedi Wang for critical reading of the manuscript, and Drs. Mrinal Shekhar, Shashank Pant, Zack Smith and Dedi Wang for helpful discussions regarding kinases.
**Notes**
The authors declare the following competing financial interest(s): P.T. is a consultant to Schrodinger, Inc. and is on their Scientific Advisory Board.
Detailed description of methods used and further analysis of systems and sampling can be found in the supplement.
## Code Availability
The code to run AF2-RAVE in a seamless manner is available at [https://github.com/tiwarylab/alphafold2rave](https://github.com/tiwarylab/alphafold2rave). This can be run on Google Colab using GPUs. Using Colab Pro is advised. Codes, parameters, and bias files used to specifically run the simulations from this protocol can be found at [https://github.com/tiwarylab/kinase_Aloop](https://github.com/tiwarylab/kinase_Aloop). These will currently only work for the sequences used in this paper, but most files are easily adaptable
for use in other kinases with some specific changes, which are marked. An example video is also in the folder, and while full trajectories are too large to upload, they can be made available on request.
## Data Availability
All data associated with this work is available through [https://github.com/tiwarylab/kinase_Aloop](https://github.com/tiwarylab/kinase_Aloop).
|
2309.12022 | Demystifying Visual Features of Movie Posters for Multi-Label Genre
Identification | In the film industry, movie posters have been an essential part of
advertising and marketing for many decades, and continue to play a vital role
even today in the form of digital posters through online, social media and OTT
platforms. Typically, movie posters can effectively promote and communicate the
essence of a film, such as its genre, visual style/ tone, vibe and storyline
cue/ theme, which are essential to attract potential viewers. Identifying the
genres of a movie often has significant practical applications in recommending
the film to target audiences. Previous studies on movie genre identification
are limited to subtitles, plot synopses, and movie scenes that are mostly
accessible after the movie release. Posters usually contain pre-release
implicit information to generate mass interest. In this paper, we work for
automated multi-label genre identification only from movie poster images,
without any aid of additional textual/meta-data information about movies, which
is one of the earliest attempts of its kind. Here, we present a deep
transformer network with a probabilistic module to identify the movie genres
exclusively from the poster. For experimental analysis, we procured 13882
posters of 13 genres from the Internet Movie Database (IMDb), where
our model performances were encouraging and even outperformed some major
contemporary architectures. | Utsav Kumar Nareti, Chandranath Adak, Soumi Chattopadhyay | 2023-09-21T12:39:36Z | http://arxiv.org/abs/2309.12022v1 | # Demystifying Visual Features of Movie Posters for Multi-Label Genre Identification
###### Abstract
In the film industry, movie posters have been an essential part of advertising and marketing for many decades, and continue to play a vital role even today in the form of digital posters through online, social media and OTT platforms. Typically, movie posters can effectively promote and communicate the essence of a film, such as its genre, visual style/ tone, vibe and storyline cue/ theme, which are essential to attract potential viewers. Identifying the genres of a movie often has significant practical applications in recommending the film to target audiences. Previous studies on movie genre identification are limited to subtitles, plot synopses, and movie scenes that are mostly accessible after the movie release. Posters usually contain pre-release implicit information to generate mass interest. In this paper, we work for automated multi-label genre identification only from movie poster images, without any aid of additional textual/meta-data information about movies, which is one of the earliest attempts of its kind. Here, we present a deep transformer network with a probabilistic module to identify the movie genres exclusively from the poster. For experimental analysis, we procured 13882 posters of 13 genres from the Internet Movie Database (IMDb), where our model performances were encouraging and even outperformed some major contemporary architectures.
Movie genre identification, Multi-label classification, Transformer network.
## I Introduction
In the contemporary landscape of the film industry, where digital platforms have revolutionized the way we consume content, the role of movie posters has undergone a profound transformation. These visual canvases, once primarily relegated to theater exhibits, newspaper ads, and DVD covers, have emerged as powerful tools for attracting audiences in the era of online streaming [1]. A movie poster is no longer just a piece of promotional artwork; it has become a gateway to a cinematic experience, a glimpse into the world of a film, and a crucial factor in a viewer's decision-making process. Beyond their aesthetic appeal, movie posters are rich repositories of information. They convey not only the visual aesthetics of a film but also subtle cues about its genre, style, and thematic content. A well-crafted poster can encapsulate the essence of a movie, enticing viewers with tantalizing glimpses of its narrative and emotional landscape. As viewers increasingly turn to online platforms to discover and enjoy films, movie posters have assumed a pivotal role in the digital realm, guiding users in their quest for cinematic satisfaction [2].
In this digital age, where the sheer volume of available content can be overwhelming, accurate genre categorization has become paramount. Audiences rely on genre labels to navigate the expansive catalogs of online streaming platforms, seeking films that resonate with their tastes and preferences [3]. This reliance on genre categorization underscores the critical role that automated genre identification plays in enhancing the discoverability of films and improving the overall user experience. In the literature, automated movie genre identification has been performed mostly using video trailers [4, 5] and textual plot synopses [6, 7]. A very few works have been reported using movie posters [8]. However, movie posters play a crucial role in genre identification, and subsequently attracting potential target audiences, since they precede the release of the film itself, even before trailers and synopses become available. Moreover, compared to the video/textual modality, posters serve as prevalent thumbnails on OTT platforms, and are extensively shared across social media and various advertising/ promotional channels. This _motivates_ us to undertake the task of genre identification solely from movie posters.
In Fig. 1, we present some movie poster samples with corresponding genres. Analyzing only poster images brings several challenges, since a single poster may have limited information (Fig. 1.(a)), intricate backgrounds (Fig. 1.(b)), incorporated multiple small images in a collage (Fig. 1.(c)), or included solely the cast member photos (Fig. 1.(d)). Here, we analyze only the poster image without any aid of other modalities to identify its genre, which is a considerably more challenging task compared to other computer vision tasks, e.g., object detection, scene recognition and classification. Unlike objects, genres are intangible implicit features that can hardly be precisely determined in a poster [1]. Here, genre identification depends on the individual human perception, i.e.,
Fig. 1: Example of movie posters with genres.
a poster can belong to one genre for a person, and the same poster is of another genre to some other person. A movie poster may be of multiple genres, introducing the challenge of multi-label classification [9] and potentially exacerbating data imbalance concerns [10]. In a poster, a genre may be suppressed by other genres, e.g., in Fig. 1.(d), _action_ and _adventure_ genres are more explicit than _fantasy_. Moreover, the information present in posters itself brings additional challenges in identifying the movie genre, which we briefly mention in Appendix B. In this study, we obtained posters from IMDb, where each poster can be multi-labeled [9] with a maximum of three movie genres.
In this paper, we harness the power of a transformer-based architecture for its ability to grasp the global context and decipher intricate relationships spanning the entire poster image [11]. We first introduce a residual dense transformer, and then engage an ensemble mechanism and an asymmetric loss to tackle multi-label genre identification [12]. Furthermore, we propose a probabilistic module to accommodate a variable number of genres in the classification process. We now briefly mention our _contributions_ to this paper.
_(i)_ We work with genre identification only from poster images without any aid from textual/ video/ audio modalities. We introduce a residual dense transformer model, which features densely connected transformer encoders. Here, the model takes deep feature embeddings as input, instead of raw image patches.
_(ii)_ A poster can often be associated with multiple movie genres, presenting a multi-label classification challenge. To effectively address this issue, we employ an ensemble technique. Additionally, we adopt an asymmetric loss function to handle the intricacies of multi-label classification, particularly when positive labels are less prevalent than negative ones.
_(iii)_ Movie posters may exhibit a variable number of multi-genres. To adapt to this variability, we introduce a probabilistic module designed to eliminate extraneous genres and accurately discern the varying number of genres associated with each poster. To the best of our knowledge, this is the earliest attempt of its kind.
_(iv)_ To assess the effectiveness of our model, we conducted comprehensive experiments on the poster images procured from IMDb, compared with contemporary architectures, and performed an ablation study. Our findings offer valuable insights into the interplay between poster visual elements and movie genres, benefiting film recommendation systems and the film industry's digital evolution.
The rest of the paper is organized as follows. Section II provides a concise overview of related literature. The subsequent section III discusses the proposed methodology, followed by section IV, which delves into the analysis of experimental results. Finally, section V concludes this paper.
## II Related Work
Our primary focus in this paper is identifying multi-label movie genres exclusively from posters. Using only posters as input for this task is relatively limited in the literature [1]. However, some past works used trailer [13], clips [14], and facial frames [5] as visual inputs. Additionally, numerous studies have focused on textual inputs, such as movie plot summary [6] / synopsis [15] and screenplay [16]. Furthermore, past research endeavors engaged multimodal approaches, combining visual, textual, and audio data as input [17, 18]. Now, we provide a brief summary of significant prior studies, in addition to Table I.
_Visual Input:_ Visual data related to a movie, e.g., poster, teaser, trailer, or clip, can convey cues about the genre of the film. Many studies in the literature emphasized trailers [13, 14, 19, 20, 4], while only a few works have explored the use of movie posters [1, 8, 21, 22] for genre identification.
From a movie trailer, Zhou et al. [19] chose keyframes and extracted GIST, CENTRIST, and W-CENTRIST features, followed by a nearest neighbor (kNN)-based classifier to identify the movie genre. Simoes et al. [13] employed a CNN (Convolutional Neural Network) model to detect 4 different genres within a selection of trailers procured from LMTD (Labeled Movie Trailer Dataset). Wehrmann et al. [20] also used a CNN leveraging trailer frames across time to detect 9 genres from some trailers of LMTD. In [4], genres were identified from trailer clips using DIViTA (Dual Image and Video Transformer Architecture). In [14], spatio-temporal features were extracted from video clips, followed by using a hSVM (hierarchical Support Vector Machine). Initially, the videos were categorized into broader categories, e.g., movie, news, sports, commercial, and music videos, after which the specific genre was identified. Yadav et al. [5] predicted emotions of facial frames of trailers followed by genre identification using an Inception-LSTM-based architecture.
Pobar et al. [21] used Naive Bayes (NB) classifier on GIST and classeme features extracted from movie posters to predict the genres. In [8], YOLO was used to detect objects on posters and a CNN model was engaged for corresponding genre identification. Turkish movie genres were identified in [22] from posters using a basic CNN architecture. Wi et al. [1] employed a Gram layer to extract style features and merged with a CNN to classify genres from posters only.
_Textual Input:_ Textual data in movies, including plots/ synopses, subtitles, and user-generated reviews on social media, offer valuable insights into genre identification.
Ertugrul et al. [23] employed BLSTM (Bidirectional Long Short-Term Memory) to classify movie genres based on sentences extracted from plot summaries. In [6], a GRU (Gated Recurrent Unit) was used for a similar input/output setting.
Kar et al. [15] engaged plot synopses and proposed CNN-FE (CNN with Flow of Emotions) encoded with emotion flow, CNN, and BLSTM to predict movie tags, i.e., genres and associated plot-related attributes (e.g., violence, suspenseful, melodrama). Battu et al. [24] identified genres from synopses of multi-language movies. They also attempted movie rating prediction. Multiple models based on CNN, LSTM, and GRU were used. In [7], CNN with a self-attention mechanism was used for genre classification from textual synopses.
Movie screenplays were engaged by Gorinski et al. [16] to predict various movie attributes, including genre, mood, plot, and style. They used a multi-label encoder (MLE) and LSTM-based decoder for this task.
_Multimodal Input:_ In the past, often, two or more modalities (e.g., text, image, video, audio) were combined and used for the genre identification.
Arevalo et al. [26] proposed GMU (Gated Multimodal Unit) to fuse features extracted from text (synopsis, metadata) and image (poster) using Word2Vec and CNN, respectively. Bribiesca et al. [17] engaged text (synopsis, metadata), image (poster), video (trailer), audio, and fed them to MulT-GMU, which is a transformer architecture with GMU, to identify movie genres. Bonilla et al. [18] proposed a multi-modal fusion using fastText, fastVideo, VGG-16, CRNN to fuse text (plot, metadata), video (trailer), image (poster), audio, respectively, for movie genre identification. In [25], various features were extracted from the text (synopsis, subtitle), video (trailer), image (poster), audio, and fed to various classifiers, e.g., LSTM, kNN, SVM, MLP (Multi-Layer Perceptron), DT (Decision Tree) followed by a fusion step to classify genres. Rasheed et al. [27] computed average shot length, visual disturbance, audio energy from trailer video/ audio and used a rule-based classifier to classify into four genres. In [28], DCT (Discrete Cosine Transform) and BoW (Bag-of-Words) were used to extract features from trailer videos and subtitles, respectively, followed by SVM for detecting movie genres.
_Positioning of our work:_ In the literature, there is a scarcity of research that focuses on genre identification exclusively from poster images. Furthermore, prior studies heavily leaned towards utilizing CNN-based features and did not effectively tackle the intricacies of multi-label genres. Our study is one of the earliest attempts to perform genre identification solely through poster images using a transformer-based architecture proficient in handling multi-label genres and eliminating extraneous genre labels.
## III Proposed Methodology
In this section, we first formulate the problem and subsequently present the solution architecture.
### _Problem Formulation_
We are given:
1. A set of \(\delta\) genres \(\mathcal{G}=\{\mathcal{G}_{1},\mathcal{G}_{2},\ldots,\mathcal{G}_{\delta}\}\)
2. A set of \(n\) movie poster images \(\mathcal{I}=\{\mathcal{I}_{1},\mathcal{I}_{2},\ldots,\mathcal{I}_{n}\}\)
3. Each movie poster image \(\mathcal{I}_{i}\in\mathcal{I}\) is associated with a set of \(\kappa_{i}\) number of genres \(\mathcal{G}^{<i>}=\{\mathcal{G}_{1}^{<i>},\mathcal{G}_{2}^{<i>},\ldots, \mathcal{G}_{\kappa_{i}}^{<i>}\}\subseteq\mathcal{G}\), \(1\leq\kappa_{i}\leq\delta\)
In this paper, we represent the ground-truth genre in terms of a multi-hot encoding vector (an example is shown in Fig. 2). The encoding of \(\mathcal{I}_{i}\) is represented by \(\Lambda^{<i>}\) of length \(\delta\), as defined below:
\[\Lambda_{j}^{<i>}=\begin{cases}1\,&\text{if }\mathcal{I}_{i}\text{ is associated with }\mathcal{G}_{j}\\ 0\,&\text{otherwise}\end{cases} \tag{1}\]
Given an unknown movie poster \(\mathcal{I}_{u}\), the objective here is to identify the genres associated with \(\mathcal{I}_{u}\). Since a sample can have more than one positive class, we formulate this problem as a _multi-label classification_ task [9], and predict multiple genre labels for each movie poster image.
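For concreteness, the multi-hot encoding of Eq. 1 can be sketched as below (a minimal illustration only; the genre order follows Table II, and the example labels are placeholders).

```python
import numpy as np

# Genre vocabulary with delta = 13 classes, ordered as in Table II.
GENRES = ["Action", "Adventure", "Animation", "Biography", "Comedy", "Crime", "Drama",
          "Fantasy", "Horror", "Mystery", "Romance", "Sci-Fi", "Thriller"]

def multi_hot(poster_genres, genre_vocab=GENRES):
    """Return the multi-hot vector Lambda^{<i>} of Eq. 1 for one poster."""
    encoding = np.zeros(len(genre_vocab), dtype=np.float32)
    for g in poster_genres:
        encoding[genre_vocab.index(g)] = 1.0
    return encoding

print(multi_hot(["Action", "Adventure", "Sci-Fi"]))   # 1s at positions 0, 1, 11
```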
### _Solution Architecture_
We employ a transformer network for our task due to its capacity for reducing inductive bias and its effectiveness in capturing global dependencies and contextual understanding compared to CNNs. However, it is important to note that in our approach, we do not directly utilize the ViT (Vision Transformer) paradigm, which involves feeding raw image patches directly into the transformer encoder [11]; instead, we feed deep feature embeddings and perform dense connection among the transformer encoders. We now begin with presenting our architecture, Residual Dense Transformer.
#### Iii-B1 **Residual Dense Transformer (RDT)**
RDT comprises three main modules: deep feature embedding, densely connected transformer encoders comprising multi-head self-attention and multi-layer perceptron, and a feed-forward neural network [29]. The pictorial representation of the workflow of RDT is shown in Fig. 3, and the modules are discussed below.
_(i) Deep feature embedding:_ The transformer network takes as input a sequence of token embeddings [29]. Here, the input image \(\mathcal{I}_{o}\) is first resized into \(\mathcal{I}\in\mathbb{R}^{w_{z}\times w_{z}\times c_{p}}\) that is converted into a sequence of patches \(x_{p}^{i}\in\mathbb{R}^{w_{p}\times w_{p}\times c_{p}}\), for \(i=1,2,\ldots,n_{p}\). From each patch \(x_{p}^{i}\), we extract deep features \(a_{p}^{i}\) using a convolutional architecture \(f_{a}\). For our task, using ResNet50V2 [30] truncated at its average_pool layer as \(f_{a}\) works better than some contemporary architectures [31]. The employed \(f_{a}\)'s share weights among patches. Empirically, we set \(w_{z}=1024\), \(w_{p}=256\), \(n_{p}=(w_{z}/w_{p})^{2}=16\), and \(c_{p}=3\) that denotes the RGB channel count of \(\mathcal{I}\).
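A minimal sketch of this patch-wise deep feature extraction is given below, assuming a Keras ResNet50V2 backbone truncated at its global average-pooling layer; the preprocessing details are assumptions rather than the exact training recipe.

```python
import tensorflow as tf

W_Z, W_P, N_P = 1024, 256, 16                      # resize size, patch size, patch count
# f_a: ResNet50V2 up to its global average-pooling layer, shared across all patches.
backbone = tf.keras.applications.ResNet50V2(include_top=False, pooling="avg",
                                            input_shape=(W_P, W_P, 3))

def deep_patch_features(image):
    """image: (H, W, 3) tensor -> (N_P, 2048) per-patch deep features a_p^i."""
    x = tf.image.resize(image, (W_Z, W_Z))          # I_o -> I
    patches = tf.image.extract_patches(x[None], sizes=[1, W_P, W_P, 1],
                                       strides=[1, W_P, W_P, 1],
                                       rates=[1, 1, 1, 1], padding="VALID")
    patches = tf.reshape(patches, (N_P, W_P, W_P, 3))   # 16 non-overlapping patches
    patches = tf.keras.applications.resnet_v2.preprocess_input(patches)
    return backbone(patches, training=False)
```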
Fig. 2: Multi-hot encoding of a movie poster genre.
\begin{table}
\end{table} TABLE I: Summary of related works for movie genre identification (columns: Method, Input, Architecture/ Technique, \#Genres, Dataset, Multi-label)
Further, each \(a_{p}^{i}\) is flattened and mapped into a \(D\)-dimensional vector, i.e., embedding \(z_{0}\) through transformer layers by the below linear projection.
\[z_{0}=\left[a_{class}~{};~{}a_{p}^{1}\mathbb{E}~{};~{}a_{p}^{2}\mathbb{E}~{};~{} \dots~{};~{}a_{p}^{n_{p}}\mathbb{E}\right]+\mathbb{E}_{pos} \tag{2}\]
where, \(\mathbb{E}\in\mathbb{R}^{w_{p}\times w_{p}\times c_{p}\times D}\) is the patch embedding projection, \(\mathbb{E}_{pos}\in\mathbb{R}^{(n_{p}+1)\times D}\) is the positional encoding that holds the patches' position information [32], and \(a_{class}=z_{0}^{0}\) is a learnable embedding [11].
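The construction of \(z_{0}\) in Eq. 2 can be sketched as a small Keras layer; the initializers chosen below are assumptions.

```python
import tensorflow as tf

D, N_P, FEAT_DIM = 256, 16, 2048     # embedding dim, patch count, deep feature dim

class PatchEmbedding(tf.keras.layers.Layer):
    """Builds z_0 of Eq. 2 from the per-patch deep features a_p^i."""
    def __init__(self):
        super().__init__()
        self.proj = tf.keras.layers.Dense(D)                                     # E
        self.cls = self.add_weight(name="cls", shape=(1, 1, D),
                                   initializer="zeros", trainable=True)          # a_class
        self.pos = self.add_weight(name="pos", shape=(1, N_P + 1, D),
                                   initializer="random_normal", trainable=True)  # E_pos

    def call(self, patch_features):                                  # (batch, N_P, FEAT_DIM)
        tokens = self.proj(patch_features)                           # project to D dims
        cls = tf.repeat(self.cls, repeats=tf.shape(tokens)[0], axis=0)
        return tf.concat([cls, tokens], axis=1) + self.pos           # z_0: (batch, N_P+1, D)
```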
_(ii) Dense transformer encoders:_ After mapping the image patches to the deep feature embedding space with positional encoding, we employ densely connected transformer encoders sequentially [32]. Here, the \(\ell^{th}\) transformer encoder (TE\({}_{\ell}\)) inputs concatenated feature encodings (\(\mathcal{X}\)) of all preceding encoders:
\[\mathcal{X}(\text{TE}_{\ell})=[\mathcal{X}(\text{TE}_{1});\mathcal{X}(\text{TE }_{2});\dots;\mathcal{X}(\text{TE}_{\ell-1})] \tag{3}\]
The building blocks of a TE are shown in Fig. 3, which includes alternating layers of \(MSA\) (Multi-head Self-Attention) and \(MLP\) (Multi-Layer Perceptron) blocks [11, 33].
Multi-head Self-Attention (\(MSA\)): The core of the TE is its \(MSA\) mechanism consisting of \(h\) parallel attention layers, i.e., attention heads, where each head utilizes SA (Scaled dot-product Attention) [32]. The SA takes as input \(D_{K}\)-dimensional queries and keys, and \(D_{V}\)-dimensional values [32], and is computed as follows.
\[SA(Q,K,V)=softmax\left(QK^{T}\diagup\sqrt{D_{K}}\right)V \tag{4}\]
where, a set of queries, keys, and values are packed to form \(Q\), \(K\), \(V\) matrices, respectively.
\(MSA\) empowers the capability to focus on information across diverse representations at various positions. Here, concurrent self-attention computations for each head collectively output as below.
\[MSA(Q,K,V)=\left[head_{1},head_{2},\dots,head_{h}\right]W_{O}; \tag{5}\] \[head_{i}=SA(QW_{i}^{Q},KW_{i}^{K},VW_{i}^{V})\]
where, \(W_{i}^{Q}\in\mathbb{R}^{D\times D_{K}}\), \(W_{i}^{K}\in\mathbb{R}^{D\times D_{K}}\), \(W_{i}^{V}\in\mathbb{R}^{D\times D_{V}}\), \(W_{O}\in\mathbb{R}^{hD_{V}\times D}\) are parameter matrices; \(D_{K}=D_{V}=\lfloor D/h\rfloor\).
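Eqs. 4-5 correspond to the following minimal NumPy sketch (one image, no batching), included only to make the tensor shapes explicit.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Eq. 4: SA(Q, K, V) = softmax(Q K^T / sqrt(D_K)) V, for a single head."""
    return softmax(Q @ K.T / np.sqrt(Q.shape[-1])) @ V

def multi_head_self_attention(z, Wq, Wk, Wv, Wo):
    """Eq. 5: h heads computed in parallel, concatenated, then projected by W_O."""
    heads = [scaled_dot_product_attention(z @ q, z @ k, z @ v)
             for q, k, v in zip(Wq, Wk, Wv)]          # Wq/Wk/Wv: (h, D, D_K)
    return np.concatenate(heads, axis=-1) @ Wo        # Wo: (h*D_V, D)

rng = np.random.default_rng(0)
D, h = 256, 6
dk = D // h
z = rng.normal(size=(17, D))                          # 16 patch tokens + 1 class token
Wq, Wk, Wv = (rng.normal(size=(h, D, dk)) for _ in range(3))
Wo = rng.normal(size=(h * dk, D))
print(multi_head_self_attention(z, Wq, Wk, Wv, Wo).shape)   # (17, 256)
```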
Multi-Layer Perceptron (\(MLP\)): The \(MLP\) block consists of two fully connected layers with \(2D\) and \(D\) nodes, respectively, and employs the GELU (Gaussian Error Linear Unit) non-linear activation function, similar to [11].
Before and after the \(MSA\) / \(MLP\) blocks, \(LN\) (Layer Normalization) [34] and residual connections [30] are engaged, respectively (Fig. 3). It can be represented as below.
\[z_{\ell}=MLP(LN(z_{\ell}^{\prime}))+z_{\ell}^{\prime}; \tag{6}\] \[z_{\ell}^{\prime}=MSA(LN(z_{\ell-1}))+z_{\ell-1};~{}\ell=1,2, \dots,L\]
where, \(L\) is the total count of engaged TEs. After multiple TEs, the <class> token is imbued with contextual information. The learnable embedding state at the outcome of the TE\({}_{L}\), i.e., \(z_{L}^{0}\), serves as the image representation \(y^{\prime}\)[11]; \(y^{\prime}=LN(z_{L}^{0})\).
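A single TE of Eq. 6 can be sketched with standard Keras layers as below; the dense concatenation across encoders (Eq. 3) is omitted from this sketch.

```python
import tensorflow as tf

D, H = 256, 6     # embedding dimension and number of attention heads used here

class TransformerEncoder(tf.keras.layers.Layer):
    """One TE block of Eq. 6: pre-LN, MSA and MLP, each with a residual connection."""
    def __init__(self):
        super().__init__()
        self.ln1 = tf.keras.layers.LayerNormalization()
        self.msa = tf.keras.layers.MultiHeadAttention(num_heads=H, key_dim=D // H)
        self.ln2 = tf.keras.layers.LayerNormalization()
        self.mlp = tf.keras.Sequential([
            tf.keras.layers.Dense(2 * D, activation="gelu"),   # 2D nodes with GELU
            tf.keras.layers.Dense(D),
        ])

    def call(self, z_prev):
        zn = self.ln1(z_prev)
        z_prime = self.msa(zn, zn) + z_prev            # z'_l = MSA(LN(z_{l-1})) + z_{l-1}
        return self.mlp(self.ln2(z_prime)) + z_prime   # z_l  = MLP(LN(z'_l))   + z'_l
```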
_(iii) Feed-forward neural network (FNN):_ The final stage of our model comprises an FNN consisting of one hidden layer with \(D/2\) nodes having ReLU activation function [33], followed by an output layer. The output layer contains \(\delta\) nodes with sigmoid as the output function [33]. To mitigate the challenge in multi-label classification, where positive labels are fewer than negative ones, we leverage the asymmetric loss function (ASL) to train our model [12]. We use the Adam optimizer here due to its adaptive learning rates and efficient memory usage [35].
Finally, for a poster image \(\mathcal{I}_{i}\), RDT generates a confidence score vector \(\rho^{<i>}=(\rho_{1}^{<i>},\rho_{2}^{<i>},\dots,\rho_{\delta}^{<i>})\). The top-3 genres based on the confidence scores are selected as the associated genres of \(\mathcal{I}_{i}\).
#### Iii-B2 **Ensembled Residual Dense Transformer (ERDT)**
Given that data imbalances are a common issue in multi-label classification problems [9], here, we propose an ensemble strategy to mitigate this challenge. In our proposed ensemble method, we consider three fundamental models: (a) R: the residual network with sigmoid as the output function and ASL as the loss function, (b) RT: the residual transformer network, a simplified version of the RDT that does not include dense connections, (c) RDT: the proposed model.
For a poster \(\mathcal{I}_{i}\), we first obtain three confidence score vectors \(\rho_{i,1}=(\rho_{1}^{<i,1>},\rho_{2}^{<i,1>},\dots,\rho_{\delta}^{<i,1>})\), \(\rho_{i,2}=(\rho_{1}^{<i,2>},\rho_{2}^{<i,2>},\dots,\rho_{\delta}^{<i,2>})\), and \(\rho_{i,3}=(\rho_{1}^{<i,3>},\rho_{2}^{<i,3>},\dots,\rho_{\delta}^{<i,3>})\) from R, RT, and RDT models, respectively, which are then combined using _weighted mean_ ensemble scheme [36], as shown in Eq. 7, to produce the confidence score vector \(\rho_{i}=(\rho_{1}^{<i>},\rho_{2}^{<i>},\dots,\rho_{\delta}^{<i>})\) for the ERDT model.
\[\rho_{j}^{<i>}=\sum\limits_{k=1}^{3}\alpha_{k}~{}\rho_{j}^{<i,k>};~{}~{}\forall j \in\{1,2,\dots,\delta\} \tag{7}\]
where, \(0\leq\alpha_{k}\leq 1\); \(\sum\limits_{k=1}^{3}\alpha_{k}=1\); for \(k=1,2,3\) represents the weights that were tuned by a grid-search technique [37].
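A minimal sketch of this weighted-mean ensembling is shown below; the \(\alpha_{k}\) values are placeholders for the grid-searched weights.

```python
import numpy as np

def ensemble_scores(rho_r, rho_rt, rho_rdt, alphas=(0.2, 0.3, 0.5)):
    """Eq. 7: weighted mean of the R, RT and RDT confidence score vectors.

    The alpha values here are illustrative; in practice they are tuned by grid
    search subject to alpha_k >= 0 and sum(alphas) = 1.
    """
    a = np.asarray(alphas, dtype=np.float64)
    assert (a >= 0).all() and np.isclose(a.sum(), 1.0)
    return a[0] * rho_r + a[1] * rho_rt + a[2] * rho_rdt
```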
#### Iii-B3 **Probabilistic Module**
As discussed earlier, a movie poster can encompass multiple genres. Given an input poster \(\mathcal{I}_{i}\), the multi-label classifier generates a confidence score vector \(\rho_{i}=(\rho_{1}^{<i>},\rho_{2}^{<i>},\dots,\rho_{\delta}^{<i>})\) of \(\mathcal{I}_{i}\) comprising the confidence score for each genre to be associated with \(\mathcal{I}_{i}\). The top three genres with the highest confidence score are predicted as the associated genres with \(\mathcal{I}_{i}\). However, the poster can be associated with fewer than three genres. To address this
Fig. 3: Workflow of Residual Dense Transformer (RDT)
issue, here, we propose a probabilistic module. The objective of this module is to determine whether the poster is associated with more than one genre and, if so, select the 2\({}^{nd}\) and 3\({}^{rd}\) genres accordingly.
The crux of this module is to compute the association between genres, which are captured by the following equations.
\[P(g_{k}|g_{j})=|\mathcal{Z}_{j}\cap\mathcal{Z}_{k}|\ /\ |\mathcal{Z}_{j}| \tag{8}\]
\[P(g_{l}|g_{j},g_{k})=|\mathcal{Z}_{j}\cap\mathcal{Z}_{k}\cap\mathcal{Z}_{l}|\ /\ | \mathcal{Z}_{j}\cap\mathcal{Z}_{k}| \tag{9}\]
where, \(\mathcal{Z}_{j}\) is the set of posters that are associated with genre \(\mathcal{G}_{j}\). Eqn. 8 expresses the likelihood of a poster being associated with \(\mathcal{G}_{k}\), considering that the poster is already associated with \(\mathcal{G}_{j}\). Eqn. 9 denotes the probability of a poster being associated with \(\mathcal{G}_{l}\), given that the poster is already associated with both \(\mathcal{G}_{j}\) and \(\mathcal{G}_{k}\). We calculate the conditional probabilities in advance for all possible combinations of genres.
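These conditional probabilities can be pre-computed from the training labels as sketched below; the handling of empty intersections is an assumption.

```python
import numpy as np

def genre_cooccurrence_probs(Y):
    """Pre-compute Eqs. 8-9 from the multi-hot label matrix Y of shape (n_posters, delta).

    Returns P2 with P2[j, k] = P(g_k | g_j), and P3 with P3[j, k, l] = P(g_l | g_j, g_k).
    """
    Y = Y.astype(np.float64)
    delta = Y.shape[1]
    count_j = Y.sum(axis=0)                              # |Z_j|
    count_jk = Y.T @ Y                                   # |Z_j ∩ Z_k|
    P2 = count_jk / np.maximum(count_j[:, None], 1.0)
    P3 = np.zeros((delta, delta, delta))
    for j in range(delta):
        for k in range(delta):
            inter_jk = Y[:, j] * Y[:, k]                 # indicator of Z_j ∩ Z_k
            P3[j, k] = (inter_jk[:, None] * Y).sum(axis=0) / max(inter_jk.sum(), 1.0)
    return P2, P3
```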
Once the multi-label classifier generates the confidence score vector \(\rho_{i}\) for the input poster \(\mathcal{I}_{i}\), the genre with the highest confidence score is chosen to be the first genre of \(\mathcal{I}_{i}\). We refer to this first genre as the dominant genre.
The 2\({}^{nd}\) genre of \(\mathcal{I}_{i}\) is determined based on the dominant genre. Let us assume that \(\mathcal{G}_{j}\) is the dominant genre for \(\mathcal{I}_{i}\) (i.e., \(\mathcal{G}_{1}^{<i>}=\mathcal{G}_{j}\)). The 2\({}^{nd}\) genre for \(\mathcal{I}_{i}\) is selected by Eqn. 10.
\[\begin{split}\mathcal{G}_{2}^{<i>}=\operatorname*{arg\,max}_{k\neq j,\ 1\leq k\leq\delta}\left(\rho_{k}^{<i>}\cdot\widetilde{P}(g_{k}|g_{j})\right);\\ \text{if}\ \max_{k\neq j,\ 1\leq k\leq\delta}\left(\rho_{k}^{<i>}\cdot\widetilde{P}(g_{k}|g_{j})\right)>\tau\end{split} \tag{10}\]
where, \(\widetilde{P}(g_{k}|g_{j})\) is the normalized probability value computed over \((P(g_{1}|g_{j}),P(g_{2}|g_{j}),\dots,P(g_{\delta}|g_{j}))\), and \(\tau\) is a tunable threshold determined empirically.
Here, for each genre other than \(G_{j}\), an association score is computed. The association score for genre \(G_{k}\) is determined by multiplying its confidence score \(\rho_{k}^{<i>}\) with its normalized conditional probability value \(\widetilde{P}(g_{k}|g_{j})\), provided that \(\mathcal{I}_{i}\) is linked to \(G_{j}\). If the maximum association score across all genres except \(G_{j}\) exceeds a predefined threshold \(\tau\), the corresponding genre is assigned in \(\mathcal{G}_{2}^{<i>}\).
If the 2\({}^{nd}\) genre of \(\mathcal{I}_{i}\) (assuming \(\mathcal{G}_{2}^{<i>}=\mathcal{G}_{k}\)) is chosen, we then proceed to determine whether to select the 3\({}^{rd}\) genre. The selection of the 3\({}^{rd}\) genre of \(\mathcal{I}_{i}\) is captured by Eqn. 11.
\[\begin{split}\mathcal{G}_{3}^{<i>}=\operatorname*{arg\,max}_{l\neq j,\ l\neq k,\ 1\leq l\leq\delta}\left(\rho_{l}^{<i>}\cdot\widetilde{P}(g_{l}|g_{j})\cdot\widetilde{P}(g_{l}|g_{j},g_{k})\right);\\ \text{if}\ \max_{l\neq j,\ l\neq k,\ 1\leq l\leq\delta}\left(\rho_{l}^{<i>}\cdot\widetilde{P}(g_{l}|g_{j})\cdot\widetilde{P}(g_{l}|g_{j},g_{k})\right)>\tau^{\prime}\end{split} \tag{11}\]
where, \(\widetilde{P}(g_{l}|g_{j},g_{k})\) represents normalized probability calculated from \((P(g_{1}|g_{j},g_{k}),P(g_{2}|g_{j},g_{k}),\dots,P(g_{\delta}|g_{j},g_{k}))\), and \(\tau^{\prime}\) is a tunable threshold, set empirically.
It is important to emphasize that selecting the 2\({}^{nd}\) and 3\({}^{rd}\) genres is greatly influenced by the correct prediction of the dominant genre. Therefore, an incorrect prediction for the dominant genre may lead to the inaccurate selection of the 2\({}^{nd}\) and 3\({}^{rd}\) genres.
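A minimal sketch of the resulting selection procedure (Eqs. 10-11) is given below, assuming sum-normalization for \(\widetilde{P}\) and the threshold values reported later in Section IV-B.

```python
import numpy as np

def select_genres(rho, P2, P3, tau=0.3, tau_prime=0.03):
    """Pick up to three genres for one poster from its confidence score vector rho."""
    j = int(np.argmax(rho))                          # dominant genre
    genres = [j]

    p2 = P2[j] / max(P2[j].sum(), 1e-12)             # normalized P(g_k | g_j)
    score2 = rho * p2
    score2[j] = -np.inf
    k = int(np.argmax(score2))
    if score2[k] > tau:                              # Eq. 10
        genres.append(k)
        p3 = P3[j, k] / max(P3[j, k].sum(), 1e-12)   # normalized P(g_l | g_j, g_k)
        score3 = rho * p2 * p3
        score3[[j, k]] = -np.inf
        l = int(np.argmax(score3))
        if score3[l] > tau_prime:                    # Eq. 11
            genres.append(l)
    return genres
```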
In our experimental analysis, we show the performance of this probabilistic module with respect to the correct prediction of the dominant genre, which is denoted by hit ratio (\(\mathcal{H}it\)) as defined below: \(\mathcal{H}it=|TD_{c}|\ \diagup\ |TD|\) where, \(TD_{c}\) is the set of test samples for which the dominant genre is correctly identified, and \(|TD|\) is the total number of employed test samples.
Finally, we integrate this probabilistic module with ERDT to obtain our final proposed model, PrERDT.
## IV Experiments and Discussion
This section describes the employed dataset and experimental results with discussions.
### _Employed Dataset_
The primary objective of this study is to analyze the poster images to identify their multi-labeled movie genres. Consequently, obtaining a dataset featuring posters with multiple genres proved challenging, as there were scarce off-the-shelf options available. Therefore, we procured authentic movie poster images with corresponding genres from _Internet Movie Database_ (IMDb: [https://developer.imdb.com/non-commercial-datasets](https://developer.imdb.com/non-commercial-datasets)). A movie may be of multiple genres; however, on IMDb, a maximum of 3 genres are labeled for an individual movie. Currently, our dataset considers 13 distinct genres (i.e., \(\delta=13\)), as mentioned in Table II. The ground-truth genre of a movie poster is available in terms of multi-hot encoding (refer to section III-A). We gathered posters of 4464 individual movies, each with 1 to 5 posters; the distribution of movie count with respect to the individual poster count is presented in Fig. 4.(a). Overall, our dataset comprises 13882 distinct posters, each having 1 to 3 genre labels. The movie and poster counts with respect to genre label count are shown in Fig. 4.(b). Here, we can see that about \(\frac{3}{4}^{th}\) of the posters/movies in our dataset have 3 genre labels. For individual genre label/ class id, the corresponding poster and movie counts are shown in Table II. As a matter of fact, in this table, some poster/movie counts overlap across genres due to having multi-label genres. Here, genre _drama_ is included in the highest number of posters, i.e., 6609; whereas _biography_ has 1076 posters, which is the lowest in our dataset. Table II reflects the data imbalance issue [10]. In Fig. C.1 (refer to Appendix B), we present some poster images from our employed dataset along with the genre class numbers. Fig. A.1 of Appendix A illustrates a co-occurrence matrix for movie genre labels associated with the posters.
\begin{table}
\begin{tabular}{c|c|c|c} \hline Class Id & Genre label & Poster count & Movie count \\ \hline \hline
1 & Action & 4985 & 1426 \\
2 & Adventure & 3702 & 1024 \\
3 & Animation & 1196 & 325 \\
4 & Biography & 1076 & 348 \\
5 & Comedy & 4380 & 1517 \\
6 & Crime & 3052 & 1003 \\
7 & Drama & 6609 & 2217 \\
8 & Fantasy & 1379 & 423 \\
9 & Horror & 2646 & 860 \\
10 & Mystery & 2285 & 750 \\
11 & Romance & 2406 & 913 \\
12 & Sci-Fi & 1542 & 458 \\
13 & Thriller & 3455 & 1092 \\ \hline \end{tabular}
\end{table} TABLE II: Poster and movie counts across genre labels
From IMDb, while selecting the movies/poster, we followed the following strategies:
* selected movies released in or after 2000.
* picked movies having more than 10000 user votes and more than 60 minutes of runtime.
* crawled and filtered movies based on the 13 genres employed here (refer to Table II).
The dataset was split up into training (\(DB_{tr}\)), validation (\(DB_{v}\)), and testing (\(DB_{t}\)) disjoint sets with an approx ratio of \(8:1:1\) considering the presence of all 13 genres in each set equivalently. \(DB_{tr}\), \(DB_{v}\), and \(DB_{t}\) contain 10942, 1470, and 1470 posters, respectively.
### _Experimental Details_
We executed the experimentation using the TensorFlow-2.5 framework having Python 3.9.13 on an Ubuntu 20.04.2 LTS-based machine with specifications including AMD EPYC 7552 Processor running at 2.20 GHz with 48 CPU cores and 256 GB RAM, NVIDIA A100-PCIE GPU with 40 GB of memory. In this paper, all the presented results were obtained from \(DB_{t}\).
The hyper-parameters of our model were tuned and set during the model training with a focus on optimizing performance over \(DB_{v}\). For the transformer networks, we fixed the following hyper-parameters empirically: transformer_layers (\(L\)) = 4, embedding_dimension (\(D\)) = 256, num_heads (\(h\)) = 6. In ASL, we set focusing parameters \(\gamma^{+}=0\), \(\gamma^{-}=1\), and probability margin \(m=0.2\). For Adam optimizer, we chose initial_learning_rate = \(10^{-3}\); exponential decay rates for 1\({}^{st}\) and 2\({}^{nd}\) moment estimates, i.e., \(\beta 1=0.9\), \(\beta 2=0.999\); zero-denominator removal parameter (\(\epsilon\)) = \(10^{-8}\). For the early stopping strategy, we set the patience parameter to 10 epochs, and we maintained a fixed mini-batch size of 32. We empirically chose \(\tau=0.3\) and \(\tau^{\prime}=0.03\) for our probabilistic module (refer to Eqn.s 10, 11).
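A sketch of the asymmetric loss under these settings is given below, following the usual ASL formulation [12]; the clipping and the reduction over the mini-batch are implementation assumptions.

```python
import tensorflow as tf

def asymmetric_loss(y_true, y_pred, gamma_pos=0.0, gamma_neg=1.0, margin=0.2, eps=1e-8):
    """ASL for multi-label classification: y_true is multi-hot, y_pred holds sigmoid outputs."""
    p = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    p_m = tf.clip_by_value(p - margin, eps, 1.0 - eps)          # margin-shifted negatives
    loss_pos = y_true * tf.pow(1.0 - p, gamma_pos) * tf.math.log(p)
    loss_neg = (1.0 - y_true) * tf.pow(p_m, gamma_neg) * tf.math.log(1.0 - p_m)
    return -tf.reduce_mean(tf.reduce_sum(loss_pos + loss_neg, axis=-1))
```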
We evaluated the model performance based on the _macro_-level analysis, considering standard metrics in multi-label classification, such as precision (\(\mathcal{P}\)) %, recall (\(\mathcal{R}\)) %, specificity (\(Sp\)) %, balanced accuracy (\(\mathcal{BA}\)) %, F-measure (\(\mathcal{FM}\)) %, and Hamming loss (\(\mathcal{HL}\)) [38].
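These macro-level metrics can be computed as sketched below; the precise averaging conventions (e.g., computing \(\mathcal{FM}\) per genre before averaging) are assumptions.

```python
import numpy as np

def macro_multilabel_metrics(y_true, y_pred, eps=1e-12):
    """y_true, y_pred: (n_samples, delta) binary matrices of ground-truth and predicted genres."""
    tp = ((y_pred == 1) & (y_true == 1)).sum(axis=0)
    fp = ((y_pred == 1) & (y_true == 0)).sum(axis=0)
    fn = ((y_pred == 0) & (y_true == 1)).sum(axis=0)
    tn = ((y_pred == 0) & (y_true == 0)).sum(axis=0)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    specificity = tn / (tn + fp + eps)
    balanced_acc = (recall + specificity) / 2
    f_measure = 2 * precision * recall / (precision + recall + eps)
    return {"P": 100 * precision.mean(), "R": 100 * recall.mean(),
            "Sp": 100 * specificity.mean(), "BA": 100 * balanced_acc.mean(),
            "FM": 100 * f_measure.mean(), "HL": float((y_true != y_pred).mean())}
```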
### _Comparison with State-of-the-Art (SOTA) Models_
Table III presents the performance of the three models: RDT, ERDT and PrERDT, proposed in this paper. Here, we compare the performance of our proposed models with some major contemporary deep architectures, called baseline, and some related SOTA models. It may be noted that the baseline models are designed for multi-class classification problems [31], and are typically not well-suited for handling multi-label classification challenges. Therefore, we improvised the baseline models by incorporating sigmoid as the output function and ASL as the loss function (refer to section III-B) to enable them to address multi-label classification. As evident from Table III, all three models, RDT, ERDT and PrERDT, outperformed the baseline models and SOTA in terms of all the performance evaluation metrics employed in this paper.
### _Ensemble Study_
Table IV presents the performance of various models participating in the ensemble. From this table, we have the following observations:
_(i) Overall performance of ERDT_: ERDT outperformed all other models listed in the first column of Table IV.
_(ii) Comparison between RDT and R+RT_: It is worth noting that the overall performance of the RDT was better than the R+RT model in terms of balanced accuracy and F-measure.
_(iii) Comparison between RDT and R+RDT_: As evident from Table IV, the performance of R+RDT is degraded compared to that of RDT in terms of balanced accuracy and F-measure. From Table VIII.(b), the reason for this can be explained. Table VIII.(b) shows the rank of each model based on the balanced accuracy for all 13 genres separately. According to these ranks, RDT performed better than R in all 13 genres. The negative influence of R caused the degradation of the performance of the R+RDT model in 8 out of 13 cases, which was reflected further in the overall performance of the R+RDT model across all genres.
_(iv) Significance of the participating models in the ensemble:_ As per the ranking shown in Table VIII.(b), RT performed
Fig. 4: (a) Distribution of movie count w.r.t. individual poster count, (b) Distribution of movie & poster count w.r.t. genre-label count.
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline Model & \(\mathcal{P}\) & \(\mathcal{R}\) & \(Sp\) & \(\mathcal{BA}\) & \(\mathcal{FM}\) & \(\mathcal{HL}\) \\ \hline R & 52.04 & 50.44 & 85.36 & 67.90 & 48.97 & 0.18524 \\ RT & 52.54 & 55.53 & 86.67 & 71.10 & 53.57 & 0.17833 \\ RDT & 55.01 & 57.25 & 87.08 & 72.16 & 55.69 & 0.17069 \\ R + RT & 54.76 & 56.28 & 87.04 & 71.66 & 54.69 & 0.16902 \\ R + RDT & 55.35 & 56.46 & 86.97 & 71.72 & 55.20 & 0.16954 \\ RT + RDT & 54.96 & 57.44 & 87.28 & 72.36 & 55.70 & 0.16755 \\ ERDT & **55.95** & **57.88** & **87.35** & **72.61** & **56.40** & **0.16546** \\ \hline \end{tabular}
\end{table} TABLE IV: Performance by various models for ensemble analysis
\begin{table}
\end{table} TABLE III: Comparison with baseline and SOTA models (columns: Method, \(\mathcal{P}\), \(\mathcal{R}\), \(Sp\), \(\mathcal{BA}\), \(\mathcal{FM}\), \(\mathcal{HL}\); rows cover the baseline architectures, the related SOTA methods, and the proposed RDT, ERDT and PrERDT models)
better than RDT for two genres (i.e., _mystery_, and _sci-fi_). Hence, we first choose to combine RT and RDT to improve the overall performance of RDT. Our experimental results show that in 6 of 13 genres, RT+RDT, indeed, performed better than RT and RDT individually. RT+RDT also outperformed RT and RDT independently in terms of overall performance across all genres. Furthermore, as per our observation from Table VIII.(b), for a few genres (e.g., _adventure_, _animation_, _thriller_), the overall performances of R, RT, and RDT are comparable. Hence, we select R, RT, and RDT as the fundamental models for our ensemble. It may be noted from Table VIII.(b) that in 6 out of 13 genres, ERDT performed better than RT+RDT. Moreover, in 5 out of the 8 cases for which R+RDT performed worse than RDT, the ERDT model performed better than RDT due to the influence of RT. In terms of the overall performance, ERDT also turned out to be the best, comparing all the fundamental models used for our ensemble and their other possible combinations. This justifies our choice of fundamental models for the ensemble. In Fig. C.2 of Appendix C, we show the qualitative performance of ERDT using heat map encoding.
### _Genre-wise Analysis_
According to Table IV, we have evaluated the models based on their balanced accuracy (and F-measure), resulting in the following ranking: ERDT \(\succ\) RT+RDT \(\succ\) RDT \(\succ\) R+RDT \(\succ\) R+RT \(\succ\) RT \(\succ\) R. Here, the notation Model-A \(\succ\) Model-B indicates that the balanced accuracy of Model-A surpasses that of Model-B. Here, we perform a detailed analysis of the models' performance across different genres, focusing on balanced accuracy. Table VIII.(a) provides a breakdown of the genre-wise performance analysis for all the models. From this table, we have yielded the following noteworthy findings:
_(i) Comparison among fundamental models:_ RT demonstrated superior performance when compared to R in 12 out of 13 genres, highlighting its improvement over the latter. Similarly, RDT, being an enhancement over RT, outperformed RT in 11 out of 13 genres.
_(ii) Comparison between the proposed fundamental model with other ensemble models:_ RDT outperformed R+RT in 7 out of 13 genres. We observed performance improvements for the R+RDT and RT+RDT models over the RDT model in 5 and 8 of the 13 genres, respectively. The ERDT model performed better than the RT+RDT model in 6 of 13 genres. Interestingly, despite the RT+RDT model outperforming ERDT in more genres when considering the count, the quantitative measure of improvement for ERDT was significantly higher than the degradation observed for ERDT in all the genres where RT+RDT surpassed the ERDT model.
_(iii) Performance of R, RDT, and ERDT on imbalanced genres:_ Fig. 5 represents the ratio between positive and negative samples for each genre. This figure clearly illustrates that certain genres, such as _biography_, _animation_, and _fantasy_, suffer from significant data imbalance issues. Here, our analysis focuses on evaluating the performance of R, RDT, and ERDT specifically for these imbalanced genres. Fig. 6 visually depicts the comparative performance of these models in terms of specificity and recall for the _biography_ and _fantasy_ genres, both of which exhibit high levels of data imbalance. As observed in Fig.s 6 (a) and (b), the specificity performance of R is notably high, while its recall performance is considerably low in these imbalanced genres. This indicates that R struggles to address the challenge posed by imbalanced data effectively since R is unable to identify the posters belonging to these genres. In contrast, the RDT model performed better by identifying more posters from the imbalanced genres compared to the R model, resulting in improved recall. However, this gain in recall came at the cost of reduced specificity. The trade-off between recall and specificity is observed for the ERDT model. In other words, ERDT exhibits higher specificity compared to RDT but lower than R. Additionally, ERDT demonstrates higher recall compared to R but lower than RDT. As a result, the balanced accuracy of the ERDT model surpasses that of both the R and RDT models.
### _Ablation Study_

_(i) Ablation study for RDT:_ As discussed earlier, RDT is the composition of a residual network and a dense transformer. Therefore, here, we first compare the performance of RDT with other component models, such as the Residual network (R), Transformer network (T), Dense Transformer (DT), and Residual Transformer network (RT). As evident from Table V, RDT outperformed R, T, RT, and DT. This demonstrates the impact of our proposed fundamental model RDT.
_(ii) Incremental performance improvement due to improvisation of the models:_ It is worth noting from Table V that as RT is an improvisation over R and T individually, the performance of RT is better than each of R and T. Similarly, as DT is an improvisation over T, the performance of DT is better than T. Finally, RDT is an improvisation over RT and DT models; therefore, the performance of RDT is better than them.
_(iii) Ablation study for ERDT:_ Table V shows that our ensemble model ERDT performed better than any other combination of the fundamental models (i.e., R, RT, and RDT) used in the proposed ensemble.
The analysis for PrERDT is presented next.
### _Analysis of Probabilistic Module_
The last row of Table V shows the performance of PrERDT. It may be noted when we used the probabilistic module on ERDT, i.e., for the PrERDT model, the recall decreased more than the improvement in the precision and specificity (refer to the last two rows of Table V). Consequently, the balanced accuracy and the F-measure were decreased for PrERDT. However, the motivation for introducing the probabilistic module is to enhance precision while making only minimal concessions in recall, ultimately leading to improved balanced accuracy and the F-measure. The reason behind obtaining the counterintuitive outcome can be elucidated by referring to Fig. 7. The performance of our probabilistic module highly relies on accurately identifying the first genre through ERDT. However, according to Fig. 7, the hit ratio of the ERDT model for identifying the first genre is 0.7701. Consequently, the PrERDT model sometimes discarded the correctly predicted second and third genres due to its dependency on the erroneously predicted first genre, thus causing a decline in recall.
To validate the correctness of our hypothesis, we conducted an additional experiment on different subsets of test data where the hit ratio is notably high. Table VI shows the performance of our probabilistic module for five different subsets of test data characterized by a high hit ratio. As observed from Table VI, the precision for the PrERDT model exhibited improvement when compared to the ERDT model without compromising the recall value. Consequently, this enhancement translated into improved balanced accuracy and F-measure metrics for the PrERDT model. These findings underscore the effectiveness of our probabilistic module.
### _Performance Analysis based on Genre Label Count_
As mentioned earlier in section IV-A and shown in Fig. 4.(b), each poster in our dataset is associated with either 1, 2, or 3 genre labels. In this experiment, we partitioned the test data into three subsets with posters associated with 1, 2, and 3 genres, and present the results in Table VII. From Table VII, it can be comprehended that posters having 3 genres yielded the best performance. Here also, in most of the cases, ERDT demonstrated the best performance.
## V Conclusion
In this paper, we worked on multi-label genre identification solely from movie poster images. We did not take any aid from any other visual/ textual/ audio modalities. We initially proposed a Residual Dense Transformer (RDT) with asymmetric loss to handle imbalanced data; then improvised the model using an ensembled variation of RDT, i.e., ERDT, to tackle multi-label genre identification. We also added a probabilistic module to our models (e.g., PrERDT) to eliminate unnecessary genres. For experiments, we procured 13882 poster images from IMDb. Our models exhibited encouraging performances and beat some major SOTA architectures. In the future, we will focus on enhancing the performance for some specific genres, e.g., _biography_, _fantasy_, _mystery_, where our current models have shown subpar results. Currently, PrERDT
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c|c} \hline Model & \(\mathcal{H}it\) & \(\mathcal{P}\) & \(\mathcal{R}\) & \(Sp\) & \(\mathcal{BA}\) & \(\mathcal{FM}\) & \(\mathcal{HL}\) \\ \hline ERDT & 0.8889 & 79.48 & 91.78 & 95.60 & 93.69 & 84.27 & 0.04102 \\ PrERDT & & 80.17 & 91.78 & 95.70 & 93.74 & 84.64 & 0.04017 \\ \hline ERDT & 0.9000 & 87.25 & 96.44 & 95.84 & 96.14 & 91.08 & 0.03760 \\ \hline ERDT & 0.9111 & 87.81 & 87.28 & 95.86 & 91.57 & 85.72 & 0.03760 \\ PrERDT & & 87.94 & 87.28 & 96.05 & 91.66 & 85.79 & 0.03675 \\ \hline ERDT & 0.9133 & 84.70 & 92.51 & 95.73 & 94.12 & 88.10 & 0.03880 \\ PrERDT & & 84.94 & 92.51 & 95.77 & 94.14 & 88.23 & 0.03846 \\ \hline ERDT & & 83.49 & 92.82 & 94.46 & 93.64 & 86.21 & 0.04444 \\ PrERDT & & 84.78 & 92.82 & 94.54 & 93.68 & 87.24 & 0.04358 \\ \hline \end{tabular}
\end{table} TABLE VI: Significance of the probabilistic module
\begin{table}
\begin{tabular}{l|c|c|c|c|c|c} \hline \multirow{2}{*}{Model} & \multicolumn{2}{c|}{\(TD^{<1>}\)} & \multicolumn{2}{c|}{\(TD^{<2>}\)} & \multicolumn{2}{c}{\(TD^{<3>}\)} \\ & w/o Pr & Pr & w/o Pr & Pr & w/o Pr & Pr \\ \hline R & 50.68 & 50.70 & 63.88 & 62.70 & 67.87 & 66.79 \\ \hline RT & 48.67 & 49.09 & 68.08 & 68.29 & 70.93 & 70.13 \\ \hline RDT & 50.73 & 51.31 & 69.57 & 66.61 & 72.07 & 70.86 \\ \hline R + RT & 49.45 & 49.08 & 67.90 & 66.95 & 71.60 & 70.46 \\ \hline R + RDT & 50.70 & **51.62** & 68.24 & 65.05 & 71.28 & 70.14 \\ \hline RT + RDT & 49.87 & 49.73 & 69.20 & 68.61 & 72.20 & 71.37 \\ \hline ERDT & **51.58** & 49.36 & **70.26** & **69.69** & **72.34** & **71.41** \\ \hline \end{tabular}
\end{table} TABLE VII: Performance study (\(\mathcal{BA}\) %) on genre label count
Fig. 7: Performance analysis on hit ratio \((\mathcal{Hit})\).
lags behind ERDT due to a lower hit ratio. We will also endeavor to improve the performance of ERDT in multi-label classification, so that the hit ratio improves, and eventually boosts the efficacy of PrERDT.
## Appendix A Genre label co-occurrence matrix
Fig. A.1 provides a visual representation of the relationships and occurrences among various movie poster genres as noted in section IV-A.
## Appendix B Dataset Challenges
As mentioned in section I, the information contained within the posters introduces additional complexities when it comes to identifying the movie genre, and we briefly outline these challenges as follows.
_(a) Background:_ The background of movie posters serves a significant purpose in establishing a sense of atmosphere or setting, piquing curiosity, and helping individuals to make informed decisions about whether the movie aligns with their interests and preferences. For example, _horror_ movie posters (Fig. C.1._v, xviii_) often utilize specific background elements with dark shadows and eerie lighting to entice the viewers with a chilling, suspenseful, and frightening mood. Based on the information available on the poster background, we can further categorize it as below:
_-- Less information:_ Sometimes, a poster background may contain little to no information, which brings challenges to automated genre identification (Fig.s C.1._i-iv_).
_-- Moderate & adequate information:_ Often, the background has sufficient visual characteristics to convey its genre (Fig.s C.1._v-vi_).
_-- Complex background:_ In some cases, the background of a poster becomes complex due to having enormous and/or composite visual effects/elements (Fig.s C.1._vii-viii_).
_(b) Foreground:_ The foreground in a poster plays a crucial role in capturing the viewer's attention and creating visual interest. By strategically placing dynamic foreground elements, such as the main characters or key plot elements, the poster can effectively convey the theme or atmosphere of the movie. In the case of a _romantic comedy_, featuring the two leads in a playful stance in the foreground may help in establishing the genre and the central focus of the film, which is the relationship between those characters (Fig. C.1._xii_).
_-- Cast image:_ Often, the lead casts' portrait, full/half body images cover the entire poster (Fig.s C.1._ix-xii_), which makes our task challenging due to relying only upon visual elements without taking any aid from face recognition and object detection modules.
_-- Scene image:_ The scene images present in the entire poster sometimes bring challenges due to complex visual elements, visual clutter and lack of cohesive composition (Fig.s C.1._xiii-xiv_).
_-- Cast in Scene:_ The hybridization of cast and scene images can also be observed, where the cast image may be fused with scene images (Fig.s C.1._xv-xvi_).
_(c) Inter variance:_ Often, different movie posters may have visual similarities while belonging to diverse genres. This can be done to challenge audience expectations, create intrigue, or highlight genre mashups. Such instances make the genre identification task quite difficult. For example, the poster in Fig. C.1._xviii_ belongs to the _horror_ genre, but the one in Fig. C.1._xvii_ does not, although they are visually quite similar; similarly, Fig.s C.1._xix_ and _xx_ show non-identical genres while sharing similar visual elements.
_(d) Intra variance:_ Generally, a movie has multiple posters of various designs, where different designs may emphasize multiple aspects of the same movie to effectively market it to a diverse range of viewers, which brings additional challenges in identifying the genre. For example, Fig.s C.1._xxi_ and _xxii_ are posters from the same movie; however, they show variation in visual elements. Similarly, Fig.s C.1._xxiii_ and _xxiv_ show intra-variation.
_(e) Collage:_ Sometimes, a movie poster combines various instances of the abovementioned background and foreground information, and creates a collage made from tiny images. Such collage posters make genre identification challenging due to amalgamating a pool of information (Fig.s C.1._xcv-xxiii_).
_(f) Text \(\leftrightarrow\) Image:_ In certain movie posters, some texts or titles are sometimes displayed as images rather than traditional typography (Fig.s C.1._xxii-xxcxii_). This artistic approach is often used to convey a specific theme or style associated with the movie. This brings additional challenges to our task, since we are not taking any aid from the text recognition module.
## Appendix C Qualitative result: Heat map encoding
In Fig. C.2, we showcase the qualitative outcomes of ERDT through heat map encoding as mentioned in section IV-D. We have selected 20 sample posters, and present the corresponding ground-truth and ERDT-predicted heat map encodings using a gray color code.
|
2303.18232 | DIME-FM: DIstilling Multimodal and Efficient Foundation Models | Large Vision-Language Foundation Models (VLFM), such as CLIP, ALIGN and
Florence, are trained on large-scale datasets of image-caption pairs and
achieve superior transferability and robustness on downstream tasks, but they
are difficult to use in many practical applications due to their large size,
high latency and fixed architectures. Unfortunately, recent work shows training
a small custom VLFM for resource-limited applications is currently very
difficult using public and smaller-scale data. In this paper, we introduce a
new distillation mechanism (DIME-FM) that allows us to transfer the knowledge
contained in large VLFMs to smaller, customized foundation models using a
relatively small amount of inexpensive, unpaired images and sentences. We
transfer the knowledge from the pre-trained CLIP-ViT-L/14 model to a ViT-B/32
model, with only 40M public images and 28.4M unpaired public sentences. The
resulting model "Distill-ViT-B/32" rivals the CLIP-ViT-B/32 model pre-trained
on its private WiT dataset (400M image-text pairs): Distill-ViT-B/32 achieves
similar results in terms of zero-shot and linear-probing performance on both
ImageNet and the ELEVATER (20 image classification tasks) benchmarks. It also
displays comparable robustness when evaluated on five datasets with natural
distribution shifts from ImageNet. | Ximeng Sun, Pengchuan Zhang, Peizhao Zhang, Hardik Shah, Kate Saenko, Xide Xia | 2023-03-31T17:47:23Z | http://arxiv.org/abs/2303.18232v2 | # DIME-FM : DIstilling Multimodal and Efficient Foundation Models
###### Abstract
Large Vision-Language **F**oundation **M**odels (VLFM), such as CLIP, ALIGN and Florence, are trained on large-scale datasets of image-caption pairs and achieve superior transferability and robustness on downstream tasks, but they are difficult to use in many practical applications due to their large size, high latency and fixed architectures. Unfortunately, recent work shows training a small custom VLFM for resource-limited applications is currently very difficult using public and smaller-scale data. In this paper, we introduce a new distillation mechanism (DIME-FM) that allows us to transfer the knowledge contained in large VLFMs to smaller, customized foundation models using a relatively small amount of inexpensive, unpaired images and sentences. We transfer the knowledge from the pre-trained CLIP-ViT-L/14 model to a ViT-B/32 model, with only 40M public images and 28.4M unpaired public sentences. The resulting model "Distill-ViT-B/32" rivals the CLIP-ViT-B/32 model pre-trained on its private WiT dataset (400M image-text pairs): Distill-ViT-B/32 achieves similar results in terms of zero-shot and linear-probing performance on both ImageNet and the ELEVATER (20 image classification tasks) benchmarks. It also displays comparable robustness when evaluated on five datasets with natural distribution shifts from ImageNet.
## 1 Introduction
In contrast to neural networks learnt to solve a single target vision task (task-specific models), CLIP [67] and other Vision-Language "**F**oundation **M**odels" (VLFMs) [50, 95] achieve superior accuracy on diverse novel downstream tasks and improved robustness to natural domain shifts during inference. At the same time, small and customizable VLFMs are in high demand for many applications that have limited computational resources (AV, AR/VR and other edge devices). Unfortunately, only a few labs in the world can afford the large-scale vision-language datasets (e.g. WiT [67] with 400M image-text pairs) and the immense computing resources required to train VLFMs. Efforts to re-create VLFMs on public data [93, 76, 16] either fall short on accuracy or require even more expensive training on huge datasets of images paired with captions (over 5B pairs [73]).
Instead of pretraining, model distillation used to offer a convenient way to obtain a smaller custom model. Recent work distills CLIP specifically for one or a few target tasks (task-specific distillation). For example, some works [89, 90, 98, 66] distill CLIP's image feature maps for better visual feature representations. BeamCLIP [42] distills CLIP logits for a single target image classification task, ImageNet-1K [18]. More recently, CLIP-TD [87] distills CLIP to solve three specific vision-language tasks. Even though these task-specific distillation works achieve good performance for the specialized downstream task, they are not scalable to solve new downstream tasks by zero-shot transferring. **There is no approach for distilling VLFMs to another foundation model** which preserves transferability to _novel_ tasks and robustness to domain shifts. Due to the unaffordable large-scale pretraining and the lack of a foundation-model distillation mechanism, **practitioners must rely on the few labs to release smaller VLFMs, and cannot easily customize their size or architecture.**
In this work, we successfully distill smaller custom VLFMs using only smaller-scale public data, but achiev
Figure 1: **Conceptual Figure of our Vision-Language Knowledge Distillation DIME-FM. We distill the knowledge from a large VLFM “CLIP-ViT-L/14” pretrained on a 400M private image-text paired dataset. We only use public unpaired image and text corpora as inputs. Our Distill-ViT-B/32 rivals CLIP-ViT-B/32 in both transferability and robustness. ZS: Zero-Shot, LP: Linear Probing.**
comparable transferability and robustness as if they were pre-trained on the large-scale data. Specifically, we transfer the knowledge from the released CLIP-ViT-L/14 [67] to our small VLFM "Distill-ViT-B/32". During the distillation, we adopt only **40M images from public datasets and 28.4M unpaired sentences**. Remarkably, with less than one-tenth of CLIP's pretraining dataset WiT, Distill-ViT-B/32 achieves comparable transferability and robustness to CLIP-ViT-B/32 [67] (see Fig. 1).
To accomplish this, we propose a novel distillation mechanism to **DI**still **M**ultimodal and **E**fficient **F**oundation **M**odels (**DIME-FM**) from CLIP. In standard distillation of image classification models with fixed categories (_i.e_. fixed-vocabulary models), the class scores (logits) are matched between the teacher and student models [36, 43, 5, 62, 9]. However, since VLFMs do not have fixed-vocabulary logits, we instead match the similarity of images to sentences (_i.e_. open-vocabulary logits) to retain the transferability (especially the zero-shot ability) and robustness of VLFMs. We perform a careful ablation study of how the "vocabulary", determined by the training sentences, affects the student model's performance and find that it is crucial to perform distillation with a visually-related vocabulary rather than a random vocabulary. To construct a visually-related distillation text corpus, we propose an efficient algorithm that selects visually-grounded sentences (_i.e_. sentences which describe the visual world) from an NLP corpus, rather than requiring expensive human-annotated image captions or noisy web-crawled image-related text. On top of the text selection algorithm, we design two distillation losses to augment the open-vocabulary logits in VL distillation and empirically show that our novel distillation losses benefit vision-language (VL) knowledge distillation.
To summarize, we make three contributions in this paper:
1. We propose a vision-language knowledge distillation mechanism **DIME-FM** to transfer knowledge of pre-trained huge VLFMs to **small foundation models** with smaller-scale public images and unpaired sentences.
2. We distill the pre-trained CLIP-ViT-L/14 to Distill-ViT-B/32, with only 40M unpaired public images and 28.4M sentences. Notably, our Distill-ViT-B/32 rivals the CLIP-ViT-B/32 that was pre-trained on private 400M image-text paired data in both transferability and robustness.
3. Our proposed **DIME-FM** consists of an efficient algorithm to construct a visually-grounded text corpus from an NLP corpus and two specific distillation losses to augment open-vocabulary logits in VL distillation.
## 2 Related Works
**Vision-Language Foundation Models.** Many previous works focus on learning a generic alignment between language and vision features extracted by pretrained encoders [28, 44, 51, 58, 79, 88, 97] to improve many downstream tasks, _e.g_. Visual Question Answering (VQA) [4, 92, 38], Image Captioning [2, 53, 71, 32], _etc_. Recently, inspired by the great success of generic NLP models transferring to downstream tasks [68, 69, 11], CLIP [67] and other large VLFMs [39, 50, 95, 52, 94] pretrain on hundreds of millions of image-text pairs to learn transferable visual representations from natural language supervision with contrastive learning. These works have shown astonishing transferring performance, such as zero-shot and linear probing evaluations, on various downstream tasks [49] as well as great robustness to distribution shifts from ImageNet [67]. Without the use of private large-scale data, it is challenging to learn small custom foundation models that possess comparable transferability and robustness. The ELEVATER evaluation [49] shows that training VLFMs [93, 76] using relatively small public datasets (\(\leq\) 40M image-text pairs), even with the help of external knowledge, _e.g_. WordNet [61] and Wiktionary [60], cannot close the performance gap in comparison to CLIP [67] or Florence [95]. Trained with CLIP-Filtered 400M image-text pairs [74], OpenCLIP [16] still performs worse than CLIP at each model size because of the possibly poorer quality of the paired data. In this paper, instead of pretraining the model using contrastive loss with paired data, we distill from CLIP-ViT-L/14 to different models with smaller-scale public images and unpaired sentences.
**Uni-modal Knowledge Distillation.** In general, knowledge distillation [36] transfers knowledge from one model (teacher) to another (student). It optimizes a student model to match certain outputs of the teacher model. With a single modality, there are two main types of distillation: (1) knowledge distillation of the fixed-vocabulary prediction logits [36, 43, 5, 62, 9]; (2) feature distillation on the final or intermediate activations of the network [72, 37, 3, 35, 96, 81]. In this paper, we do not require the same feature dimension in both teacher and student foundation models. To avoid complex tricks to circumvent the mismatch of feature dimensions using feature distillation methods, we adopt the simple logit distillation for the vision-language distillation. Instead of applying KL divergence loss to fixed-vocabulary logits as in uni-modal logit distillation, we apply KL divergence loss to feature similarity scores (_i.e_. open-vocabulary logits) in VLFMs. Moreover, we still use the uni-modal logit distillation as a regularizer in the distillation.
**Model Distillation from CLIP.** Some works [89, 90, 98, 66] perform feature distillation of the CLIP image encoder with Masked Image Modeling [7, 20, 30, 21, 6, 22, 86, 91] to learn a new image encoder, claiming superior finetuning performance on ImageNet-1K [18] and ADE20K [99]. They ignore the language encoder during the distillation and do not maintain the alignment of image and text in the feature space. BeamCLIP [42] distills CLIP using logits computed by
images from the public image datasets and class names of ImageNet-1K, and achieves better ImageNet-1K Top-1 linear probe accuracy than vision-only self-supervised learning (SSL) methods [14, 12]. CLIP-TD [87] distills knowledge from CLIP into existing architectures to solve targeted vision-language (VL) tasks. Even though these works achieve better performance in their specific tasks, their student models lose the capability of VLFMs, as they are not scalable to solve new tasks by zero-shot transferring. Instead of distilling CLIP and tuning it for specific downstream task(s), we wish to distill another foundation model from CLIP, and our resulting model yields transferability and robustness comparable to foundation models of similar size that are pretrained on hundreds of millions of image-text pairs.
## 3 Vision-Language Knowledge Distillation
In this paper, we propose our VL knowledge distillation **DIME-FM** which uses the public unpaired images and text to distill a small VLFM from a pretrained large VLFM (CLIP-ViT-L/14). First, we mathematically define VLFMs and our VL distillation setting. Then, we introduce our novel training losses and our text construction algorithm.
**Preliminaries.** A dual-encoder VLFM consists of an image and text encoder to extract image/text embeddings respectively, then project the image and text embeddings to the common feature space. To get more flexible design choices for the dimensions of the separate image/text feature spaces and the final shared feature space, we separate the image and text projection layers from the image and text feature encoders. Therefore, a standard dual-encoder VLFM can be defined as a quartet \([f_{\theta},g_{\phi},\mathbf{A},\mathbf{B}]\). \(f_{\theta}\) and \(g_{\phi}\) are image and text encoders which encode the image \(\mathbf{x}\) and text \(\mathbf{t}\) into their own feature spaces (as \(\bar{\mathbf{u}}\in\mathbb{R}^{d^{v}}\) and \(\bar{\mathbf{v}}\in\mathbb{R}^{d^{l}}\))1 respectively. \(\mathbf{A}\in\mathbb{R}^{d\times d^{v}}\) and \(\mathbf{B}\in\mathbb{R}^{d\times d^{l}}\) are two linear layers projecting image and text embeddings (\(\bar{\mathbf{u}}\) and \(\bar{\mathbf{v}}\)) to \(\mathbf{u}\) and \(\mathbf{v}\) in a shared \(d\)-dim feature space:
Footnote 1: The upper scripts \(v\) and \(l\) are short for vision and language, respectively.
\[\bar{\mathbf{u}}=f_{\mathbf{\theta}}(\mathbf{x}),\quad\bar{\mathbf{v}}=g_{\mathbf{\phi}}(\mathbf{t}), \quad\mathbf{u}=\mathbf{A}\bar{\mathbf{u}},\quad\mathbf{v}=\mathbf{B}\bar{\mathbf{v}} \tag{1}\]
The similarity score between the image and text embeddings
\[s(\mathbf{u},\mathbf{v})=\mathbf{u}^{T}\mathbf{v}/(\|\mathbf{u}\|\|\mathbf{v}\|) \tag{2}\]
reveals the semantic relationship between image and text encoded in VLFMs. It plays an important role in transferring to downstream tasks and being robust to domain shift.
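For concreteness, the following is a minimal PyTorch sketch of Eqs. 1–2. It is an illustration rather than the released implementation: the encoder outputs, batch sizes and feature dimensions (`d_v`, `d_l`, `d`) are placeholder values, and the encoders themselves are represented only by random tensors.

```python
import torch
import torch.nn.functional as F

# Placeholder dimensions: image/text encoder outputs (d_v, d_l) and shared space (d)
d_v, d_l, d = 1024, 768, 512
A = torch.nn.Linear(d_v, d, bias=False)   # image projection A in Eq. 1
B = torch.nn.Linear(d_l, d, bias=False)   # text projection B in Eq. 1

def similarity_matrix(u_bar, v_bar):
    """s(u, v) of Eq. 2 for a batch of image and text encoder outputs."""
    u = F.normalize(A(u_bar), dim=-1)     # u = A u_bar, L2-normalised
    v = F.normalize(B(v_bar), dim=-1)     # v = B v_bar, L2-normalised
    return u @ v.T                        # [n_images, n_texts] cosine similarities

u_bar = torch.randn(8, d_v)    # stand-in for f_theta(x) over a batch of images
v_bar = torch.randn(16, d_l)   # stand-in for g_phi(t) over a batch of sentences
S = similarity_matrix(u_bar, v_bar)
```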
**Problem Definition.** Given a public unpaired image corpus \(\mathcal{X}\) and text corpus \(\mathcal{T}\), we distill a small VLFM \([f_{\widehat{\mathbf{\theta}}},g_{\widehat{\mathbf{\phi}}},\widehat{\mathbf{A}}, \widehat{\mathbf{B}}]\) from a pretrained large VLFM \([f_{\theta},g_{\phi},\mathbf{A},\mathbf{B}]\), where
\[\bar{\widehat{\mathbf{u}}}=f_{\widehat{\mathbf{\theta}}}(\mathbf{x})\in\mathbb{R}^{\widehat{d}^{v}},\quad\bar{\widehat{\mathbf{v}}}=g_{\widehat{\mathbf{\phi}}}(\mathbf{t})\in\mathbb{R}^{\widehat{d}^{l}}, \tag{3}\] \[\widehat{\mathbf{u}}=\widehat{\mathbf{A}}\bar{\widehat{\mathbf{u}}}\in\mathbb{R}^{\widehat{d}},\quad\widehat{\mathbf{v}}=\widehat{\mathbf{B}}\bar{\widehat{\mathbf{v}}}\in\mathbb{R}^{\widehat{d}}, \tag{4}\]
where \(\widehat{(\cdot)}\) is the component in the student model corresponding to \((\cdot)\) in the teacher model. Notably, we can freely choose the image, text and projected embeddings' dimensions (\(\widehat{d}^{v}\), \(\widehat{d}^{l}\) and \(\widehat{d}\)) in the student VLFM, which can be different from those (\(d^{v}\), \(d^{l}\) and \(d\)) in the teacher VLFM.
In contrast to the expensive pretraining VLFMs [67, 50, 16] with large-scale image-text pairs, we do not require any paired data for optimization. During the distillation, we match the similarity scores of feature embeddings between the teacher and student VLFMs, which ensures our distilled small image encoder \(f_{\widehat{\mathbf{\theta}}}\) is still superior in transferability and robustness, as if it were trained on large-scale paired data. To this end, we propose our VL distillation mechanism **DIME-FM** including two novel distillation losses (Sec. 3.1) and an efficient text selection algorithm to construct the training text corpus (Sec. 3.2).
The capacity of the CLIP text encoder has little effect on CLIP performance [67] or on the inference latency for downstream visual tasks [49, 80]. To make the presentation simple while keeping the essential idea, we fix the text encoder, i.e., \(g_{\widehat{\mathbf{\phi}}}=g_{\mathbf{\phi}}\), and focus on VL knowledge distillation to obtain a small custom image encoder \(f_{\widehat{\mathbf{\theta}}}\) as a transferable and robust vision backbone. If a small text encoder \(g_{\widehat{\mathbf{\phi}}}\) is desired, we can apply the proposed method to distill \(g_{\mathbf{\phi}}\) while fixing \(f_{\widehat{\mathbf{\theta}}}\).
### Optimization for VL Knowledge Distillation
In standard uni-modal logit distillation, the objective is to match the fixed-vocabulary logits predicted by the student to the logits predicted by the teacher on the same input sample [36]. In VLFMs, the vocabulary is not fixed and the outputs are similarity score between images and sentences. Thus we change our objective to match the distribution of
Figure 2: **Illustration of our proposed distillation losses. In each iteration, we compute two losses (\(\mathcal{L}_{wl}\), \(\mathcal{L}_{p\cdot l}\)) and one regularizer (\(\mathcal{L}_{udist}\)) with a min-batch of images and texts to distill knowledge from the teacher to the student. We freeze all parameters in the teacher model and learn the student model from scratch.**
these scores produced by the student to the distribution produced by the teacher. Specifically, we minimize the KL divergence of score distributions computed over image dataset \(\mathcal{X}\) and text dataset \(\mathcal{T}\) (see Eq. 2) using three separate losses.
We first define the general form of applying KL divergence to distill the similarity scores and then define the three losses.. Suppose we have two batches of embeddings \(\{\mathbf{w}_{i}^{1}\}_{i=1}^{B_{1}}\) and \(\{\mathbf{w}_{j}^{2}\}_{j=1}^{B_{2}}\)2 in the teacher model's shared \(d\)-dim feature space. All similarity scores form a teacher score matrix \(\mathbf{S}\in\mathbb{R}^{B_{1}\times B_{2}}\), where \(\mathbf{S}_{i,j}=s(\mathbf{w}_{i}^{1},\mathbf{w}_{j}^{2})\). Similarly, we have the student's score matrix as \(\widehat{\mathbf{S}}\in\mathbb{R}^{B_{1}\times B_{2}}\), where \(\widehat{\mathbf{S}}_{i,j}=s(\widehat{\mathbf{w}}_{i}^{1},\widehat{\mathbf{w}}_{j}^{2})\). Each row and column of the score matrix can be seen as _open-vocabulary logits_. We measure the row-wise (indexing \(i\)) and column-wise (indexing \(j\)) discrepancy between \(\mathbf{S}\) and \(\widehat{\mathbf{S}}\) with KL divergence:
Footnote 2: they can be image or text’s embeddings. This will be further explained in the following.
\[\begin{split}\mathcal{L}_{KL}(\widehat{\mathbf{S}};\mathbf{S}, \mu)&=\sum\nolimits_{i}\text{KL}(\sigma(\mu\mathbf{S}_{i})|| \sigma(\mu\widehat{\mathbf{S}}_{i}))\\ &+\sum\nolimits_{j}\text{KL}(\sigma(\mu\mathbf{S}_{j}^{T})|| \sigma(\mu\widehat{\mathbf{S}}_{j}^{T})),\end{split} \tag{5}\]
where \(\sigma\) is the softmax function and \(\mu\) is a temperature.
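In code, Eq. 5 amounts to a row-wise plus column-wise KL divergence between temperature-scaled softmax distributions of the two score matrices. The sketch below is our own PyTorch illustration, not the authors' released code:

```python
import torch
import torch.nn.functional as F

def kl_over_scores(S_student, S_teacher, mu=1.0):
    """L_KL(S_hat; S, mu) of Eq. 5: KL divergence between teacher and student
    score distributions, applied to every row and every column of the matrices."""
    def directional_kl(s_hat, s):
        # KL(softmax(mu * S) || softmax(mu * S_hat)), summed over all rows
        return F.kl_div(F.log_softmax(mu * s_hat, dim=-1),
                        F.softmax(mu * s, dim=-1),
                        reduction="sum")
    return directional_kl(S_student, S_teacher) + \
           directional_kl(S_student.T, S_teacher.T)
```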
In particular, we propose two losses (\(\mathcal{L}_{vl}\) and \(\mathcal{L}_{p\text{-}vl}\), described in detail below) in the form of Eq. 5 with different \(\mathbf{S}\) and \(\widehat{\mathbf{S}}\)'s. The third loss is a regularizer \(\mathcal{L}_{udist}\) that maintains the Euclidean distance between every pair of image embeddings (i.e., the geometry of the image embeddings) during the distillation. Our final VL distillation objective is (see Fig. 2):
\[\min_{f_{\widehat{\mathbf{\theta}}},\widehat{\mathbf{A}},\widehat{\mathbf{B}}}(1-\lambda_{1})\mathcal{L}_{vl}+\lambda_{1}\mathcal{L}_{p\text{-}vl}+\lambda_{2}\mathcal{L}_{udist}\;, \tag{6}\]
where \(\lambda_{1}\in[0,1]\) and \(\lambda_{2}\in\mathbb{R}^{+}\) are two hyperparameters to control each loss weight. We study the efficacy of three losses with various \(\lambda_{1}\) and \(\lambda_{2}\)'s in Sec. 4.5.
**VL Score Distillation Loss \(\mathcal{L}_{vl}\).** We distill the VL score matrices in form of Eq. 5. Given an image batch \(\{\mathbf{x}_{i}\}_{i=1}^{B^{v}}\subset\mathcal{X}\) and a text batch \(\{\mathbf{t}_{j}\}_{j=1}^{B^{l}}\subset\mathcal{T}\), they are projected to \(\{\mathbf{u}_{i}\}_{i=1}^{B^{v}}\) and \(\{\mathbf{v}_{i}\}_{i=1}^{B^{l}}\) in the teacher's shared feature space respectively and projected to \(\{\widehat{\mathbf{u}}_{i}\}_{i=1}^{B^{v}}\) and \(\{\widehat{\mathbf{v}}_{i}\}_{i=1}^{B^{l}}\) in the student's feature space. Therefore, we define the teacher's and student's VL score matrices as
\[\mathbf{S}_{i,j}^{vl}=s(\mathbf{u}_{i},\mathbf{v}_{j}),\quad\widehat{\mathbf{S}}_{i,j }^{vl}=s(\widehat{\mathbf{u}}_{i},\widehat{\mathbf{v}}_{j}), \tag{7}\]
with which we define VL Score Distillation Loss as:
\[\mathcal{L}_{vl}=\mathcal{L}_{KL}(\widehat{\mathbf{S}}^{vl};\mathbf{S}^{vl},\mu^{vl}) \tag{8}\]
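Using the `kl_over_scores` helper sketched above, the loss of Eqs. 7–8 can be written in a few lines; the random tensors and the temperature value below are only placeholders for the projected embeddings and \(\mu^{vl}\):

```python
import torch
import torch.nn.functional as F

B_v, B_l, d, d_hat = 8, 16, 768, 512                         # illustrative sizes
u_t, v_t = torch.randn(B_v, d), torch.randn(B_l, d)          # teacher u_i, v_j
u_s, v_s = torch.randn(B_v, d_hat), torch.randn(B_l, d_hat)  # student u_i, v_j

S_vl_teacher = F.normalize(u_t, dim=-1) @ F.normalize(v_t, dim=-1).T   # Eq. 7
S_vl_student = F.normalize(u_s, dim=-1) @ F.normalize(v_s, dim=-1).T
loss_vl = kl_over_scores(S_vl_student, S_vl_teacher, mu=100.0)         # Eq. 8
```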
**Pseudo-VL Score Distillation Loss \(\mathcal{L}_{p\text{-}vl}\).** Our study on the efficacy of text corpus (see Sec 4.4) shows that enlarging the text corpus \(\mathcal{T}\) introduces more text embeddings and results in more open-vocabulary logits, which in turn benefits the VL knowledge distillation.
Motivated by this, besides adding more visually-grounded sentences to \(\mathcal{T}\), we introduce image embeddings as additional pseudo text embeddings. Since image and text embedding are trained to live in a shared space, image embeddings are a reasonable substitute for embeddings of visually-grounded text. For a given image \(\mathbf{x}_{j}\) and its image embedding \(\mathbf{u}_{j}\), we assume that there is a sentence \(\mathbf{t}_{j}\) whose text embedding \(\mathbf{v}_{j}\) perfectly matches \(\mathbf{u}_{j}\) in the shared space:
\[\mathbf{v}_{j}=\mathbf{u}_{j},\quad\mathbf{v}_{j}=\mathbf{B}\bar{\mathbf{v}}_{j}\Rightarrow \bar{\mathbf{v}}_{j}\approx\mathbf{B}^{\dagger}\mathbf{v}_{j}=\mathbf{B}^{\dagger}\bm {u}_{j}, \tag{9}\]
where \(\mathbf{B}^{\dagger}\) is the pseudo-inverse3 of matrix \(\mathbf{B}\). We treat the image embedding \(\mathbf{u}_{j}\) as the pseudo paired text embedding of the input image \(\mathbf{x}_{j}\) in the teacher model. For the student model, based on Eq. 4, Eq. 9 and \(\bar{\widehat{\mathbf{v}}}_{j}=\bar{\mathbf{v}}_{j}\) (due to the fixed text encoder), we get the pseudo paired text embedding \(\widehat{\mathbf{v}}_{j}\) of the image \(\mathbf{x}_{j}\) as \(\widehat{\mathbf{v}}_{j}=\widehat{\mathbf{B}}\bar{\widehat{\mathbf{v}}}_{j}=\widehat{\mathbf{B}}\bar{\mathbf{v}}_{j}\approx\widehat{\mathbf{B}}\mathbf{B}^{\dagger}\mathbf{u}_{j}\). We note that \(\widehat{\mathbf{B}}\mathbf{B}^{\dagger}\mathbf{u}_{j}\equiv\mathbf{u}_{j}\) when we do not reduce the projected dimension (\(\widehat{d}=d\)) and keep \(\widehat{\mathbf{B}}=\mathbf{B}\). By replacing the text embeddings \(\mathbf{v}_{j}\) and \(\widehat{\mathbf{v}}_{j}\) in Eq. 7 with the pseudo text embeddings \(\mathbf{u}_{j}\) and \(\widehat{\mathbf{B}}\mathbf{B}^{\dagger}\mathbf{u}_{j}\) respectively, we get the pseudo-VL score matrices as:
Footnote 3: also known as Moore–Penrose inverse
\[\mathbf{S}_{i,j}^{p\text{-}vl}=s(\mathbf{u}_{i},\mathbf{u}_{j}),\quad\widehat{\mathbf{S }}_{i,j}^{p\text{-}vl}=s(\widehat{\mathbf{u}}_{i},\widehat{\mathbf{B}}\mathbf{B}^{ \dagger}\mathbf{u}_{j}) \tag{10}\]
with which we define pseudo-VL Score Distillation Loss as:
\[\mathcal{L}_{p\text{-}vl}=\mathcal{L}_{KL}(\widehat{\mathbf{S}}^{p\text{-}vl}; \mathbf{S}^{p\text{-}vl},\mu^{p\text{-}vl}). \tag{11}\]
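The only extra ingredient of \(\mathcal{L}_{p\text{-}vl}\) is the pseudo-inverse of the teacher text projection \(\mathbf{B}\). A minimal sketch is given below; the dimensions and random tensors are illustrative stand-ins for the quantities in Eqs. 9–10, and the resulting matrices would again be fed to the KL helper of Eq. 5.

```python
import torch
import torch.nn.functional as F

d, d_hat, d_l = 768, 512, 768                  # teacher / student / text-encoder dims
B_teacher = torch.randn(d, d_l)                # teacher projection B
B_hat = torch.randn(d_hat, d_l)                # student projection B_hat
u_teacher = torch.randn(32, d)                 # teacher image embeddings u_j
u_student = torch.randn(32, d_hat)             # student image embeddings u_hat_i

# Eq. 9: map the teacher image embedding into the student space via B_hat B^+
B_pinv = torch.linalg.pinv(B_teacher)          # Moore-Penrose pseudo-inverse
v_pseudo_student = u_teacher @ (B_hat @ B_pinv).T   # ~ B_hat B^+ u_j per row

# Eq. 10: pseudo-VL score matrices (cosine similarities)
S_pvl_teacher = F.normalize(u_teacher, dim=-1) @ F.normalize(u_teacher, dim=-1).T
S_pvl_student = F.normalize(u_student, dim=-1) @ F.normalize(v_pseudo_student, dim=-1).T
```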
Some uni-modal self-supervised learning (SSL) works [77, 64] also compute the similarity score matrix (similar to \(\mathbf{S}^{p\text{-}vl}\)) from the same image batch, and then assign the positive/negative ground-truth label for each element in the score matrix. However, in the VL distillation, we treat \(\mathbf{S}^{p\text{-}vl}\) as the supplement to \(\mathbf{S}^{vl}\) which further augments text embeddings. Moreover, we use \(\mathbf{S}^{p\text{-}vl}\) as the pseudo label from the teacher and minimize the discrepancy between \(\widehat{\mathbf{S}}^{p\text{-}vl}\) and \(\mathbf{S}^{p\text{-}vl}\) without any ground-truth labels.
**Uni-Modal Distance Preserving Regularizer \(\mathcal{L}_{udist}\).** In addition to matching the similarity score of a student image embedding and a teacher image embedding in \(\mathcal{L}_{p\text{-}vl}\), we introduce a regularizer \(\mathcal{L}_{udist}\), which distills similarity score \(s\) of two normalized student image embeddings4 from the teacher model, to keep the geometry of image embeddings in the student model close to that in the teacher model.
Footnote 4: The similarity score of two normalized embeddings already encodes their relative locations.
Suppose we have two images \(\mathbf{x}_{i}\) and \(\mathbf{x}_{j}\) as well as their projected embeddings (\(\mathbf{u}_{i}\) and \(\mathbf{u}_{j}\)) in the teacher's feature space and projected embeddings (\(\widehat{\mathbf{u}}_{i}\) and \(\widehat{\mathbf{u}}_{j}\)) in the student's feature space. We define the score matrices to preserve the distances of image embeddings as:
\[\mathbf{S}_{i,j}^{udist}=s(\mathbf{u}_{i},\mathbf{u}_{j}),\quad\widehat{\mathbf{S}}_{i,j }^{udist}=s(\widehat{\mathbf{u}}_{i},\widehat{\mathbf{u}}_{j}). \tag{12}\]
We define the uni-distance preserving loss as a regularization term in the VL distillation as:
\[\mathcal{L}_{udist}=\mathcal{L}_{KL}(\widehat{\mathbf{S}}^{udist};\mathbf{S}^{udist },\mu^{udist}). \tag{13}\]
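Once the three terms of Eqs. 8, 11 and 13 are available, the overall objective of Eq. 6 is simply a weighted sum. A minimal sketch follows, with illustrative \(\lambda\) values (the paper sweeps these in Sec. 4.5); only the student parameters would receive gradients.

```python
import torch

def dime_fm_objective(loss_vl, loss_p_vl, loss_udist, lambda1=0.3, lambda2=0.1):
    """Total distillation loss of Eq. 6 (lambda values are illustrative)."""
    return (1.0 - lambda1) * loss_vl + lambda1 * loss_p_vl + lambda2 * loss_udist

# Dummy scalars standing in for the three loss terms:
total = dime_fm_objective(torch.tensor(1.2), torch.tensor(0.8), torch.tensor(0.5))
```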
### Constructing Visually-Grounded Text Corpus
To effectively distill the information from the pre-trained VLFMs, the choice of image corpus \(\mathcal{X}\) and text corpus \(\mathcal{T}\) is crucial. Constructing the image corpus \(\mathcal{X}\) is relatively easy due to the large number of natural images available on the web, although care must be taken to filter them to avoid duplicates and harmful content and to increase diversity. However, we cannot simply use text crawled from the web as \(\mathcal{T}\), because the concept distribution of a natural language corpus is very different from that of a visually-grounded sentence corpus. As we show in Sec. 4.6, using 3 million unfiltered natural sentences as \(\mathcal{T}\) gives much worse performance than using 3 million image captions from GCC-3M [75]. It is therefore important to select a \(\mathcal{T}\) that relates to visual concepts.
With an image-text paired dataset \(\{(\mathbf{x}_{i},\mathbf{t}_{i})\}_{i=1}^{N}\), a simple option is that we take \(\mathcal{X}=\{\mathbf{x}_{i}\}_{i=1}^{N}\) and \(\mathcal{T}=\{\mathbf{t}_{i}\}_{i=1}^{N}\), where \(\mathcal{T}\) and \(\mathcal{X}\) have overlapped semantic meanings. However, we do not assume the availability of any image-text paired data and this simple option is not achievable.
Since the vision-language teacher model \([f_{\theta},g_{\phi},\mathbf{A},\mathbf{B}]\) maps images and text into the same feature space, we can quantify the modality gap between \(\mathcal{T}\) and \(\mathcal{X}\) by measuring the distribution discrepancy between their projected embedding distributions. Given a large NLP corpus \(\mathcal{T}_{large}\), we can select \(\mathcal{T}\) from \(\mathcal{T}_{large}\), by minimizing the discrepancy between \(\mathcal{T}\)'s and \(\mathcal{X}\)'s embedding distributions:
\[\min_{\mathcal{T}\subset\mathcal{T}_{large}} \text{Discrepancy}(\mathcal{U},\mathcal{V})\] (14) s.t. \[\mathcal{U}=\{\mathbf{A}f_{\mathbf{\theta}}(\mathbf{x}):\mathbf{x}\in \mathcal{X}\}, \tag{15}\] \[\mathcal{V}=\{\mathbf{B}g_{\mathbf{\phi}}(\mathbf{t}):\mathbf{t}\in\mathcal{ T}\} \tag{16}\]
This is a combinatorial optimization problem. It is expected to be NP-hard to find the exact global minimum. We propose Algorithm 1 using greedy search to approximately solve the problem. We assume that the cardinality of \(\mathcal{T}\) and \(\mathcal{U}\) is similar. If we want \(|\mathcal{T}|<|\mathcal{U}|\), we can simply do a \(|\mathcal{T}|\)-mean clustering of Algorithm 1's outputs, and construct \(\mathcal{T}\) with the resulting cluster centers. We do not see a need for \(|\mathcal{T}|>|\mathcal{U}|\), since the image corpus \(\mathcal{X}\) can be as large as we want.
```
Input: image embeddings \(\mathcal{U}\) as defined in Eq. 15. A large text corpus \(\mathcal{T}_{large}\).
Output: Selected text corpus \(\mathcal{T}\), with \(|\mathcal{T}|\approx|\mathcal{U}|\)
1   \(\mathcal{U}_{left}\longleftarrow\mathcal{U}\), \(\mathcal{T}_{avail}\longleftarrow\mathcal{T}_{large}\), \(\mathcal{T}\longleftarrow\emptyset\), \(U_{p}=\infty\)
2   while \(\mathcal{U}_{left}\neq\emptyset\) and \(|\mathcal{U}_{left}|/U_{p}<0.95\) do
3       \(U_{p}=|\mathcal{U}_{left}|\), \(Matched=dict()\)
4       for \(\mathbf{u}\in\mathcal{U}_{left}\):  /* find the best text that matches the image */
5           \(\mathbf{t}(\mathbf{u})=\operatorname*{arg\,max}_{\mathbf{t}\in\mathcal{T}_{avail}} s(\mathbf{u},\mathbf{B}\cdot g_{\mathbf{\phi}}(\mathbf{t}))\)
6           \(Matched[\mathbf{u}]=\mathbf{t}(\mathbf{u})\)
7       for \(\mathbf{u},\mathbf{t}\in Matched.items()\):  /* for all images matched to the same text, keep the first match */
8           if \(\mathbf{t}\in\mathcal{T}_{avail}\): \(\mathcal{U}_{left}\longleftarrow\mathcal{U}_{left}\backslash\{\mathbf{u}\}\),
9               \(\mathcal{T}_{avail}\longleftarrow\mathcal{T}_{avail}\backslash\{\mathbf{t}\}\), \(\mathcal{T}.add(\mathbf{t})\)
```
**Algorithm 1** Constructing text corpus \(\mathcal{T}\)
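A Python restatement of Algorithm 1 is sketched below. It is our own illustration rather than the released implementation: it assumes the image and text embeddings are pre-computed and L2-normalised so that the arg-max in line 5 becomes a matrix product, and the `max_rounds` safeguard is an added assumption. At the scale of the 1.58B-sentence corpus used in the paper, the arg-max would in practice require batched or approximate nearest-neighbour search.

```python
import numpy as np

def select_text_corpus(U, V_large, shrink_tol=0.95, max_rounds=50):
    """Greedy approximation of Eq. 14: pick roughly one sentence per image embedding.

    U       : (N_img, d) L2-normalised image embeddings (Eq. 15)
    V_large : (N_txt, d) L2-normalised embeddings of the large NLP corpus
    Returns indices into V_large of the selected sentences (|T| ~ |U|)."""
    unmatched = np.arange(len(U))                   # U_left
    available = np.ones(len(V_large), dtype=bool)   # T_avail mask
    selected, prev_left = [], np.inf
    for _ in range(max_rounds):
        if len(unmatched) == 0 or len(unmatched) / prev_left >= shrink_tol:
            break                                   # stop once the set stops shrinking
        prev_left = len(unmatched)
        avail_idx = np.where(available)[0]
        # line 5: best available sentence for every still-unmatched image
        best = avail_idx[(U[unmatched] @ V_large[avail_idx].T).argmax(axis=1)]
        still_left = []
        # lines 7-9: when several images match the same sentence, keep the first
        for img, txt in zip(unmatched, best):
            if available[txt]:
                available[txt] = False
                selected.append(txt)
            else:
                still_left.append(img)
        unmatched = np.asarray(still_left, dtype=int)
    return np.asarray(selected, dtype=int)
```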
Generally, the downstream tasks are unknown before distillation. We use our constructed \(\mathcal{T}\) as the text input and call this **Task-Agnostic VL Distillation**. However, in practice, sometimes we know some of the class names used in the downstream tasks before distillation. In this case, we can incorporate those class names into the training text corpus \(\mathcal{T}\). We refer to this as **Task-Aware VL Distillation**. We compare these two VL distillations in Sec 4.3.
## 4 Experiments
We first compare our distilled models to two state-of-the-art VLFMs with the same model capacity, CLIP [67] and UniCL [93]. We then compare task-agnostic and task-aware knowledge distillation, and investigate the influence of data scale on transferability and robustness. Finally, we carefully ablate our proposed distillation losses and our algorithm for text corpus construction.
### Settings
**Evaluation benchmarks.** Foundation models are typically evaluated on transferability to downstream tasks (via zero-shot and linear probing) as well as robustness to data shifts. Following [49], we evaluate all baselines and our models in three settings: (1) Average **Zero-Shot on ELEVATER**[49], a dataset of 20 image-classification tasks; (2) **Zero-Shot on IN-1K**, the ImageNet-1K [18] validation set; (3) Average **Linear Probing on ELEVATER**. For robustness, we follow CLIP [67] to report average zero-shot performance on five datasets [70, 33, 8, 85, 34] with domain shifts from IN-1K.
**Training Data.** Following the academic track proposed in ELEVATER [49], we form our _image corpus_ with images from ImageNet-21K (i.e. ImageNet-22K [46] excluding IN-1K classes), GCC-15M (including GCC-3M [75] and GCC-12M [13]) and YFCC-14M [82]. **We construct our _text corpus_ in two different ways**: (1) Following UniCL [93], from GCC-15M and YFCC-14M captions and the prompt sentences with ImageNet-21K (IN-21K) class names and 80 templates; (2) Selecting \(\mathcal{T}\) from \(\mathcal{T}_{large}\) using images in GCC-15M and YFCC-14M with Algorithm 1. We use
RoBERTa [54]'s pretraining datasets [29, 100, 1, 83, 26] (a total of 1.58B sentences) as \(\mathcal{T}_{large}\).
Note that we generally do not use paired image-text data in training. We never load image-text pairs and never use pair labels explicitly in our loss function unless specified. For each experiment, we specify the exact image and text corpora used for distillation.
**Other Settings.** We find that \(\mathcal{L}_{vl}\) alone achieves good performance, so we use it as our loss function in most experiments except for Tables 1 & 3 and Fig. 4. In all experiments we distill only the image encoder and use it together with the teacher's text encoder in evaluation. See Supplementary Material Sec. A for implementation and evaluation details.
### Comparison with CLIP and UniCL
**Comparison with CLIP.** We distill a small model from the released CLIP-ViT-L/14 checkpoint using the ViT-B/32 image encoder [23]. We compare Distill-ViT-B/32 with CLIP-ViT-B/32 in Table 1. Both models have the same inference cost (4.4 G FLOPs/img), but CLIP is trained on the private 400M WiT dataset [67], while ours uses 40M images and 28.6M sentences from public datasets (IN-21K, GCC-15M and YFCC-14M). Training with just \(\mathcal{L}_{vl}\) slightly underperforms CLIP, but after adding \(\mathcal{L}_{p\text{-}vl}\) to expand the vocabulary, the two models' performance becomes similar across the zero-shot and linear-probing testbeds. \(\mathcal{L}_{udist}\) improves our zero-shot accuracy on IN-1K and linear-probing on ELEVATER but reduces zero-shot accuracy on ELEVATER. The robustness score of our distilled model is higher than that of CLIP-ViT-B/32 when training with \(\mathcal{L}_{p\text{-}vl}\) and \(\mathcal{L}_{udist}\). While this can be partially explained by our higher accuracy on IN-1K, it is still remarkable as we use less than one-tenth of CLIP's training data and no image-text pairs.
Instead of captions, we also try using a text corpus \(\mathcal{T}\) consisting of 28.4M sentences selected using Algorithm 1 from a language-only corpus \(\mathcal{T}_{large}\), using query images from GCC-15M and YFCC-14M. Distilling on \(\mathcal{T}\) and IN-21K prompt sentences, Distill-ViT-B/32 yields better average zero-shot performance on ELEVATER and IN-1K than CLIP-ViT-B/32 (61.4% vs. 60.3%), better linear probing performance (79.2% vs. 78.2%) as well as better robustness (\(50.2\%\) vs. \(48.6\%\)). We analyze the quality of our constructed \(\mathcal{T}\) and the human-annotated captions in Sec. 4.6.
We note that Distill-ViT-B/32 falls short on Zero-Shot on ELEVATER. After careful analysis, we find that the large CLIP-ViT-L/14 performs much worse than the small CLIP-ViT-B/32 on PatchCamelyon [84] (51.2% vs. 60.7%) and KITTI Distance [27] (13.8% vs. 29.0%) in ELEVATER. After removing these two tasks, Distill-ViT-B/32 yields the same zero-shot score (61.0%) on ELEVATER as CLIP-ViT-B/32. See Supplementary Material Sec. E for more analysis of each individual downstream dataset.
**Comparison with UniCL.** In Table 2, we compare our distillation approach with UniCL [93], which trains contrastively on smaller-scale public image-text pairs, unifying captioning datasets and pseudo-captioned classification datasets. Following the settings in UniCL, we adopt "IN-21K" and "IN-21K + YFCC14M" as our training datasets and use Swin-Tiny Transformer [55] as our student image encoder. In UniCL, both image and text encoders are trained from scratch. We report UniCL's performance by evaluating its released checkpoints trained with two different data sources. For a fair comparison, we further introduce UniCL* in which we use the pretrained CLIP-ViT-L/14 text encoder as UniCL's. During training, we fix the text encoder's weights and only optimize the image encoder with contrastive loss. UniCL* achieves better zero-shot performance than UniCL due to CLIP's strong text encoder. Nevertheless, our Distilled-UniCL* significantly outperforms UniCL* on all evaluation benchmarks with only the \(\mathcal{L}_{vl}\) loss. This indicates that distilling a small VLFM using strong pseudo-labels from large VLFMs is better than contrastive pretraining when we do not have large-scale datasets. Even though our experiment with "IN-21K + YFCC-14M" shows that enlarging data scale reduces the performance gap between distillation and pretraining (more analysis in Supplementary Material Sec. E), **DIME-FM** is more data-efficient, since it does not require any expensive image-text pairs.
### Task-Agnostic vs. Task-Aware Distillation

In Table 3, we compare task-agnostic and task-aware distillation in terms of performance on downstream tasks with known classes and generalization to other downstream tasks with unknown classes, under different loss weights \(\lambda_{1}\). BeamCLIP [42] uses _only_ the IN-1K prompt text to distill CLIP's image encoder. Table 3 shows that this generalizes poorly to other unknown downstream tasks (_e.g_. ELEVATER). With a larger weight \(\lambda_{1}\) on \(\mathcal{L}_{p\text{-}vl}\) to expand the text embeddings, BeamCLIP's student model generalizes better on ELEVATER but is still worse than our task-agnostic knowledge distillation. When we target multiple downstream tasks and only use prompt text with their class names (_i.e_. IN-1K and ELEVATER) as the input text corpus (denoted as "DS Prompt Text" in Table 3), it is hard to balance different tasks, _e.g_. zero-shot performance on ELEVATER improves while zero-shot performance on IN-1K worsens compared to [42]. Combining the large text corpus \(\mathcal{T}\) and the prompt sentences of downstream class names is a good practice for task-aware distillation.
### Influence of Dataset Scale
We investigate the influence of image and text datasets' scale on transferability and robustness of the student model by fixing the dataset scale of one modality and varying the other. For the fixed-size modality, we use images or text from "IN-21K + GCC-15M".
From Fig. 3 (a), we find that the transferability of student foundation models improves with larger image or text corpus, but it is more sensitive to the image corpus size. Also, prompt sentences with IN-21K class names describe diverse visual concepts, so training with these achieves comparable transferability to training with 3M captions from GCC-3M.
In Fig. 3 (b), we study the correlation between robustness on IN-1K variant datasets and original performance on IN-1K, as well as the correlation between robustness and the size of image/text corpus. Fig. 3 (b.i) shows that when we fix the text corpus size, robustness correlates with the training image corpus size more strongly than with IN-1K performance. Even though distilling directly with IN-1K images produces better performance on IN-1K, it does not guarantee better robustness to domain shifts from IN-1K. In Fig. 3 (b.ii), we freeze the image corpus size and find a different trend, in which robustness directly relates to performance on IN-1K regardless of the text corpus size.
We conclude that VL distillation methods should focus on increasing the training image set to achieve better transferability and robustness. If downstream class names are unknown, it is critical to construct a text corpus that covers more visual concepts. If downstream class names are known, using them during distillation greatly benefits robustness.
### Ablation Studies on Losses
We examine the effect of \(\mathcal{L}_{p\text{-}vl}\) and \(\mathcal{L}_{udist}\) by Zero-Shot on ELEVATER and IN-1K with IN-21K images and part of GCC-3M captions as the training data. See Supplementary Material Sec. G for more ablation studies on losses.
**Ablation on \(\mathcal{L}_{p\text{-}vl}\).** We gradually put more weight on \(\mathcal{L}_{p\text{-}vl}\) by increasing \(\lambda_{1}\) in Eq. 6 from 0 to 1 in Fig. 4 (a). We compare the inter-batch version (i.e. \(\mathbf{u}_{i}\) and \(\mathbf{u}_{j}\) in Eq. 10 from different batches) and the intra-batch version (i.e. \(\mathbf{u}_{i}\) and \(\mathbf{u}_{j}\) from the same batch) of \(\mathcal{L}_{p\text{-}vl}\) and find that the intra-batch \(\mathcal{L}_{p\text{-}vl}\) performs better than the inter-batch \(\mathcal{L}_{p\text{-}vl}\), so we
Figure 4: **Ablation Studies on \(\mathcal{L}_{p\text{-}vl}\) and \(\mathcal{L}_{udist}\).**
Figure 3: **Transferability and Robustness for different Image/Text Dataset Sizes. (a) zero-shot transferability of our student model increases with larger training image/text corpus; (b.i) shows robustness strongly correlates to the training image dataset size (represented as the dot size); (b.ii) shows robust score strongly correlates to IN-1K performance when changing the training text.**
\begin{table}
\begin{tabular}{c|c c c c} \hline \multirow{2}{*}{\(\lambda_{1}\)} & \multirow{2}{*}{Input Text Corpus} & \multicolumn{2}{c}{Zero-Shot} & \multirow{2}{*}{Linear Probing} \\ & & ELEVATER & IN-1K & \\ \hline \multirow{4}{*}{0} & _28.6M Text_ & 53.5\% & 64.2\% & 77.9\% \\ & IN-1K Prompt Text[42] & 47.4\% & **67.2\%** & 76.9\% \\ & DS Prompt Text & 56.8\% & 57.2\% & 78.9\% \\ & _28.6M + DS Prompt Text_ & **57.5\%** & 65.6\% & **79.2\%** \\ \hline \multirow{4}{*}{0.3} & _28.6M Text_ & 55.8\% & **64.8\%** & 78.4\% \\ & IN-1K Prompt Text[42] & 50.8\% & **66.6\%** & 77.1\% \\ & DS Prompt Text & **57.8\%** & 60.3\% & **79.4\%** \\ & _28.6M + DS Prompt Text_ & 57.7\% & 66.1\% & **79.4\%** \\ \hline \multirow{4}{*}{0.3} & _28.6M Text_ & 55.5\% & **64.2\%** & 78.8\% \\ & IN-1K Prompt Text[42] & 53.1\% & **65.4\%** & 78.0\% \\ \cline{1-1} & DS Prompt Text & **59.4\%** & 61.7\% & **79.8\%** \\ \cline{1-1} & _28.6M + DS Prompt Text_ & 57.5\% & 64.9\% & 79.5\% \\ \hline \end{tabular}
\end{table}
Table 3: **Task-Agnostic vs. Task-Aware. DS = Downstream.**
keep the intra-batch version in other experiments. Furthermore, adding \(\mathcal{L}_{p\text{-}vl}\) with \(\lambda_{1}\leq 0.9\) brings better zero-shot performance than only using \(\mathcal{L}_{vl}\) in all three settings. However, we observe a dramatic performance drop when we completely replace \(\mathcal{L}_{vl}\) with \(\mathcal{L}_{p\text{-}vl}\) (_i.e_. \(\lambda_{1}=1\)). We argue that both the improvement with smaller \(\lambda_{1}\)'s and the drop at \(\lambda_{1}=1\) are due to the gap between image and text embeddings in the shared feature space (more analysis in Sec. 4.6).
**Ablation on \(\mathcal{L}_{udist}\).** We increase \(\lambda_{2}\) from 0 to 5 to introduce \(\mathcal{L}_{udist}\) as a regularization term. Generally, \(\mathcal{L}_{udist}\) benefits the Zero-Shot on ELEVATER since it tries to preserve the geometry of image features. \(\mathcal{L}_{udist}\) slightly improves IN-1K performance when \(\lambda_{2}\) is small but it quickly harms IN-1K performance when \(\lambda_{2}\) gets larger. We suspect that the poor student embeddings in early training, together with the large regularization term, divert the gradient descent trajectory. We find \(\mathcal{L}_{udist}\) is less effective than the similar \(\mathcal{L}_{p\text{-}vl}\), so we only use \(\mathcal{L}_{udist}\) as a regularizer. Our main experiment (Table 1) further shows \(\mathcal{L}_{udist}\) is less effective when applying \(\mathcal{L}_{p\text{-}vl}\) and \(\mathcal{L}_{udist}\) together.
### Analysis of Constructed Text Corpus
**Text Corpus based on GCC-3M [75] Images.** We first compare the distillation performance of our constructed \(\mathcal{T}\) with the original GCC-3M captions and with randomly sampled NLP sentences in Table 4. \(\mathcal{T}\) is constructed using Algorithm 1 on the large-scale NLP corpus and the GCC-3M image set. We find that our constructed \(\mathcal{T}\) yields better zero-shot and comparable linear-probing performance compared to the original GCC-3M captions, while the unfiltered NLP corpus at the same size performs poorly.
**Pair-level Analysis.** We analyze the quality of image-text pairs via the similarity score computed with CLIP-ViT-L/14 (teacher model). In Fig. 5 (a), we compute the histogram of similarity scores for image-caption pairs in GCC-15M and YFCC-14M. We also compute the same similarity score histogram for each selected sentence and its query image. Our sentences, selected from 1.5B candidate sentences, yield a higher average similarity score than the human annotations. See our visualization of the selected text and the effect of each NLP dataset in Supplementary Material Sec. B-C.
**Distribution-level Analysis.** We also analyze the distribution of our constructed \(\mathcal{T}\) in the shared feature space. We plot the t-SNE (see Fig. 5 (b)) of normalized embeddings for samples from four different corpora: images, human-annotated captions, our selected sentences and random sentences from the NLP corpus. Even though original CLIP models [67] yield astonishing zero-shot performance on a large variety of downstream tasks, the images and their human-annotated captions surprisingly do not overlap in the t-SNE visualization (we also provide the MMD among these four corpora in Supplementary Material Sec. F). We conclude that the contrastive loss used in CLIP only pushes the text closer to its related image but does not close the distribution gap between the image and text corpora in feature space. This explains the effectiveness of \(\mathcal{L}_{p\text{-}vl}\), where we use the image embeddings (blue dots in Fig. 5 (b)) as the pseudo text embeddings. \(\mathcal{L}_{p\text{-}vl}\) expands the text feature space and unsurprisingly leads to better performance. When we completely replace \(\mathcal{L}_{vl}\) with \(\mathcal{L}_{p\text{-}vl}\), the distillation performance drops a lot due to the large gap between the image and text modalities in the feature space. Moreover, the distributions of our selected \(\mathcal{T}\) and the human-annotated captions are more similar. On the other hand, samples of the RoBERTa NLP corpus have a different distribution from the visually-grounded sentences. These results provide further support for our Algorithm 1.
## 5 Conclusion
In this paper, we propose a vision-language knowledge distillation mechanism DIME-FM that distills the knowledge in pre-trained VLFMs into small foundation models, without using any paired image-text data. We distill the pre-trained CLIP-ViT-L/14 to our Distill-ViT-B/32 model with only 40M public images and 28.4M unpaired sentences, and our model rivals the CLIP-ViT-B/32 model that was pretrained on the private large-scale WiT dataset in both transferability to novel tasks and robustness to natural domain shifts. In particular, we propose an efficient text selection algorithm and two novel distillation losses for vision-language knowledge distillation. This paper shows how to obtain a small custom foundation model from limited unpaired data and a released large CLIP foundation model, and is the first attempt at distilling a multi-modal foundation model while preserving its foundation properties. There are many interesting directions not covered in this paper and left for future exploration, such as VL distillation with large-scale paired image-text data (_e.g_. 400M+ pairs) and distillation of foundation models of other modalities (_e.g_. video-language).
\begin{table}
\begin{tabular}{c c c c} \hline \multirow{2}{*}{Text Corpus} & \multicolumn{2}{c}{Zero-Shot} & \multirow{2}{*}{Linear Probing} \\ & ELEVATER & IN-1K & ELEVATER \\ \hline GCC-3M (Text) & 38.6\% & 39.0\% & **68.2\%** \\ Unfiltered NLP (3M) & 35.9\% & 33.2\% & 65.2\% \\ Our Constructed \(\mathcal{T}\) (3M) & **40.4\%** & **39.2\%** & 67.7\% \\ \hline \end{tabular}
\end{table}
Table 4: **Distillation with Different Text Corpora of the Same Size.** Images from GCC-3M serve as the image dataset.
Figure 5: **Analysis of Text Corpus selected from the ROBERTa NLP Corpus. Best viewed in color.** |
2309.16244 | The Hyper Suprime-Cam extended Point Spread Functions and applications | We present extended point spread function (PSF) models for the Hyper
Suprime-Cam Subaru Strategic Program Public Data Release 3 (HSC-SSP PDR3) in
all $\textit{g,r,i,Z}$ and $\textit{Y}$-bands. Due to its 8.2m primary mirror
and long exposure periods, HSC combines deep images with wide-field coverage.
Both properties make HSC one of the most suitable observing facilities for low
surface brightness (LSB) studies, which are particularly sensitive to the PSF.
By applying a median stacking technique of point-like sources with different
brightness, we show how to construct the HSC-SSP PDR3 PSF models to an extent
of R $\sim$ 5.6 arcmin. These models are appropriate for the HSC-PDR3
intermediate-state data which do not have applied the final aggressive
background subtraction. The intermediate-state data is especially stored for
users interested in large extended objects, where our new PSFs provide them
with a crucial tool to characterise LSB properties at large angles. We
demonstrate that our HSC PSFs behave reasonably in two scenarios. In the first
one, we generate 2-D models of a bright star, showing no evidence of residual
structures across the five bands. In the second scenario, we recreate the
PSF-scattered light on mock images with special consideration of the effect of
this additional flux on LSB measurements. We find that, despite the
well-behaved nature of the HSC-PDR3 PSFs, there is a non-negligible impact on
the faint light present in the mock images. This impact could lead to incorrect
LSB measurements if a proper star subtraction is not applied. | L. P. Garate-Nuñez, A. S. G. Robotham, S. Bellstedt, L. J. M. Davies, C. Martínez-Lombilla | 2023-09-28T08:35:52Z | http://arxiv.org/abs/2309.16244v2 | The Hyper Suprime-Cam extended Point Spread Functions and applications to measuring the intra-halo light
###### Abstract
We present extended point spread function (PSF) models for the Hyper Suprime-Cam Subaru Strategic Program Public Data Release 3 (HSC-SSP PDR3) in all \(g\),\(r\),\(i\),\(Z\) and \(Y\)-bands. Due to its 8.2m primary mirror and long exposure periods, HSC combines deep images with wide-field coverage, making it one of the most suitable observing facilities for low surface brightness (LSB) studies. By applying a median stacking technique of point sources with different brightnesses, we show how to construct the HSC-SSP PDR3 PSF models to an extent of R \(\sim\) 5.6 arcmin. These new PSFs provide the community with a crucial tool to characterise LSB properties at large angles. We apply our HSC PSFs and demonstrate that they behave reasonably in two cases: first, to generate a 2-D model of a bright star, and second, to remove the PSF-scattered light from an Ultra Deep image of the 400020 Galaxy And Mass Assembly (GAMA) group in the SXDS field. Our main focus in this second application is characterising the \(r\)-band intra-halo light (IHL) component of 400020. Building on advanced source extraction techniques with careful consideration of PSF flux, we measure the IHL surface brightness (SB) group profile up to \(\sim\) 31 mag arcsec\({}^{-2}\) and R = 300 kpc. We estimate the IHL fraction (f\({}_{\rm IHL}\)) profile, with a mean of f\({}_{\rm IHL}\)\(\sim\) 0.13. Our results show that not removing the PSF light can overestimate the IHL SB by \(\sim\) 1.7 mag arcsec\({}^{-2}\) and the f\({}_{\rm IHL}\) by \(\sim\) 30%.
keywords: instrumentation: detectors - galaxies: clusters: intracluster medium - galaxies: haloes - methods: data analysis - techniques: image processing - techniques: photometric
## 1 Introduction
Atmospheric conditions and the optics of the telescope, instruments, and detectors are the two main reasons why the light of a point source is spread in astronomical images. To quantify this scatter, scientists have defined a function known as the point spread function (PSF, Born & Wolf, 1999). The PSF describes the two-dimensional light distribution produced by point sources in the telescope focal plane, and extensive studies have been performed to carefully characterise it for each imaging system.
The inner part of the PSF (within a few tens of arcseconds) is dominated by atmospheric turbulence (Kolmogorov, 1941) and is generally well represented by a Moffat function (Moffat, 1969; Racine, 1996). An accurate determination of the inner part of the PSF is necessary to recover the real information of the point source and remove the light that is contaminating the surroundings of the bright stars. In the case of studying low surface brightness (LSB) galaxies and features, we also need an accurate determination of the extended PSF profile to remove its faint scattered light that contaminates at large distances from the point source. The outer region of the PSF is less understood and was first measured by King, 1971, who attributes an \(r^{-2}\) behaviour to this part of the radial profile (de Vaucouleurs, 1958). Subsequent measurements mostly fitted the PSF wings with a power-law profile, where the power index ranges from 1.6 to 3 (Gonzalez et al., 2005; Bernstein, 2007; Slater et al., 2009). In addition, the PSF spreads the light from all sources (not only point sources; Sandin, 2014; Trujillo & Fliri, 2016; Tang et al., 2018). Because of this, PSF modeling plays a crucial role in two different steps: recovering the spread light from point and extended sources, and removing this contaminating light from the image.
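As a purely illustrative sketch of these two regimes (this is not the PSF model derived later in this work), a radial profile can be approximated by a Moffat core joined to a power-law wing, with the wing index in the 1.6–3 range quoted above; all parameter values below are arbitrary:

```python
import numpy as np

def moffat(r, alpha=3.0, beta=2.5):
    """Moffat profile (Moffat 1969) commonly used for the seeing-dominated core."""
    return (1.0 + (r / alpha) ** 2) ** (-beta)

def power_law_wing(r, r0, gamma=2.0):
    """Power-law wing ~ r^-gamma; measured indices range from ~1.6 to 3."""
    return (r / r0) ** (-gamma)

r = np.logspace(-1, 3, 500)     # radius in arbitrary angular units
r0 = 20.0                       # hypothetical core-to-wing transition radius
# Composite profile: Moffat core inside r0, power-law wing scaled to match at r0
profile = np.where(r < r0, moffat(r), power_law_wing(r, r0) * moffat(r0))
```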
Although the PSF becomes rapidly fainter with increasing \(r\), the total integrated light from the outer region could have an effect when estimating LSB sources (Gallagher & Ostriker, 1972; Merritt, 1984; Uson et al., 1991; Murante et al., 2004; Mihos et al., 2005; Martinez-Lombilla & Knapen, 2019; Montes, 2022). de Vaucouleurs, 1948, 1953, 1958 focused on how scattered light affects observations of large elliptical galaxies, where the effects of the PSF were found to be of minor importance. However, more recent works have found that the measured amount of stellar light in halos could be affected by the PSF.
Star images can be combined to extract the principal components of the PSF signature via the stacking method (La Barbera et al., 2012;
D'Souza et al., 2014). For example, de Jong 2008 worked with data in the Hubble Ultra Deep Field (HUDF; Beckwith et al., 2006) of the Hubble Space Telescope (HST) and stacked Sloan Digital Sky Survey (SDSS; York et al., 2000) images to estimate the effect of the PSF wings on edge-on galaxies, finding a significant impact on the measured stellar halos. Trujillo & Bakos 2013 used the TinyTim HST PSF modeling software (Krist, 1995) to generate the Hubble UDF PSF and concluded that scattered light from the extended PSF has a major impact on galaxy surface brightness profiles.
Telescopes that are highly optimized for low surface brightness have a true PSF that is well behaved and easy to model. The absence of reflective surfaces, anti-reflection coatings, an unobstructed pupil, and a fast focal ratio are the main characteristics of a telescope suitable for studying LSB sources (Abraham & van Dokkum, 2014). Slater et al. 2009 made an effort to mitigate the multiple internal reflections of bright point sources that complicate the estimation of the extended stellar PSF of the Case Western Reserve University's Burrell Schmidt telescope located at the Kitt Peak National Observatory. These improvements helped to remove the excess light from extremely faint structures of the intracluster light around the Virgo cluster (Mihos et al., 2009; Janowiecki et al., 2010; Rudick et al., 2010). In the case of the Dragonfly Telephoto Array (Abraham & van Dokkum, 2014), due to its wide field of view and optical coatings designed to minimize reflections, the array is suitable for studying extended stellar halos of luminous nearby galaxies (Merritt et al., 2016, 2016; Cohen et al., 2018). Since it is composed of telephoto lenses, the Dragonfly Telephoto Array is likely to have more stable PSF wings than reflecting telescopes, as the latter introduces more scatter (Nelson et al., 2008). Merritt et al. 2020 indicate that the PSF effects are minimal at the Dragonfly galaxy outskirts. However, more stellar halo measurements are needed in order to analyse the effect of PSF wings.
Another competitive instrument for detecting low surface brightness emission is the Hyper Suprime-Cam (HSC; Miyazaki et al., 2015, 2018; Komiyama et al., 2018; Kawanomoto et al., 2018). Attached at the prime focus of the Subaru Telescope (Mauna Kea, Hawaii) and developed by the National Astronomical Observatory of Japan, HSC is a state-of-the-art camera that provides a wide field of view and is also capable of reaching deep optical imaging in a short exposure time. The HSC team developed the Hyper Suprime-Cam Subaru Strategic Program (HSC-SSP; Aihara et al., 2018), a survey with a coverage of 1400 deg\({}^{2}\) in five different bands (\(griZY\)) as well as four narrow filters. HSC-SSP is providing an unprecedented database for LSB studies. For example, Greco et al. 2018a detected \(\sim 800\) low-surface brightness galaxies in the first 200 deg\({}^{2}\) of the Wide HSC-SSP layer, approximately half of which are ultra-LSB with central surface brightnesses in the \(g\)-band of \(\mu_{0}\)\(>24\) mag arcsec\({}^{-2}\) (Greco et al., 2018; Kado-Fong et al., 2021). Furnell et al. 2021 studied the intra-cluster light (ICL) growth using a sample of 18 X-ray clusters with deep HSC data in the \(i\)-band between \(0.1<z\leq 0.5\). Given its coverage, depth, and image quality (median seeing in all bands of \(\sim 0.7\) arcsec), this survey can be seen as a predecessor of the upcoming Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST; LSST Science Collaboration et al., 2009).
The HSC image processing pipeline (Bosch et al., 2018, 2019) estimates the PSF using a structured version of the public PSFEx code (Bertin, 2013). The approach of this code is to use unresolved star data in the field to estimate the PSF from an exposure, where the PSF model can be reconstructed for any location on the image. However, Mandelbaum & Hyper Suprime-Cam (HSC) Collaboration 2017 found that the PSFEx model of HSC PSF has super-resolution problems beyond a certain seeing threshold.
Montes et al. 2021 constructed an HSC PSF with \(g\) and \(i\) observations from HSC-SSP to analyse the intracluster light of the Abell 85 cluster of galaxies. The authors followed the method outlined by Infante-Sainz et al. (2020), which was originally designed for SDSS. In this method, the PSF reconstruction process is divided into three parts, using stars within a wide brightness range to build each region. Montes et al. 2021 used four parts instead, where the two inner parts are made by stacking stars of faint magnitude and the two outer parts are made by fitting a power-law function to the brightest star profile in the field and extrapolating it up to R \(\sim 7\) arcmin. Two years later, Martinez-Lombilla et al. 2023 followed the same method to characterise the HSC PSF with observations from the HSC-SSP Public Data Release 2 (Aihara et al., 2019). In this case, the authors divided the PSF into three parts and applied a median stacking technique of stars in each part for the \(g\),\(r\),\(i\)-bands up to an extent of R \(\sim 5.6\) arcmin in the \(g\) and \(r\)-bands and R \(\sim 4.2\) arcmin in the \(i\)-band. They used these PSF models to analyse the intra-halo light (IHL, also known as intra-group light or intra-cluster light) present in an intermediate-redshift Galaxy And Mass Assembly (GAMA, Driver et al., 2011) group of galaxies.
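As a rough illustration of the stacking approach used in these works (the actual construction of our PSFs is described in Section 3), background-subtracted star cutouts can be normalised by their central flux and median-combined; the normalisation radius and the absence of neighbour masking below are simplifying assumptions:

```python
import numpy as np

def median_stack_psf(cutouts, norm_radius=5):
    """Median-stack 2-D star cutouts (each centred on a star) into an empirical PSF.

    Each cutout is divided by the flux inside norm_radius pixels of its centre,
    so that stars of different brightness can be combined on a common scale."""
    normalised = []
    for img in cutouts:
        cy, cx = np.array(img.shape) // 2
        y, x = np.ogrid[:img.shape[0], :img.shape[1]]
        aperture = (y - cy) ** 2 + (x - cx) ** 2 <= norm_radius ** 2
        flux = img[aperture].sum()
        if flux > 0:
            normalised.append(img / flux)
    psf = np.nanmedian(np.stack(normalised), axis=0)
    return psf / np.nansum(psf)      # normalise the stacked PSF to unit total flux
```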
HSC-SSP PDR3 (Aihara et al., 2022) is the newest public data release from HSC, which has been publicly accessible since August 2021. In comparison to both previous PDRs, this release increases the sky coverage to the required depths across all five filters. From PDR2 to PDR3, the partially observed area increased from 1114 deg\({}^{2}\) to 1470 deg\({}^{2}\), where the covered area at full depth (\(\sim 26\) mag at \(5\sigma\)) increased from 305 deg\({}^{2}\) to 670 deg\({}^{2}\). In addition, due to a new global sky subtraction algorithm, the overall quality of the PDR3 data has also improved and the extended wings of bright sources are better preserved. By building on advanced source extraction techniques and following the method from Infante-Sainz et al. (2020), we characterise the HSC-SSP PDR3 PSFs in each of the \(g\),\(r\),\(i\),\(Z\) and \(Y\) bands out to a radius of R \(\sim 5.6\) arcmin. The PSF image FITS files and the scripts to reproduce the PSF reconstruction are made available to the astronomical community. We also use the PSFs to estimate the fraction of IHL present in the 400020 GAMA group at redshift 0.258.
The structure of the paper is as follows. In Section 2 we discuss the data products used for this work. Section 3 explains the steps we follow to reconstruct the HSC-SSP PDR3 PSFs. We investigate and compare our methodology to derive the PSFs with other methodologies commonly used in the literature in Section 4. Section 5 is dedicated to testing the performance of the HSC-SSP PDR3 PSFs on two real observations. We first use the PSFs to generate 2D models of an HSC-SSP PDR3 bright star in Section 5.1. Secondly, in Section 5.2, we use the PSFs to analyse the HIL component in an Ultra Deep (UD) image of a GAMA group of galaxies. We summarise the results of this work in Section 6.
We adopt throughout a cosmological model with H\({}_{0}\) = 68.4 km s\({}^{-1}\) Mpc\({}^{-1}\), \(\Omega_{\Lambda}\) = 0.699 and \(\Omega_{\rm m}\) = 0.301, corresponding to the cosmology of Planck Collaboration et al. 2020. All magnitudes are in the AB magnitude system and the HSC-SSP PDR3 zero point magnitude is m\({}_{zero}\) = 27 mag.
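For reference, the adopted cosmology and zero point translate into physical scales and surface brightnesses as in the following astropy sketch; the redshift corresponds to the 400020 group analysed in Section 5.2, the pixel scale of 0.168 arcsec is that of HSC (Section 2.1), and the conversion is the standard counts-to-surface-brightness formula rather than the exact pipeline used in this work:

```python
import numpy as np
import astropy.units as u
from astropy.cosmology import LambdaCDM

# Adopted cosmology (Planck Collaboration et al. 2020 values quoted above)
cosmo = LambdaCDM(H0=68.4, Om0=0.301, Ode0=0.699)

# Proper physical scale at the redshift of the 400020 GAMA group (z = 0.258)
kpc_per_arcsec = cosmo.kpc_proper_per_arcmin(0.258).to(u.kpc / u.arcsec)

def surface_brightness(counts_per_pixel, zeropoint=27.0, pixscale=0.168):
    """AB surface brightness in mag arcsec^-2 from image counts per pixel."""
    return zeropoint - 2.5 * np.log10(counts_per_pixel / pixscale**2)
```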
## 2 Data
In this work, we use data from the Hyper Suprime-Cam Subaru Strategic Program (Aihara et al., 2018). In particular, the images are from the last public data release of the HSC-SSP Survey, HSC-SSP PDR3 (Aihara et al., 2022). We use the Gaia Data Release 3 catalogue (Gaia Collaboration et al., 2022) to find the positions of the stars
that are used to obtain the empirical PSF models. The motivation to construct extended and well characterised HSC-SSP PDR3 PSFs is to improve our future research on studying the intra-halo light (IHL) that is present in groups and clusters of galaxies with HSC observations. Following this idea, we also use the GAMA galaxy group catalogue G\({}^{3}\)Cv10 (Robotham et al., 2011) to select a GAMA group to test our PSF models and make a preliminary IHL estimation. This catalogue overlaps with HSC-SSP PDR3 in the GAMA-fields G09, G12 & G15, and partially overlaps in G02. For the purpose of this work, we only use the HSC data that overlaps with GAMA. The HSC star images from the overlapped regions are deemed sufficient for reconstructing the PSFs (see Section 4.2 for further details).
### The Hyper Suprime-Cam SSP Survey
The Hyper Suprime-Cam is an optical-infrared imaging camera mounted on the prime focus of the 8.2-meter Subaru Telescope. It is operated by the National Astronomical Observatory of Japan (NAOJ) and is located at the Mauna Kea Observatory in Hawaii. The large focal plane is paved with a mosaic of 116 charge-coupled devices (CCDs), each with a 2k x 4k format, covering a field of view of \(\sim\) 1.7 deg\({}^{2}\) with a pixel scale of \(\sim\) 0.168 arcseconds.
HSC-SSP consists of three layers (Wide, Deep & UltraDeep) and uses five broad-band (_griZY_) and four narrow-band filters. Each of the layers has a different sky coverage and depth: 1400 deg\({}^{2}\) (\(r\sim\) 26 mag), 27 deg\({}^{2}\) (\(r>\) 27 mag), and 3.5 deg\({}^{2}\) (\(r\sim\) 28 mag) for the Wide, Deep, and UltraDeep layers respectively. The Wide layer consists of three fields covering 916 pointings in total: HECTOMAP, Spring, and Autumn, which were selected to overlap other multi-wavelength surveys to maximise the scientific synergy with HSC. In addition, AEGIS (Davis et al., 2007) is a single pointing to Wide-layer depth used for calibrating photometric redshifts. The Deep layer consists of four fields: XMM-LSS (XMM Large Scale Structure survey, Pierre et al., 2004), Extended-COSMOS (E-COSMOS, Aihara et al., 2018), ELAIS-N1 (European Large Area ISO Survey, Oliver et al., 2000) and DEEP2-F3 (Newman et al., 2013). E-COSMOS, DEEP2-F3 and ELAIS-N1 each have four pointings, while XMM-LSS has three. HSC-SSP also covers two UltraDeep fields: COSMOS (Cosmic Evolution Survey, Scoville, 2007), which consists of a single pointing centred on and partially overlapping the E-COSMOS Deep field, and SXDS (Subaru/XMM-Newton Deep Survey, Furusawa et al., 2008), a single pointing located at the western edge of XMM-LSS and partially overlapping that field.
#### 2.1.1 PDR3
To date, the HSC-SSP has three public releases: PDR1 (Aihara et al., 2018), PDR2 (Aihara et al., 2019) and PDR3 (Aihara et al., 2022). Every new PDR is a major update in terms of depth and area in comparison to its predecessor. In addition, a significant number of improvements in the HSC data processing pipeline have taken place since the HSC-SSP PDR1 data was released, with careful consideration of the sky estimation capabilities (Kelvin et al., 2023). The sky background estimation remains one of the challenges in low surface brightness imaging and is one of the major sources of systematics (Fliri & Trujillo, 2016; Mihos et al., 2017; Liu et al., 2023).
PDR1 over-subtracted the sky around the extended wings of stars and bright galaxies, and this issue was mitigated by a global sky subtraction introduced in PDR2 (see Figure 5 of Aihara et al., 2019). The improved pipeline subtracted the background using superpixels of 1k x 1k pixels (\(\sim\) 168" on a side). Among the biggest changes between PDR2 and PDR3, the newest pipeline also performs a global sky subtraction for extended-object science, but this time the sky subtraction consists of gridding a visit image into superpixels of 8k x 8k (\(\sim\) 23' on a side). However, false detections and measurement failures around the extended wings of bright stars and galaxies were still present in PDR3, which led the authors to add a second, local sky subtraction using 256 x 256 superpixels (\(\sim\) 43" on a side). As a result, there are two types of coadd images in PDR3: the first with both sky subtractions applied, and the second with only the global sky subtraction. As we are interested in studying the intra-halo light surrounding extended objects, we need the well-preserved wings of the HSC bright star images to reconstruct extended PSFs. For this reason, we decided to work with the latter type of coadded image. This intermediate-stage data is not publicly available on the HSC-SSP website via the Data Archive System (DAS) search tool, so we obtained it by contacting the HSC software team. For the reconstruction of the HSC-SSP PDR3 PSFs we only use the HSC-SSP PDR3 data that overlaps with GAMA, which amounts to \(\sim\) 8 TB of data.
### The GAMA galaxy group catalogue G\({}^{3}\)Cv10
GAMA is a spectroscopic and photometric survey of \(\sim\) 330,000 galaxies down to a magnitude limit of r \(<\) 19.8 mag over \(\sim\) 250 deg\({}^{2}\)(Driver et al., 2011; Liske et al., 2015; Driver et al., 2022). The survey was carried out over seven years using the optical AAOmega multi-object spectrograph on the 3.9-meter Anglo-Australian Telescope (AAT) at the Siding Spring Observatory. The area coverage is split into five regions: three equatorial fields (G09, G12, and G15) and two southern fields (G02 and G23).
One of the key benefits of this survey is its high spectroscopic completeness of \(\sim\) 98% (Robotham et al., 2010; Baldry et al., 2010; Driver et al., 2022). This enables us to make a statistically significant analysis of galaxy environments, including low mass groups (M \(<\) 10\({}^{13}\) h\({}^{-1}\) M\({}_{\odot}\)). Along with this increased redshift density, GAMA also provides imaging data in multiple bands covering the ultraviolet to the far infrared.
The quality of this data allowed Robotham et al. (2011) to create the GAMA galaxy group catalogue G\({}^{3}\)Cv10. This catalogue is built on a friends-of-friends (FoF) linking algorithm, which determines whether galaxies are associated with one another based on both their projected and comoving separations. The parameters of this algorithm were determined by comparing to a set of mock GAMA galaxy catalogues obtained from populating the Millennium dark-matter simulations (Springel et al., 2005) with galaxies using the GALFORM (Bower et al., 2006) semi-analytic model.
G\({}^{3}\)Cv10 contains 26194 galaxy groups (with 2 or more members) located at the GAMA II equatorial (Liske et al., 2015) and G02 survey regions. Some of the listed properties for each group in G\({}^{3}\)Cv10 are the halo mass, multiplicity, redshift, total \(r\)-band luminosity, size, position, velocity dispersion, and identification of its Brightest Group Galaxy (BGG) originally defined in the \(r\)-band.
The regions where HSC-SSP PDR3 and GAMA overlap are shown in Fig. 1, where the total overlapping area consists of \(\approx\) 200 deg\({}^{2}\). The HSC Wide layer is indicated in pale blue, the Deep layer is indicated by blue-filled circles, and the UltraDeep layer is represented by dark blue single pointings. GAMA regions are shown in pale red.
## 3 HSC-SSP PDR3 PSF Derivation
We apply an empirical method to construct the PSF for the HSC-SSP PDR3 in each of the five _g,r,i,Z_ and \(Y\) bands. For each HSC band, we create a PSF in each of the four G02, G09, G12, and G15 GAMA regions (we do not have HSC observations in G23). We construct different PSFs in different regions to assess their stability across space and time. Once we have the PSFs per region and filter, we combine them across the different regions within the same filter in order to obtain the final PSF for each HSC band. This is appropriate since, as shown later, there is little variation in each PSF across the GAMA regions.
### Star selection
We follow a similar approach to Infante-Sainz et al. 2020, where the authors originally constructed a PSF for the SDSS. This method consists of reconstructing different parts of the PSF by the median-clipping stacking of stars within a wide brightness range. In particular, the authors divide the PSF construction into three parts. Bright and saturated stars are used to characterise the outer part (or wings) of the PSF, stars of intermediate brightness are used for the middle part and the core is characterised by non-saturated faint stars. As the central pixels of bright stars are highly affected by the saturation of the CCD dynamical range, the authors only use the outer pixels of these images to create the faint PSF wings. However, the central pixels of fainter stars are unaffected by saturation and can be used to reconstruct the PSF core.
Based on our own experimentation, in this work we divide the PSF into four radial regions: outer, middle, inner, and core, where each part is built from stars in a different brightness range. The four selected magnitude ranges are:
* Outer: mag\({}_{\star,g}\) \(<\) 8
* Middle: 11 \(<\) mag\({}_{\star,g}\)\(<\) 11.5
* Inner: 14 \(<\) mag\({}_{\star,g}\)\(<\) 14.1
* Core: 18 \(<\) mag\({}_{\star,g}\)\(<\) 18.02
As previously mentioned, we use the Gaia Data Release 3 catalogue (Gaia Collaboration et al., 2022) to identify stars that satisfy these magnitude requirements in each of the HSC-SSP PDR3 and GAMA overlap regions (using the _g_-band data of the catalogue). The images of the stars are from the overlapping regions between HSC-SSP PDR3 and GAMA. We verify that this selection contains enough stars to accurately reconstruct the PSF by including extra HSC data and showing that it does not improve our results (this idea is discussed in detail in Sec. 4.2).
We then also apply an additional constraint that rejects the stars that are not within one degree of radial distance from the centre of any HSC image. By doing this, we avoid distortion problems around the borders of the images. The resulting number of stars per region is similar from band to band, where slight changes in the quantity are due to small coverage differences between the five HSC filters. Table 1 shows the number of stars that were used to reconstruct each part of the PSF in the _g_-band per GAMA field.
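For reference, this selection can be written compactly in base R. The sketch below assumes a data frame `gaia` with columns `ra`, `dec` and `gmag` taken from the Gaia DR3 catalogue, and `img_ra`/`img_dec` giving the centre of an HSC image; these variable names are illustrative and do not come from the released scripts.

```r
# Hedged sketch of the star selection: the magnitude bins listed above plus the
# one-degree distance cut from the image centre.
psf_region <- function(gmag) {
  ifelse(gmag < 8,                 "outer",
  ifelse(gmag > 11 & gmag < 11.5,  "middle",
  ifelse(gmag > 14 & gmag < 14.1,  "inner",
  ifelse(gmag > 18 & gmag < 18.02, "core", NA))))
}

# Great-circle separation in degrees between two sky positions.
angsep_deg <- function(ra1, dec1, ra2, dec2) {
  d <- pi / 180
  acos(pmin(1, sin(dec1 * d) * sin(dec2 * d) +
               cos(dec1 * d) * cos(dec2 * d) * cos((ra1 - ra2) * d))) / d
}

gaia$region <- psf_region(gaia$gmag)
keep  <- !is.na(gaia$region) & angsep_deg(gaia$ra, gaia$dec, img_ra, img_dec) < 1
stars <- gaia[keep, ]
```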
The size of each HSC cutout used to reconstruct the outer PSF is 4k x 4k pix\({}^{2}\), meaning that after applying the stacking and combination process (see below for full details), the resulting PSFs extend to a radius of R = 2000 pixels (5.6 arcmin) in all five bands. As the rest of the selected stars are used to reconstruct the inner parts of the PSF, the cutout sizes are smaller: R = 1000 pixels for the middle stack, R = 500 pixels for the inner stack, and R = 200 pixels for the core one.

| **GAMA region** | **Outer** | **Middle** | **Inner** | **Core** |
| --- | --- | --- | --- | --- |
| G02 | 28 | 240 | 259 | 225 |
| G09 | 98 | 753 | 1064 | 1034 |
| G12 | 68 | 347 | 478 | 518 |
| G15 | 69 | 447 | 515 | 762 |

Table 1: Quantity of stars per GAMA region in the HSC _g_-band that were selected to reconstruct the different parts of the PSF. A similar number of stars were used for the construction of the PSFs in the rest of the HSC filters.

Figure 1: Location of the HSC-Wide (pale blue), Deep (blue), and UltraDeep fields (dark blue) and the GAMA fields (pale red) on the sky in equatorial coordinates. The grey shadow represents the Milky Way (MW) plane and the centre of the MW is indicated by the grey-filled circle. The orange shadow indicates the ecliptic plane.
We have made available the PSF reconstruction scripts, written in the open-source R language, at [https://github.com/luciagarate/HSC_PSFs](https://github.com/luciagarate/HSC_PSFs). This consists of three scripts: the first describes the selection of the star images for each PSF region and their subsequent stacking (Section 3.2), the second describes the combination of the four regions to reconstruct a PSF per GAMA region and HSC filter (Section 3.3), and the third describes the process to reconstruct the final PSFs (Section 3.4). The HSC-SSP PDR3 PSFs in all five bands are also publicly available as FITS files.
### Stacking
After selecting stars and generating cutout HSC images per GAMA region and filter, we proceed to reconstruct the different parts of the HSC PSF. Section 3.2.1 explains how we build the outer part of the PSF. Section 3.2.2 describes the steps we followed to build the middle and inner parts of the PSF, and Section 3.2.3 contains the details about the construction of the core.
#### 3.2.1 Outer part of the PSF
We use stars with a magnitude brighter than m\({}_{g}\) = 8 for the PSF wings. The second column in Table 1 shows the number of stars used to reconstruct the outer PSF region in the \(g\)-band, where G02 has the lowest number because we only use the HSC (Wide) data that overlaps with GAMA.
To reconstruct the outer part of the PSF we apply a median stacking process to all the selected star images. This is done with the warping and stacking package ProPane1, written in the open-source R language. As we use a median stacking technique that naturally ignores the bright pixels from the background sources, a masking process is not necessary in the stacking step. However, we take the additional precaution of masking the bright background sources to calculate the normalisation value in each image. Before stacking, we normalise each image by the value of the median flux contained in an annulus situated between 400 and 410 pixels from the centre of the bright star. The position of the annulus was chosen to be as close to the centre as possible while avoiding the central saturated region of each image. The width of the annulus was selected in order to get a good signal-to-noise ratio (S/N).
Footnote 1: [https://github.com/asgr/ProPane](https://github.com/asgr/ProPane)
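As a minimal illustration of the normalisation-and-stacking step described above, the sketch below assumes `star_imgs` is a list of 4k x 4k matrices (the bright-star cutouts). For simplicity the normalisation is computed on the unmasked cutout (the median makes the difference to the masked estimate small), and the final median stack is done with a plain `apply` call; in the released scripts this role is played by ProPane's propaneStackFlatFunc.

```r
# Hedged sketch: normalise each cutout by the median flux in the 400-410 pix
# annulus, then median-stack the normalised cutouts pixel by pixel.
annulus_median <- function(img, r_in = 400, r_out = 410) {
  cen <- (dim(img) + 1) / 2
  rr  <- sqrt(outer((1:nrow(img) - cen[1])^2, (1:ncol(img) - cen[2])^2, "+"))
  median(img[rr >= r_in & rr <= r_out], na.rm = TRUE)
}

norm_imgs   <- lapply(star_imgs, function(img) img / annulus_median(img))
outer_stack <- apply(simplify2array(norm_imgs), c(1, 2), median, na.rm = TRUE)
```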
To mask the star images in the normalisation step we use the ProFound2(Robotham et al., 2018) source finding and photometry analysis package. ProFound detects sources using dilated segments (isophotal outlines) of arbitrary shape (rather than elliptical apertures) and measures statistics like flux, size, and ellipticity. In particular, we run the profoundImDiff function on the star image, which creates another image that is the result of the original minus a smoothed version (in this version the borders of the segmentation maps have no abrupt changes). The parameter that establishes the standard deviation of the blur is sigma. We then apply the profoundDilate function to the resulting image, which dilates the segmentation maps by typically 30%. During the dilation process, a watershed de-blending approach is taken, where the segments are not allowed to overlap. The dilation is important to make sure that the background sources are completely masked. The size parameter controls the width/diameter of the dilation kernel in pixels. For the outer stack, we create the dilated segmentation maps with sigma = 2 and size = 21 and we then mask all of the pixels inside each segment. The upper panel of Fig. 2 shows an image of a star used to reconstruct the outer part of the PSF with the corresponding masks. The black circles show the normalisation annulus. As we are calculating the median flux inside the annulus, our masking approach is not very aggressive. The lower panel shows a histogram of the pixel flux values within the annulus of the upper image, where the vertical red line indicates the median value used to normalise the flux. Images where the total masked area is more than 80% are discarded from the stacking process. Finally, we proceed to make a median stack of the unmasked normalised images with the propaneStackFlatFunc function, which stacks already aligned (flat) images. The same normalisation process is applied in all the HSC filters.
Footnote 2: [https://github.com/asgr/ProFound](https://github.com/asgr/ProFound)
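The segmentation itself is delegated to ProFound as described above (profoundImDiff followed by profoundDilate with sigma = 2 and size = 21). The sketch below only illustrates how a segmentation map, once available, is applied to mask a cutout for the normalisation step and how the 80% rejection threshold is enforced; `segims` is a hypothetical list of segmentation maps matched to `star_imgs`.

```r
# Hedged sketch; 'segim' stands for the dilated ProFound segmentation map.
mask_cutout <- function(img, segim, max_masked_frac = 0.8) {
  img[segim > 0] <- NA                                   # mask pixels inside segments
  if (mean(is.na(img)) > max_masked_frac) return(NULL)   # discard heavily masked cutouts
  img
}
masked_imgs <- Filter(Negate(is.null), Map(mask_cutout, star_imgs, segims))
```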
In order to be cautious, we also calculate the median stacks with the masked star images and conclude that the PSF reconstructed with the median unmasked stacking technique achieves a higher signal-to-noise ratio without biasing the result. This comparison is discussed in more detail in Section 4.3.
Figure 2: Upper: image of a star used to reconstruct the PSF outer part where the ProFound masks are indicated by the semi-transparent gray spots. The normalisation annulus used to stack all the images is delimited by the two black thin circles. Lower: histogram of the flux values of all the pixels within the annulus from the upper image, where the red vertical line indicates the median value of this distribution.
The leftmost panel of Fig. 3 shows the stack of the stars used to reconstruct the PSF outer part with the selected annulus for the normalisation.
#### 3.2.2 Middle and Inner parts of the PSF
The methodology to construct the middle part of the PSF is similar to that in Sec. 3.2.1. The main difference is in the normalisation step, where we select an annulus of inner radius 200 pixels from the centre and width 20 pixels. In addition, the size of each image used is 1k x 1k pix\({}^{2}\), and images where the masked area is more than 50% are not considered for the stacking process.
In the case of the inner PSF reconstruction step, the size of each image is 500 x 500 pix\({}^{2}\), and the normalisation annulus has an inner radius of 100 pixels and an outer radius of 140 pixels. To stack, we only select the images of stars that have less than 2% of pixels masked. We then proceed to apply the median stack with the unmasked images.
The selected sigma and size values for each filter and region are indicated in the script "1_Stacking.R" at [https://github.com/luciagarate/HSC_PSFs](https://github.com/luciagarate/HSC_PSFs). The middle and inner stacks with their respective selected annuli are shown on the second and third panels of Fig. 3.
#### 3.2.3 Core of the PSF
The inner radius of the annulus for the normalisation step in the core stack is 50 pixels and the outer is 100 pixels, where the size of each image is 200 x 200 pix\({}^{2}\). In this case, the masks for the normalisation step are calculated using the main function of ProFound, profoundProFound. This function offers great flexibility to create large segments around background sources, which is important for this step due to the faintness of the central star. Images that were selected for the stacking process have less than 2% of pixels masked. We also discard the stars that do not have their brightest pixel in the centre of the image. Due to Gaia coordinate errors, it is possible for the brightest pixel of the HSC image to be uncentred; such cases are removed (we only want stars with low proper motion). As in the other three cases, we calculate the median stack with the selected unmasked images of the stars.
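As a concrete illustration of the centring check, the snippet below drops core-stack cutouts whose brightest pixel is not the central one; this is a sketch of the criterion described above, not necessarily how it is implemented in the released script.

```r
is_centred <- function(img) {
  cen  <- ceiling(dim(img) / 2)                              # central pixel of the cutout
  peak <- which(img == max(img, na.rm = TRUE), arr.ind = TRUE)[1, ]
  all(peak == cen)
}
core_imgs <- Filter(is_centred, core_imgs)                   # keep only centred stars
```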
The rightmost panel of Fig. 3 shows the core stack with the selected annulus to normalise the images.
### Combination
The four separate median stacks are combined in order to obtain the final HSC PSF per band and GAMA region.
We begin by combining the outer and middle stacks. To combine both parts, we first normalise them separately by the median flux within an annulus of inner radius 150 pixels and outer radius 160 pixels in each stack. We then replace the pixels inside a circle of radius 150 pix in the outer stack with those of the middle stack; we refer to the result as outer + middle. Similarly, we combine outer + middle with the inner stack, and finally outer + middle + inner with the core stack. The annuli selected to normalise the different parts are listed in Table 2, together with the radii chosen to combine them.
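The sketch below illustrates one such combination step (outer + middle) with the values from Table 2. Both stacks are assumed to share the same centre, so the circular index sets align pixel for pixel; the same function can be reused for the later combinations by changing the annulus and junction radius.

```r
# Hedged sketch: normalise both stacks in a common annulus, then replace the
# pixels of the larger stack inside the junction radius with those of the
# smaller, higher-S/N stack.
combine_stacks <- function(big, small, r_in = 150, r_out = 160, r_join = 150) {
  rad <- function(img) {
    cen <- (dim(img) + 1) / 2
    sqrt(outer((1:nrow(img) - cen[1])^2, (1:ncol(img) - cen[2])^2, "+"))
  }
  rb <- rad(big); rs <- rad(small)
  big   <- big   / median(big[rb >= r_in & rb <= r_out],   na.rm = TRUE)
  small <- small / median(small[rs >= r_in & rs <= r_out], na.rm = TRUE)
  big[rb < r_join] <- small[rs < r_join]   # both circles contain the same pixels
  big
}

outer_middle <- combine_stacks(outer_stack, middle_stack)
```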
The values of the annuli to normalise each image and the radii to combine the regions were chosen by analysing the 1-D radial profile of each stack, where we look for the innermost (brightest) radius that has a good S/N in both parts and where the merging profiles agree well. We show the step-by-step 1-D radial profiles of the combined stacks in the left panel of Fig. 4, and the 1-D radial profiles of the single PSF stacks in the right panel 3. The salmon line in the first panel is the 1-D radial profile of the outer stack, whereas the blue line represents the 1-D profile of the outer and middle stack combination. The radius of this junction (150 pixels) is indicated with the right dashed grey line. Following this idea, the green line represents the combination of the three outer parts, and the purple one is the final combination. The 60 and 20-pixel junction radii are indicated by the middle and left dashed grey lines respectively. All the selected annuli to normalise and radii to replace are the same across the five HSC filters. In the second panel of Fig. 4, we show the 1-D radial profiles of the median single PSF stacks, where each point represents a pixel of the selected PSF region. The salmon points represent the outer region of the PSF, the blue and green points are the pixels from the middle and inner PSF regions respectively, whereas the purple points represent the core of the PSF. We also show the flux error associated with each of the four median stacks in pale blue. The errors are estimated by calculating the 1\(\sigma\)-quantile (half of the range containing the central 68% of data) of each stacked pixel divided by the square root of the number of images within each stack (see Table 1 for reference).
Footnote 3: The 1-D radial profiles are made with the flux and radius values of all the pixels of the respective PSF stack, therefore they are not radially-averaged profiles.
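The per-pixel stacking error quoted above reduces to the following sketch, where `cube` is the three-dimensional array of normalised cutouts entering one median stack (an assumption about how the cutouts are stored, not the layout used in the released script).

```r
stack_error <- function(cube) {                      # cube: nrow x ncol x N images
  n <- dim(cube)[3]
  apply(cube, c(1, 2), function(v) {
    q <- quantile(v, c(0.16, 0.84), na.rm = TRUE)
    (q[2] - q[1]) / 2 / sqrt(n)                      # half the central 68% range / sqrt(N)
  })
}
```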
The spread in the flux pixel values of the 1-D radial profiles at R \(>\) 1000 pix is a factor of \(\sim\) 4.85 larger than the median of the stacking error associated with each individual pixel. In principle, the large scatter could be related to two different error sources: low signal-to-noise ratio and systematic uncertainties. In Sec. 4, we verify that adding more data does not improve the S/N at large radii in the resulting PSFs (i.e. the scatter does not decrease), suggesting systematic uncertainties in the background subtraction process are the only cause of the spread in the PSF 1-D radial profiles at R \(>\) 1000 pix.
Fig. 5 shows the images of the above-mentioned combinations for the different parts: the first panel is the outer stack, the combination of the outer and middle parts is shown in the second panel, the third panel shows the combination of the two outer parts with the inner one and the final combination is shown in the fourth panel.
After combining the four median stacks, we obtain the HSC PSFs with a radial extension of \(\sim\) 5.6 arcmin. We then proceed to apply the following two finishing steps to each of the PSFs in order to obtain the final HSC PSFs per band and GAMA region (a short code sketch follows the list):
* We apply a symmetrisation process by making horizontal, vertical and diagonal reflections to obtain a symmetric 2-D PSF. To do this, we create a new image with the same dimensions as the original PSF. For each pixel in the new image, we sum its original value with the values of the three reflected pixels: the horizontally reflected pixel, the vertically reflected one, and the diagonally reflected one.
* We then normalise the flux of each PSF image by making the sum of all the pixel values equal to one.
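A compact sketch of these two finishing steps is given below; the reflections are summed as described above and the subsequent normalisation absorbs any constant factor.

```r
finalise_psf <- function(psf) {
  sym <- psf +
         psf[nrow(psf):1, ] +                # vertically reflected image
         psf[, ncol(psf):1] +                # horizontally reflected image
         psf[nrow(psf):1, ncol(psf):1]       # image reflected about both axes
  sym / sum(sym)                             # total flux normalised to one
}
```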
The script of the PSF combination process is available as "2_Combining.R" at [https://github.com/luciagarate/HSC_PSFs](https://github.com/luciagarate/HSC_PSFs). The rightmost panel of Fig. 5 shows the extended \(g\)-band HSC PSF in the GAMA region G15 after applying the final details.
### Final stack
We reconstruct the HSC-SSP PDR3 PSFs per band and GAMA region, resulting in four PSFs for each of the five filters. We proceed
to reconstruct the final extended HSC-SSP PDR3 PSFs by median stacking the PSFs within the same band. In the left panel of Fig. 6 we show the final \(g\)-band HSC PSF image up to R \(\sim\) 5.6 arcmin (2000 pixels). In the right panel, we show its 1-D flux radial profile, where the horizontal dashed line indicates the maximum flux value of the PSF divided by two (f\({}_{\rm max}\)/2), and the vertical one indicates half of the full width at half maximum (FWHM/2) of the PSF. In the \(g\)-band, the FWHM is 0.772 arcsec (\(\sim\) 4 pixels). The difference in flux between the peak and the minimum value is \(\Delta\)flux \(\sim\) 10\({}^{8}\), which is equivalent to \(\Delta\)mag \(\sim\) 20.
In the left panel of Fig. 6 we can easily identify some of the CCD artefacts (mostly noticeable around bright stars) described in Aihara et al. (2022). These artefacts are unwanted or unintended effects that occur in CCD images (noise, blooming, hot pixels, dead pixels, etc.). Some of them are identified and interpolated over during the CCD processing, but some artefacts may still be present in the processed images.
The bleeding (or blooming) due to CCD saturation is the horizontal feature that is present in all five bands (see Appendix A). We see this feature not only in the outer stack but also when stacking the stars from the middle region and even when stacking the inner region in some of the bands. Another observable artefact in Fig. 6 is the smooth halo component around the centre of the PSF, which is also present in all the stacks. The HSC-SSP PDR3 PSFs in the other four bands are shown in Appendix A.
In Fig. 7 we compare our \(r\)-band HSC-SSP PDR3 PSF flux profile (dark blue), extrapolated out to 50 arcmin (dashed red line), with the \(r\)-band PSF flux profiles of the 0.9 m Burrell Schmidt telescope (pink, Slater et al., 2009) and the Dragonfly Telephoto Array (green, Abraham & van Dokkum, 2014). The three profiles were normalised to have a flux equal to one between 0.1 and 5 arcmin, and the minimum radius plotted is 0.1 arcmin. In all three PSFs, more than 90% of the total flux is contained at R \(<\) 0.1 arcmin (Abraham & van Dokkum, 2014), with our PSF accounting for 97.2% of the total light within R \(<\) 0.1'. This means that less than \(\sim\) 3 % of the total flux is contained in the extended wings of the HSC-SSP PDR3 PSF. Both the Burrell Schmidt telescope and Dragonfly Telephoto Array are highly optimized for low surface brightness imaging and have well-behaved PSFs out to large radii. In comparison to them, we find the performance of our HSC-PDR3 PSF to be remarkably good.
## 4 Tests
We run a number of tests to demonstrate the validity of the choices made during the PSF reconstruction process. We first test the stability across time and space of our PSFs by comparing the reconstructed PSFs per GAMA field with the final PSF in each band. We then check that the number of stars with mag\({}_{\star,g}\)\(<\) 8 per filter and GAMA region is optimal to reconstruct the outer part of the PSF. Finally, we verify that the resulting PSFs when using unmasked images achieve a higher signal-to-noise ratio at large radii than the PSFs made with masked images.
### PSF temporal and spatial stability
As mentioned before in Sec. 3.3, the PSF reconstruction process begins with creating a PSF per GAMA field and then stacking the four PSFs per band to create the final one. In order to test the PSF performance, we analyse the differences of each reconstructed PSF
| | **Normalisation annulus [pix]** | **Combination radius [pix]** |
| --- | --- | --- |
| Outer | 150-160 | 150 |
| Middle | 150-160 | |
| Outer + Middle | 60-70 | 60 |
| Inner | 60-70 | |
| Outer + Middle + Inner | 20-30 | 20 |
| Core | 20-30 | |

Table 2: Annuli used to normalise each pair of stacks before they are combined, and the radii at which the pixels of the larger stack are replaced by those of the smaller one (all values in pixels from the stack centre).
Figure 3: First panel shows the outer stack, the second panel shows the middle stack, and the third and fourth panels are the inner and core stacks, all with their respective chosen annuli to normalise each star image indicated by thin black circles. The sizes of the respective stacks are: 4k x 4k pix\({}^{2}\), 1k x 1k pix\({}^{2}\), 500 x 500 pix\({}^{2}\) and 200 x 200 pix\({}^{2}\). All the images are from the G15 region in the \(g\)-band.
Figure 4: First panel: 1-D radial profiles of the outer stack (salmon), the combination of the outer and middle stacks (blue), the combination of the outer, middle, and inner stacks (green), and the combination of the outer, middle, inner and core stacks (purple), which is the final HSC \(g\)-band PSF. Second panel: 1-D radial profiles of the individual stacks, where the outer stack is represented by the salmon points, the middle stack by the blue points, the inner stack is indicated by the green points, and the core stack by the purple points. We also show the flux stack error per pixel associated with each of the four median stacks, where the error is calculated as the \(1\sigma\)-quantile divided by the square-root of the number of images within each stack. In both panels, the right dashed grey line indicates the radius where we make the first combination (150 pix), the middle one indicates the radius where we join together the outer + middle with the inner stack (60 pix) and the left one indicates the radius where this combination and the core stack are combined (20 pix). The profiles are made with observations from the G15 GAMA region in the \(g\)-band.
Figure 5: Left panel: outer PSF stack, where the annulus that was selected to normalise both outer and middle stacks is represented by the two thin black circles. The inner radius (150 pix) delimits the pixels that we replace with the middle stack. Second panel: outer + middle combination with the annulus of outer radius 70 pix and inner one 60 pix, where we replace the inner stack. Middle panel: outer + middle + inner combination with the annulus situated between 30 and 20 pixels from the centre of the image, where we replace with the core stack inside the black circle delimited by the inner radius. Fourth panel: HSC \(g\)-band PSF before applying the final details (normalisation + symmetrisation). Right panel: HSC \(g\)-band PSF after applying the final details. All the observations are from the G15 GAMA region.
per GAMA region relative to the final PSF. To quantify this difference we use the percentage error:
\[\frac{\mathrm{GXX_{data}-Final_{model}}}{\mathrm{Final_{model}}}, \tag{1}\]
where GXX\({}_{\mathrm{data}}\) represents the PSF from each GAMA field GXX and Final\({}_{\mathrm{model}}\) is a model of the final PSF. In Fig. 8 we show the percentage difference between the \(g\)-band PSFs of each GAMA field and the \(g\)-band final HSC PSF model. The pink inset enlarges the profiles at the sub-1% level, to show the tiny differences. Only slight PSF variations over the GAMA regions are present in the \(g\)-band and also in the rest of the filters, indicating temporal and spatial stability. Note that G02 has more scatter at large radii due to the lower quantity of stars that contribute to the outer stack (we only use data where GAMA and HSC overlap, and in the case of G02 they only partially overlap).
### Outer stack
Compared to the stars of the middle, inner, and core regions, the quantity of bright stars (mag\({}_{\star,g}\) < 8) across all five filters is considerably lower (see Table 1). To check whether more bright stars would improve the PSF S/N at large radii, we reconstruct the outer PSF with all the stars in HSC Wide that satisfy the condition mag\({}_{\star}\) < 8, comparing it to the outer PSF created with only the mag\({}_{\star}\) < 8 stars from the overlapping regions between HSC Wide and GAMA. To find all the bright stars in the Spring, Autumn, and HECTOMAP fields (i.e. HSC Wide), we also make use of the Gaia Data Release 3 catalogue. The resulting number of bright stars in HSC Wide per filter is \(\sim\) 1740. This analysis is only done in the \(g\)-band.
We verify that increasing the number of bright stars makes no significant improvement in the signal-to-noise ratio of the PSF. In Fig. 9 we express the differences between both \(g\)-band outer stacks by showing the percentage error of each outer stack and the final
Figure 6: HSC-SSP PDR3 PSF in the \(g\)-band. The left panel is the image of the PSF and the right panel is the 1-D radial profile. The PSF is normalised to have a total flux equal to one. The horizontal dashed line indicates the f\({}_{\mathrm{max}}\)/2 of the profile, where f\({}_{\mathrm{max}}\) is the maximum flux value, and the FWHM/2 (\(\sim\) 0.772 arcsec) is specified by the vertical dashed line.
Figure 7: Comparison of the \(r\)-band PSF radial profiles of Burrell Schmidt telescope (pink), Dragonfly Telephoto Array (green), and our HSC-PDR3 model (dark blue) with an extrapolation up to 50 arcmin shown as the red dashed line. Each of the profiles is normalised so that the flux between 0.1 arcmin and 5 arcmin is one.
HSC PSF model \(\text{Final}_{\text{model}}\) mentioned in Sec. 4.1 (in this model the outer stack is created with \(\text{m}_{g}<8\) stars from HSC Wide \(\cap\) GAMA). The blue curve shows the percentage error that compares the outer stack created with \(\sim\) 1740 stars (all \(\text{m}_{g}<8\) stars from HSC Wide) and the model. The pink curve shows the percentage error between the outer stack created with the \(\sim\) 270 \(\text{m}_{g}<8\) HSC Wide \(\cap\) GAMA stars and the model itself. The big spread in both curves at large radii is probably caused by background subtraction systematics. Since there are no clear improvements observed by stacking a higher quantity of stars, we decide to continue working with the simplest case: the outer region made of only the stars from the overlapping areas between HSC and GAMA.
### Stacking technique
We use the median statistic estimator to stack the unmasked images, as this is not heavily biased by the higher flux value pixels of the background objects. Additionally, the process of robustly masking sources may be problematic and induce errors. When applying aggressive object masks in order to be robust against biases, surrounding halo pixels from the central stars could also be masked (reducing the S/N).
To demonstrate the robustness of our approach, we reconstruct an HSC PSF where all the median stacks were made with the background-masked star images. The masks are created in exactly the same way as in the normalisation step: by making use of the profoundImDiff and profoundDilate functions from the ProFound package. We then compare the PSF made with the masked images and the PSF made with the unmasked images per GAMA field. Fig. 10 shows the relative difference between both PSFs and the model of the final PSF \(\text{Final}_{\text{model}}\) (created by stacking the PSFs from each GAMA field and using the unmasking technique) in the \(g\)-band. Each panel indicates one GAMA region, with the blue percentage error curve coming from the PSF made with the masked images and the salmon curve coming from the PSF where the star images were unmasked. In all four panels, the salmon curves are less noisy across the different stacks (the junction radii of the four stacks are delimited by the dashed black lines) and appear to be smoother. The maximum relative difference in the \(g\)-band is 1%, which represents the best-case scenario compared with the other bands, as in the \(r\)-band this percentage goes up to 3% and up to 4% in the rest of the filters. This comparison confirms that stacking the unmasked star images with a median technique leads to PSFs with a better signal-to-noise ratio than the PSFs generated when we apply a median stack to the background-masked images. Improving the S/N via masking would require additional effort and time for probably little gain.
## 5 PSF applications
In this section, we show two examples of the performance of our new HSC-SSP PDR3 PSFs in scientific applications. The first example consists of using our empirical PSFs to generate a two-dimensional model of an HSC-PDR3 star image. Additionally, we use the PSFs to model and remove the scattered light in an image of a GAMA group of 14 galaxies situated in the HSC UD SXDS field.
### Star application
In the first application, we use the HSC-SSP PDR3 PSFs to fit a two-dimensional model per band to HSC-PDR3 observations of a bright star located in G09. The images of this star in the five different HSC filters are shown in the first row of Fig. 11. The next row shows the masked version of these images, where we use the ProFound package to mask the background sources and the cores of the stars. The central part of each image is masked up to a radius where the pixels are no longer saturated. This masking limit can easily be determined by plotting the star profiles. The reason behind avoiding the saturated pixel values is that they introduce fake flux measurements and could have a dramatic effect on the subsequent star profiling. Once we have correctly masked the images, we proceed to make the 2-D photometric star profile modelling with ProFit4(Robotham et al., 2017). We use this Bayesian 2-D galaxy profiling tool to generate a star model per band, where we make use of the already parameterized point-source profile function from the libProFit library. In this case, the only free parameter in the model that we fit is the star magnitude. We then use the profitMakeModel function, which gives us the option to specify the PSF model (our empirical HSC-SSP PDR3 PSF images) that gives the shape to the star profiles. In this step, the PSF images are renormalized to ensure the flux contained per band is correct given the magnitude zero-point of HSC (\(\text{m}_{zero}\) = 27 mag). The resulting 2-D star models per band are shown in the third row of Fig. 11, and the fitted star magnitudes are indicated in the titles of each column. In the fourth row, we present the residuals obtained by subtracting the models from the observations, revealing
Figure 8: Percentage error between each GAMA-field PSF and a model for the final PSF in the \(g\)-band, where different line colours correspond to the different GAMA regions. The rectangular pink box presents a zoomed-in view between 1 and 1000 pix on the x-axis and between -1 and 1 on the y-axis. The G02 PSF has more scatter because the number of stars in the overlapped region between G02 and HSC-Wide is smaller.
some residuals around the core of the stars. However, when we plot the subtracted images divided by the corresponding ProFit model (see fifth row), we see that the percentage difference is very low around the central regions. Finally, in the sixth row, we show the 1-D radial profile of the star up to a radius of R \(\sim\) 1500 pixels in light blue, the masked star profile in dark blue, and the ProFit model profile in red. The dashed black lines indicate the radii where the core of each star is saturated. We observe that the point-source models that were constructed with our empirical HSC-SSP PDR3 PSF images reproduce well the data across all five filters, including the vertical spikes in the \(Y\)-band star image. We also see that the noise of the ProFit models is reduced compared to the noise of the star images as a consequence of the PSF stacking technique reconstruction.
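For completeness, the fitting set-up can be sketched as follows. The component and argument names (pointsource, magzero, psf, dim) follow our reading of ProFit's interface and should be checked against the released fitting script; `xc`, `yc` and `mag_star` stand for the star centre and the fitted magnitude.

```r
# Hedged sketch of the ProFit point-source model used in Fig. 11.
library(ProFit)
modellist  <- list(pointsource = list(xcen = xc, ycen = yc, mag = mag_star))
star_model <- profitMakeModel(modellist, magzero = 27,
                              psf = psf_band, dim = dim(star_img))$z
resid_img  <- star_img - star_model          # fourth row of Fig. 11
frac_resid <- resid_img / star_model         # fifth row of Fig. 11
```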
### Galaxy group application
Our goal in this application is to make an estimation of how much of the total light of a galaxy group is actually in the intra-halo light component (i.e. the fraction of IHL or f\({}_{\rm IIL}\)). As the IHL is a faint component, the PSF-scattered flux could have a major impact on its measurement. We compensate for the PSF effect by estimating how much flux is spread, reallocating the lost flux to every source, and removing the PSF-scattered light from the original group image. Building on this innovative PSF removal technique, we make a more realistic IHL estimation.
#### 5.2.1 PSF-scattered light removal
We use the HSC-SSP PDR3 PSFs to remove the scattered light from a group image of 14 galaxies located in the HSC UD SXDS field. We use the GAMA galaxy group catalogue G\({}^{3}\)Cv10 to select the group, which has GAMA GroupID = 400020, halo mass M\({}_{\rm halo}\) = 8.7 \(\times\) 10\({}^{13}\) h\({}^{-1}\) M\({}_{\odot}\), virial radius R\({}_{\rm vir}\) = 765 kpc, and redshift z = 0.258. For the purpose of this selection, we chose among groups with more than 10 members (G\({}^{3}\)Cv10 mass estimations are robust when N \(\geq\) 10) and we discard the groups that have bright stars within 5 arcmin from the BGG. In Fig. 12 we present a 6.5 arcmin\({}^{2}\) color composite image of the selected group, made with the HSC _g,i,Y_-filters, where the brightest group galaxy (BGG) of the system is located at the centre of the image and the pink circles indicate the centres of each spectroscopically confirmed group member. However, we make our IHL estimations on a smaller region of 2.8 arcmin\({}^{2}\) in order to avoid the two bright stars located in the north of the field, and therefore only 10 of the 14 members are considered in this cutout.
The process we follow to remove the PSF-scattered light from the 2.8 arcmin\({}^{2}\) galaxy group image is explained in detail here (a condensed code sketch follows the list):
1. subtract the median sky value from the group image;
2. run ProFound in the source detection mode. ProFound identifies the objects in the image, creates segmentation maps around every detected source, and determines a flux estimation for each one;
3. reset all pixel values outside the segmentation maps (what we call "sky") to zero;
4. to reproduce the PSF effect, convolve the image with sky = 0 (we ensure the spread light comes purely from the sources) with the PSF via the fast Fourier transform (FFT) technique by making use of the profitMakeConvolver and profitConvolve functions;
5. to quantify the spread effect, count the flux inside each segment before (Flux\({}_{\rm meas}\)) and after the convolution (Flux\({}_{\rm conv}\)). This gives us a factor \(C\) = Flux\({}_{\rm meas}\)/Flux\({}_{\rm conv}\);
6. to ensure the conservation of the flux, rescale the measured flux of each segment by doing Flux\({}_{\rm real}\) = Flux\({}_{\rm meas}\cdot C\) in the image with sky=0, where Flux\({}_{\rm real}\) is the real flux after compensating for the PSF-scattered light. This image (Output 1) contains only Flux\({}_{\rm real}\) (unaffected by the PSF) from the galaxy group;
7. convolve Output 1 with the HSC-SSP PDR3 PSF;
8. mask the resulting PSF-convolved image and subtract it from the masked original group image. This image (Output 2) contains only the PSF-subtracted IHL flux.
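A condensed sketch of steps (iii)-(vi) is given below. Here `segim` is the ProFound segmentation map from step (ii), and `convolve_psf` is a placeholder for the FFT convolution performed with profitMakeConvolver/profitConvolve; both names are illustrative.

```r
img_src <- group_img - median_sky                 # (i)   subtract the median sky
img_src[segim == 0] <- 0                          # (iii) set sky pixels to zero

img_conv <- convolve_psf(img_src, psf_r)          # (iv)  re-apply the PSF

seg_id    <- segim[segim > 0]
flux_meas <- tapply(img_src [segim > 0], seg_id, sum)
flux_conv <- tapply(img_conv[segim > 0], seg_id, sum)
C         <- flux_meas / flux_conv                # (v)   per-segment spread factor

img_real <- img_src                               # (vi)  Flux_real = Flux_meas * C
for (id in names(C))
  img_real[segim == as.numeric(id)] <- img_src[segim == as.numeric(id)] * C[id]
# img_real is Output 1; convolving it again with psf_r (vii) and subtracting the
# masked result from the masked original image (viii) gives Output 2, the IHL map.
```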
In Fig. 13 we show a schematic overview of the processing steps mentioned above to remove the PSF scattered light from a group image. In our case, the analysis is focused on estimating the IHL of the galaxy group only in the HSC _r_-filter. Both \(g\) and _r_-bands achieve very similar effective depths, but due to the cosmetically cleaner images, we use the _r_-band for this initial application. We also note the scatter in the mass-to-light ratio is smaller in the _r_-band (see Robotham et al., 2020), suggesting this filter is the most likely to provide a better approximation of the IHL stellar light fraction.
In (i), the median sky value in the _r_-band image of our group is sky\({}_{r}\) = 0.0005. After running ProFound, setting the sky to 0 in (iii)
Figure 9: The pink percentage error curve shows the difference between the data from the outer stack created with the m\({}_{g}\) < 8 stars from HSC Wide \(\cap\) GAMA and a model created with this data. Instead, the blue curve shows the percentage error between the data of the outer stack with all m\({}_{g}\) < 8 stars from HSC Wide and the model mentioned above. By comparing both outer parts of the PSF profiles, we find no significant differences between them, and in this way we verify that stacking more bright stars does not improve the resulting PSFs.
is a key step to estimate how much flux is lost outside the segments as a consequence of the PSF convolution. Without setting sky = 0, the scattered light from the sky could get inside the segments, resulting in an inaccurate estimation of the PSF effect. To convolve, we use the \(r\)-band HSC-SSP PDR3 PSF model; both the (iv) and (vii) convolutions take \(\sim 4\) sec. Although the observed group image is already affected by the PSF, we only use the (iv) convolution to get an estimation of the percentage of flux spread within every segment. Each segment has this information saved in the factor \(C\). By doing Flux\({}_{\rm real}\) = Flux\({}_{\rm meas}\cdot C\), we transform the measured flux of every source to the real flux unaffected by the PSF. Output 1 contains only the flux from the galaxy members with Flux\({}_{\rm real}\); this image is used to estimate the SB profiles of the group. However, the scattered flux outside the segments is still physically present in the image. To remove it, we simulate again the PSF effect by convolving Output 1 with the PSF in (vii). As the sky is zero in Output 1, the scattered flux in (6) outside the segments comes purely from the sources. We then mask the resulting image (6) and the original image (1). In the last step (viii), we subtract masked (6) from masked (1), where the remaining flux is the IHL of the group after removing the PSF flux. We use this image to estimate the SB profiles of the IHL.
#### 5.2.2 IHL estimation
Figure 14 shows the surface brightness profiles measured from the original imaging (green profiles) versus the profiles from the image where we subtract the PSF-scattered light (orange profiles). To calculate these SB profiles we estimate the median surface brightness within consecutive circular annular apertures. The radial range of these apertures goes from R = 0 to R \(\sim\) 330 kpc, divided into 50 linearly spaced bins. Each annulus within these bins has a width of 10 pixels. The solid lines present the galaxies plus IHL
Figure 10: Difference between a PSF image constructed by stacking the unmasked star images versus stacking the masked star images, expressed by percentage errors per GAMA field. The salmon curve shows the difference between the unmasked PSF data and the final PSF model, where the final PSF is the result of stacking the PSFs from each GAMA field generated with the unmasking technique. Instead, the blue curve shows the difference between the masked PSF image and the same final PSF model. The relative difference is noisier when using the masking technique, meaning that stacking unmasked star images leads to a better S/N in the final PSF. The \(g\)-band represents the optimum case, as in the other bands the percentage errors go up to 4%.
Figure 11: HSC-SSP PDR3 PSF application to model a real HSC star image located in G09, where each column represents each of the HSC _g,r,i,Z,Y_ filters. The first row shows the image of the star up to an extent of R \(\sim\) 3000 pix. The second row shows the masked version of the star image. The third row shows the ProFit model of the star, where we are making use of our HSC PSFs. The resulting fitted magnitudes per band are indicated in the headings of each column. In the next row, we show the result of subtracting the star ProFit model from the masked star image (2\({}^{nd}\) row - 3\({}^{rd}\) row). The fifth row shows the subtraction of the masked image from the star model divided by the star model. The bottom panels show the 1-D radial profile of the star in light blue, the profile of the masked version of the star in dark blue, and the profile of the ProFit star model in red. The dashed black line indicates the radius up to where we mask the core of the star.
flux, whereas the dashed lines are the SB profiles when we only consider the IHL component. The Group+IHL profiles are generated separately from Output 1 (which contains only the flux from the group) and Output 2 (which contains only the flux from the IHL). The pale lines come from the Group+IHL profiles when different values are taken in the sky subtraction step. This range goes from 0.0002 to 0.0013, where the dark green and orange profiles are made with a mean sky subtraction value of sky\({}_{r}\) = 0.0005. All profiles are in the observed reference frame, except for the brown and blue lines, which are the SB profiles when accounting for the SB dimming in the form \((1+z)^{-4}\) (Tolman, 1930).
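The annular SB profiles described above reduce to the following sketch: the median flux in consecutive annuli is converted to mag arcsec\({}^{-2}\) with the HSC zero point and the 0.168 arcsec pixel scale. The bin edges (in pixels) and the image centre are inputs; the function is a sketch, not the released measurement code.

```r
sb_profile <- function(img, cen, edges_pix, magzero = 27, pixscale = 0.168) {
  rr <- sqrt(outer((1:nrow(img) - cen[1])^2, (1:ncol(img) - cen[2])^2, "+"))
  sapply(seq_len(length(edges_pix) - 1), function(i) {
    f <- median(img[rr >= edges_pix[i] & rr < edges_pix[i + 1]], na.rm = TRUE)
    magzero - 2.5 * log10(f / pixscale^2)          # median SB in mag arcsec^-2
  })
}
```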
We then measure the \(r\)-band IHL fraction of the group in the observed reference frame using these processed images. This is computed by summing all the IHL flux within a circle of radius R centred on the BGG, divided by the total flux produced by the members of the group (inside the 2.8 arcmin\({}^{2}\) region) and the IHL (Group+IHL), also contained in the same circle of radius R. The profile of the f\({}^{r}_{\rm IHL}\) as a function of the radius of the circular aperture is shown in the bottom panel of Figure 15, where the orange dashed line indicates the f\({}^{r}_{\rm IHL}\) when removing the PSF-scattered light and the green solid line indicates the f\({}^{r}_{\rm IHL}\) from the original image. The upper panel is a zoom-in region of 2.8 x 2.8 arcmin\({}^{2}\) over the group after removing the PSF light and masking all the sources, where the 200 kpc circular aperture is indicated by the black dashed circle and the members of the group by black crosses.
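The cumulative IHL fraction shown in the bottom panel of Fig. 15 follows directly from the two output images. In the sketch below, `img_group` and `img_ihl` stand for Output 1 and Output 2 respectively and `cen` for the pixel position of the BGG; these names are illustrative.

```r
f_ihl_profile <- function(img_group, img_ihl, cen, radii_pix) {
  rr <- sqrt(outer((1:nrow(img_ihl) - cen[1])^2, (1:ncol(img_ihl) - cen[2])^2, "+"))
  sapply(radii_pix, function(R) {
    ihl   <- sum(img_ihl  [rr < R], na.rm = TRUE)           # IHL flux inside R
    total <- sum(img_group[rr < R], na.rm = TRUE) + ihl     # Group+IHL flux inside R
    ihl / total
  })
}
```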
We also show the distribution of IHL surface brightness by plotting the cumulative distribution function (CDF) in Fig. 16. The CDF of the Group+IHL is in the left panel with solid lines and the single IHL component is in the right panel with dashed lines. The orange CDFs are from the PSF-subtracted images and the green CDFs are from the original images.
#### 5.2.3 Implications for IHL measurements
In Figure 14, the orange curves are fainter than the green curves in the outskirts due to the removal of PSF-scattered light. It is clear that at very low SB the profiles are highly dependent on the selected sky subtraction value, meaning that the profile errors are dominated by our subtraction choices. We can see how the scattered light caused by the PSF leads to the flux measurements being overestimated at R \(>\) 90 kpc, by up to almost \(\sim 2\) mag arcsec\({}^{-2}\).
In Figure 15, the percentage of flux in the IHL component ranges from \(\sim\) 5% at 60 kpc to \(\sim\) 20% at 240 kpc. Specifically, the f\({}^{r}_{\rm IHL}\) is overestimated by \(\sim\) 30% at R = 200 kpc (black dashed vertical line) when we do not remove the scattered light of the PSF.
In Figure 16, the Group+IHL SB pixel values range between 28 and 19 mag arcsec\({}^{-2}\), whereas the SB pixels of the IHL component range between 28 and 25 mag arcsec\({}^{-2}\). In the right panel, we can see how the fraction of pixels fainter than a certain surface brightness limit (SBL) is higher for the IHL PSF-subtracted CDF, which agrees with our previous findings in Figures 14 and 15.
The most commonly used technique in the literature to separate the BGG from the IHL is to apply a SB cut at a given SBL, which assumes that all the light fainter than this limit is IHL. The widely adopted SBL in the \(V\)-band is \(\mu_{V}^{\rm lim}\) = 26 mag arcsec\({}^{-2}\) (Feldmeier et al., 2002, 2004; Mihos et al., 2005; Rudick et al., 2006, 2009, 2010, 2011). In the \(r\)-band, a SBL of \(\mu_{r}^{\rm lim}\) = 26.4 mag arcsec\({}^{-2}\) has been used in the literature (Krick and Bernstein, 2007). The vertical dashed black lines in both panels of Figure 16 indicate the SBL \(\mu_{r}^{\rm lim}\) = 26.4 mag arcsec\({}^{-2}\). When applying this SB cut method in the left panel, only \(\sim\) 11% of the Group+IHL pixels are fainter than \(\mu_{r}^{\rm lim}\). This result is nearly independent of the PSF subtraction. However, when we analyse this SBL in the right panel of Fig. 16 (black dashed horizontal lines), we can appreciate that the fraction of IHL pixels fainter than \(\mu_{r}^{\rm lim}\) = 26.4 mag arcsec\({}^{-2}\) is \(\sim\) 0.45 for the green profile and \(\sim\) 0.54 in the IHL PSF-subtracted scenario. This means that approximately half of the light would not be considered as IHL according to the SB cut technique, whereas following our methodology we find IHL flux up to an SB of \(\mu_{r}\sim\) 25 mag arcsec\({}^{-2}\).
## 6 Conclusions
In this paper, we make public our new HSC-SSP PDR3 empirical PSF models for all \(g\),\(r\),\(i\),\(Z\),\(Y\) HSC filters. Using stacks of star images within a wide brightness range and following the method outlined by Infante-Sainz et al. (2020), we characterise the PSFs up to an extent of R \(\sim\) 5.6 arcmin. These extended models have a significant impact on the study of low surface brightness structures and will assist the HSC community in recovering the spread light from point and extended sources, as well as removing the contaminating light at large angles.
By comparing the PSF models from the different GAMA regions, we verify that the HSC PDR3 PSFs are stable across space and time. We then prove that our sophisticated median stacking technique with the unmasked background sources shows better results compared to the commonly used median stacking method of masking all background sources. Finally, we also demonstrate that increasing the number of stacked star images does not lead to better PSFs in terms of S/N at large radii.
We present two examples showing the performance of our PSFs in scientific applications. In the first application, we use the HSC PDR3 PSFs to fit a 2-D model of a star image in each filter. We demonstrate that our PSF models effectively make a good characterisation
Figure 12: RGB image of the selected HSC UD SXDS group of galaxies for our HSC-SSP PDR3 PSFs analysis, centred on the BGG of the system. The centres of each of the 14 galaxy group members are indicated with pink circles. This GAMA group is identified by the ID 400020 in the G\({}^{3}\)Cv10 group catalogue, with a halo mass of M\({}_{\rm halo}\) = 8.7 \(\times\) 10\({}^{13}\) h\({}^{-1}\) M\({}_{\odot}\) and a redshift of z = 0.258. The image size is 6.5 arcmin.
of these star images. In the second application, we select a group of 14 galaxies from the GAMA galaxy group catalogue G\({}^{3}\)Cv10 located in the HSC UD SXDS field to analyse the IHL component in the \(r\)-band (GAMA GroupID = 400020, M\({}_{\rm h}\) = 8.7 \(\times\) 10\({}^{13}\) h\({}^{-1}\) M\({}_{\odot}\), z = 0.258). Using advanced source extraction techniques, we measure the surface brightness profiles for both the Group+IHL and the single IHL component up to R \(\sim\) 300 kpc and SB \(\sim\) 31 mag arcsec\({}^{-2}\). We show that the IHL surface brightness can be overestimated by almost 2 mag arcsec\({}^{-2}\) in the original group image. Following this approach, we estimate the IHL fraction radial profile of 400020. The results show a median f\({}^{r}_{\rm IHL}\) of \(\sim\) 0.13, which is found to be overestimated by \(\sim\) 30% when the PSF-scattered light is not removed. In addition, we find that the widely used SB cut of \(\mu^{\rm lim}_{r}\) = 26.4 mag arcsec\({}^{-2}\) to separate the IHL component from the galaxy profile can underestimate the f\({}^{r}_{\rm IHL}\) by a factor of two. This result is independent of the PSF subtraction. We compare the f\({}^{r}_{\rm IHL}\) and IHL SB values with and without the removal of the PSF-spread light in Table 3.
The 2-D HSC PDR3 PSFs models are available as FITS files at [https://github.com/luciagarate/HSC_PSFs](https://github.com/luciagarate/HSC_PSFs), as well as the scripts to reproduce the reconstruction process.
## Acknowledgements
LPGN, ASGR, and SB acknowledge support from the ARC Future Fellowship scheme (FT200100375). LJMD acknowledges support from the ARC Future Fellowship scheme (FT200100055). This paper is based on data collected at the Subaru Telescope and retrieved from the HSC data archive system, which is operated by the Subaru Telescope and Astronomy Data Center (ADC) at NAOJ. Data analysis was in part carried out with the cooperation of the Center for Computational Astrophysics (CfCA), NAOJ. We are honored and grateful for the opportunity of observing the Universe from Maunakea, which has the cultural, historical, and natural significance in Hawaii. The Hyper Suprime-Cam (HSC) collaboration includes the astronomical communities of Japan and Taiwan, and Princeton University. The HSC instrumentation and software were developed by the National Astronomical Observatory of Japan (NAOJ), the Kavli Institute for the Physics and Mathematics of the Universe (Kavli IPMU), the University of Tokyo, the High Energy Accelerator Research Organization (KEK), the Academia Sinica Institute for Astronomy and Astrophysics in Taiwan (ASIAA), and Princeton University. Funding was contributed by the FIRST program from the Japanese Cabinet Office, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), the Japan Society for the Promotion of Science (JSPS), Japan Science and Technology Agency (JST), the Toray Science Foundation, NAOJ, Kavli IPMU, KEK, ASIAA, and Princeton University. This paper makes use of software developed for the Large Synoptic Survey Telescope. We thank the LSST Project for making their code available as free software at [http://dm.lsst.org](http://dm.lsst.org).
We have used catalogues from GAMA, a joint European-Australasian project based around a spectroscopic campaign using the Anglo-Australian Telescope. The GAMA input catalogue
Figure 13: Schematic view of the processing steps outlined in Sec. 5.2 to remove the PSF-scattered light from the group image. All the panels (except for the PSF) are images of a zoom-in region of 2.8 x 2.8 arcmin\({}^{2}\) in the \(r\)-band, centred on the BGG of the 400020 GAMA group. The sources identified by ProFound in (ii) are delimited by multi-colored segments.
is based on data taken from the Sloan Digital Sky Survey and the UKIRT Infrared Deep Sky Survey. Complementary imaging of the GAMA regions is being obtained by a number of independent survey programmes including GALEX MIS, VST KiDS, VISTA VIKING, WISE, Herschel-ATLAS, GMRT and ASKAP providing UV to radio coverage. GAMA is funded by the STFC (UK), the ARC (Australia), the AAO, and the participating institutions. The GAMA website is [http://www.gama-survey.org/](http://www.gama-survey.org/).
All of the work presented here was made possible by the free and open R software environment (R Core Team 2023). All figures in this paper were made using the R magicaxis package (Robotham 2016b). This work also makes use of the celestial package (Robotham 2016a).
## Data Availability
The HSC-SSP PDR3 PSFs FITS files and reconstruction scripts are available at [https://github.com/luciagarate/HSC_PSFs](https://github.com/luciagarate/HSC_PSFs). The images are from the Hyper Suprime-Cam Public Data Release 3 (HSC-PDR3; Aihara et al. 2022) and can be obtained from [https://hsc-release.mtk.nao.ac.jp/doc/index.php/data-access_pdr3/](https://hsc-release.mtk.nao.ac.jp/doc/index.php/data-access_pdr3/). The catalogue information was extracted from the Galaxy And Mass Assembly (GAMA) survey Galaxy Group Catalogue (G\({}^{3}\)Cr10; Robotham et al. 2011).
|
2309.09364 | Building a P2P RDF Store for Edge Devices | The Semantic Web technologies have been used in the Internet of Things (IoT)
to facilitate data interoperability and address data heterogeneity issues. The
Resource Description Framework (RDF) model is employed in the integration of
IoT data, with RDF engines serving as gateways for semantic integration.
However, storing and querying RDF data obtained from distributed sources across
a dynamic network of edge devices presents a challenging task. The distributed
nature of the edge shares similarities with Peer-to-Peer (P2P) systems. These
similarities include attributes like node heterogeneity, limited availability,
and resources. The nodes primarily undertake tasks related to data storage and
processing. Therefore, the P2P models appear to present an attractive approach
for constructing distributed RDF stores. Based on P-Grid, a data indexing
mechanism for load balancing and range query processing in P2P systems, this
paper proposes a design for storing and sharing RDF data on P2P networks of
low-cost edge devices. Our design aims to integrate both P-Grid and an
edge-based RDF storage solution, RDF4Led for building an P2P RDF engine. This
integration can maintain RDF data access and query processing while scaling
with increasing data and network size. We demonstrated the scaling behavior of
our implementation on a P2P network, involving up to 16 nodes of Raspberry Pi 4
devices. | Xuanchi Guo, Anh Le-Tuan, Danh Le-Phuoc | 2023-09-17T20:09:55Z | http://arxiv.org/abs/2309.09364v1 | # Building a P2P RDF Store for Edge Devices
###### Abstract.
The Semantic Web technologies have been used in the Internet of Things (IoT) to facilitate data interoperability and address data heterogeneity issues. The Resource Description Framework (RDF) model is employed in the integration of IoT data, with RDF engines serving as gateways for semantic integration. However, storing and querying RDF data obtained from distributed sources across a dynamic network of edge devices presents a challenging task. The distributed nature of the edge shares similarities with Peer-to-Peer (P2P) systems. These similarities include attributes like node heterogeneity, limited availability, and resources. The nodes primarily undertake tasks related to data storage and processing. Therefore, the P2P models appear to present an attractive approach for constructing distributed RDF stores. Based on P-Grid, a data indexing mechanism for load balancing and range query processing in P2P systems, this paper proposes a design for storing and sharing RDF data on P2P networks of low-cost edge devices. Our design aims to integrate both P-Grid and an edge-based RDF storage solution, RDF4Led for building an P2P RDF engine. This integration can maintain RDF data access and query processing while scaling with increasing data and network size. We demonstrated the scaling behavior of our implementation on a P2P network, involving up to 16 nodes of Raspberry Pi 4 devices.
The Semantic Web, Peer-To-Peer system, Distributed RDF Store, Edge Devices
contain distributed edge devices (Kirshman et al., 2017). It can support the implementation of distributed edge-based applications by equipping edge devices with the capability to cooperate to achieve common goals. The P2P model for edge computing can leverage many edge devices' computational and storage resources. It can also offer flexibility for dynamic edge networks and enhance information sharing between edge nodes (Kirshman et al., 2017).
This motivates us to use a P2P model to build an RDF store for lightweight edge devices to manage and process large-scale RDF data efficiently. P-Grid (Beng et al., 2015) is a structured P2P system that provides load balancing and efficient search using randomised routing. Besides, it abstracts a trie structure, which makes it suitable for processing range queries commonly used in RDF data querying. Notwithstanding, its original design does not support the RDF data model and edge devices. Meanwhile, RDF4Led was developed as a lightweight RDF storage and SPARQL processor that is tailor-built for edge hardware. Consequently, integrating P-Grid and RDF4Led can provide a promising solution to create a decentralised architecture to store and share RDF data on the edge of the IoT. On top of the RDF4Led storage design, we add an additional index layer to enable indexing on the P2P system.
The contributions of this paper are as follows:
1. An alternative design for a distributed RDF store for the P2P system of edge devices based on RDF4Led and P-Grid.
2. A complete implementation of a distributed RDF store in Java by integrating the P-Grid and RDF4Led code base.
3. A set of experiments to evaluate the performance of the implementation in a P2P system using numerous Raspberry Pi 4 devices. Measurement and analysis of the time taken to search and join operations of the RDF data under different data sizes and network sizes.
This paper is structured as follows. Section 3 explains the rationale of the system from the aspects of storage design and access structure. Section 4 describes the architecture of the system and its detailed implementation. The experimental evaluation of the system is discussed in Section 5. Section 2 discusses the related work, and the paper is summarised in Section 6 with conclusions and future work.
## 2. Related Work
Federated query processing approaches are widely used for querying distributed RDF data across multiple heterogeneous data sources. These approaches decompose each query into subqueries directed to the SPARQL endpoints of related data sources and retrieve the results in an integrated manner (Kirshman et al., 2017). Despite providing complete query results, query federation introduces a single point of failure and faces challenges in efficiently managing a large number of data sources and queries due to the execution of subqueries on multiple data sources.
To address scalability issues, some research works have explored the combination of RDF data storage with a P2P architecture, with a focus on RDF data indexing and query processing within these networks. Peers collaborate to build a distributed index and achieve optimal load balancing for storage and query tasks.
In the realm of decentralised architectures for sharing and querying semantic data, Pignic (Pignic, 2015) stands out as a resilient and decentralised solution. By employing replication, Pignic ensures data availability and resilience to node failures. However, it falls into the category of unstructured P2P systems, where queries are flooded throughout the network, leading to challenges such as low search efficiency, lack of guarantee for rare data retrieval, and increased network traffic.
In contrast, structured P2P systems offer several advantages, including scalability, robustness, load balancing, and predictable searching costs for distributed RDF data stores. Research efforts like RDFPeers (Dong et al., 2016), 3rdf (Dong et al., 2016), and Atlas (Han et al., 2017) use DHT-based P2P overlays for distributed RDF data storage and querying. RDFPeers (Dong et al., 2016) is the pioneering P2P system that implements a distributed RDF repository. It stores each triple at three different places in the network and can handle various native queries, including atomic triple patterns, disjunctive and range queries, and conjunctive multi-predicate queries. However, RDFPeers has inherent limitations, including challenges in load balancing mechanisms for peers storing popular triples and the lack of support for data indexing strategies tailored to edge devices.
Figure 1. An example of the three-layer organisation of SPO layout. Each circled number represents a peer in the network. The dotted blue line represents that the data block has a replica on the buffer layer of another peer.
Inspired by the advantages of structured P2P RDF repositories, our work leverages the state-of-the-art structured P2P system, P-Grid, to extend RDF4Led for large-scale, lightweight device networks. The integration aims to address the challenges associated with querying RDF data in decentralised environments, providing promising opportunities for scalable and efficient query processing in edge applications.
## 3. Distributed RDF storage using P-Grid model
### Design of Distributed RDF Storage
To design distributed RDF engines for the P2P system of lightweight edge devices, we adopt the RISC-style design philosophy in (Krishnan et al., 2017). The features of an RDF store are centralised around data access and join operations. To answer a SPARQL query, the primary mission of an RDF engine is to perform graph pattern matching over the RDF dataset. The RDF engine has to search for RDF triples that match triple query patterns and compute the joins between the matched triples. In the scope of this paper, we aim to enable enhanced data access for RDF data in a P2P environment and reuse the join operators in state-of-the-art engines such as RDF4Led. That means we focus on indexing RDF data in a P2P system of edge devices to find triples that match a triple pattern efficiently.
RDF data can be stored with multiple indexes; thus, different triple query pattern variants can be efficiently answered (Krishnan et al., 2017). The multiple indexes approach ensures that whichever components of a triple pattern are bound, there is always an appropriate index for an efficient search for the triples that match the pattern. Hence, we organise RDF triples in three indexing layouts: _SPO_ (Subject - Predicate - Object), _POS_, and _OSP_. These three permutations are sufficient to answer all query patterns, e.g., the SPO layout can cover the triple patterns with the bound subject \((s,?p,?o)\) and the bound subject-predicate \((s,p,?o)\).
We use a hybrid three-layer indexing strategy to maintain the index for RDF triples in a P2P system, including Physical Layer, Buffer Layer, and Distributed Layer. According to the RDF4Led storage design, the Physical Layer involves storing RDF data in flash storage. The Buffer Layer is used to cache recently accessed data and data updates before reading from or writing to the Physical Storage and index data in the Physical Storage. The Distributed Layer defines how the RDF data is distributed over the decentralised P2P network utilising a P-Grid overlay structure. Figure 1 illustrates an example of an SPO layout composed of these three layers in our system.
The Physical Layer can be viewed as a key-value store. RDF graphs are compressed into numerous RDF molecules, which are compact sorted lists of properties and objects related to one subject as described in (Krishnan et al., 2017). Therefore, storage space could be greatly saved by avoiding redundant storage of subject values. These RDF molecules are sorted into pages and then grouped into blocks, which adapt to the flash I/O behaviour. In Figure 1, it is assumed that RDF triples are stored as encoded binary strings, and subscripts of the triples represent the order of binary strings for simplicity. To illustrate, \((s_{1},p_{1},o_{1})\) is before \((s_{1},p_{1},o_{2})\) as \(o_{1}\)'s encoded string is smaller than \(o_{2}\)'s encoded string. Peer1 stores three key-value pairs_(molecules)_ in its physical layer. The molecule with _key1_ uses its first tuple as its self key, and its value is the combination of the ordered tuples from \((s_{1},p_{1},o_{1})\) to \((s_{1},p_{1},o_{4})\).
Regarding the index structure in the Buffer Layer, RDF4Led adopts a similar idea to Block Range Index(BRIN) using a small tuple to represent the information of data blocks from its persistent storage. This approach minimises the memory size to store and maintains the index data. In the middle of Figure 1, each peer has a Buffer Layer with data blocks related to key-value pairs in its Physical Layer. Peer1's first data block is formed by extracting the first tuple \((s_{1},p_{1},o_{1})\) and the last tuple\((s_{1},p_{1},o_{4})\) as well as the key _key1_ of its first key-value pair, that is the first RDF molecule stored in its Physical Layer. Each data block also indicates its original owner. For example, the blue data block of Peer2 points to an RDF molecule stored in Peer1's Physical Layer. Because Peer1 is the actual owner who fully holds the molecule, Peer2 merely has this RDF molecule's summarised information.
The Distributed Layer is a virtual overlay layer running on top of the physical network. It is formed by building a fully decentralised access structure P-Grid based on the Distributed Hash Table(DHT) abstraction. Like the other DHT-based P2P systems, P-Grid links each RDF peer with partitions of the overall RDF data space. Thus, it enables the decentralised storage and maintenance of RDF data among RDF peers. The distribution of RDF data and RDF peers in the P-Grid overlay is exemplified in the top layer of Figure 1. It shows a particular case where the overall RDF graph is partitioned into exactly three parts, each of which denotes RDF triples starting with \(s_{1}p_{1}\), \(s_{1}p_{2}\), or \(s_{2}\). Each peer has a path associated with what data partition it owns in its storage.
Additionally, Peer1's path is a concatenation of binary strings of \(s_{1}\) and \(p_{1}\) referred to as \(s_{1}p_{1}\). All RDF triples (except triples stored as replicas) in its storage start with \(s_{1}p_{1}\).
Moreover, each peer owns a routing table to which it could look up whom to forward queries when the requested data is out of its range. Thanks to its routing table, Peer1 is aware that when it is queried for RDF triples starting with \(s_{2}\), it should forward these queries to Peer3. Furthermore, for queries regarding \(s_{1}p_{2}\), Peer2 may have the requested data. Thus, Peer1 will send the query to Peer2 instead.
### P-Grid Access Structure
In this subsection, we will focus on the access structure of the P-Grid model with the illustration of a specific example. Moreover, we will also introduce P-Grid's prefix-based routing scheme in detail.
Though (Krishnan et al., 2017) states that access structures using k-ary balanced trees can significantly reduce the number of hops compared to binary trees, we assume the binary tree structure is constructed. This assumption conforms with the fact that RDF triples are encoded in binary strings.
In accordance with (Becker et al., 2016)(Becker et al., 2016)(Becker et al., 2016), each RDF peer has a unique address that identifies itself in the community of peers. It can use this address to communicate with other peers in the network. That means there is a one-to-one mapping between peer \(x\) and its address _addr_: \(x\mapsto addr\), where \(addr\) belongs to the full address space \(ADDR\).
Different from the original key space of the P-Grid access structure, we define the maximal key length as a fixed number \(m\). If
each element of a triple is encoded into a 32-bit integer, the maximal key length \(m\) of an RDF triple is 96. A binary string \(key_{t}\) represents the key of an RDF triple \(t\):
\[key_{t}=p_{1}p_{2}...p_{k},1\leq k\leq m,\]
where each digit \(p_{i}\in\{0,1\}\). The value of each key is the sum of the powers of \(2\) corresponding to its non-zero digits:
\[val(key_{t})=\sum_{i=1}^{k}p_{i}2^{m-i}.\]
Additionally, each key is associated with the interval
\[I(key)=[val(key),val(key)+2^{m-k})\subseteq K.\]
Each interval \(I(key)\) indicates a key space partition, and \(K\) denotes the entire key space.
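To make this key arithmetic concrete, the following minimal Java sketch (our own illustration, not part of the P-Grid or RDF4Led code base; the class and method names are hypothetical) computes \(val(key)\) and \(I(key)\) for binary key prefixes, assuming the maximal key length \(m=96\) used above.

```
import java.math.BigInteger;

// Minimal sketch (ours) of the key arithmetic above: a key is a binary prefix
// p_1...p_k of maximal length m, val(key) is the sum of p_i * 2^(m-i), and the
// key owns the interval [val(key), val(key) + 2^(m-k)).
public class KeySpace {
    static final int M = 96; // assumed maximal key length (three 32-bit components)

    // val(key) for a binary prefix such as "00" or "001100"
    static BigInteger val(String key) {
        BigInteger v = BigInteger.ZERO;
        for (int i = 0; i < key.length(); i++) {
            if (key.charAt(i) == '1') {
                v = v.add(BigInteger.TWO.pow(M - (i + 1)));
            }
        }
        return v;
    }

    // I(key) = [val(key), val(key) + 2^(m-k)) returned as {lower, upper}
    static BigInteger[] interval(String key) {
        BigInteger lo = val(key);
        return new BigInteger[] { lo, lo.add(BigInteger.TWO.pow(M - key.length())) };
    }

    public static void main(String[] args) {
        BigInteger[] i = interval("00");
        // A key falls under a peer's responsibility iff it starts with the peer's
        // path, equivalently iff its value lies in the peer's interval.
        BigInteger k = val("001100");
        System.out.println(k.compareTo(i[0]) >= 0 && k.compareTo(i[1]) < 0); // true
    }
}
```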
One of the distinguishing features of the P-Grid is that its peers' identifiers are decoupled from their paths. They do not have constant or predetermined paths in the overlay network. Their paths vary dynamically during network construction and maintenance for a more balanced data distribution. The data partition a peer is responsible for determines its path, which also indicates the peer's location in the overlay network. Assume that peer \(p\) stores a set of data items and each data item is encoded into a key; then \(\delta(p)\) is the set of all these keys,
\[\delta(p)=\{key_{1},key_{2},...,key_{p}\}.\]
The path of peer \(p\) is defined as the common prefix of all keys,
\[path(p)=p_{1}p_{2}...p_{n}.\]
Each path is mapped to a key interval. In other words, we could learn from \(path(p)=p_{1}p_{2}...p_{n}\) that peer \(p\) takes responsibility for the interval \(I(path(p))\) in the key space and all keys starting with \(p_{1}p_{2}...p_{n}\) fall under peer \(p\)'s key space. Note that although P-Grid has the abstraction of a tree, the nodes residing in the overlay network are hierarchy-less and are all leaf nodes in the tree.
Take Figure 2 as an example; the trie has four peers and binary strings represent all RDF triples. Peer1 stores RDF triples like: \((s_{1},p_{1},o_{1})\), \((s_{1},p_{1},o_{2})\), \((s_{1},p_{2},o_{3})\), \((s_{1},p_{4},o_{6})\). After being encoded, these triples are transformed into fixed-length binary strings (here, we assume the length is 6 for simplification), which in turn will be: \(\textbf{000001},\textbf{000000},\textbf{00100},\textbf{001100}\). As these keys share the common prefix **00**, Peer1's path is defined as **00**. The same holds for other peers in the network.
Because of its trie structure, P-Grid's searching algorithm is based on a prefix routing scheme. Each peer maintains a routing table. Each level of the routing table contains one or multiple references to a peer on the other side of the binary tree at the same level. The entry level denotes the prefix length. For its prefix with length \(i\):
\[prefix(p,i)=p_{1}p_{2}...p_{i},\quad 1\leq i\leq n-1,\]
and with \(prefix(p,0)\) denoting the empty prefix, peer \(p\) keeps references to other peers in its routing table:
\[ref(p,i)=l_{i}=\{p_{x}\mid prefix(p_{x},i)=prefix(p,i-1)+\bar{p}_{i}\},\]
where \(\bar{p}_{i}\) denotes the complement of the digit \(p_{i}\). Thus, peer \(p\) keeps a list of \(n\) entries \((0,l_{0}),...,(n-1,l_{n-1})\) as its routing table. The peers in \(l_{i}\) have the same prefix of length \(i-1\) as \(path(p)\), but their digit at position \(i\) is opposite to that of \(path(p)\).
Since all peers have paths of length \(2\) in the binary tree shown in Figure 2, their routing tables' highest level is \(2\) (indexing from \(0\)) as shown in Figure 3. Take the routing table of Peer1 for an example; at level \(0\), it stores Peer3 or Peer4 or both as the reference peers; at level \(1\), Peer2 is selected accordingly.
The routing table ensures that a peer will answer a query as long as the requested data exists in the overlay. Peer1 can only answer queries whose keys start with **00**. When it is required to answer a query \(q\) with key **11**, Peer1 quickly learns from its routing table that it should forward this query to Peer3. Because \(q\) and Peer1's path have an empty common prefix, Peer3 is the only candidate at level \(0\). By being forwarded to the next peer, \(q\) gets closer to its final destination. After that, Peer3 forwards the query \(q\) to Peer4. Because \(q\) and Peer3's path have a common prefix of length \(1\), Peer4 is the candidate at level \(1\). Finally, Peer4 receives the query, which is within its scope, and sends its local search results back to the query initiator.
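The prefix-routing walk just described can be sketched in Java as follows. This is our own illustration rather than the actual P-Grid implementation: peer names, paths, and routing tables are hard-coded to mirror Figures 2 and 3, and the reference peer at each level is picked at random in the spirit of randomised routing.

```
import java.util.List;
import java.util.Map;
import java.util.Random;

// Minimal sketch (ours) of P-Grid prefix routing: forward a query key until a
// peer whose path is a prefix of the key is reached.
public class PrefixRouting {

    static int commonPrefixLength(String a, String b) {
        int i = 0;
        while (i < a.length() && i < b.length() && a.charAt(i) == b.charAt(i)) i++;
        return i;
    }

    static String route(String peer, String key,
                        Map<String, String> paths,
                        Map<String, Map<Integer, List<String>>> routing) {
        Random rnd = new Random();
        while (!key.startsWith(paths.get(peer))) {
            // The routing level is the length of the common prefix of the peer's
            // path and the key; candidates at that level agree with the key on at
            // least one more digit, so every hop makes progress.
            int level = commonPrefixLength(paths.get(peer), key);
            List<String> candidates = routing.get(peer).get(level);
            peer = candidates.get(rnd.nextInt(candidates.size()));
        }
        return peer; // this peer resolves the query against its local storage
    }

    public static void main(String[] args) {
        Map<String, String> paths = Map.of(
                "Peer1", "00", "Peer2", "01", "Peer3", "10", "Peer4", "11");
        Map<String, Map<Integer, List<String>>> routing = Map.of(
                "Peer1", Map.of(0, List.of("Peer3", "Peer4"), 1, List.of("Peer2")),
                "Peer2", Map.of(0, List.of("Peer3", "Peer4"), 1, List.of("Peer1")),
                "Peer3", Map.of(0, List.of("Peer1", "Peer2"), 1, List.of("Peer4")),
                "Peer4", Map.of(0, List.of("Peer1", "Peer2"), 1, List.of("Peer3")));
        // A query for keys starting with 11, initiated at Peer1, ends at Peer4.
        System.out.println(route("Peer1", "110101", paths, routing));
    }
}
```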
## 4. System Architecture and Implementation
This section will describe our implementation that integrates the RDF4Led engine and P-Grid system to create a distributed RDF store for a P2P network of lightweight edge devices. It utilises the flash-friendly RDF storage of RDF4Led and the P-Grid virtual binary search tree to efficiently manage and query RDF data on each peer in the network. Figure 4 illustrates the architecture integrating
Figure 3. Example of P-Grid Routing Table, corresponding to the P-Grid trie structure in Figure 2. Each peer’s routing table has a maximum level of \(2\). The dotted directed red lines show the paths that a query, whose path is 11, follows to find the answering peer using the prefix-based routing mechanism.
Figure 2. Example of P-Grid Trie Structure, showing four peers in a perfect binary search tree with a maximum level of two. The binary strings represent the encoded RDF triples stored on each peer.
the RDF4Led and P-Grid components on a single peer. The critical components to be extended are the RDF storage and SPARQL query processor of RDF4Led, and the State Management and Lookup Service of P-Grid.
Here, the blue part represents the original architecture of RDF4Led consisting of an _Input Handler_ that is tied to a _Dictionary_ to translate between string-based RDF resources and encoded identifiers. _Dictionary_ adopts a hash function to create a fixed-length integer deterministically as a representation of an original string of arbitrary length. Because of its natural behaviour, the hash function is suitable for key-value structures.
The encoded RDF triples are indexed with three index layouts (SPO, POS, OSP) and are stored with a _Storage Manager_ that employs a two-layer index for each layout as presented in (Goyal et al., 2017). SPARQL queries are registered on the system via a _Query Handler_ and are compiled with a _Query Compiler_. For compiling a SPARQL query, the _Dictionary_ will involve converting RDF nodes in basic graph patterns to encoded identifiers. A _Query Executor_ is implemented to execute the query plans computed by the _Query Compiler_ and to produce the output results. The _Output Handler_ returns the original format of RDF resources for these output results from the _Query Executor_.
The red part encompasses essential functions adopted from P-Grid (Brock et al., 2017). The _State Manager_ from P-Grid serves as a controller for a peer, facilitating state transitions based on given inputs. It includes primary states, such as the bootstrapping phase, exchange phase, replicating phase, and running phase. The bootstrapping phase initiates when a peer joins the P2P network, aiming to discover and familiarise itself with other participants. Subsequently, during the exchange phase, existing peers in the P-Grid overlay structure undergo stabilisation, but data distribution might remain imbalanced. To address this, the exchange phase reorganises and sorts data items among RDF peers. A static approach with a global replication factor of two ensures that each data item has two replicas in the P2P network. During the exchange phase, only data blocks are replicated, with each replica recording the origin peer containing the actual RDF triples within the block. Origin peers stop initiating replication requests once their data blocks meet the global replication requirement. Once the exchange phase is complete, the running phase commences, making a peer ready to work. Peers in this phase can both initiate query requests and respond to queries from other peers in the P-Grid network. Throughout each phase, the _State Manager_ communicates through a _Communication Handler_, facilitating message exchange. The _Lookup Service_ triggers lookup requests to the _Remote Lookup Request Handler_, which forwards requests to other peers. The _Routing Table_ aids the _State Manager_ and the _Remote Lookup Request Handler_ in identifying the peers to communicate with.
With this architecture, each peer in the network has an RDF4Led _Storage Manager_ responsible for storing and maintaining the RDF data locally. The _Storage Manager_ handles data insertion or deletion and resolves query requests. If new data needs insertion or updating, the _Dictionary_ will first encode the string into an identifier to accelerate the search and save the memory space in the _Storage Manager_. The design of the flash-aware storage layout and indexing scheme of a single RDF4Led machine are in use as they cater to the need for a suitable storage method for lightweight edge devices. Hence, the _Storage Manager_ contains a buffer layer and a physical RDF storage layer. The data in the physical layer is organised as data blocks; the buffer layer is the index of each data block in the physical layer. In our system, the indexes of the data blocks are published to the _State Manager_. Using the peer information from the _Routing Table_ and based on the indexed key, the _State Manager_ will decide which data block should be replicated or exchanged to which peer to maintain the load balance for the network. To retrieve RDF triples from the Physical Layer, the _Storage Manager_ initially searches the Buffer Layer to identify the indexes of the data blocks potentially containing the desired results. Subsequently, the _Storage Manager_ accesses the encoded values from the Physical Storage Layer, utilising the key value of each data block. This retrieval allows the _Storage Manager_ to further decompose the encoded value into multiple tuples, facilitating subsequent result trimming.
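The two-layer retrieval performed by the _Storage Manager_ can be sketched as below. This is an illustrative Java fragment under simplifying assumptions (encoded triples are plain sorted strings, a block read is a map lookup, block entries are scanned linearly); it is not the RDF4Led implementation, and all names are ours.

```
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Minimal sketch (ours) of the two-layer lookup: the buffer layer keeps one
// small (firstTuple, lastTuple, blockKey) entry per data block, and the
// physical layer maps a block key to its sorted list of encoded triples.
public class TwoLayerLookup {

    record BlockEntry(String first, String last, String key) {}

    static List<String> lookup(String prefix,
                               List<BlockEntry> bufferLayer,
                               Map<String, List<String>> physicalLayer) {
        List<String> result = new ArrayList<>();
        for (BlockEntry e : bufferLayer) {
            // Skip blocks whose [first, last] range cannot contain the prefix range.
            if (e.last().compareTo(prefix) < 0) continue;
            if (e.first().compareTo(prefix) > 0 && !e.first().startsWith(prefix)) continue;
            for (String triple : physicalLayer.get(e.key())) {      // one block read
                if (triple.startsWith(prefix)) result.add(triple);  // trim to matches
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, List<String>> physical = Map.of(
                "key1", List.of("000000", "000001", "000110"),
                "key2", List.of("001000", "001100"));
        List<BlockEntry> buffer = List.of(
                new BlockEntry("000000", "000110", "key1"),
                new BlockEntry("001000", "001100", "key2"));
        System.out.println(lookup("0011", buffer, physical)); // prints [001100]
    }
}
```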
After compiling a SPARQL query, the _Query Compiler_ computes an optimal query plan. Each triple query request of the query plan is resolved by the _Lookup Service_, which will search in the local storage of a peer or forward the request to remote peers. The search mechanism in the P2P system is indicated by _Routing Table_, which is essential for a structured P2P overlay, as it holds the information of other peers. The _Routing Table_ ensures that a triple query request is answered by a particular peer as long as the requested data exists in the overlay. When matched triples are found in a peer, the result sets are forwarded back to the _Query Executor_ as a final or intermediate result. The final result generated by the _Query Executor_ would be translated by the _Dictionary_ back to the original format of the triples as the output.
## 5. Evaluation and Analysis
### Evaluation Setup
#### 5.1.1. Software and Hardware
We implemented our system in Java and reused as much of the source code from RDF4Led and P-Grid as possible. We also re-implemented some parts using updated technologies. For instance, we recycled the dictionary module from RDF4Led and the bootstrapping mechanism from P-Grid. The Java
Figure 4. Overview of system architecture, showing the relationship between the major components. Each component is taken from RDF4Led or P-Grid. Each arrow pointed in a direction indicates a dependency relationship between the modules.
WebSocket implementation in the initial version of P-Grid was replaced by gRPC to improve the system's ability to handle asynchronous message passing.
We conducted our experiments using a cluster of 4 to 16 Raspberry Pi4 (Pi4) devices, which serve as lightweight and cost-effective edge devices for the IoT. Each device is equipped with quad-core processors clocked at 1.5GHz, 8GB of RAM, and an onboard LAN connection with a speed of 1Gbps. Peers are considered directly interconnected with every other peer in the experiments.
#### 5.1.2. Performance Metrics
In this evaluation, we focus on testing and evaluating our system's performance in terms of query execution time (QET). The metric is critical in edge applications where data access and retrieval within numerous lightweight computing devices are of paramount importance. Throughout our evaluation process, we measured the QET of searching and retrieving matching RDF triples of an atomic triple pattern among a set of P2P nodes, as well as the QET of join operations across multiple atomic query patterns.
#### 5.1.3. Dataset and Storage setup
For our experiments, we utilise the ISD (Integrated Surface Dataset) 1, a notable weather dataset comprising weather observations collected from 20 thousand weather stations worldwide since 1901. This dataset encompasses various measurements, including temperature, wind speed, wind angle, and more. Moreover, each observation is accompanied by timestamps indicating when these measurements were recorded.
Footnote 1: [https://www.ncdc.noaa.gov/iad](https://www.ncdc.noaa.gov/iad)
To transform the ISD data into RDF, we reuse the data schema from our previous work (Kang et al., 2018), which employs the SSN/SOSA ontology (Kang et al., 2018) to describe the metadata of sensors and the sensor readings in the ISD dataset. The process of mapping the values and attributes of each observation to the schema requires approximately 87 RDF triples. We have chosen observation records from multiple weather stations, thereby representing datasets of different sizes. The dataset is split and loaded into participant nodes with a reasonably balanced distribution of keys, with the assumption that the P-Grid construction process has halted, because the P-Grid exchange function was proven to balance the distribution of keys (Becker et al., 2018).
### Experiments and Analysis
#### 5.2.1. Exp I: QET of a Single Atomic Triple Pattern
To initiate the study of our system's behaviour when responding to a SPARQL query, we measured the QET of a SPARQL query containing a single atomic triple pattern, as depicted in Listing 1. Given that this query doesn't entail any join operations, this experiment aims to offer an analysis of how message passing within a P2P network influences the QET of such a P2P RDF engine.
```
PREFIX sosa: <http://www.w3.org/ns/sosa/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
SELECT ?observation
WHERE { ?observation rdf:type sosa:Observation . }   # TP1
```
Listing 1: Atomic Triple Pattern - List all observations.

It is essential to note that the measurements we obtained here include the IO delay within our setup. Through a microbenchmark of network IO, we determined that sending 1000 messages, each of 1KB in size, consumes approximately 1147 milliseconds. It's worth noting that the delay in local storage IO is notably minor in comparison, rendering it inconsequential when compared to the time taken for communication.
As mentioned in the previous section, the number of triples is divided approximately equally among the involved peers, indicating that the network achieved a balanced key distribution after multiple data exchange phases during P-Grid construction. With N participating nodes, the query initiator required at most log(N) hops to locate the final results.
Under these data setup conditions, we varied the size of the ISD dataset to 26K, 52K, 140K, 208K, 416K, 720K, 1M, and 2M, as shown on the x-axis in Figure 5. Consequently, this led to varying numbers of RDF triples being returned for the atomic triple pattern: 2K, 4K, 10K, 16K, 31K, 54K, 75K, and 153K. It is worth noting that in this scenario, the size of the result set accounted for nearly 8% of the total dataset. The number provided is significantly larger than the actual result size typically returned from a SPARQL query, which often falls below 1% or even 0.1%. We measured the QET by recording the time from the initiation of a request until the initiator received all matching results from the answering nodes. The test results for query execution time when responding to a single atomic triple pattern on different data scales in our setup are presented in Figure 5.
As shown in Figure 5, our system experiences delays in searching and retrieving data, ranging from 1 to 6.5 seconds, across datasets comprising 26K to 2M triples. Throughout the querying process, the communication cost encompasses several factors, including the hops required to locate answering peers, the expense incurred as answering peers transmit messages containing possible block entries, the outlay for the query initiator to request matching RDF triples for each block entry received from answering peers, and the cost for answering peers to send messages containing matching RDF triples.
Furthermore, the results shown in Figure 5 highlight that QET is significantly influenced by the number of matching RDF triples returned. Increasing the dataset size leads to a considerable delay increase. In this context, the difference in QET across various network sizes is not very significant. Increasing the number of involved nodes results in slight delays. This is primarily due to the
Figure 5. QET of Atomic Triple Pattern TP1 Using ISD Dataset. **N is the number of peers in the system.**
fact that, when considering datasets of the same size, the number of matching RDF triples remains constant, with only one or two hops added during the searching phase.
To gain further clarity on the impact of message passing quantity, we repeated the experiment using various triple patterns (TPs). To avoid redundancy, we're presenting results exclusively from our 16-node network. Figure 6 depicts the test outcomes utilizing the triple query patterns from the 2nd SPARQL query, employed in our second experiment (see Section 5.2.2). Given the similarity in the number of matched triples between TP3 and TP4 in the query shown in Listing 2, and TP1 in the query presented in Listing 1, the delays are almost the same.
For TP2, we fixed the subject %sensor% to a specific sensor IRI, resulting in a fixed number of matched triples and returned results, even as the data scale increased. The QET remains consistent despite the growth in data size. Our system achieved the capability to return around four thousand results within less than a second in the context of a 16-node system.
#### 5.2.2. Exp2: QET of Complex Join Query Patterns
A practical SPARQL query may contain multiple join operations, which will result in enormous execution time. Thus in this experiment, we further measure the QET of answering a query with multiple joins under uniform data distribution. Here we consider an example which contains a star-pattern join query as shown in Listing 2.
```
PREFIX sosa: <http://www.w3.org/ns/sosa/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
SELECT ?obs ?featureOfInterest ?obsProperty
WHERE {
  %sensor% sosa:madeObservation ?obs .                   # TP2
  ?obs sosa:hasFeatureOfInterest ?featureOfInterest .    # TP3
  ?obs sosa:observedProperty ?obsProperty .              # TP4
}
```
Listing 2: Join Query Pattern containing 3 atomic triple patterns - List the information of all observations made by a sensor.
Using an ISD dataset of 26K triples, and a cluster of 16 Pi4s, the QET for the join query, as illustrated in Listing 2, was found to be 11.15s. To extrapolate the execution time of join queries with uniform data distribution across various dataset sizes and network scales, we are prompted to employ synthetic data to execute an analogous join query.
To emulate a star pattern join query as in Listing 2, the following join query is used:

\(q_{1}:(s_{1},p_{1},?o_{1})\) **JOIN** \(q_{2}:(?s_{2},p_{2},?o_{2})\) **ON** \(q_{1}.o_{1}=q_{2}.s_{2}\) **JOIN** \(q_{3}:(?s_{3},p_{3},?o_{3})\) **ON** \(q_{1}.o_{1}=q_{3}.s_{3}\).
In Figure 7, each peer stores an equal number of tuples, thereby demonstrating a well-balanced virtual P-Grid trie upon completion of construction. We consider the queries \(q_{1}:(1,2,?x)\), \(q_{2}:(?x,3,?y)\), and \(q_{3}:(?x,4,?z)\) in Figure 8. As in the join algorithm in (19), a mapping solution is kept and sent to each triple pattern of the graph pattern throughout the join process. We assume \(q_{1}\) is initially visited, resulting in the variable \(x\) and its corresponding values being added to the mapping. The new mapping will be sent to visit the other two triple patterns. Since \(q_{2}\) and \(q_{3}\) both contain variable \(x\), replacing \(x\) with each real value from the mapping solution and executing \(q_{2}\) and \(q_{3}\) in parallel becomes feasible, leading to reduced waiting time. The retrieval rate for each answering node is fixed at 1 per cent of its local storage capacity.
Figure 8. Process of bind join among \(q_{1},q_{2}\) and \(q_{3}\). \(q_{2}\) and \(q_{3}\) share a common variable \(x\) with \(q_{1}\), making it possible for \(q_{2}\) and \(q_{3}\) to replace \(x\) with real values in parallel.
Figure 6. QET of Atomic Triple Patterns TP1, TP2, TP3, TP4 Using ISD Dataset on 16 Pi4s. N is the number of peers.
Figure 7. Example of two join operations with uniform data distribution. Each peer has \(10^{4}\) tuples. Peer 0 initiates the join query.
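A compact Java sketch of this bind-join step is given below. It is our own illustration: `lookup` is a hypothetical stand-in for the engine's (possibly remote) evaluation of a triple pattern with hard-coded bindings, so only the control flow is shown, not the real RDF4Led/P-Grid code.

```
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;

// Minimal sketch (ours) of the bind join among q1: (1, 2, ?x), q2: (?x, 3, ?y)
// and q3: (?x, 4, ?z): the bindings of ?x from q1 are substituted into q2 and
// q3, which are then evaluated in parallel.
public class BindJoin {

    // Placeholder for "evaluate the pattern with predicate p and bound subject s,
    // return the bindings of its free variable".
    static List<String> lookup(String s, String p) {
        return switch (p) {
            case "2" -> List.of("o1", "o2");   // ?x bindings from q1
            case "3" -> List.of(s + "-y");     // ?y binding for this x
            case "4" -> List.of(s + "-z");     // ?z binding for this x
            default -> List.of();
        };
    }

    public static void main(String[] args) throws Exception {
        for (String x : lookup("1", "2")) {    // bindings of ?x obtained from q1
            // q2 and q3 only share ?x with q1, so once x is bound they are
            // independent and can be evaluated concurrently.
            CompletableFuture<List<String>> q2 =
                    CompletableFuture.supplyAsync(() -> lookup(x, "3"));
            CompletableFuture<List<String>> q3 =
                    CompletableFuture.supplyAsync(() -> lookup(x, "4"));
            for (String y : q2.get())
                for (String z : q3.get())
                    System.out.println(Map.of("x", x, "y", y, "z", z));
        }
    }
}
```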
Figure 9 presents the test results. As anticipated, the query execution time increases with the number of answering nodes and the storage size of each peer.
The figure illustrates that there is a direct correlation between the execution time of the join query and the number of peers participating in the query. This suggests that the more peers are involved in the join query, the longer it takes to complete the query due to the increased communication overhead. Furthermore, a significant rise in execution time is observed when the number of tuples per peer reaches 1M. However, when the number of tuples per peer remains below \(10^{5}\), the execution time shows little variation. This phenomenon may be attributed to the longer search time required for each answering peer in its local storage with a substantially larger dataset, resulting in an increased number of messages in transit.
## 6. Conclusion
The proposed approach has the potential to advance the field of RDF data management in IoT edge devices in terms of enabling effective integration of IoT data through semantic interoperability. We realised our approach as a distributed RDF engine by integrating two related works in the field: RDF4Led and P-Grid. Leveraging the advances of the two systems, our implementation preserves the two-layer storage structure from RDF4Led and the access structure of P-Grid to enable storing and querying RDF data on IoT devices with limited resources. We implemented the system using part of the source code from RDF4Led and P-Grid. Furthermore, we designed a set of experiments to evaluate the performance of the implementation in a P2P system using up to 16 Raspberry Pi 4 devices. The measurement and analysis of the time taken to search and join operations showed that our system is able to operate with different data sizes (up to 10 million per node).
The results presented in this paper pave the way for future research in semantic data processing on P2P networks at the edge of IoT. Our work contributes to the development of distributed RDF data stores and provides a foundation for future research on optimising query processing and exploring new data availability and replication techniques. Possible directions for extending this work include investigating new techniques for handling load imbalance caused by node departures or failures and by data updates in a P2P system. Integrating our system with Saturn (Sutton et al., 2015), an overlay architecture on P-Grid, can enhance load distribution and fault tolerance. For multiple join queries, efficient management of intermediate results is crucial to mitigate network delays and I/O costs. Distributing join operators across nodes can optimise performance. This _task assignment problem_ has been addressed by certain papers (Beng et al., 2015)(Mohammad et al., 2015) using a decentralised algorithm that progressively refines the placement of operators towards an optimal placement.
###### Acknowledgements.
This work is supported by the German Research Foundation (DFG) under the COSMO project (grant No. 453130567), and by the European Union's Horizon WINDERA under the grant agreement No. 101079214 (AloTwin), and RIA research and innovation programme under the grant agreement No. 101092908 (SmartEdge).
|
2309.13314 | Reflexive extended locally convex spaces | For an extended locally convex space (elcs) $(X,\tau)$, the authors in [10]
studied the topology $\tau_{ucb}$ of uniform convergence on bounded subsets of
$(X,\tau)$ on the dual $X^*$ of $(X,\tau)$. In the present paper, we use the
topology $\tau_{ucb}$ to explore the reflexive property of extended locally
convex spaces. It is shown that an elcs is (semi) reflexive if and only if any
of its open subspaces is (semi) reflexive. For an extended normed space, we
show that reflexivity is a three-space property. | Akshay Kumar, Varun Jindal | 2023-09-23T09:13:19Z | http://arxiv.org/abs/2309.13314v1 | # Reflexive extended locally convex spaces
###### Abstract.
For an extended locally convex space (elcs) \((X,\tau)\), the authors in [10] studied the topology \(\tau_{ucb}\) of uniform convergence on bounded subsets of \((X,\tau)\) on the dual \(X^{*}\) of \((X,\tau)\). In the present paper, we use the topology \(\tau_{ucb}\) to explore the reflexive property of extended locally convex spaces. It is shown that an elcs is (semi) reflexive if and only if any of its open subspaces is (semi) reflexive. For an extended normed space, we show that reflexivity is a three-space property.
Key words and phrases: Extended locally convex space, extended normed space, weak topology, weak\({}^{*}\) topology, strong topology, reflexive spaces
In [10], the authors employed the flc topology to examine the dual of an elcs. Specifically, they studied the weak topology on an elcs \((X,\tau)\) and the weak\({}^{*}\) topology on the dual \(X^{*}\) of \((X,\tau)\). Besides this, on \(X^{*}\), they also studied the topology \(\tau_{ucb}\) of uniform convergence on bounded subsets of \((X,\tau)\).
In the present paper, we use the topology \(\tau_{ucb}\) to study reflexive extended locally convex spaces.
The paper is organized as follows: the second section presents all essential preliminary results and definitions. In Section 3, we define and study reflexive extended locally convex spaces. More specifically, we relate the reflexivity of an elcs \((X,\tau)\) with the reflexivity of its finest space \((X,\tau_{F})\), where \(\tau_{F}\) is the corresponding flc topology. We also show that an elcs \((X,\tau)\) is reflexive if and only if any of its open subspaces is reflexive. Further, in the case of an enls, we prove that the reflexivity property is a three-space property. As an application of our results, in the final section, we look at the reflexivity of some well known function spaces.
## 2. Preliminaries
The underlying field of a vector space is denoted by \(\mathbb{K}\) which is either \(\mathbb{R}\) or \(\mathbb{C}\). We adopt the following conventions for \(\infty\): \(\infty.0=0.\infty=0\); \(\infty+\alpha=\alpha+\infty=\infty\) for every \(\alpha\in\mathbb{R}\); \(\infty.\alpha=\alpha.\infty=\infty\) for \(\alpha>0\); \(\inf\{\emptyset\}=\infty\).
An _extended seminorm_\(\rho:X\rightarrow[0,\infty]\) on a vector space \(X\) is a function which satisfies the following properties.
1. \(\rho(\alpha x)=|\alpha|\rho(x)\) for each \(x\in X\) and scalar \(\alpha\);
2. \(\rho(x+y)\leq\rho(x)+\rho(y)\) for all \(x,y\in X\).
An _extended norm_\(\|\,\cdot\,\|\colon X\rightarrow[0,\infty]\) is an extended seminorm with the property: if \(\|\,\,x\,\,\|=0\), then \(x=0\). A vector space \(X\) endowed with an extended norm \(\|\,\cdot\,\|\) is called an _extended normed linear space_ (or _extended normed space_) (enls, for short), and it is denoted by \((X,\|\,\cdot\,\|)\). The _finite subspace_ of an enls \((X,\|\,\cdot\,\|)\) is defined as
\[X_{fin}=\{x\in X:\|\,\,x\,\,\|<\infty\}.\]
Note that the extended norm \(\|\,\cdot\,\|\) on \(X_{fin}\) is actually a norm. Therefore \((X_{fin},\|\,\cdot\,\|)\) is a conventional normed linear space.
We say an enls \((X,\|\,\cdot\,\|)\) is an _extended Banach space_ if it is complete with respect to the metric \(d(x,y)=\min\{\|\,\,x-y\,\,\|,1\}\) for all \(x,y\in X\). One can prove that an enls \((X,\|\,\cdot\,\|)\) is an extended Banach space if and only if the finite space \((X_{fin},\|\,\cdot\,\|)\) is a Banach space. For details about extended normed linear spaces, we refer to [1, 4, 5].
Suppose \((X,\|\cdot\|_{1})\) and \((Y,\|\cdot\|_{2})\) are extended normed linear spaces. Then for a continuous linear map \(T:X\to Y\), we define
\[\|T\|_{op}=\sup\{\|T(x)\|_{2}:\|x\|_{1}\leq 1\}.\]
In particular, if \(f\in X^{*}\), then \(\|f\|_{op}=\sup\{|f(x)|:\|x\|_{1}\leq 1\}\). The following points about an enls \((X,\|\cdot\|)\) are given in [1].
1. \(X_{fin}\) is open in \((X,\|\ \cdot\ \|)\).
2. \(\|f\|_{op}=\|f|_{X_{fin}}\|_{op}\) for every \(f\in X^{*}\), where \(f|_{X_{fin}}\) is the restriction of \(f\) on the normed linear space \((X_{fin},\|\cdot\|)\).
3. For any linear functional \(f\) on \(X\), we have \(f\in X^{*}\) if and only if \(f|_{X_{fin}}\) is continuous on \(X_{fin}\).
4. If \(f\in X^{*}\) and \(\|f\|_{op}\neq 0\), then \(|f(x)|\leq\|f\|_{op}\|x\|\) for every \(x\in X\).
It follows from the point (2) given above that \(\|\ \cdot\ \|_{op}\) may not be a norm on the dual \(X^{*}\) of an enls \((X,\|\ \cdot\ \|)\). However, following [1], we call it the _operator norm_ in the sequel.
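As a quick illustrative example of this (ours, not taken from the cited references): equip \(X=\mathbb{R}^{2}\) with the extended norm \(\|(x_{1},x_{2})\|=|x_{1}|\) if \(x_{2}=0\) and \(\|(x_{1},x_{2})\|=\infty\) otherwise, so that \(X_{fin}=\mathbb{R}\times\{0\}\). The linear functional \(f(x_{1},x_{2})=x_{2}\) vanishes on \(X_{fin}\), hence \(f\in X^{*}\) by point (3), yet
\[\|f\|_{op}=\sup\{|x_{2}|:\|(x_{1},x_{2})\|\leq 1\}=\sup\{|x_{2}|:x_{2}=0,\ |x_{1}|\leq 1\}=0,\]
although \(f\neq 0\).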
A vector space \(X\) endowed with a Hausdorff topology \(\tau\) is said to be an _extended locally convex space_ (elcs, for short) if \(\tau\) is induced by a collection \(\mathcal{P}=\{\rho_{i}:i\in\mathcal{I}\}\) of extended seminorms on \(X\), that is, \(\tau\) is the smallest topology on \(X\) under which each \(\rho_{i}\) is continuous. We define
\[X^{\rho}_{fin}=\{x\in X:\rho(x)<\infty\}\]
for any extended seminorm \(\rho\) on \(X\) and the _finite subspace_\(X_{fin}\) of an elcs \((X,\tau)\) by
\[X_{fin}=\bigcap\left\{X^{\rho}_{fin}:\rho\ \text{is continuous on}\ (X,\tau)\right\}.\]
Suppose \((X,\tau)\) is an elcs and \(\tau\) is induced by a family \(\mathcal{P}\) of extended seminorms on \(X\). Then the following facts are either easy to verify or given in [14].
1. There exists a neighborhood base \(\mathcal{B}\) at \(0\) in \((X,\tau)\) such that each element of \(\mathcal{B}\) is absolutely convex (balanced and convex);
2. \(X_{fin}\) with the subspace topology is a locally convex space;
3. \(X_{fin}=\bigcap_{\rho\in\mathcal{P}}X^{\rho}_{fin}\);
4. if \(\rho\) is any continuous extended seminorm on \((X,\tau)\), then \(X^{\rho}_{fin}\) is a clopen subspace of \((X,\tau)\);
5. \(X_{fin}\) is an open subspace of \((X,\tau)\) if and only if there exists a continuous extended seminorm \(\rho\) on \((X,\tau)\) such that \(X_{fin}=X^{\rho}_{fin}\). In this case, we say \((X,\tau)\) is a _fundamental elcs_.
It is shown in Proposition 4.7 of [14] that if \(\mathcal{B}\) is a neighborhood base at \(0\) in an elcs \((X,\tau)\) consisting of absolutely convex sets, then \(\tau\) is induced by the collection \(\{\mu_{U}:U\in\mathcal{B}\}\) of Minkowski functionals.
**Definition 2.1**.: ([14]) Suppose \(U\) is any absolutely convex set in an elcs \((X,\tau)\). Then the _Minkowski functional_\(\mu_{U}:X\to[0,\infty]\) for \(U\) is defined as
\[\mu_{U}(x)=\inf\{\lambda>0:x\in\lambda U\}.\]
The following facts are immediate from the above definition.
1. The Minkowski functional \(\mu_{U}\) for the set \(U\) is an extended seminorm on \(X\). In addition, if \(U\) is absorbing, then \(\mu_{U}\) is a seminorm on \(X\).
2. \(\{x\in X:\mu_{U}(x)<1\}\subseteq U\subseteq\{x\in X:\mu_{U}(x)\leq 1\}\).
3. The Minkowski functional \(\mu_{U}\) is continuous on \(X\) if and only if \(U\) is a neighborhood of \(0\) in \((X,\tau)\).
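As an illustration of fact (1) (an example of ours, not from the cited references), take \(X=\mathbb{R}^{2}\) and the absolutely convex, non-absorbing set \(U=\{(x_{1},0):|x_{1}|\leq 1\}\). Then \(\mu_{U}(x_{1},x_{2})=|x_{1}|\) if \(x_{2}=0\) and \(\mu_{U}(x_{1},x_{2})=\infty\) otherwise, so \(\mu_{U}\) is a genuine extended seminorm which is not a seminorm (as \(U\) is not absorbing), and \(\{x\in X:\mu_{U}(x)<\infty\}=\mathbb{R}\times\{0\}\) is a proper subspace of \(X\).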
If \(A\) is any nonempty set in a topological space \((X,\tau)\), then we denote the closure and interior of \(A\) in \((X,\tau)\) by \(\mathrm{Cl}_{\tau}(A)\) and \(\mathrm{int}_{\tau}(A)\), respectively. We also adopt the following terminology for an elcs \((X,\tau)\).
1. If \(U\) is any absolutely convex subset of \(X\), then \(X^{U}_{fin}=\{x\in X:\mu_{U}(x)<\infty\}\).
2. If \(A\subseteq X\), then \(\mathrm{ab}(B)\) is the smallest absolutely convex set in \(X\) that contains \(A\).
3. If \(A\subseteq X\), then \(A^{\circ}=\{f\in X^{*}:|f(x)|\leq 1\) for every \(x\in A\}\) is called the polar of \(A\) in \(X^{*}\).
4. If \(A\subseteq X^{*}\), then \(A_{\circ}=\{x\in X:|f(x)|\leq 1\) for every \(f\in A\}\) is called the polar of \(A\) in \(X\).
For other terms and definitions, we refer to [13, 15, 16].
## 3. Reflexivity
The aim of this section is to explore reflexive extended locally convex spaces. To define reflexivity property of an elcs \((X,\tau)\), we first need to define the topology \(\tau_{ucb}\), on \(X^{*}\), of uniform convergence on bounded subsets of \((X,\tau)\). For an elcs, the topology \(\tau_{ucb}\) has been studied extensively in [10]. We first give the definition of a bounded set in an elcs.
**Definition 3.1**.: ([11]) Suppose \((X,\tau)\) is an elcs. Then \(A\subseteq X\) is said to be _bounded_ in \((X,\tau)\) if for every neighborhood \(U\) of \(0\), there exist \(r>0\) and a finite set \(F\subseteq X\) such that \(A\subseteq F+rU\).
The following points about bounded sets in an elcs \((X,\tau)\) are either easy to prove or given in [11].
1. Every finite subset of \(X\) is bounded.
2. Every subset of a bounded set is bounded.
3. When \((X,\tau)\) is a conventional locally convex space, \(A\subseteq X\) is bounded in the sense of Definition 3.1 if and only if it is absorbed by each neighborhood of \(0\) in \((X,\tau)\).
4. Suppose \((Y,\sigma)\) is any elcs and \(T:X\to Y\) is a continuous linear operator. Then for every bounded set \(A\) in \((X,\tau)\), \(T(A)\) is bounded in \((Y,\sigma)\). In particular, for every \(f\in X^{*}\), \(f(A)\) is bounded in \(\mathbb{K}\).
5. No subspace (other than the zero subspace) is bounded.
6. If \(x_{n}\to 0\) in \((X,\tau)\), then \(\{x_{n}:n\in\mathbb{N}\}\) is bounded in \((X,\tau)\).
**Definition 3.2**.: ([10]) Let \((X,\tau)\) be an elcs. Then the topology \(\tau_{ucb}\), on \(X^{*}\), of _uniform convergence on bounded subsets of_\((X,\tau)\) is induced by the collection \(\mathcal{P}=\{\rho_{B}\,:\,B\,\) is bounded subset of \((X,\tau)\}\) of seminorms on \(X^{*}\), where \(\rho_{B}(\phi)=\sup_{x\in B}|\phi(x)|\) for \(\phi\in X^{*}\).
The following points for an elcs \((X,\tau)\) are either easy to verify or given in [1, 10].
1. \((X^{*},\tau_{ucb})\) is a locally convex space and \(\mathcal{B}=\{B^{\circ}:B\,\)is bounded in \((X,\tau)\}\) is a neighborhood base at \(0\) in \((X^{*},\tau_{ucb})\), where the polar \(B^{\circ}\) of \(B\subseteq X\) is given by \(B^{\circ}=\{\phi\in X^{*}:|\phi(x)|\leq 1\) for all \(x\in B\}\).
2. Recall from [10] that the _weak\({}^{*}\) topology_ \(\tau_{w^{*}}\) on \(X^{*}\) is induced by the collection \(\{\rho_{x}:x\in X\}\) of seminorms, where \(\rho_{x}(f)=|f(x)|\) for every \(f\in X^{*}\). Clearly, \(\tau_{w^{*}}\) is coarser than \(\tau_{ucb}\).
3. If \((X,\|\,\cdot\,\|)\) is an enls such that \(X=X_{fin}\oplus M\), then \((X^{*},\tau_{ucb})\) is isomorphic (linear homeomorphic) to \((X^{*}_{fin},\|\,\cdot\,\|_{op})\times(M^{*},\tau_{w^{*}})\), where \(\tau_{w^{*}}\) is the weak\({}^{*}\) topology on the dual \(M^{*}\) of the enls \((M,\|\,\cdot\,\|)\) (see, Theorem 4.11 in [1]).
Recall that when \((X,\tau)\) is a classical locally convex space, the topology \(\tau_{ucb}\) is more popularly known as the strong topology. In particular, for an elcs \((X,\tau)\) with the flc topology \(\tau_{F}\), by the _strong topology_ \(\tau_{s}\) on \(X^{*}\) we mean the topology of uniform convergence on bounded subsets of the locally convex space \((X,\tau_{F})\).
**Remark 3.3**.: It is easy to prove that if \(\tau_{F}\) is the flc topology of an elcs \((X,\tau)\), then every bounded set in \((X,\tau)\) is bounded in \((X,\tau_{F})\). The converse may not be true (see, Proposition 5.3 in [10]). Therefore \(\tau_{ucb}\) is coarser than \(\tau_{s}\).
For an elcs \((X,\tau)\) and \(x\in X\), if a net \((f_{\lambda})\) converges to \(f\) in \((X^{*},\tau_{ucb})\), then \(f_{\lambda}(x)\to f(x)\). Consequently, the map \(J_{x}:(X^{*},\tau_{ucb})\to\mathbb{K}\) defined by \(J_{x}(f)=f(x)\) for \(f\in X^{*}\) is a continuous linear functional. Hence the _canonical map_\(J:X\to(X^{*},\tau_{ucb})^{*}\) defined by \(J(x)=J_{x}\) for all \(x\in X\) is well defined.
**Remark 3.4**.: For an enls \((X,\|\,\cdot\,\|)\), the map \(J:(X,\|\,\cdot\,\|)\to(X^{*},\|\,\cdot\,\|_{op})^{*}\) may not be well defined. Indeed, if \(z\notin X_{fin}\), then for every \(n\in\mathbb{N}\) we can find a continuous linear functional \(f_{n}\) on \(X\) such that \(f_{n}(z)=n\) and \(f_{n}(x)=0\) for every \(x\in X_{fin}\). Clearly, \(\|\,f_{n}\,\|_{op}=0\) for every \(n\in\mathbb{N}\). Consequently, \(f_{n}\to 0\) in \((X^{*},\|\,\cdot\,\|_{op})\) but \(|J_{z}(f_{n})|=n\not\to 0\).
**Definition 3.5**.: Let \((X,\tau)\) be an elcs. Then we say \(X\) is _semi-reflexive_ if the canonical map \(J\) is surjective, and we say \(X\) is _reflexive_ if \(J\) is both surjective and continuous when \((X^{*},\tau_{ucb})^{*}\) is equipped with the topology of uniform convergence on bounded subsets of \((X^{*},\tau_{ucb})\).
**Example 3.6**.: Let \(X\) be a vector space with the _discrete extended norm_ defined by
\[\parallel x\parallel_{0,\infty}=\begin{cases}0;&\quad\text{if }x=0\\ \infty;&\quad\text{if }x\neq 0.\end{cases}\]
Then only finite subsets are bounded in \((X,\parallel\cdot\parallel_{0,\infty})\). Since the given space is a discrete space, the canonical map \(J\) is continuous. Now, if \(\psi\in(X^{*},\tau_{ucb})^{*}\), then there exists a finite set \(A=\{x_{1},x_{2},...,x_{n}\}\subseteq X\) such that \(A^{\circ}\subseteq\psi^{-1}(-1,1)\). It is easy to show that \(\bigcap_{x\in A}J_{x}^{-1}(0)\subseteq\psi^{-1}(0)\) (if \(\epsilon>0\) and \(f(x_{j})=0\) for \(1\leq j\leq n\), then \(\frac{1}{\epsilon}f(x_{j})=0\) for \(1\leq j\leq n\), so \(\frac{1}{\epsilon}f\in A^{\circ}\subseteq\psi^{-1}(-1,1)\) and hence \(|\psi(f)|<\epsilon\)). By Lemma 3.9, p. 67 in [8], \(\psi\) is a linear combination of \(J_{x_{1}},J_{x_{2}},\ldots,J_{x_{n}}\). Therefore \(J\) is surjective. Hence \((X,\parallel\cdot\parallel_{0,\infty})\) is reflexive.
**Proposition 3.7**.: _Let \((X,\parallel\cdot\parallel)\) be an enls. Then \(J:(X,\parallel\cdot\parallel)\to(X^{*},\tau_{ucb})^{*}\) is always continuous._
Proof.: Let \(x_{n}\to 0\) in \((X,\parallel\cdot\parallel)\) and let \(B\) be a bounded subset of \((X^{*},\tau_{ucb})\). Then there exist \(\alpha>0\) and \(n_{0}\in\mathbb{N}\) such that \(\parallel f\parallel_{op}\leq\alpha\) and \(x_{n}\in X_{fin}\) for each \(f\in B\), \(n\geq n_{0}\). Consequently,
\[|J_{x_{n}}(f)|=|f(x_{n})|\leq\alpha\parallel x_{n}\parallel\text{ for every }f\in B\text{ and }n\geq n_{0}.\]
Therefore \(J_{x_{n}}\to 0\) in \((X^{*},\tau_{ucb})^{*}\). Hence \(J\) is continuous.
**Corollary 3.8**.: _Let \((X,\parallel\cdot\parallel)\) be an enls. Then \(X\) is reflexive if and only if it is semi-reflexive._
We next study the reflexivity of an elcs \((X,\tau)\) in relation to the properties of the corresponding finest space \((X,\tau_{F})\).
Recall that a locally convex space \((X,\tau)\) is said to be _barreled_ if each barrel (closed, absolutely convex and absorbing set) is a neighborhood of \(0\).
**Proposition 3.9**.: _Let \((X,\tau)\) be a reflexive elcs with the flc topology \(\tau_{F}\). Then \((X,\tau_{F})\) is barreled._
Proof.: Let \(B\) be a barrel in \((X,\tau_{F})\). By Theorem 8.8.3, p. 251 in [12], \(B^{\circ}\) is pointwise bounded. Since \((X,\tau)\) is reflexive, we have \((X^{*},\tau_{ucb})^{*}=\{J_{x}:x\in X\}\). So \(B^{\circ}\) is bounded in \((X^{*},\tau_{ucb})\). Consequently, \((B^{\circ})^{\circ}\) is a neighborhood of \(0\) in \((X^{*},\tau_{ucb})^{*}\). As \(J:(X,\tau)\to(X^{*},\tau_{ucb})^{*}\) is continuous, \(J^{-1}\left((B^{\circ})^{\circ}\right)\) is a neighborhood of \(0\) in \((X,\tau)\). Note that \(J^{-1}\left((B^{\circ})^{\circ}\right)=(B^{\circ})_{\circ}\). Consequently, by applying Bipolar theorem in \((X,\tau_{F})\), we have \((B^{\circ})_{\circ}=B\). Therefore \(B\) is
a neighborhood of \(0\) in \((X,\tau)\). Since \(B\) is an absorbing and absolutely convex neighborhood of \(0\) in \((X,\tau)\), the Minkowski functional \(\mu_{B}\) is a continuous seminorm on \((X,\tau)\). By Theorem 3.5 in [11], \(\mu_{B}\) is a continuous seminorm on \((X,\tau_{F})\). Therefore \(B\) is a neighborhood of \(0\) in \((X,\tau_{F})\). Hence \((X,\tau_{F})\) is barreled.
**Theorem 3.10**.: _Let \((X,\tau)\) be an elcs with the flc topology \(\tau_{F}\). If \((X,\tau_{F})\) is reflexive, then \((X,\tau)\) is reflexive. Converse holds if \((X,\tau_{F})\) is semi-reflexive._
Proof.: Let \((X,\tau_{F})\) be reflexive. Since \(\tau_{ucb}\subseteq\tau_{s}\) and \((X^{*},\tau_{s})^{*}=\{J_{x}:x\in X\}\), we have \((X^{*},\tau_{ucb})^{*}=\{J_{x}:x\in X\}\). Therefore \((X,\tau)\) is semi-reflexive. Now, let \(B\) be a bounded set in \((X^{*},\tau_{ucb})\). Then \(B\) is bounded in \((X^{*},\tau_{w^{*}})\). By Theorem 8.8.3, p. 241 in [12], \(B_{\circ}\) is an absorbing subset of \(X\). Note that \(J^{-1}(B^{\circ})=\{x\in X:|J_{x}(f)|\leq 1\text{ for }f\in B\}=B_{\circ}\) is a barrel in \((X,\tau_{F})\) (see, Theorem 8.3.6, p. 234 in [12]). Since \((X,\tau_{F})\) is reflexive, by Theorem 15.2.6, p. 490 in [12], \((X,\tau_{F})\) is barreled. Consequently, \(B_{\circ}\) is a neighborhood of \(0\) in \((X,\tau_{F})\). So \(J^{-1}(B^{\circ})=B_{\circ}\) is a neighborhood of \(0\) in \((X,\tau)\). Therefore the canonical map \(J\) is continuous on \((X,\tau)\). Hence \((X,\tau)\) is reflexive.
Conversely, suppose \((X,\tau)\) is reflexive and \((X,\tau_{F})\) is semi-reflexive. Then by Proposition 3.9, \((X,\tau_{F})\) is barreled. By Theorem 15.2.6, p. 490 in [12], \((X,\tau_{F})\) is reflexive.
**Corollary 3.11**.: _Let \((X,\tau)\) be an elcs with the flc topology \(\tau_{F}\). Suppose any one of the following conditions holds_
1. \((X^{*},\tau_{ucb})\) _is barreled;_
2. \((X,\tau)\) _is a fundamental elcs._
_Then \((X,\tau)\) is reflexive if and only if \((X,\tau_{F})\) is reflexive._
Proof.: Suppose any one of the given assumptions holds. Then by Theorem 5.9 and Theorem 5.12(2) in [10], we have \(\tau_{ucb}=\tau_{s}\). Therefore \((X,\tau)\) is semi-reflexive if and only if \((X,\tau_{F})\) is semi-reflexive. By Theorem 3.10, \((X,\tau)\) is reflexive if and only if \((X,\tau_{F})\) is reflexive.
**Corollary 3.12**.: _Suppose \((X,\|\ \cdot\ \|)\) is an enls. Then \((X,\|\ \cdot\ \|)\) is reflexive if and only if \((X,\tau_{F})\) is reflexive._
Our next theorem relates the reflexivity of an elcs \((X,\tau)\) with the reflexivity of its open subspaces.
**Proposition 3.13**.: _Suppose \(M\) is an open subspace of an elcs \((X,\tau)\). Then there exists a continuous extended seminorm \(\rho\) on \((X,\tau)\) such that \(X^{\rho}_{fin}=M\)._
Proof.: Since \(M\) is an open subspace of \((X,\tau)\), there exists a continuous extended seminorm \(\mu\) on \((X,\tau)\) such that \(X^{\mu}_{fin}\subseteq M\). Define an extended seminorm \(\rho\) on \(X\) by
\[\rho(x)=\begin{cases}0;&\text{ if }x\in M\\ \infty;&\text{ if }x\notin M.\end{cases}\]
Note that \(\rho(x)\leq\mu(x)\) for every \(x\in X\). Consequently, \(\rho\) is continuous on \((X,\tau)\). It is easy to see that \(X^{\rho}_{fin}=M\).
We need the following remark and lemma in the proof of Theorem 3.16.
**Remark 3.14**.: Suppose \(\rho\) is a continuous extended seminorm on an elcs \((X,\tau)\) and \(M\) is a subspace of \(X\) with \(X=X^{\rho}_{fin}\oplus M\). Then \((M,\tau|_{M})\) is a discrete space. Therefore \(\tau|_{M}\) is induced by the discrete extended norm \(\|\,\cdot\,\|_{0,\infty}\). If \(\tau_{F}\) is the flc topology for \((X,\tau)\), then by Theorem 4.1 in [11], the flc topology for \((M,\tau|_{M})\) is \(\tau_{F}|_{M}\). By Example 3.6 and Corollary 3.12, both the spaces \((M,\tau|_{M})\) and \((M,\tau_{F}|_{M})\) are reflexive.
**Lemma 3.15**.: _Let \((X,\tau)\) be an elcs and let \(\rho\) be a continuous extended seminorm on \((X,\tau)\). If \(M\) is a subspace of \(X\) with \(X=X^{\rho}_{fin}\oplus M\) and \(\tau_{F}\) is the flc topology for \((X,\tau)\), then \((X,\tau_{F})\) is isomorphic to the product space \(\left(X^{\rho}_{fin},\tau_{F}|_{X^{\rho}_{fin}}\right)\times(M,\tau_{F}|_{M})\)._
Proof.: Consider the map \(\Psi:(X,\tau_{F})\to\left(X^{\rho}_{fin},\tau_{F}|_{X^{\rho}_{fin}}\right) \times(M,\tau_{F}|_{M})\) defined by \(\Psi(x=x_{f}+x_{m})=(x_{f},x_{m})\). Then \(\Psi\) is linear and bijective. Note that if \(U\) and \(V\) are neighborhoods of \(0\) in \(\left(X^{\rho}_{fin},\tau_{F}|_{X^{\rho}_{fin}}\right)\,\) and \(\,(M,\tau_{F}|_{M})\), respectively, then there exist continuous seminorms \(\mu\) and \(\nu\) on \(\left(X^{\rho}_{fin},\tau_{F}|_{X^{\rho}_{fin}}\right)\,\) and \(\,(M,\tau_{F}|_{M})\) such that \(\mu^{-1}([0,1))\subseteq U\) and \(\nu^{-1}([0,1))\subseteq V\). It is easy to see that \(\lambda(x)=\mu(x_{f})+\nu(x_{m})\) for \(x\in X\) and \(x=x_{f}+x_{m}\) is a continuous seminorm on \((X,\tau)\) as \(X^{\rho}_{fin}\) is an open subspace of \((X,\tau)\) and \(\lambda=\mu\) on \(X^{\rho}_{fin}\). By Theorem 3.5 in [11], \(\lambda\) is continuous on \((X,\tau_{F})\). Note that \(\Psi(\lambda^{-1}([0,1)))\subseteq U\times V\). Therefore \(\Psi\) is continuous.
For the continuity of \(\Psi^{-1}\), let \(\eta\) be a continuous seminorm on \((X,\tau_{F})\). Then \(\eta|_{X^{\rho}_{fin}}\) and \(\eta|_{M}\) are continuous on \((X^{\rho}_{fin},\tau_{F}|_{X^{\rho}_{fin}})\) and \((M,\tau_{F}|_{M})\), respectively. Note that
\[\Psi^{-1}\left(\eta|_{X^{\rho}_{fin}}^{-1}\left(\left[0,\frac{1}{2}\right)\right)\times\eta|_{M}^{-1}\left(\left[0,\frac{1}{2}\right)\right)\right)\subseteq\eta^{-1}([0,1)).\]
Therefore \(\Psi^{-1}\) is continuous.
**Theorem 3.16**.: _Suppose \((X,\tau)\) is an elcs with the flc topology \(\tau_{F}\). Then the following statements are equivalent._
1. \((X,\tau)\) _is reflexive;_
2. \(\left(X_{fin}^{\rho},\tau|_{X_{fin}^{\rho}}\right)\) _is reflexive, for every continuous extended seminorm_ \(\rho\) _on_ \(X\)_;_
3. \(\left(X_{fin}^{\rho},\tau|_{X_{fin}^{\rho}}\right)\) _is reflexive, for some continuous extended seminorm_ \(\rho\) _on_ \(X\)_._
Proof.: (1)\(\Rightarrow\)(2). Suppose \(\rho\) is a continuous extended seminorm on \((X,\tau)\) and \(\psi\) is a continuous linear functional on \(\left(\left(X_{fin}^{\rho}\right)^{*},\tau_{ucb}^{\rho}\right)\), where \(\tau_{ucb}^{\rho}\) is the topology of uniform convergence on bounded subsets of \(\left(X_{fin}^{\rho},\tau|_{X_{fin}^{\rho}}\right)\). Define a linear functional \(\Psi\) on \(X^{*}\) by \(\Psi(f)=\psi(f|_{X_{fin}^{\rho}})\) for \(f\in X^{*}\). Suppose \((f_{\alpha})\) is a net in \((X^{*},\tau_{ucb})\) converging to \(0\). Then \(f_{\alpha}|_{X_{fin}^{\rho}}\to 0\) in \(\left(\left(X_{fin}^{\rho}\right)^{*},\tau_{ucb}^{\rho}\right)\) as every bounded subset of \(\left(X_{fin}^{\rho},\tau|_{X_{fin}^{\rho}}\right)\) is also bounded in \((X,\tau)\). Consequently, \(\Psi(f_{\alpha})=\psi(f_{\alpha}|_{X_{fin}^{\rho}})\to 0\). Thus \(\Psi\in(X^{*},\tau_{ucb})^{*}\). Since \((X,\tau)\) is reflexive, there exists an \(x_{0}\in X\) such that \(\Psi=J_{x_{0}}\). If \(x_{0}\notin X_{fin}^{\rho}\), then there exists an \(f\in X^{*}\) such that \(f(x_{0})\neq 0\) and \(f(X_{fin}^{\rho})=0\). Then \(0\neq J_{x_{0}}(f)=f(x_{0})=\Psi(f)=\psi(f|_{X_{fin}^{\rho}})=0\). So \(x_{0}\in X_{fin}^{\rho}\). For every \(f\in\left(X_{fin}^{\rho},\tau|_{X_{fin}^{\rho}}\right)^{*}\), \(\psi(f)=\Psi(f^{\prime})=J_{x_{0}}(f^{\prime})=f^{\prime}(x_{0})=f(x_{0})\), where \(f^{\prime}\) is a continuous linear extension of \(f\) on \(X\) which is possible by Corollary 4.2 in [11]. Hence \(\left(X_{fin}^{\rho},\tau|_{X_{fin}^{\rho}}\right)\) is semi-reflexive.
To complete the proof it is enough to show that the canonical map \(J_{\rho}:\left(X_{fin}^{\rho},\tau|_{X_{fin}^{\rho}}\right)\rightarrow\left( \left(X_{fin}^{\rho}\right)^{*},\tau_{ucb}^{\rho}\right)^{*}\) on \(X_{fin}^{\rho}\) is continuous. Let \(M\) be a subspace of \(X\) such that \(X=X_{fin}^{\rho}\oplus M\). Suppose \(D\) is any bounded set in \(\left(\left(X_{fin}^{\rho}\right)^{*},\tau_{ucb}^{\rho}\right)\). Consider \(Z=\{f^{\prime}:f\in D\}\), where \(f^{\prime}\) is the continuous linear extension of \(f\) on \(X\) which is \(0\) on \(M\). Then \(Z\) is pointwise bounded. Since \((X,\tau)\) is reflexive, by Proposition 3.9, \((X,\tau_{F})\) is barreled. By Theorem 11.3.4 and Theorem 11.3.5, p. 384 in [12], \(Z\) is bounded in \((X^{*},\tau_{s})\). So \(Z\) is bounded in \((X^{*},\tau_{ucb})\). Consequently, \(Z^{\circ}\) is a neighborhood of \(0\) in \((X^{*},\tau_{ucb})^{*}\). Thus \(J^{-1}\left(Z^{\circ}\right)\) is a neighborhood of \(0\) in \((X,\tau)\) as \((X,\tau)\) is reflexive. Therefore \(J^{-1}\left(Z^{\circ}\right)\cap X_{fin}^{\rho}\) is a neighborhood of \(0\) in \(\left(X_{fin}^{\rho},\tau|_{X_{fin}^{\rho}}\right)\). Note that \(J^{-1}\left(Z^{\circ}\right)\cap X_{fin}^{\rho}=J_{\rho}^{-1}(D^{\circ})\) (if \(x\in X_{fin}^{\rho}\), then \(|f(x)|\leq 1\) for every \(f\in D\iff\left|g(x)\right|\leq 1\) for every \(g\in Z\)). Which implies that \(J_{\rho}^{-1}(D^{\circ})\) is a neighborhood of \(0\) in \(\left(X_{fin}^{\rho},\tau|_{X_{fin}^{\rho}}\right)\). Therefore \(J_{\rho}\) is continuous. Hence \(\left(X_{fin}^{\rho},\tau|_{X_{fin}^{\rho}}\right)\) is reflexive.
The implication (2)\(\Rightarrow\)(3) is obvious.
(3)\(\Rightarrow\)(1). Let (3) hold for some continuous extended seminorm \(\rho\) on \((X,\tau)\) and let \(M\) be a subspace of \(X\) such that \(X=X^{\rho}_{fin}\oplus M\). Suppose \(\Psi\) is a continuous linear functional on \((X^{*},\tau_{ucb})\). Define linear functionals \(\Psi_{1}\) and \(\Psi_{2}\) on \(\left(X^{\rho}_{fin},\tau|_{X^{\rho}_{fin}}\right)^{*}\) and \((M,\tau|_{M})^{*}\), respectively, by \(\Psi_{1}(f)=\Psi(\hat{f})\) for \(f\in\left(X^{\rho}_{fin},\tau|_{X^{\rho}_{fin}}\right)^{*}\) and \(\Psi_{2}(g)=\Psi(g^{\prime})\) for \(g\in(M,\tau|_{M})^{*}\), where
\[\hat{f}(x)=\left\{\begin{array}{ll}f(x)&\mbox{if}\ \ x\in X^{\rho}_{fin}\\ 0&\mbox{if}\ \ x\in M\end{array}\right.\qquad g^{\prime}(x)=\left\{\begin{array}{ ll}0&\mbox{if}\ \ x\in X^{\rho}_{fin}\\ g(x)&\mbox{if}\ \ x\in M.\end{array}\right.\]
It is easy to see that if nets \(f_{\alpha}\to 0\) in \(\left(\left(X^{\rho}_{fin}\right)^{*},\tau^{\rho}_{ucb}\right)\) and \(g_{\beta}\to 0\) in \((M^{*},\tau^{m}_{ucb})\), then \(\hat{f}_{\alpha}\to 0\), \(g^{\prime}_{\beta}\to 0\) in \((X^{*},\tau_{ucb})\), where \(\tau^{\rho}_{ucb}\) and \(\tau^{m}_{ucb}\) are the topologies of the uniform convergence on bounded subsets of \((X^{\rho}_{fin},\tau|_{X^{\rho}_{fin}})\) and \((M,\tau|_{M})\), respectively. Since \(\Psi\in(X^{*},\tau_{ucb})^{*}\), we have \(\Psi_{1}\in\left(\left(X^{\rho}_{fin}\right)^{*},\tau^{\rho}_{ucb}\right)^{*}\) and \(\Psi_{2}\in\left(M^{*},\tau^{m}_{ucb}\right)^{*}\). So there exists \(x_{f}\in X^{\rho}_{fin}\) such that \(\Psi_{1}(f)=f(x_{f})\) for all \(f\in(X^{\rho}_{fin},\tau|_{X^{\rho}_{fin}})^{*}\) as \(\left(X^{\rho}_{fin},\tau|_{X^{\rho}_{fin}}\right)\) is semi-reflexive. By Remark 3.14, \(M\) is also semi-reflexive. There exists \(x_{m}\in M\) such that \(\Psi_{2}(g)=g(x_{m})\) for all \(g\in(M,\tau|_{M})^{*}\). Note that every \(f\in X^{*}\) decomposes as \(f=\widehat{f|_{X^{\rho}_{fin}}}+(f|_{M})^{\prime}\), so \(\Psi(f)=\Psi(\widehat{f|_{X^{\rho}_{fin}}})+\Psi((f|_{M})^{\prime})=\Psi_{1}(f|_{X^{\rho}_{fin}})+\Psi_{2}(f|_{M})=f|_{X^{\rho}_{fin}}(x_{f})+f|_{M}(x_{m})=f(x_{f}+x_{m})=J_{x_{f}+x_{m}}(f)\). Therefore \((X,\tau)\) is semi-reflexive.
Let \(D\) be a bounded set in \((X^{*},\tau_{ucb})\). Then it is pointwise bounded. Since both \(\left(X^{\rho}_{fin},\tau|_{X^{\rho}_{fin}}\right)\) and \((M,\tau|_{M})\) are reflexive, by Theorem 3.10, both the spaces \(\left(X^{\rho}_{fin},\tau_{F}|_{X^{\rho}_{fin}}\right)\) and \((M,\tau_{F}|_{M})\) are barreled. By Lemma 3.15 and Theorem 11.12.4, p. 409 in [12], \((X,\tau_{F})\) is barreled. Therefore by Theorem 11.3.4, p. 384 in [12], \(D\) is equicontinuous on \((X,\tau_{F})\). Consequently, \(D_{\circ}\) is a neighborhood of \(0\) in \((X,\tau_{F})\). Note that \(J^{-1}(D^{\circ})=\{x\in X:|f(x)|\leq 1\) for \(f\in D\}=D_{\circ}\). So \(J^{-1}(D^{\circ})\) is a neighborhood of \(0\) in \((X,\tau)\). Hence \(J\) is continuous and \((X,\tau)\) is reflexive.
**Remark 3.17**.: A similar result holds if we replace reflexive by semi-reflexive in the statement of Theorem 3.16.
**Corollary 3.18**.: _Let \((X,\|\ \cdot\ \|)\) be an enls. Then \((X,\|\ \cdot\ \|)\) is reflexive if and only if \((X_{fin},\|\ \cdot\ \|)\) is reflexive._
**Corollary 3.19**.: _Let \((X,\|\ \cdot\ \|)\) be a reflexive enls. Then \((X,\|\ \cdot\ \|)\) is an extended Banach space._
Proof.: If \((X,\|\ \cdot\ \|)\) is reflexive, then \((X_{fin},\|\ \cdot\ \|)\) is reflexive. By classical theory for normed space, \((X_{fin},\|\ \cdot\ \|)\) is a Banach space. Hence by Proposition 3.11 in [1], \((X,\|\ \cdot\ \|)\) is an extended Banach space.
Recall that for an elcs \((X,\tau)\) with the dual \(X^{*}\), the _weak topology_ \(\tau_{w}\) on \(X\) is the locally convex space topology induced by the collection \(\mathcal{P}_{w}=\{\rho_{f}:f\in X^{*}\}\) of seminorms on \(X\), where \(\rho_{f}(x)=|f(x)|\) for every \(x\in X\). It is easy to prove that the weak topologies corresponding to \((X,\tau)\) and \((X,\tau_{F})\) are the same.
**Theorem 3.20**.: _Let \((X,\|\,\cdot\,\|)\) be an extended Banach space with the flc topology \(\tau_{F}\). Then the following assertions are equivalent:_
1. \((X,\|\,\cdot\,\|)\) _is reflexive;_
2. \((X,\tau_{F})\) _is reflexive;_
3. \((X_{fin},\|\,\cdot\,\|)\) _is reflexive;_
4. _the closed unit ball_ \(B_{X}\) _is weakly compact;_
5. _the weak topology has the Heine-Borel property._
Proof.: (2)\(\Leftrightarrow\)(5). It follows from the fact that a locally convex space is semi-reflexive if and only if its weak topology has the Heine-Borel property (see, Theorem 15.2.4, p. 489 in [12]).
(4)\(\Leftrightarrow\)(3). Note that if \(\tau_{w}\) is the weak topology corresponding to \((X,\|\,\cdot\,\|)\) (or \((X,\tau_{F})\)), then \(\tau_{w}|_{X_{fin}}\) is the weak topology of the Banach space \((X_{fin},\|\,\cdot\,\|)\). Consequently, the equivalence follows from the fact that a Banach space is reflexive if and only if its closed unit ball is weakly compact (see, Exercise 15.101, p. 516 in [12]).
The equivalences (1)\(\Leftrightarrow\)(2)\(\Leftrightarrow\)(3) follow from Corollaries 3.12, 3.18.
Recall that a normed linear space \(X\) is reflexive if and only if \(J(B_{X})\supseteq B_{X^{**}}\), where \(J\) is the canonical map on \(X\), and \(B_{X}\) and \(B_{X^{**}}\) are the closed unit balls in \(X\) and \((X^{*},\|\,\cdot\,\|_{op})^{*}\), respectively. We next prove an analogous result for an enls.
**Theorem 3.21**.: _Suppose \((X,\|\,\cdot\,\|)\) is an extended normed space and \(J\) is the corresponding canonical map on \(X\). Then \(X\) is reflexive if and only if \(J(B_{X})=\left(B_{X^{*}}\right)^{\circ}\), where \(B_{X^{*}}\) is the closed unit ball in \((X^{*},\|\,\cdot\,\|_{op})\) and \(\left(B_{X^{*}}\right)^{\circ}\) is the polar of \(B_{X^{*}}\) in \((X^{*},\tau_{ucb})^{*}\)._
Proof.: Suppose first that \(X\) is reflexive. It is easy to see that \(J(B_{X})\subseteq\left(B_{X^{*}}\right)^{\circ}\). For the reverse inclusion, consider \(\psi\in\left(B_{X^{*}}\right)^{\circ}\). Since \(X\) is reflexive, there is an \(x_{0}\in X\) such that \(\psi=J_{x_{0}}\). Therefore for any \(\phi\in B_{X^{*}}\), we have \(|\phi(x_{0})|=|J_{x_{0}}(\phi)|=|\psi(\phi)|\leq 1\). So by Proposition 4.9 in [1], \(x_{0}\in B_{X}\), and hence \(\psi=J_{x_{0}}\in J(B_{X})\).
Conversely, suppose \(J(B_{X})=\left(B_{X^{*}}\right)^{\circ}\). By Corollary 3.18, it is enough to show that \((X_{fin},\|\,\cdot\,\|)\) is reflexive. Let \(B_{X_{fin}}\) and \(B_{X_{fin}^{**}}\) be the closed unit balls in \((X_{fin},\|\,\cdot\,\|)\) and \((X_{fin}^{*},\|\,\cdot\,\|_{op})^{*}\), respectively. To show \((X_{fin},\|\,\cdot\,\|)\) is reflexive, it is enough to show that \(J_{f}(B_{X_{fin}})\supseteq B_{X_{fin}^{**}}\), where \(J_{f}\) is the canonical map on \(X_{fin}\). Suppose \(\Psi\in B_{X_{fin}^{**}}\). Define a linear functional \(\hat{\Psi}\) on \(X^{*}\) by \(\hat{\Psi}(f)=\Psi(f|_{X_{fin}})\) for every \(f\in X^{*}\). It is easy to see that
\(|\Psi(f|_{X_{fin}})|\leq\parallel\Psi\parallel_{op}\parallel f\parallel_{op}\leq 1\) for every \(f\in B_{X^{*}}\). Then \(\hat{\Psi}\) is continuous on \((X^{*},\parallel\cdot\parallel_{op})\). Consequently, \(\hat{\Psi}\in(X^{*},\tau_{ucb})^{*}\). Also observe that \(\hat{\Psi}\in(B_{X^{*}})^{\circ}\). So there exists an \(x\in B_{X}=B_{X_{fin}}\) such that \(J_{x}=\hat{\Psi}\) as \(J(B_{X})=(B_{X^{*}})^{\circ}\). For any \(\phi\in(X_{fin},\parallel\cdot\parallel_{op})^{*}\), we have \(\Psi(\phi)=\hat{\Psi}(\phi^{\prime})=J_{x}(\phi^{\prime})=\phi^{\prime}(x)= \phi(x)\), where \(\phi^{\prime}\) is any continuous linear extension of \(\phi\) on \(X\). Therefore \(\Psi=J_{f}(x)\in J_{f}(B_{X_{fin}})\).
In the next result, we show that the reflexive property in an extended Banach space is a three-space property, that is, if \(Y\) is a closed subspace of an enls \((X,\parallel\cdot\parallel)\) and any two of the spaces \((Y,\parallel\cdot\parallel)\), \((X,\parallel\cdot\parallel)\) and \((X/Y,\parallel\cdot\parallel_{q})\) are reflexive, then the third one is also reflexive, where \(X/Y=\{x+Y:x\in X\}\) and \(\parallel x+Y\parallel_{q}=\inf\{\parallel x-y\parallel:y\in Y\}\).
**Theorem 3.22**.: _Let \((X,\parallel\cdot\parallel)\) be an enls and let \(Y\) be a closed subspace of \(X\). Then \((X,\parallel\cdot\parallel)\) is reflexive if and only if both \((Y,\parallel\cdot\parallel)\) and the quotient space \((X/Y,\parallel\cdot\parallel_{q})\) are reflexive._
Proof.: Suppose \((X,\parallel\cdot\parallel)\) is reflexive. Then \(Y_{fin}=\{y\in Y:\parallel y\parallel<\infty\}\) is a closed subspace of the reflexive space \((X_{fin},\parallel\cdot\parallel)\). By Theorem 15.2.7, p. 490 in [12], \((Y_{fin},\parallel\cdot\parallel)\) is a reflexive space. Then by Corollary 3.18, \((Y,\parallel\cdot\parallel)\) is reflexive. Note that the finite subspace \((X/Y)_{fin}\) of the quotient space \((X/Y,\parallel\cdot\parallel_{q})\) is equal to \(\{x+Y:x\in X_{fin}\}\) (see, Theorem 3.21 in [1]). It is easy to see that if \(x\in X_{fin}\) and \(\parallel y\parallel=\infty\) for some \(y\in Y\), then \(\parallel x-y\parallel=\infty\). So for every \(x\in X_{fin}\), we have
\[\parallel x+Y\parallel_{q}=\inf\{\parallel x-y\parallel:y\in Y\}=\inf\{ \parallel x-y\parallel:y\in Y_{fin}\}=\parallel x+Y_{fin}\parallel_{q}.\]
Therefore \(\Big{(}(X/Y)_{fin}\,,\parallel\cdot\parallel_{q}\Big{)}\) is isometrically isomorphic to \((X_{fin}/Y_{fin},\parallel\cdot\parallel_{q})\). Since reflexivity in a normed linear space is a three-space property (see, p. 491 in [12]), we have \((X_{fin}/Y_{fin},\parallel\cdot\parallel_{q})\) is reflexive. So \(\Big{(}(X/Y)_{fin}\,,\parallel\cdot\parallel_{q}\Big{)}\) is reflexive. By Corollary 3.18, \((X/Y,\parallel\cdot\parallel_{q})\) is reflexive.
Conversely, suppose both \((Y,\parallel\cdot\parallel)\) and \((X/Y,\parallel\cdot\parallel_{q})\) are reflexive. By Corollary 3.18, both \((Y_{fin},\parallel\cdot\parallel)\) and \(((X/Y)_{fin},\parallel\cdot\parallel_{q})\) are reflexive. So \((Y_{fin},\parallel\cdot\parallel)\) and \((X_{fin}/Y_{fin},\parallel\cdot\parallel_{q})\) are reflexive. Therefore \((X_{fin},\parallel\cdot\parallel)\) is reflexive. Hence by Corollary 3.18, \((X,\parallel\cdot\parallel)\) is reflexive.
In general, if \(\mu\) is a finitely compatible norm for an enls \((X,\parallel\cdot\parallel)\), that is, both \(\mu\) and \(\parallel\cdot\parallel\) induce the same topology on \(X_{fin}\), then the reflexivity of \((X,\mu)\) may not have any relation with the reflexivity of \((X,\parallel\cdot\parallel)\) (see, Examples 3.24, 3.25). But, there exists a finitely compatible norm \(\nu\) such that \((X,\parallel\cdot\parallel)\) is reflexive whenever \((X,\nu)\) is reflexive.
**Theorem 3.23**.: _Suppose \((X,\|\ \cdot\ \|)\) is an extended normed space with \(X=X_{fin}\oplus M\). Then there exists a finitely compatible norm \(\nu\) for \((X,\|\ \cdot\ \|)\) with the following property: If \((X,\nu)\) is reflexive, then \((X,\|\ \cdot\ \|)\) is reflexive._
Proof.: Let \(\phi\in X^{*}\) be such that \(\phi^{-1}(0)=X_{fin}\). Define \(\nu\) on \(X\) by \(\nu(x)=\parallel x_{f}\parallel+|\phi(x_{M})|\), where \(x=x_{f}+x_{M}\) with \(x_{f}\in X_{fin}\) and \(x_{M}\in M\). Then \(\nu\) is a finitely compatible norm for \((X,\|\ \cdot\ \|)\) and \(\phi\in(X,\nu)^{*}\). Now suppose \((X,\nu)\) is reflexive. Since \(\phi\in(X,\nu)^{*}\) with \(\phi^{-1}(0)=X_{fin}\), the subspace \(X_{fin}\) is closed in \((X,\nu)\), and \(\nu\) agrees with \(\|\ \cdot\ \|\) on \(X_{fin}\); hence \((X_{fin},\|\ \cdot\ \|)\) is a closed subspace of a reflexive normed space and is therefore reflexive. By Corollary 3.18, \((X,\|\ \cdot\ \|)\) is reflexive.
**Example 3.24**.: Let \(c_{00}\) be the collection of all eventually zero sequences with the discrete extended norm \(\|\ \cdot\ \|_{0,\infty}\). Then \((c_{00},\|\ \cdot\ \|_{0,\infty})\) is a reflexive space but the finitely compatible normed space \((c_{00},\|\ \cdot\ \|_{\infty})\) is not reflexive.
**Example 3.25**.: Let \(l_{p}\ (1<p<\infty)\) be the space of all \(p\)-summable real sequences. Suppose \(M\) is a subspace of \(l_{p}\) with \(l_{p}=c_{00}\oplus M\). Define an extended norm on \(l_{p}\) by
\[\parallel x\parallel=\begin{cases}\parallel x\parallel_{p},&\quad\text{if}\ x\in c_{00}\\ \infty,&\quad\text{otherwise}.\end{cases}\]
Then \(X_{fin}=c_{00}\) is not reflexive. Consequently, \((l_{p},\|\ \cdot\ \|)\) is a non-reflexive extended normed linear space. Note that the classical \(p\)-norm \(\|\ \cdot\ \|_{p}\) is a finitely compatible norm on \((X,\|\ \cdot\ \|)\). Also, the normed space \((l_{p},\|\ \cdot\ \|_{p})\) is a reflexive space.
## 4. Applications to Function spaces
Let \((X,d)\) be a metric space and let \(C(X)\) be the set of all real-valued continuous functions on \((X,d)\). By a _bornology_\(\mathcal{B}\) on \((X,d)\), we mean a collection of nonempty subsets of \(X\) that covers \(X\) and is closed under finite union and taking subsets of its members. A subfamily \(\mathcal{B}_{0}\) of \(\mathcal{B}\) is a _base_ for \(\mathcal{B}\) if it is cofinal in \(\mathcal{B}\) under the set inclusion. In addition, if every member of \(\mathcal{B}_{0}\) is closed in \((X,d)\), then we say \(\mathcal{B}\) has a _closed base_. For more details about metric bornologies, we refer to [2].
In this section, we study the reflexivity of the function spaces \((C(X),\tau^{s}_{\mathcal{B}})\) and \((C(X),\tau_{\mathcal{B}})\), where \(\tau^{s}_{\mathcal{B}}(\tau_{\mathcal{B}})\) is the topology of strong uniform convergence (uniform convergence) on the elements of \(\mathcal{B}\). We first define these topologies.
**Definition 4.1**.: ([3]) Let \(\mathcal{B}\) be a bornology on a metric space \((X,d)\). Then _the topology \(\tau_{\mathcal{B}}\) of uniform convergence on \(\mathcal{B}\)_ is determined by a uniformity on \(C(X)\) having base consisting of sets of the form
\[[B,\epsilon]=\{(f,g):\forall x\in B,\ |f(x)-g(x)|<\epsilon\}\ \left(B\in \mathcal{B},\ \epsilon>0\right).\]
**Definition 4.2**.: ([3]) Let \(\mathcal{B}\) be a bornology on a metric space \((X,d)\). Then _the topology \(\tau_{\mathcal{B}}^{s}\) of strong uniform convergence on \(\mathcal{B}\)_ is determined by a uniformity on \(C(X)\) having base consisting of sets of the form
\[[B,\epsilon]^{s}=\left\{(f,g):\exists\ \delta>0\ \forall x\in B^{\delta},\ |f(x)-g(x)|<\epsilon\right\}\ (B\in \mathcal{B},\ \epsilon>0),\]
where for \(B\subseteq X\), \(B^{\delta}=\bigcup_{y\in B}\{x\in X:d(x,y)<\delta\}\).
Suppose \(\mathcal{B}\) is a bornology on a metric space \((X,d)\). Then the topology \(\tau_{\mathcal{B}}\) on \(C(X)\) is induced by the collection \(\mathcal{P}=\{\rho_{B}:B\in\mathcal{B}\}\) of extended seminorms, where \(\rho_{B}(f)=\sup_{x\in B}|f(x)|\) for \(f\in C(X)\). Similarly, the topology \(\tau_{\mathcal{B}}^{s}\) on \(C(X)\) is induced by the collection \(\mathcal{P}=\{\rho_{B}^{s}:B\in\mathcal{B}\}\) of extended seminorms, where \(\rho_{B}^{s}(f)=\inf_{\delta>0}\left\{\sup_{x\in B^{\delta}}|f(x)|\right\}\) for \(f\in C(X)\). Hence both the function spaces \((C(X),\tau_{\mathcal{B}})\) and \((C(X),\tau_{\mathcal{B}}^{s})\) are actually extended locally convex spaces. For more details related to \(\tau_{\mathcal{B}}\) and \(\tau_{\mathcal{B}}^{s}\), we refer to [3, 6, 7].
Recall that if \((X,\tau)\) is a locally convex space with the dual \(X^{*}\), then the _Mackey topology_\(\tau_{M}\) is a locally convex space topology on \(X\) whose neighborhood base at \(0\) is given by
\[\mathcal{B}_{M}=\{B_{\circ}:B\ \text{is an absolutely convex and weak}^{*}\ \text{ compact subset of}\ X^{*}\}.\]
We say \((X,\tau)\) is a _Mackey space_ if \(\tau=\tau_{M}\). Note that \(\tau_{M}\) is the largest locally convex topology on \(X\) such that \((X,\tau_{M})^{*}=X^{*}\). For more details about Mackey topology, we refer to [12, 13].
**Proposition 4.3**.: _Suppose \(\tau_{w^{*}}\) and \(\tau_{w}\) are the weak\({}^{*}\) and weak topologies for an elcs \((X,\tau)\), respectively. Then_
\[\mathcal{B}=\{B^{\circ}:B\ \text{is absolutely convex and compact in}\ (X,\tau_{w})\}\]
_is a neighborhood base at \(0\) in \((X^{*},\tau_{M})\), where \(\tau_{M}\) is the Mackey topology for the locally convex space \((X^{*},\tau_{w^{*}})\)._
Proof.: Let \(D\) be an absolutely convex and weak\({}^{*}\) compact subset of \((X^{*},\tau_{w^{*}})^{*}\). Then \(D=\{J_{x}:x\in A\}\) for some \(A\subseteq X\) as \((X^{*},\tau_{w^{*}})^{*}=\{J_{x}:x\in X\}\). Since \(D\) is absolutely convex, \(A\) is absolutely convex. Note that a net \((x_{\lambda})\) in \(X\) converges weakly to \(x\) if and only if \(f(x_{\lambda})\to f(x)\) for every \(f\in X^{*}\) if and only if \(J_{x_{\lambda}}(f)\to J_{x}(f)\) for all \(f\in X^{*}\). Therefore \(A\) is weakly compact. It is easy to prove that \(A^{\circ}=D_{\circ}\). Which completes the proof.
**Theorem 4.4**.: _Suppose \((X,\tau)\) is an elcs. Then \((X,\tau)\) is semi-reflexive if and only if every bounded subsets of \((X,\tau)\) is relatively weakly compact._
Proof.: Suppose \((X,\tau)\) is semi-reflexive. Then \((X^{*},\tau_{ucb})^{*}=(X^{*},\tau_{w^{*}})^{*}=\{J_{x}:x\in X\}\). Therefore \(\tau_{ucb}\subseteq\tau_{M}\), where \(\tau_{M}\) is the Mackey topology for \((X^{*},\tau_{w^{*}})\). Now, let \(A\) be any bounded set in \((X,\tau)\). Then \(A^{\circ}\) is a neighborhood of \(0\) in \((X^{*},\tau_{ucb})\). Consequently, \(A^{\circ}\) is a neighborhood of \(0\) in \((X^{*},\tau_{M})\). By
Proposition 4.3, there exists an absolutely convex and weakly compact subset \(B\) of \(X\) such that \(B^{\circ}\subseteq A^{\circ}\). Therefore \(A\subseteq(A^{\circ})_{\circ}\subseteq(B^{\circ})_{\circ}\). By applying Bipolar Theorem on \(B\) in \((X,\tau_{w})\), we obtain \((B^{\circ})_{\circ}=B\). Hence \(A\) is relatively weakly compact.
Conversely, suppose every bounded set in \((X,\tau)\) is relatively weakly compact. Then \(\tau_{ucb}\subseteq\tau_{M}\). Therefore \((X^{*},\tau_{ucb})^{*}=(X^{*},\tau_{M})^{*}=(X^{*},\tau_{w^{*}})^{*}=\{J_{x}:x\in X\}\). Hence \((X,\tau)\) is semi-reflexive.
**Theorem 4.5**.: _Let \(\mathcal{B}\) be a bornology with a closed base on a metric space \((X,d)\). If \((C(X),\tau_{\mathcal{B}}^{s})\)\((\)or \((C(X),\tau_{\mathcal{B}}))\) is reflexive, then \(\mathcal{K}\subseteq\mathcal{B}\)._
Proof.: It follows from Theorems 3.15, 4.3 in [9] and Proposition 3.9.
**Theorem 4.6**.: _Let \(\mathcal{B}\) be a bornology with a closed base on a metric space \((X,d)\). If \((C(X),\tau_{\mathcal{B}}^{s})\)\((\)or \((C(X),\tau_{\mathcal{B}}))\) is reflexive, then \(X\) is a discrete space._
Proof.: Suppose \(K\) is any compact set in \((X,d)\). Then by Theorem 4.5, \(K\in\mathcal{B}\). We show that \(K\) is a finite set. Suppose, on the contrary, that \(K\) is infinite. Then \(K\) contains a sequence \((x_{n})\) of distinct points converging to some \(x_{0}\in K\) with \(x_{n}\neq x_{0}\) for every \(n\in\mathbb{N}\). For every \(n\in\mathbb{N}\), there exists an \(f_{n}\in C(X,[0,1])\) such that \(f_{n}(x_{j})=0\) for \(1\leq j\leq n\) and \(f_{n}(x_{0})=1\). Consider \(A=\{f_{n}:n\in\mathbb{N}\}\). Clearly, \(A\) is bounded in \((C(X),\tau_{\mathcal{B}}^{s})\). By Theorem 4.4, \(A\) is relatively weakly compact in \((C(X),\tau_{\mathcal{B}}^{s})\). Therefore there exists a weak cluster point \(f\) of \(A\). It is easy to prove that the topology of pointwise convergence is coarser than the weak topology for \((C(X),\tau_{\mathcal{B}}^{s})\), as each map \(\Psi_{x}:(C(X),\tau_{\mathcal{B}}^{s})\to\mathbb{R}\) defined by \(\Psi_{x}(g)=g(x)\) is continuous for every \(x\in X\). For every \(m\in\mathbb{N}\) and \(\epsilon>0\), there exists an \(n>m\) such that \(|f(x_{m})-f_{n}(x_{m})|<\epsilon\). Therefore \(f(x_{m})=0\) for every \(m\in\mathbb{N}\). Consequently, \(f(x_{m})\to f(x_{0})=0\). But there also exists an \(n\in\mathbb{N}\) such that \(|f(x_{0})-f_{n}(x_{0})|<\frac{1}{2}\), which gives \(f(x_{0})>\frac{1}{2}\). We arrive at a contradiction, so every compact subset of \(X\) is finite. Hence \(X\) is a discrete space.
Suppose \((X,d)\) is a metric space. Then the topology \(\tau_{u}\) of uniform convergence on \(C(X)\) is induced by the extended norm \(\parallel f\parallel_{\infty}=\sup_{x\in X}|f(x)|\). It is known that for a compact space \((X,d)\), the normed space \((C(X),\parallel\cdot\parallel_{\infty})\) is reflexive if and only if \(X\) is finite (Example 15.5.2, p. 502 in [12]). We now prove a similar result without assuming \((X,d)\) to be compact. The next theorem also shows that the converse of Theorem 4.6 may not be true.
**Theorem 4.7**.: _Suppose \((X,d)\) is a metric space. Then the uniform space \((C(X),\parallel\cdot\parallel_{\infty})\) is reflexive if and only if \(X\) is finite._
Proof.: If \(X\) is finite, then \(C(X)\) is finite dimensional. Therefore \((C(X),\parallel\cdot\parallel_{\infty})\) is reflexive. Conversely, suppose \((C(X),\parallel\cdot\parallel_{\infty})\) is reflexive and \(X\) is infinite. Then by Example 15.5.2, p. 502 in [12], \(X\) cannot be compact. So there exists a countable, closed and discrete subset \(T=\{t_{n}:n\in\mathbb{N}\}\) of \(X\). If \(Y=\{f\in C(X):f=0\text{ on }T\}\), then \(Y\) is a closed subspace of \(C(X)\). By
Theorem 3.22, \(C(X)/Y\) is reflexive. Now, we show that the space \(l_{\infty}\) of all bounded real sequences is isometrically isomorphic to a subspace of \(C(X)/Y\). Let \(z=(z_{n})\in l_{\infty}\), \(m=\underset{n\in\mathbb{N}}{\inf}z_{n}\) and \(M=\underset{n\in\mathbb{N}}{\sup}z_{n}\). Since \(T\) is discrete and closed, by the Tietze extension theorem, there exists an \(f_{z}\in C(X)\) such that \(f_{z}(t_{n})=z_{n}\) for \(n\in\mathbb{N}\). Define \(F_{z}(x)=\max\{m,\min\{M,f_{z}(x)\}\}\) for \(x\in X\). Then \(F_{z}\in C(X)\) with \(F_{z}(t_{n})=z_{n}\) for \(n\in\mathbb{N}\). Consider \(\Psi:l_{\infty}\to C(X)/Y\) given by \(\Psi(z)=F_{z}+Y\). Then \(\Psi\) is linear, as \(F_{\alpha z+w}=\alpha F_{z}+F_{w}\) on \(T\) for \(z,w\in l_{\infty}\) and \(\alpha\in\mathbb{R}\), so that \(F_{\alpha z+w}-\alpha F_{z}-F_{w}\in Y\). Note that if \(z=(z_{n})\in l_{\infty}\) and \(f\in Y\), then \(\parallel F_{z}+Y\parallel_{q}\leq\parallel F_{z}\parallel_{\infty}\leq\max\{|m|,|M|\}=\parallel z\parallel_{\infty}\) and \(\parallel z\parallel_{\infty}=\underset{n\in\mathbb{N}}{\sup}|z_{n}|=\underset{n\in\mathbb{N}}{\sup}|F_{z}(t_{n})|=\underset{n\in\mathbb{N}}{\sup}|F_{z}(t_{n})-f(t_{n})|\leq\parallel F_{z}-f\parallel_{\infty}\). Therefore \(\Psi\) is an isometry. By Theorem 3.22, \(\Psi(l_{\infty})\) is reflexive. Consequently, by Exercise 3.61, p. 97 in [8], \(l_{\infty}\) is reflexive, which is not true.
|
2309.07361 | Judging a video by its bitstream cover | Classifying videos into distinct categories, such as Sport and Music Video,
is crucial for multimedia understanding and retrieval, especially in an age
where an immense volume of video content is constantly being generated.
Traditional methods require video decompression to extract pixel-level features
like color, texture, and motion, thereby increasing computational and storage
demands. Moreover, these methods often suffer from performance degradation in
low-quality videos. We present a novel approach that examines only the
post-compression bitstream of a video to perform classification, eliminating
the need for bitstream decoding. We validate our approach using a custom-built data set
comprising over 29,000 YouTube video clips, totaling 6,000 hours and spanning
11 distinct categories. Our preliminary evaluations indicate precision,
accuracy, and recall rates well over 80%. The algorithm operates approximately
15,000 times faster than real-time for 30fps videos, outperforming traditional
Dynamic Time Warping (DTW) algorithm by six orders of magnitude. | Yuxing Han, Yunan Ding, Jiangtao Wen, Chen Ye Gan | 2023-09-14T00:34:11Z | http://arxiv.org/abs/2309.07361v1 | # Judging a video by its bitstream cover
###### Abstract
Classifying videos into distinct categories, such as Sport and Music Video, is crucial for multimedia understanding and retrieval, especially in an age where an immense volume of video content is constantly being generated. Traditional methods require video decompression to extract pixel-level features like color, texture, and motion, thereby increasing computational and storage demands. Moreover, these methods often suffer from performance degradation in low-quality videos. We present a novel approach that examines only the post-compression bitstream of a video to perform classification, eliminating the need for bitstream. We validate our approach using a custom-built data set comprising over 29,000 YouTube video clips, totaling 6,000 hours and spanning 11 distinct categories. Our preliminary evaluations indicate precision, accuracy, and recall rates well over 80%. The algorithm operates approximately 15,000 times faster than real-time for 30fps videos, outperforming traditional Dynamic Time Warping (DTW) algorithm by six orders of magnitude.
**Keywords:** Content Analysis, Video Classification, Entropy Coding
## 1 Introduction
Video classification is fundamental for multimedia services, enabling functionalities such as content retrieval, recommendation, and optimized coding. Traditional techniques focus on analyzing video features such as color, texture, and motion, while recent approaches incorporate deep learning-based algorithms[1][2][3][4].
A major drawback of current approaches is their dependency on pixel-domain features, resulting in computational and storage inefficiencies. This issue is exacerbated by the high-volume video uploads to platforms like YouTube and TikTok, where videos are typically compressed. Extracting pixel data from such compressed videos necessitates full decoding, leading to a storage increase ratio of up to 75:1 for a 1080p30 video compressed at 10 Mbps. Even with optimizations like down-sampling spatial and temporal resolutions and specialized low-memory-footprint classification techniques [5][6][7], the classification of the 30,000 hours of video uploaded to YouTube every hour would necessitate a method operating thousands of times faster than real-time, while consuming hundreds of times more storage. Furthermore, these techniques often falter in classifying low-quality videos and raise privacy issues, as decryption is needed. Some videos, due to DRM policies, cannot even be decrypted during transmission.
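For concreteness, the 75:1 figure corresponds to raw 8-bit 4:2:0 frames (an assumption on the exact pixel format, which is not stated here):
\[1920\times 1080\times 1.5\times 8\ \text{bits/frame}\times 30\ \text{fps}\approx 746\ \text{Mbps},\qquad\frac{746\ \text{Mbps}}{10\ \text{Mbps}}\approx 75.\]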
In contrast, our approach does not rely on pixel-domain information. Instead, we use the compressed bitstream as input to a ResNet-based deep neural network, without the need for bitstream decoding or parsing. This approach leverages the intricate information encapsulated by modern video compression algorithms, particularly the advanced spatial and temporal prediction methods found in modern video coding standards such as H.264/AVC [8], H.265/HEVC [9] and H.266/VVC [10]. This information includes whether a block is predicted spatially or temporally ("Prediction Mode"), as well as the location of the reference for prediction (e.g. "Motion Vector"), while a small number of bits represent the prediction error ("residual"). Optimized encoders also incorporate advanced rate control algorithms, which allocate and enforce bitrates across and inside frames. At a very high level, the complexity of individual frames determines the size of the reference frames after compression, the uniformity of the frames determines the size of the predicted frames, whereas the frequency of scene changes is determined by factors such as camera movement rhythms. Utilizing solely this information-rich compressed stream, our methodology significantly reduces computational and storage demands, while ensuring privacy and security.
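The paper does not prescribe a particular tool for reading the per-frame sizes from the bitstream; the following minimal Python sketch (our illustration, assuming `ffprobe` is installed) obtains them from the packet sizes of the video stream, which for typical codecs correspond one-to-one with encoded frames:

```python
import subprocess

def frame_sizes_bits(path):
    """Return the compressed size (in bits) of each video packet/frame."""
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "packet=size", "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    ).stdout
    # one size (in bytes) per line; convert to bits
    return [int(line.split(",")[0]) * 8
            for line in out.splitlines() if line.strip()]

sizes = frame_sizes_bits("clip.mp4")  # hypothetical input file
print(len(sizes), sizes[:5])
```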
To evaluate the efficacy of our approach, we curated a comprehensive dataset comprising 29,142 video segments across 11 broad YouTube categories, collectively exceeding 6,000 hours in duration. Existing datasets, such as YouTube-8M [11], ActivityNet [12], UCF101 [13], and Sports-1M [14], primarily focus on short-form content, often limiting clips to less than 5 seconds. Our dataset, illustrated in Figure 1, is designed to be more encompassing, sourced through keyword and metadata searches on YouTube.
Through experiments, we observed precision, accuracy, and recall rates consistently exceeding 80% across numerous test cases. The algorithm demonstrated particular sensitivity to distinct "editing styles", including factors such as shot selection, camera angles, and movement. Remarkably, even when each video frame was represented by a singular numerical value, the classifier could effectively distinguish between diverse video categories such as NBA games, football matches, and classical or pop concerts. For categories in which editing styles are more diverse, the proposed method was less effective. However, it was still able to identify specific vloggers on YouTube.

Figure 2: Video Clip Statistics.
Traditionally, we're cautioned against judging a book by its cover. However, much can be inferred about a book's caliber from its cover -- the paper quality, meticulous typesetting, and chosen color palette. In a similar vein, our research reveals that a video's compression-encoded bitstream is a treasure trove of information, enabling us to appraise a video by its "bitstream cover" with impressive accuracy. This encoded bitstream, while only a fraction of the size of the original video, originates from a sophisticated encoding process and comprises hundreds, if not thousands, of data points. The sophisticated encoding process, combined with the encoded bitstream's sheer length, makes it a prime target for deep learning to extract insights.
We believe in the potential that exploring the encoded bitstream offers for video classification. This method is especially beneficial for large-scale video analysis and archive digitization, given its capacity to rapidly process vast data sets. Its efficiency also makes it suitable for real-time applications such as broadcasting and digital marketing. Nonetheless, limitations exist; the approach struggles with videos of similar editing styles, as seen in Gaming videos, where the predominant style is recorded gameplay accompanied by a floating-head commentary, and with certain finer nuances, such as discerning between NBA clips featuring different basketball stars. However, recognizing its promise, we have open-sourced our model, inviting research into encoded bitstreams and the development of more sophisticated models aimed at video classification without decompression; details can be found in the 'Code Availability' section.
## 2 Results
### Data preparation
We created a large data set consisting of 29,142 video clips, each containing at least 3,000 frames, and their corresponding bitstreams compressed using different encoding settings or downloaded from YouTube ("entropy coded covers"). The clips were selected by searching for key words, and then chosen based on high play volume or high attention. The clips span 11 large and diverse YouTube channels [15], including Movie, Entertainment, Knowledge, Cooking & Health, Gaming, Technology, Music, Sports, Beauty & Fashion, News, and Education. Detailed video statistics are shown in Fig. 2.
### Hypothesis verification
We propose that videos across various categories exhibit unique style and editing traits, influencing the encoding decisions made by optimized video encoders. These resulting encoded bitstreams may provide revealing high-level statistics, useful for classifying videos with pronounced stylistic differences--such as NBA vs. Football or Pop vs. Classical concerts. This also holds true for content from different social media influencers. Figure 3 substantiates our claim, showing that video clips from various categories display distinct temporal patterns.
Figure 3: Compressed video size (in bits) as a function of frame number
Figure 4: Kullback-Leibler Divergence.
Figure 5: Effectiveness on broad video categories on YouTube
Figure 6: Performance on channels with heavy overlap in content
Figure 7: Evaluating ability to discriminate individual vloggers within a channel

We then calculated the inter- and intra-class Kullback-Leibler Divergence (KLD) of the test data set. For clarity and space considerations, we report results for eight sub-classes, namely NBA, TikTok, Football, Classical Concerts, Pop Concerts, Gaming (League of Legends), Music Videos, and Culinary Exploration. As evident in Fig. 4, the intra-class KLD--a single, first-order metric at the clip-scale level--is typically orders of magnitude smaller than the KLD between the same class and other classes, despite its limitations in capturing long-term and high-order patterns in the bitrate time distribution.
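The exact procedure used to compute the inter- and intra-class KLD is not spelled out in the text; one plausible clip-level formulation (a sketch under our own assumptions about binning) compares normalized histograms of the per-frame sizes:

```python
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(0)
clip_a = rng.gamma(2.0, 5e4, size=3000)   # stand-ins for per-frame sizes in bits
clip_b = rng.gamma(4.0, 5e4, size=3000)

bin_edges = np.linspace(0, 1e6, 65)       # shared bins so the histograms are comparable

def clip_distribution(frame_bits, edges=bin_edges):
    hist, _ = np.histogram(frame_bits, bins=edges)
    hist = hist.astype(float) + 1e-9      # smooth empty bins so the KLD stays finite
    return hist / hist.sum()

d_ab = entropy(clip_distribution(clip_a), clip_distribution(clip_b))  # D(A || B)
print(f"KLD between the two clips: {d_ab:.3f} nats")
```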
It should also be noted that the clips in the categories conceptually overlap, e.g. TikTok might also contain NBA or Dance clips. We intentionally created such overlaps to verify the capability of the algorithm to capture the "main characteristics" of the content. Because we built the training and testing data sets using search and meta-data, we tend to only discover highly popular and viral clips for each category. For an NBA-themed clip to become viral on TikTok, it usually follows a certain recognizably TikTok style, as a result of the TikTok filters, editing tools and various social marketing "rules" which makes it distinctively different from a clip of standard NBA broadcast. We believe it useful to distinguish an NBA clip as such when it originates from a live broadcast, but to identify that same clip as TikTok content once it has been integrated and disseminated within the TikTok platform. In no cases was the proposed classification algorithm in the loop of test clip class verification and assignment to the training and test data sets.
### Evaluating classifier performance across diverse data sets
We employ a ResNet-based classifier[16] to validate our approach, using the sizes of the encoded frames in bits as the input features. It is worth noting that the reported performance could potentially be enhanced by training a more sophisticated classifier that leverages higher-order input features.
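The text identifies the model only as a ResNet-based network whose input is the sequence of encoded-frame sizes; the depth, widths, and preprocessing below are therefore our own illustrative assumptions, not the paper's exact architecture:

```python
import torch
import torch.nn as nn

class ResidualBlock1D(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm1d(channels)
        self.relu = nn.ReLU()

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)          # identity shortcut

class BitstreamResNet(nn.Module):
    """Classifies a clip from its per-frame compressed sizes (in bits)."""
    def __init__(self, num_classes, channels=64, num_blocks=4):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm1d(channels), nn.ReLU())
        self.blocks = nn.Sequential(*[ResidualBlock1D(channels)
                                      for _ in range(num_blocks)])
        self.head = nn.Linear(channels, num_classes)

    def forward(self, frame_sizes):        # frame_sizes: (batch, num_frames)
        x = frame_sizes.unsqueeze(1)       # -> (batch, 1, num_frames)
        x = self.blocks(self.stem(x))
        return self.head(x.mean(dim=-1))   # global average pooling over time

model = BitstreamResNet(num_classes=11)
logits = model(torch.randn(2, 3000))       # two dummy clips of 3,000 frame sizes
```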
To evaluate the robustness and accuracy of our classifier, we carried out tests in three distinct setups: coarse-grained YouTube channel classification, a mixed-category test using clips from channels with similar content, and a fine-grained classification focusing on different vloggers within the same category.
Initially, we evaluate the classifier's performance using high-level YouTube channel classifications, including but not limited to Movie, Entertainment, Knowledge, Cooking & Health, Gaming, Technology, Music, Sports, Beauty & Fashion, News, and Education. Notably, the Music category encompasses subcategories like Classical and Pop concerts, T-Series, and music videos, while the Sports category includes NBA, Football, WWE, and MLB among others (see Supplementary Table 1 for more information). Fig. 5 shows that the classification accuracy exceeds 80% for all categories, even surpassing 95% in many instances. The lowest accuracies were recorded in the Movie and Entertainment categories, which naturally have a wide range of content types.
Next, we evaluated the classifier's versatility by examining its performance on TikTok--a platform renowned for its wide array of content. Specifically, we mixed TikTok content with content from various categories prevalent on TikTok, including NBA, Classical Music, Football, and Pop Music (Fig. 6). The goal was to investigate the classifier's ability to distinguish between, for instance, NBA-related content originating from TikTok and that posted by the NBA itself. We found that the classifier maintained an accuracy rate above 80% for most categories, with Pop Music being an exception, likely due to its stylistic diversity. Notably, the classifier achieved a respectable 89% accuracy rate for TikTok content, a finding we attribute to TikTok's widespread use of trending filters.
Finally, we assessed the classifier's capability for fine-grained distinctions within broad categories like Gaming, Cooking & Health, and Beauty & Fashion (Fig. 7). The classifier demonstrated over 90% accuracy in distinguishing vloggers within the Cooking & Health and Beauty & Fashion categories. However, it struggled to differentiate between vloggers in the Gaming category. This limitation arises because videos in the Gaming category often combine gameplay screen captures and vlogger commentary, making them inherently similar and thus challenging to classify.
Overall, the results confirm the efficacy of "entropy-encoded covers" as a key feature for video classification.
### Robustness evaluation across unseen bitrates
We first studied the impact of the mismatch between the bitrate at which the ResNet classifier is trained and the bitrate of the input clips to the classifier. Table 1 shows the performance when the ResNet model was trained using 3,000 frames of video clips encoded in average bitrate (ABR) mode at 1.5Mbps, 1.2Mbps, 1.0Mbps, 800kbps, and 500kbps respectively, for the classification of 3,000 frames of videos also encoded in ABR at between 1.5Mbps and 500kbps.
Clearly, classification performance was usually best when the model trained at a certain bitrate was used for inputs at the same bitrate. But even for video encoded down to about 50% of the model rate, e.g. 800kbps inputs with a model trained at 1.5Mbps, the classification still degrades gracefully. Only when the bitrate of the input videos to be classified dips down to about 1/3 of the model training bitrate does the classification become a guess.
### Rate control algorithms and B frames
To rigorously assess the robustness of our classifier across varying encoding settings, we initially conducted tests using Average Bitrate (ABR) mode (Table
1). Subsequently, we explored the Constant Bitrate (CBR) encoding scheme for a subset of 3,000 frames across a bitrate range of 800Kbps to 1.5Mbps. The performance with and without the use of B-frames is detailed in Tables 2 and 3, respectively.
Our findings indicate robust classification performance in both conditions. Notably, the incorporation of B-frames yielded even higher accuracy, possibly because B-frames introduce bi-directional correlations in the encoded bitstream. These correlations may better capture the inherent characteristics of the video, thereby enriching the bitstream representation.
We further scrutinized the classifier's performance under the Constant Rate Control (CRC) mode, utilizing varying Constant Rate Factors (CRFs) including 0, 18, 23, 28, and 51. The results, with and without B-frames, are cataloged in Table 4 and Table 5, respectively. Although the overall classification remained robust, the impact of model mismatch was significantly higher than in the ABR and CBR modes. This discrepancy is likely attributable to the perceptual quality model in CRF encoding, which allocates differing weights to motion and texture clarity, thus affecting both the encoding choices and the temporal distribution of the bitrate.
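The exact encoder invocations behind the ABR, CBR, and CRF experiments are not listed in the paper; the following Python wrapper (our illustration, using common ffmpeg/x264 options) shows one way such settings can be produced:

```python
import subprocess

def encode(src, dst, mode, kbps=1500, crf=23, b_frames=True):
    """Re-encode a clip with x264 under one of the rate-control modes above."""
    args = ["ffmpeg", "-y", "-i", src, "-c:v", "libx264", "-an"]
    if not b_frames:
        args += ["-bf", "0"]                              # disable B-frames
    if mode == "abr":
        args += ["-b:v", f"{kbps}k"]                      # average bitrate target
    elif mode == "cbr":                                   # constant-bitrate approximation
        args += ["-b:v", f"{kbps}k", "-minrate", f"{kbps}k",
                 "-maxrate", f"{kbps}k", "-bufsize", f"{kbps}k"]
    elif mode == "crf":
        args += ["-crf", str(crf)]                        # constant rate factor
    subprocess.run(args + [dst], check=True)

# e.g. encode("clip.mp4", "clip_abr_1500k.mp4", "abr", kbps=1500)
```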
\begin{table}
\begin{tabular}{c|c c c c c|c} \hline \hline Performance & \multicolumn{4}{c|}{Test Sets (Bitrate in kbps)} & \multicolumn{2}{c}{Training Sets} \\ Metric (\%) & 500 & 800 & 1000 & 1200 & 1500 & (Bitrate in kbps) \\ \hline \hline Precision & 85.70 & 75.62 & 68.82 & 64.16 & 54.00 & \\ Accuracy & 85.02 & 66.34 & 50.99 & 39.60 & 25.99 & 500 \\ Recall & 85.79 & 69.97 & 55.82 & 43.80 & 28.38 & \\ \hline Precision & 75.54 & 86.87 & 85.35 & 79.39 & 70.15 & \\ Accuracy & 75.62 & 87.62 & 83.54 & 72.40 & 52.35 & 800 \\ Recall & 74.93 & 88.01 & 84.79 & 75.90 & 56.91 & \\ \hline Precision & 71.24 & 86.82 & 87.98 & 85.55 & 78.12 & \\ Accuracy & 70.42 & 88.24 & 88.00 & 84.90 & 72.90 & 1000 \\ Recall & 70.63 & 88.65 & 88.48 & 86.00 & 76.65 & \\ \hline Precision & 58.25 & 69.78 & 75.19 & 74.86 & 72.25 & \\ Accuracy & 33.29 & 56.06 & 70.92 & 71.53 & 63.00 & 1200 \\ Recall & 38.74 & 59.07 & 74.62 & 75.56 & 68.84 & \\ \hline Precision & 49.41 & 72.67 & 78.55 & 84.76 & 85.23 & \\ Accuracy & 42.45 & 73.14 & 80.45 & 85.77 & 85.64 & 1500 \\ Recall & 45.61 & 73.68 & 80.56 & 86.30 & 86.35 & \\ \hline \hline \end{tabular}
\end{table}
Table 1: Classification performance with ABR encoding using B frames at various bitrates
### Impact of input size
To assess the classifier's performance as influenced by the number \(N\) of frames utilized for model training and classification, we experimented with a range of \(N\) values: 120, 240, 360, 480, 600, 720, 840, 960, 1200, 2400, 3600, and 4800 frames. The corresponding classification outcomes for the Average Bitrate (ABR) scenario are depicted in Fig. 8. The results indicate that smaller \(N\) values may compromise classification performance, likely due to the influence of localized content variations. Conversely, as \(N\) increases, classification efficacy enhances and reaches a stable state.
\begin{table}
\begin{tabular}{c|c c c c|c} \hline \hline Performance & \multicolumn{3}{c|}{Test Sets (Bitrate in kbps)} & \multicolumn{2}{c}{Training Sets} \\ Metric (\%) & 800 & 1000 & 1200 & 1500 & (Bitrate in kbps) \\ \hline \hline Precision & 85.11 & 79.66 & 67.16 & 58.17 & \\ Accuracy & 85.27 & 74.63 & 42.20 & 21.16 & 800 \\ Recall & 85.72 & 77.27 & 48.34 & 29.31 & \\ \hline Precision & 85.16 & 87.93 & 84.80 & 66.10 & \\ Accuracy & 86.14 & 87.50 & 80.94 & 34.78 & 1000 \\ Recall & 86.67 & 88.14 & 82.67 & 42.65 & \\ \hline Precision & 79.19 & 84.65 & 85.76 & 79.34 & \\ Accuracy & 80.94 & 85.40 & 86.26 & 73.76 & 1200 \\ Recall & 81.74 & 86.48 & 87.47 & 76.69 & \\ \hline Precision & 66.07 & 73.66 & 81.19 & 84.55 & \\ Accuracy & 61.26 & 72.15 & 80.82 & 83.17 & 1500 \\ Recall & 60.65 & 71.06 & 80.70 & 83.24 & \\ \hline \hline \end{tabular}
\end{table}
Table 2: Classification performance using CBR encoding with B frames at various bitrates
\begin{table}
\begin{tabular}{c|c c c c|c} \hline \hline Performance & \multicolumn{3}{c|}{Test Sets (Bitrate in kbps)} & \multicolumn{2}{c}{Training Sets} \\ Metric(\%) & 800 & 1000 & 1200 & 1500 & (Bitrate in kbps) \\ \hline \hline Precision & 84.49 & 80.99 & 70.50 & 56.61 & \\ Accuracy & 83.79 & 74.01 & 46.16 & 20.67 & 800 \\ Recall & 85.33 & 76.33 & 50.64 & 28.39 & \\ \hline Precision & 78.63 & 83.47 & 80.89 & 68.80 & \\ Accuracy & 78.22 & 83.29 & 76.36 & 42.45 & 1000 \\ Recall & 78.47 & 84.44 & 78.62 & 48.08 & \\ \hline Precision & 69.78 & 77.83 & 82.18 & 79.59 & \\ Accuracy & 64.11 & 75.99 & 81.19 & 74.38 & 1200 \\ Recall & 64.25 & 76.32 & 82.81 & 77.94 & \\ \hline Precision & 59.41 & 72.15 & 78.54 & 84.11 & \\ Accuracy & 58.04 & 72.77 & 79.58 & 82.92 & 1500 \\ Recall & 55.06 & 72.01 & 80.28 & 84.41 & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Classification performance using CBR encoding without B frames at various bitrates
### Classification speed
Our ResNet-based video classifier is not specifically optimized for speed. Even so, on a server equipped with a single Nvidia A100 GPU, it processed 1,818 test videos--each containing 3,000 frames--in less than 13 seconds. This corresponds to a real-time factor of approximately 15,000 for 30fps
\begin{table}
\begin{tabular}{c|c c c c c|c} \hline \hline Performance & \multicolumn{3}{c|}{Test Sets (Constant Rate Factors)} & \multicolumn{2}{c}{Training Sets} \\ Metric(\%) & 0 & 18 & 23 & 28 & 51 & Constant Rate Factors \\ \hline \hline Precision & 81.94 & 7.15 & 21.95 & 10.01 & 1.24 & \\ Accuracy & 82.30 & 14.79 & 13.86 & 12.87 & 9.90 & 0 \\ Recall & 82.69 & 17.31 & 15.48 & 14.99 & 12.50 & \\ \hline Precision & 16.63 & 84.85 & 77.59 & 63.32 & 17.53 & \\ Accuracy & 11.51 & 85.52 & 78.34 & 56.44 & 7.55 & 18 \\ Recall & 11.38 & 86.43 & 81.08 & 62.07 & 10.63 & \\ \hline Precision & 12.64 & 77.59 & 85.13 & 63.32 & 17.53 & \\ Accuracy & 17.08 & 78.34 & 85.77 & 56.44 & 7.55 & 23 \\ Recall & 13.58 & 81.08 & 85.88 & 62.07 & 10.63 & \\ \hline Precision & 12.64 & 52.06 & 80.21 & 87.05 & 9.42 & \\ Accuracy & 17.08 & 45.79 & 76.86 & 87.13 & 19.93 & 28 \\ Recall & 13.58 & 45.79 & 77.15 & 87.50 & 21.49 & \\ \hline Precision & 0.42 & 0.42 & 0.85 & 27.94 & 86.91 & \\ Accuracy & 3.34 & 3.34 & 3.47 & 5.32 & 86.14 & 51 \\ Recall & 12.50 & 12.50 & 12.59 & 14.27 & 86.82 & \\ \hline \hline \end{tabular}
\end{table}
Table 4: Classification performance using different Constant Rate Factors with B frames
\begin{table}
\begin{tabular}{c|c c c c c|c} \hline \hline Performance & \multicolumn{3}{c|}{Test Sets (Constant Rate Factors)} & \multicolumn{2}{c}{Training Sets} \\ Metric(\%) & 0 & 18 & 23 & 28 & 51 & Constant Rate Factors \\ \hline \hline Precision & 82.59 & 8.18 & 6.01 & 7.50 & 1.24 & \\ Accuracy & 82.67 & 12.75 & 10.52 & 9.90 & 9.90 & 0 \\ Recall & 83.54 & 15.14 & 13.01 & 12.42 & 12.50 & \\ \hline Precision & 6.28 & 81.94 & 77.34 & 64.87 & 2.51 & \\ Accuracy & 17.08 & 82.05 & 74.63 & 52.48 & 10.89 & 18 \\ Recall & 14.02 & 83.64 & 77.88 & 58.58 & 13.25 & \\ \hline Precision & 3.99 & 76.41 & 82.30 & 75.76 & 6.47 & \\ Accuracy & 17.33 & 72.77 & 82.30 & 74.38 & 13.00 & 23 \\ Recall & 13.15 & 73.82 & 83.24 & 77.42 & 15.33 & \\ \hline Precision & 8.53 & 54.65 & 78.07 & 84.70 & 10.47 & \\ Accuracy & 17.20 & 42.70 & 73.51 & 83.42 & 16.21 & 28 \\ Recall & 12.76 & 44.71 & 75.00 & 84.41 & 17.89 & \\ \hline Precision & 3.61 & 5.40 & 19.49 & 29.24 & 82.48 & \\ Accuracy & 13.12 & 14.98 & 20.54 & 24.63 & 82.67 & 51 \\ Recall & 12.99 & 14.84 & 18.98 & 22.55 & 83.58 & \\ \hline \hline \end{tabular}
\end{table}
Table 5: Classification performance using different Constant Rate Factors without B frames
videos, within striking distance of the 30,000-to-1 ratio of total video duration uploaded to YouTube per unit time.
In contrast, we evaluate the conventional non-deep-learning-based Dynamic Time Warping (DTW) algorithm [17], commonly applied in Time Series Classification (TSC) problems. Utilizing the same dataset and categories but restricting the videos to short 30-frame clips, the DTW algorithm required an extensive 27 hours for classification. Given DTW's computational complexity of \(O(N^{2})\), where \(N\) is the time series length, the algorithm is approximately \(7.5\times 10^{7}\) times slower than the ResNet classifier. Moreover, the DTW-based accuracy rate is less than 0.35. Although longer time series might slightly improve DTW's performance, the algorithm's inefficiency makes it an impractical alternative for long-sequence time-series classification of videos based on sequences of encoded frames.
## 3 Discussion
In this work, we introduce an innovative approach for video classification that leverages compressed bitstream representations of videos, eliminating the need for pixel-level decoding. This approach offers multiple advantages: it minimizes storage requirements, reduces computational overhead, enhances data privacy, and is resilient to performance degradation resulting from poor video quality.
Figure 8: Classification performance as a function of input size.
We demonstrate robust classification capabilities across both coarse and fine categories, including those that overlap. Furthermore, it operates with a speed of approximately 15,000 times real-time for videos at 30fps.
In our experiments, we employed a straightforward classifier that exclusively utilizes the time series of compressed video frame sizes as input. This information is readily accessible through various means--such as byte-aligned headers or Network Abstraction Layer (NAL) packets--without the need for decoding. This design not only simplifies computational complexity but also enhances data privacy. Remarkably, NAL packet-based analysis could even function with encrypted content, positioning our technique as a scalable solution for network carriers.
Although our classifier is still in its nascent stages with room for optimization, preliminary results are promising. One significant factor affecting classification performance is the number of frames (\(N\)) used for both training and classification. Our ongoing research includes adaptive methods that increment \(N\) progressively to improve classification outcomes. Additionally, we are examining the effects of employing diverse encoders and dynamically varying the
Figure 9: Bitrate variation for the same clip transcoded to different bitrates.
number of frames in the input clip for classification. In our tests, we intentionally selected \(N\) values that do not align with standard Group of Picture sizes (e.g., 30, 60) in an attempt to approximate the effects of adaptive intra-frame insertion. However, this area requires further investigation.
One limitation lies in the need to retrain the classifier when the number of classes changes. We suggest an adaptive model that could categorize new or unanticipated classes under a generic "Others" category, which could then serve as input for a specialized secondary classifier. We also limited our tests to distinguishing between categories with pronounced editing styles, such as Sports from Music Videos, and did not aim to differentiate, for instance, NBA clips featuring Michael Jordan from those featuring LeBron James. We anticipate challenges in identifying such nuanced distinctions, as those are likely obscured during the encoding process.
It is important to note that our work is an initial step and not the definitive solution in using a video's compression encoded bitstream representation for classification. Nevertheless, we believe our early findings demonstrate the considerable potential of this avenue. We have open sourced our model (see Code Availability), and aim to inspire subsequent research and discussion on developing more advanced classifiers that can surpass our current model without pixel-level decoding.
## 4 Methods
### Dataset and preprocessing
Distinct video categories display unique patterns related to scene transitions, texture, shot lengths, etc. When encoded by a rate-distortion optimized encoder, these patterns translate into varied bitstream distributions and entropy concerning motion vectors, modes, and bitrate allocations. This effect is even more pronounced for social network videos that use preset filters and special effects.
To test our hypothesis, we compiled a video dataset of 29,142 clips from YouTube, spanning 11 categories: Movies, Entertainment, Cooking & Health, Gaming, Technology, Music, Sports, Beauty & Fashion, News, and Education. Each category held a minimum of 400 clips, with each clip containing at least 3,000 frames. The clips varied in spatial resolution, from 298x480 for TikTok videos to 720p and 1080p for others, and were consistent in frame rate (either 30fps or 60fps) within their respective categories. The peak video bitrate recorded was 3Mbps. Fig. 3 illustrates the typical size sequences of compressed frames across categories.
For our experiment, we transcoded the videos to a 1.5Mbps bitrate using FFmpeg's H.264/AVC encoder, maintaining consistent encoding settings across all clips. As shown in Fig. 9, a test clip originally downloaded at 1.13Mbps was transcoded to both 1Mbps and 500kbps. While the encoding setting differed
from the original YouTube version, the frame size time series of various bitstreams remained highly correlated. This suggests that 1) post-compression frame sizes, reflecting bitrate variations, could be useful in video classification; and 2) it might be feasible to develop a single model capable of classifying videos across a spectrum of bitrates.
The Kullback-Leibler divergence results (Fig. 4) support this idea.
### Problem formulation
Video classification using the time sequence of the sizes of video frames after compression (in encoding order) is a typical Time Series Classification (TSC) problem [18][19][20]. We define the sizes of compressed frames in bits in encoding order as a time series \(X=[x_{1},x_{2},x_{3},...,x_{T}]\in\mathbb{R}^{T}\), where \(T\) represents the length of the time series. Consequently, video classification can be formulated as a mapping between \(X\) and a one-hot label vector \(Y\).
Traditional TSC algorithms often use distance-based metrics with a \(k\)-nearest neighbor classifier (\(K\)-NN). Such algorithms include Dynamic Time Warping (DTW) [17], Weighted DTW [21], Move-split-merge [22], Complexity invariant distance [23], Derivative DTW [24], Derivative transform distance [25], Elastic ensemble [26], etc., where DTW is often used as a baseline [19]. All such methods take the entire time series as input and are computationally intensive. Even following the reduced complexity TSC algorithm from Rodriguez et al. [27], TSC algorithms are in general, still highly time-consuming.
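To make the quadratic cost of DTW concrete, a minimal NumPy implementation of the underlying dynamic program is sketched below; it is a generic textbook version, not the specific baseline implementation used in our comparison.

```
# Minimal DTW distance between two frame-size sequences, illustrating the
# O(T^2) dynamic program referenced above.
import numpy as np

def dtw_distance(a: np.ndarray, b: np.ndarray) -> float:
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[n, m])

# A 1-NN classifier labels a query series with the class of the training
# series that minimizes this distance, which is why DTW scales poorly with T.
```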
In recent years, deep learning based TSC has been studied broadly. A study by Fawaz et al.[28] compared the TSC performances of Time Convolutional Neural Network (Time-CNN) [29], Encoder [30], Fully Convolutional Neural Networks (FCNs) [16], Multi Layer Perceptron (MLP) [16], Residual Network (ResNet) [16], Multi-scale Convolutional Neural Network (MCNN) [31], Multi Channel Deep Convolutional Neural Network (MCDCNN) [32], Time Le-Net (t-LeNet) [33], and Time Warping Invariant Echo State Network (TWIESN) [34] using the UCR/UEA archive [35] and MTS archive [36] data sets. It was concluded that deep residual network architecture performs the best. Based on the above studies, we used DTW as a baseline for traditional TSC algorithms, and ResNet as the representative structure for deep learning based algorithms.
### Video classification using temporal bitrate variation
In this section, we provide details for training the time series classifier.
Our primary objective in this study was to explore the potential of bitrate time sequence for video categorization. Consequently, we prioritized this over intricate neural network design and opted for the established ResNet classifier [16], depicted in Fig. 10.
Model input. We set the labeled input-target pairs \((X_{i},Y_{i})\in\mathcal{D}\) as input, where \(\mathcal{D}\in\mathbb{R}^{N\times T\times C}\) denotes a set of \(N\) time series of compressed frame sizes (in bits), each with a fixed length of \(T\) and \(C\) channels. In contrast to the traditional pixel-based approach, our approach treats each frame as a single value, as opposed to millions of pixel values. This significantly accelerates classification. The input length in our network is determined by the number of video frames used. The channel count depends on the bitstream configuration. For instance, using only compressed frame sizes as input yields a single channel. However, incorporating additional bitrate data from various parts of the bitstream, like motion vectors, headers, and texture information, can increase the number of channels.
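A minimal sketch of assembling this input is shown below. The helper name and the per-clip normalization are illustrative assumptions, and the per-frame sizes are assumed to have already been read from the bitstream (e.g. from packet headers) with at least \(T\) frames per clip.

```
# Stack per-clip frame-size series into an (N, T, C) array with C = 1.
import numpy as np

def build_input(frame_sizes_per_clip, T=3000):
    clips = []
    for sizes in frame_sizes_per_clip:          # sizes: list of ints, encoding order
        s = np.asarray(sizes[:T], dtype=np.float32)
        s = (s - s.mean()) / (s.std() + 1e-8)   # per-clip normalization (illustrative)
        clips.append(s[:, None])                # add the channel dimension
    return np.stack(clips)                      # shape (N, T, 1)
```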
Model architecture. The ResNet-based classifier mainly consists of 3 residual blocks, which are used to extract features from the input time series. The process of the classifier predicting the class of a given input \(X\) in this supervised learning task can be expressed as:
\[\hat{y}=f_{L}(\theta_{L},f_{L-1}(\theta_{L-1},\dots,f_{1}(\theta_{1},x+\mathcal{F}(x))\dots)) \tag{1}\]
where \(\hat{y}\) is the predicted output, and \(\mathcal{F}(x)\) denotes the shortcut connection in each residual block. The final output \(\hat{y}\) of the ResNet model is produced by a global average pooling layer and a softmax classifier. The global average pooling layer generates feature maps for the different video categories in this classification task, and the softmax classifier then maps the feature vector to a probability distribution over the output classes. The main characteristic of the network is the residual connection between different convolutional layers, which effectively avoids vanishing gradients during training [37][16]. The numbers of filters for the three convolutional blocks are set to 256, 512, and 512, respectively.
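For concreteness, a minimal Keras sketch consistent with this architecture is given below. The filter counts (256, 512, 512) follow the text; the kernel sizes (8, 5, 3) follow the ResNet baseline of [16] and, like the other unstated details, are assumptions rather than our exact configuration.

```
# Sketch of the residual time-series classifier: 3 residual blocks,
# global average pooling, and a softmax output layer.
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    shortcut = layers.Conv1D(filters, 1, padding="same")(x)   # match channel count
    shortcut = layers.BatchNormalization()(shortcut)
    for k in (8, 5, 3):                                       # assumed kernel sizes
        x = layers.Conv1D(filters, k, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.Activation("relu")(x)
    return layers.Activation("relu")(layers.add([x, shortcut]))

def build_resnet_tsc(T=3000, C=1, n_classes=11):
    inp = layers.Input(shape=(T, C))
    x = inp
    for f in (256, 512, 512):
        x = residual_block(x, f)
    x = layers.GlobalAveragePooling1D()(x)
    out = layers.Dense(n_classes, activation="softmax")(x)
    return tf.keras.Model(inp, out)
```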
Model training specifics. We used fixed parameter initialization and an auto-adjusted learning rate, with the factor set to 0.5 and the patience level at 40. To avoid overfitting, an early-stopping strategy was adopted with the patience set to 80. The Adam optimizer was used for all contrast experiments.
Figure 10: Residual architecture classifier based on [16].
For this multi-classification task, categorical cross-entropy loss function is used as the supervised training loss, which is written as
\[\text{Loss}(\mathbf{y},\mathbf{\hat{y}})=-\sum_{i=1}^{C}y_{i}\log(\hat{y}_{i}) \tag{2}\]
where \(y_{i}\) denotes one-hot encoding for the corresponding video category. \(C\) denotes the total number of categories. The optimization goal of Equation 2 is to maximize the likelihood of predicting true labels given the model parameters.
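A corresponding training setup can be sketched as follows; batch size and epoch count are illustrative assumptions, and `build_resnet_tsc` refers to the architecture sketch above.

```
# Training setup matching the specifics above: Adam, categorical cross-entropy
# (Equation 2), learning-rate reduction by 0.5 with patience 40, and early
# stopping with patience 80.
import tensorflow as tf

model = build_resnet_tsc()
model.compile(optimizer=tf.keras.optimizers.Adam(),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

callbacks = [
    tf.keras.callbacks.ReduceLROnPlateau(factor=0.5, patience=40),
    tf.keras.callbacks.EarlyStopping(patience=80, restore_best_weights=True),
]
# model.fit(X_train, Y_train, validation_data=(X_val, Y_val),
#           epochs=1000, batch_size=64, callbacks=callbacks)
```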
## 5 Data availability
Source data are provided at the following link: [https://tinyurl.com/bitstream-video-data](https://tinyurl.com/bitstream-video-data)
## 6 Code availability
The code for reproducing the results is available from GitHub at: [https://tinyurl.com/bitstream-video-code](https://tinyurl.com/bitstream-video-code)
|
2309.08052 | A Bayesian approach to breaking things: efficiently predicting and
repairing failure modes via sampling | Before autonomous systems can be deployed in safety-critical applications, we
must be able to understand and verify the safety of these systems. For cases
where the risk or cost of real-world testing is prohibitive, we propose a
simulation-based framework for a) predicting ways in which an autonomous system
is likely to fail and b) automatically adjusting the system's design to
preemptively mitigate those failures. We frame this problem through the lens of
approximate Bayesian inference and use differentiable simulation for efficient
failure case prediction and repair. We apply our approach on a range of
robotics and control problems, including optimizing search patterns for robot
swarms and reducing the severity of outages in power transmission networks.
Compared to optimization-based falsification techniques, our method predicts a
more diverse, representative set of failure modes, and we also find that our
use of differentiable simulation yields solutions that have up to 10x lower
cost and requires up to 2x fewer iterations to converge relative to
gradient-free techniques. Code and videos can be found at
https://mit-realm.github.io/breaking-things/ | Charles Dawson, Chuchu Fan | 2023-09-14T22:36:08Z | http://arxiv.org/abs/2309.08052v1 | A Bayesian approach to breaking things: efficiently predicting and repairing failure modes via sampling
###### Abstract
Before autonomous systems can be deployed in safety-critical applications, we must be able to understand and verify the safety of these systems. For cases where the risk or cost of real-world testing is prohibitive, we propose a simulation-based framework for a) predicting ways in which an autonomous system is likely to fail and b) automatically adjusting the system's design to preemptively mitigate those failures. We frame this problem through the lens of approximate Bayesian inference and use differentiable simulation for efficient failure case prediction and repair. We apply our approach on a range of robotics and control problems, including optimizing search patterns for robot swarms and reducing the severity of outages in power transmission networks. Compared to optimization-based falsification techniques, our method predicts a more diverse, representative set of failure modes, and we also find that our use of differentiable simulation yields solutions that have up to 10x lower cost and requires up to 2x fewer iterations to converge relative to gradient-free techniques. Accompanying code and video can be found at [https://mit-realm.github.io/breaking-things/](https://mit-realm.github.io/breaking-things/).
Automatic design tools, root-cause failure analysis, optimization-as-inference
## 1 Introduction
From power grids to transportation and logistics systems, autonomous systems play a central, and often safety-critical, role in modern life. Even as these systems grow more complex and ubiquitous, we have already observed failures in autonomous systems like autonomous vehicles and power networks resulting in the loss of human life [1]. Given this context, it is important that we be able to verify the safety of autonomous systems _prior_ to deployment; for instance, by understanding the different ways in which a system might fail and proposing repair strategies.
Human designers often use their knowledge of likely failure modes to guide the design process; indeed, systematically assessing the risks of different failures and developing repair strategies is an important part of the systems engineering process [2]. However, as autonomous systems grow more complex, it becomes increasingly difficult for human engineers to manually predict likely failures.
In this paper, we propose an automated framework for predicting, and then repairing, failure modes in complex autonomous systems. Our effort builds on a large body of work on testing and verification of autonomous systems, many of which focus on identifying failure modes or adversarial examples [3, 4, 5, 6, 7, 8], but we identify two major gaps in the state of the art. First, many existing methods [4, 5, 9, 7] use techniques like gradient descent to search _locally_ for failure modes; however, in practice we are more interested in characterizing the distribution of potential failures, which requires a global perspective. Some methods exist that address this issue by taking a probabilistic approach to sample from an (unknown) distribution of failure modes [6, 10]. However, these methods suffer from a second major drawback: although they can help the designer predict a range of
failure modes, they do not provide guidance on how those failure modes may be mitigated; they are also inefficient due to their use of gradient-free inference methods.
We address all of these drawbacks to develop a framework, shown in Fig. 1, for predicting and repairing failure modes in autonomous systems. Taking inspiration from inference-based methods [10; 6], we make three novel contributions:
1. We reframe the failure prediction problem as a probabilistic sampling problem, allowing us to avoid local minima and quickly find high likelihood, high severity failure modes.
2. We exploit the duality between failure prediction and repair to not only predict likely failure modes but also suggest low-cost repair strategies.
3. We employ automatic differentiation to take advantage of fast gradient-based sampling algorithms, substantially improving performance relative to the state of the art.
We demonstrate our approach on several large-scale robotics and control problems: swarm formation control with up to 10 agents, multi-robot search with up to 32 agents, an electric power transmission network with up to 57 nodes and 80 transmission lines. We compare our approach with baselines for both failure mode prediction and repair, showing that our framework outperforms the state-of-the-art and scales well beyond the capabilities of existing tools, converging to solutions that are up to 10x lower cost while requiring less than half as many iterations. We also demonstrate that the repair strategies developed using our approach can be deployed on hardware for the multi-robot search example, and we include a software implementation in the supplementary materials.
## 2 Related Work
Model-based verification. Early approaches to model-based verification and fault identification used symbolic logical models of the system under test to formally reason about failures using (computationally expensive) satisfiability (SAT) solvers or search [11; 12]. More recent approaches to model-based failure mode identification have used mathematical models of the system dynamics to frame the problem as a reachability [13] or optimal control [14] problem. The challenge in applying these methods is that it may be difficult or impossible to construct a symbolic model for the system under test. In this work, we seek to retain the interpretability of model-based techniques while eliminating the requirement for a fully symbolic model, using automatically differentiable computer programs instead. Such models are comparatively easy to construct [8] and can even include implicitly differentiable components such as the solutions to optimization problems [15].
Adversarial testing. Verification using adversarial optimization has been applied in both model-based [7; 5; 9] and model-free [3] contexts. Generally speaking, model-based adversarial techniques use gradient-based optimization to locally search for adversarial examples that cause a system failure, then use gradient-based optimization to locally repair those failures [7; 5]. The drawback of these methods is that they are inherently local and typically yield only a single adversarial counterexample. Model-free approaches [3] can avoid the issue of local minima by using zero-order
Figure 1: An overview of our method for predicting and repairing failure modes in autonomous systems, shown here handling connectivity failures in a drone swarm.
black-box optimization techniques but incur additional computational cost as a result. In contrast, we take a probabilistic approach where sample-efficient gradient-based sampling algorithms can be used to escape local minima and efficiently generate multiple potential failure cases [16].
Inference. Ours is not the first work to take a probabilistic approach to failure mode prediction. O'Kelly _et al._ develop an end-to-end verification pipeline for autonomous vehicles based on adaptive importance sampling [10], and Zhou _et al._ develop a failure mode prediction system based on gradient-free Markov Chain Monte Carlo (MCMC) [6]. We take inspiration from these works and make two key improvements. First, these existing works focus exclusively on predicting likely failure modes -- they do not include a method for mitigating these failure modes once discovered -- while we combine failure mode prediction with repair by recognizing the duality between these problems. Second, we use differentiable simulation to replace inefficient zero-order MCMC algorithms with fast gradient-based samplers, resulting in a substantial performance improvement.
There is also a complimentary body of work on algorithms for rare-event simulation [17; 18] that provide extensions to MCMC-based sampling algorithms that perform well even when we seek to simulate extremely rare failure cases. Our framework is completely compatible with rare-event simulation strategies commonly used in Sequential Monte Carlo (SMC), and we view the incorporation of these methods into our framework as a promising direction for future work.
## 3 Assumptions and Problem Statement
At the heart of our approach is a simulation model of the system under test, parameterized by two distinct sets of parameters. The _design parameters_\(x\in\mathcal{X}\subseteq\mathbb{R}^{n}\) are those parameters that the system designer may vary, while the _exogenous parameters_\(y\in\mathcal{Y}\subseteq\mathbb{R}^{m}\) are those that may vary uncontrollably (due to environmental variation, adversarial disturbance, the actions of other agents, etc.). Together, \(x\) and \(y\) define the _behavior_ of the system \(\xi\in\Xi\) (e.g. a trace of all relevant states and observables) through the _simulator function_, denoted \(\xi=S(x,y)\). In addition, we assume that a _cost function_\(J(\xi)\) is known; i.e. \(J\) reflects the property that the system designer wishes to verify. A summary of our notation is provided in Table 1 in the appendix; we will use "designs" and "failure modes" interchangeably with "design parameters" and "exogenous parameters", respectively.
**Assumption 1:**\(S\) and \(J\) are programs that we can automatically differentiate almost everywhere. This setting is more general than the case when an explicit mathematical model is known, but less general than a black-box setting (although many black-box systems in robotics can be automatically differentiated [19]). **Assumption 2:**\(x\) and \(y\) are continuous random variables with known, automatically differentiable, and potentially unnormalized prior probability densities \(p_{x,0}(x)\) and \(p_{y,0}(y)\). It may be counter-intuitive to model the design parameters as random variables, but this choice allows us to capture constraints on the design space by assigning low probability to infeasible designs. The prior distribution for \(y\) can be either estimated from historical data or constructed to reflect constraints on the operational domain. We restrict our focus to the continuous-parameter case, but our approach can be extended to handle mixed discrete parameters using block-resampling [20].
In this context, _failure prediction_ entails finding exogenous parameters \(y^{*}\) that, for some given \(x\), lead to a high cost. To ensure that predicted failures are plausible, we must also find values for \(y^{*}\) with high prior likelihood. To achieve this balance, we define the metric of _risk-adjusted cost_
\[J_{r}(x,y)=J\circ S(x,y)+\log p_{y,0}(y) \tag{1}\]
where \(\circ\) denotes function composition. Failure prediction is thus the problem of finding parameters \(y^{*}\) that lead to a high risk-adjusted cost; moreover, since it is likely that \(J_{r}\) will have multiple local maxima with respect to \(y\) (i.e. multiple likely failure modes), we wish to sample a set \(\left\{y^{*}_{1},\dots,y^{*}_{n_{y}}\right\}\) of such failures. To generate this set, we replace deterministic optimization \(y^{*}=\arg\max_{y}J_{r}(x,y)\) with sampling from the unnormalized _pseudo-posterior_ (in the sense defined in [21]).
\[y^{*}\sim p(y^{*}|x)\propto p_{y,0}(y^{*})e^{J\circ S(x,y^{*})} \tag{2}\]
Likewise, the _failure repair_ problem seeks to find design parameters \(x^{*}\) that both have high prior likelihood (thus respecting the designer's prior beliefs about the design space) and result in a low cost across a range of anticipated failure modes; i.e. sampling from the unnormalized pseudo-posterior
\[x^{*}\sim p(x^{*}|y_{1}^{*},\ldots,y_{n_{y}}^{*})\propto p_{x,0}(x^{*})e^{- \sum_{i}J\circ S(x^{*},y_{i}^{*})/n_{y}} \tag{3}\]
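For illustration, the unnormalized log-densities behind (1)-(3) can be written directly in Python. The toy `simulate`, `cost`, and log-prior stand-ins below are placeholders for the problem-specific components described above, not our released implementation.

```
import jax.numpy as jnp

# Toy stand-ins so the sketch is self-contained; in practice these are the
# differentiable simulator S, cost J, and log-priors from Section 3.
def simulate(x, y):   return x + y                    # placeholder dynamics
def cost(xi):         return jnp.sum(xi ** 2)         # placeholder cost J
def log_prior_x(x):   return -0.5 * jnp.sum(x ** 2)   # standard normal prior
def log_prior_y(y):   return -0.5 * jnp.sum(y ** 2)

def risk_adjusted_cost(x, y):
    # Eq. (1): J_r(x, y) = J∘S(x, y) + log p_{y,0}(y)
    return cost(simulate(x, y)) + log_prior_y(y)

def log_failure_posterior(y, x):
    # Eq. (2): log p(y | x), up to an additive constant
    return log_prior_y(y) + cost(simulate(x, y))

def log_repair_posterior(x, failures):
    # Eq. (3): log p(x | y_1, ..., y_n), up to an additive constant
    mean_cost = jnp.mean(jnp.stack([cost(simulate(x, y)) for y in failures]))
    return log_prior_x(x) - mean_cost
```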
## 4 Approach: Adversarial Inference
The primary challenge in sampling from these failure and repair distributions is that they will naturally shift as the design changes. Once the design has been updated to account for the current set of predicted failures, those failures will likely be out of date. To address this issue, we define a novel adversarial sampling method to alternate between sampling improved failure modes \(\{y_{1}^{*},\ldots,y_{n}^{*}\}\) and then sampling improved design parameters \(x^{*}\) to repair those failure modes, thus improving the robustness of the design while maintaining an up-to-date set of failure modes.
Our algorithm (detailed in Algorithm 1) proceeds in the style of a sequential Monte Carlo algorithm [18]. We begin by initializing \(n_{y}\) potential failure modes and \(n_{x}\) candidate designs sampled from their respective prior distributions. In each of \(K\) rounds, we first sample \(n_{x}\) new candidate designs from distribution (3) to repair the current set of predicted failure modes. We then select the design that performs best against all currently-predicted failures and sample \(n_{y}\) new sets of exogenous parameters (each representing a potential failure mode) from distribution (2). To sample from distributions (2) and (3), we use \(n_{x}\) and \(n_{y}\) parallel executions of a Markov chain Monte Carlo (MCMC) sampler. In order to handle potential multimodality in the design and failure space, we include optional tempering to interpolate between the prior and target distributions [18].
Our proposed adversarial inference algorithm can accept any MCMC sampling algorithm as a subroutine, either gradient-free or gradient-based. In our experiments, we compare the results of using both gradient-free (random-walk Metropolis-Hastings, or RMH) and gradient-based (Metropolis-adjusted Langevin algorithm, or MALA [22]); both of these are included in the appendix. Empirically, gradient-based samplers typically converge faster, particularly on high-dimensional problems, but in cases where a differentiable simulator is not available, a gradient-free sampler will suffice. We use MCMC for the sampling subroutine in all of our experiments, but our framework is also compatible with other approximate inference methods (e.g. variational inference).
```
Input: Population sizes \(n_{x}\), \(n_{y}\); rounds \(K\); substeps \(M\); stepsize \(\tau\); tempering \(\lambda_{1},\ldots,\lambda_{K}\).
Output: Robust design \(x^{*}\) and a set of failures \(\left\{y_{1}^{*},\ldots,y_{n_{y}}^{*}\right\}\) with high risk-adjusted cost.
1 Initialize candidate designs \([x]_{0}=\left\{x_{1},\ldots,x_{n_{x}}\right\}_{0}\) sampled from \(p_{x,0}(x)\)
2 Initialize candidate failures \([y]_{0}=\left\{y_{1},\ldots,y_{n_{y}}\right\}_{0}\) sampled from \(p_{y,0}(y)\)
3 for \(i=1,\ldots,K\) do
4   \(p_{x,i}(x):=p_{x,0}(x)e^{-\lambda_{i}/n_{y}\sum_{y\in[y]_{i-1}}J\circ S(x,y)}\)
5   \([x]_{i}\leftarrow\text{Sample}([x]_{i-1},M,\tau,p_{x,i})\)  \(\triangleright\) Update candidate designs using predicted failures
6   \(p_{y,i}(y):=p_{y,0}(y)e^{\lambda_{i}\min_{x\in[x]_{i-1}}J\circ S(x,y)}\)  \(\triangleright\) Update failure predictions for new best design
7   \([y]_{i}\leftarrow\text{Sample}([y]_{i-1},M,\tau,p_{y,i})\)
8 return \([y]_{K}\), \(x^{*}=\arg\max_{x\in[x]_{K}}p_{x,K}(x)\)  \(\triangleright\) Choose best design
```
**Algorithm 1** Failure prediction and repair using gradient-based sampling
On a theoretical level, any MCMC sampler will be sound so long as the resulting Markov chain is ergodic and satisfies detailed balance [23]. Unfortunately, there can be a large gap between asymptotic theoretical guarantees and practical performance. First, if the target distribution is multimodal and the modes are well-separated, then MCMC algorithms may be slow to move between modes, yielding a biased sampling distribution. To mitigate this effect, we include a tempering schedule \(0\leq\lambda_{1}\leq\ldots\leq\lambda_{K}\leq 1\) to interpolate between the prior and target distributions and run multiple MCMC instances in parallel from different initial conditions. Empirically, we find that tempering is not always needed, but we include it for completeness.
The second potential practical challenge arises from the continuity and differentiability (or lack thereof) of the simulator and cost function \(J\circ S\). Although gradient-based MCMC samplers like MALA remain sound so long as the target distribution is continuously differentiable almost everywhere (i.e. discontinuous or non-differentiable on a set of measure zero), in practice performance may suffer when the target distribution has large discontinuities. Because of these issues, we design our method to be compatible with either gradient-based or gradient-free sampling algorithms, and we compare the results of using both methods in Section 6.
The final practical consideration is that although the stochasticity in our sampling-based approach can help us explore the design and failure spaces, we incur a degree of sub-optimality as a result. When using gradient-based sampling, we have the option to reduce this sub-optimality by "quenching" the solution: switching to simple gradient descent (e.g. using MALA for the first 90 rounds and then gradient descent on the last 10 rounds). In practice, we find that quenching can noticeably improve the final cost without compromising the diversity of predicted failure modes.
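As a concrete illustration of the gradient-based option, a single MALA step can be sketched with JAX as follows; `log_prob` stands for one of the tempered log-densities above, and the hyperparameters are illustrative rather than the settings listed in our appendix.

```
# One Metropolis-adjusted Langevin (MALA) step: a Langevin proposal driven by
# the gradient of log_prob, followed by a Metropolis accept/reject correction.
import jax
import jax.numpy as jnp

def mala_step(key, theta, log_prob, tau):
    grad_lp = jax.grad(log_prob)
    key_prop, key_accept = jax.random.split(key)
    noise = jax.random.normal(key_prop, theta.shape)
    proposal = theta + tau * grad_lp(theta) + jnp.sqrt(2.0 * tau) * noise

    def log_q(a, b):  # log density of proposing a when currently at b
        return -jnp.sum((a - b - tau * grad_lp(b)) ** 2) / (4.0 * tau)

    log_alpha = (log_prob(proposal) + log_q(theta, proposal)
                 - log_prob(theta) - log_q(proposal, theta))
    accept = jnp.log(jax.random.uniform(key_accept)) < log_alpha
    return jnp.where(accept, proposal, theta)
```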
## 5 Theoretical Analysis
Our prediction-and-repair framework can work with both gradient-free or gradient-based sampling subroutines, but it is important to note that gradients, when available, often accelerate convergence. To support this observation, we provide non-asymptotic convergence guarantees for the gradient-based version of our algorithm, drawing on recent results in Ma et al. [16].
To make these guarantees, first assume that \(J\) is \(L\)-Lipschitz smooth. Second, assume that the log prior distributions \(\log p_{y,0}\) and \(\log p_{x,0}\) are \(m\)-strongly convex outside a ball of finite radius \(R\). The first assumption is hard to verify in general and does not hold in certain domains (e.g. rigid contact), but it is true for most of our experiments in Section 6. The second is easy to verify for common priors (e.g. Gaussian and smoothed uniform). Let \(d=\max\left(\dim x,\dim y\right)\) be the dimension of the search space and \(\epsilon\in(0,1)\) be a convergence tolerance in total variation (TV) distance.
**Theorem 5.1**.: _Consider Algorithm 1 with the stated assumptions on smoothness and log-concavity. If \(m>L\) and \(\tau=\widetilde{\mathcal{O}}\left(\left(d\ln L/(m-L)+\ln 1/\epsilon\right)^{-1/ 2}d^{-1/2}\right)\), then sampling each round with TV error \(\leq\epsilon\) requires at most \(M\leq\widetilde{\mathcal{O}}\left(d^{2}\ln\frac{1}{\epsilon}\right)\) steps._
Since convergence time for each round of prediction and mitigation scales only polynomially with the dimension of the search space, our method is able to find more accurate failure predictions (and thus better design updates) than gradient-free methods with the same sample budget.
Proof sketch. Ma et al. [16] show that gradient-based MCMC enjoys fast convergence on non-convex likelihoods so long as the target likelihood is strongly log-concave in its tails (i.e. outside of a bounded region). It would be unrealistic to assume that the cost \(J(x,y)\) is convex, but we can instead rely on the strong log-concavity of the prior to dominate sufficiently in the tails and regularize the cost landscape. A formal proof is included in the appendix.
## 6 Experimental Results
There are two questions that we must answer in this section: first, does reframing this problem from optimization to inference lead to better solutions (i.e. lower cost designs and predicted failures that accurately cover the range of possible failures)? Second, does gradient-based MCMC with differentiable-simulation offer any benefits over gradient-free MCMC when used in our approach?
We benchmark our algorithm on a range of robotics and industrial control problems. We compare against previously-published adversarial optimization methods [7; 5] and compare the results of using gradient-based and gradient-free MCMC subroutines in our approach. We then provide a demonstration using our method to solve a multi-robot planning problem in hardware. The code used for our experiments can be found at [https://mit-realm.github.io/breaking-things/](https://mit-realm.github.io/breaking-things/)
Baselines. We compare with the following baselines. **DR**: solving the design optimization problem with domain randomization \(\min_{x}\mathbb{E}_{y}[J_{r}(x,y)]\). **GD**: solving the adversarial optimization problem \(\min_{x}\max_{y}J_{r}(x,y)\) by alternating between optimizing a population of \(n_{x}\) designs and \(n_{y}\) failure modes using local gradient descent, as in [7; 5; 24]. We also include two versions of our method, using both gradient-free (**RMH**) and gradient-based (**MALA**) MCMC subroutines. All methods are given the same information about the value and gradient (when needed) of the cost and prior likelihoods. The gradient-based version of our approach implements quenching for the last few rounds.
Environments. We use five environments for our simulation studies, which are shown in Fig. 2 and described more fully in the appendix. **Multi-robot search:** a set of seeker robots must cover a search region to detect a set of hiders. \(x\) and \(y\) define trajectories for the seekers and hiders, respectively; failure occurs if any of the hiders escape detection. This environment has small (6 seeker vs. 10 hider, \(\dim x=60\), \(\dim y=100\)) and large (12 seeker vs. 20 hider, \(\dim x=120\), \(\dim y=200\)) versions. **Formation control:** a swarm of drones fly to a goal while maintaining full connectivity with a limited communication radius. \(x\) defines trajectories for each robot in the swarm, while \(y\) parameterizes an uncertain wind velocity field. Failure occurs when the second eigenvalue of the graph Laplacian is close to zero. This environment has small (5 agent, \(\dim x=30\), \(\dim y=1280\)) and large (10 agent, \(\dim x=100\), \(\dim y=1280\)) versions. **Power grid dispatch:** electric generators must be scheduled to ensure that the network satisfies voltage and maximum power constraints in the event of transmission line outages. \(x\) specifies generator setpoints and \(y\) specifies line admittances; failures occur when any of the voltage or power constraints are violated. This environment has small (14-bus, \(\dim x=32\), \(\dim y=20\)) and large (57-bus, \(\dim x=98\), \(\dim y=80\)) versions. **F16 GCAS:** a ground collision avoidance system (GCAS) must be designed to prevent a jet aircraft, modeled with aerodynamic effects and engine dynamics, from crashing into the ground. \(x\) defines a control policy neural network (\(\dim x\approx 1.8\times 10^{3}\)) and \(y\) defines the initial conditions (\(\dim y=5\)). **Pushing:** a robot manipulator must push an object out of the way to reach another object. Failure occurs if the object is knocked over while pushing. \(x\) defines a planning policy network (\(\dim x\approx 1.2\times 10^{3}\)) and \(y\) defines the unknown inertial and frictional properties of the object being pushed, as well as measurement noises (\(\dim y=7\)). We implement our method and all baselines in Python using JAX. All methods were run with the same population sizes and total sample budget, using hyperparameters given in the appendix.
Solution quality. For each environment, we first solve for an optimized design and a set of predicted failure modes using each method. We then compare the performance of the optimal design on the predicted failure modes with the performance observed on a large test set of \(10^{5}\) randomly sampled exogenous parameters. The results of this experiment are shown in Fig. 3.
We find that both DR and GD often fail to predict failure modes that accurately cover the tail of worst-case behaviors: in the formation and power grid examples, both DR and GD falsely indicate that all predicted failures have been successfully repaired, despite a long tail of possible failures in both cases. In the search example, adversarial GD is able to predict a set of useful failure modes, but DR fails to do so. Only our method (with both gradient-free and gradient-based MCMC) accurately predicts the worst-case performance of the optimized design.
Figure 2: Environments used in our experiments. (Left to right) Multi-agent search-evasion, formation control, power dispatch, aircraft ground collision avoidance, and manipulation by pushing.
In addition to comparing the quality of the predicted failure modes, we can also compare the performance and robustness of the optimized design itself. On the search problem, our method finds designs with slightly improved performance relative to GD (but not relative to DR, since DR is not optimizing against a challenging set of predicted failure modes). On the formation problem, our method is able to find substantially higher-performance designs than either baseline method. On the power grid problem, our method finds designs that incur a higher best-case cost, since this problem includes a tradeoff between economic cost and robustness, but our method's designs are substantially more robust than the baselines, with much lighter tails in the cost distribution.
We observe that DR sometimes finds solutions that achieve lower average cost than those found by our method. We believe that this is due to DR optimizing against a less challenging failure population. This suggests the possibility of combining a failure dataset (predicted using our method) with an average-case dataset (sampled randomly from the prior) during repair; we hope to explore this and other adaptive strategies in future work.
Benefits of differentiable simulation. Although we have designed our method to be compatible with either gradient-based or gradient-free MCMC subroutines, we observe that gradient-based samplers tend to converge faster than their gradient-free counterparts. Fig. 4 plots the performance of the best-performing design at each round against a static test set of 100 randomly sampled exogenous parameters for both gradient-based and gradient-free methods across all environments. Although these methods perform similarly on the formation problem, we see a clear pattern in the formation control, search-evasion, and power grid examples where gradient-based MCMC converges much faster, and this advantage is greater on higher-dimensional problems (second row), compensating for the additional time needed to compute the gradients (typically a 2-3x increase in runtime).
Hardware experiments. We deploy the optimized hider and seeker trajectories in hardware using the Robotarium multi-robot platform [25] (we use 3 seekers and 5 hiders, since we had difficulty
Figure 4: Convergence rates of gradient-based (orange) and gradient-free (blue) MCMC samplers when used as subroutines for Algorithm 1. Shaded areas show min/max range over 4 random seeds.
Figure 3: A comparison of the cost of the optimal design on the predicted failure modes (red) and \(10^{5}\) randomly sampled test cases (blue).
testing with more agents in the limited space). We first hold the search pattern (design parameters) constant and optimize evasion patterns against this fixed search pattern, yielding the results shown on the left in Fig. 5 where the hiders easily evade the seekers. We then optimize the search patterns using our approach, yielding the results in the center of Fig. 5 where the hiders are not able to evade the seekers.
We also deploy an optimized policy for the pushing problem to a Franka Research 3 7-DoF robot arm. Fig. 5 shows a failure when the unoptimized policy fails to account for the uncertain center of mass of the bottle, as well as a successful execution with the repaired policy. Videos of all experiments are provided in the supplementary materials.
## 7 Discussion and Conclusion
Before sending any autonomous system out into the real world, it is important to understand how it will behave in a range of operational conditions, including during potential failures. In this paper, we have presented a tool to allow the designers of autonomous systems to not only predict the ways in which a system is likely to fail but also automatically adjust their designs to mitigate those failures.
We apply our framework in simulation studies to a range of robotics and industrial control problems, including multi-robot trajectory planning and power grid control. Our results show that, relative to existing adversarial optimization methods, our novel sampling-based approach yields better predictions of possible failure modes, which in turn lead to more robust optimized designs. We also show empirically that, when it is possible to define a differentiable simulator, gradient-based MCMC methods allow our method to converge more than twice as fast as gradient-free methods.
### Limitations
Since it would be prohibitively costly to search for failure cases in hardware experiments (especially if failures resulted in damage to the robot), our method is restricted to searching for failures in simulation. As such, it is limited to predicting only failures that are modeled by the simulator, excluding failures that could arise due to unmodeled effects. Practically, our method could be used in conjunction with hardware testing by catching some failures earlier in the development process and reducing the cost of eventual hardware testing.
A notable limitation of our approach is that it requires knowledge of the prior distribution of the exogenous disturbances \(y\). Although this can be estimated in some cases (as in our experiments), in practice there may be uncertainty about the nature of this distribution. To address this, future works might investigate distributionally robust extensions of Algorithm 1 (akin to distributionally robust optimization methods [26]). Additional limitations are discussed in the appendix.
Figure 5: (Left) HW results for search-evasion with 5 hiders and 3 seekers, showing an initial search pattern (blue) and predicted failure modes (red). (Center) HW results for an optimized search pattern leaves fewer hiding places. (Right, top) An initial manipulation policy knocks over the object. (Right, bottom) The repaired manipulation policy pushes without knocking the bottle over.
#### Acknowledgments
C. Dawson is supported by the NSF GRFP under Grant No. 1745302. This work was partly supported by the National Aeronautics and Space Administration (NASA) ULI grant 80NSSC22M0070, Air Force Office of Scientific Research (AFOSR) grant FA9550-23-1-0099, and the Defense Science and Technology Agency in Singapore. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the sponsors.
|
2309.17172 | Domain-Adaptive Learning: Unsupervised Adaptation for Histology Images
with Improved Loss Function Combination | This paper presents a novel approach for unsupervised domain adaptation (UDA)
targeting H&E stained histology images. Existing adversarial domain adaptation
methods may not effectively align different domains of multimodal distributions
associated with classification problems. The objective is to enhance domain
alignment and reduce domain shifts between these domains by leveraging their
unique characteristics. Our approach proposes a novel loss function along with
carefully selected existing loss functions tailored to address the challenges
specific to histology images. This loss combination not only makes the model
accurate and robust but also faster in terms of training convergence. We
specifically focus on leveraging histology-specific features, such as tissue
structure and cell morphology, to enhance adaptation performance in the
histology domain. The proposed method is extensively evaluated in accuracy,
robustness, and generalization, surpassing state-of-the-art techniques for
histology images. We conducted extensive experiments on the FHIST dataset and
the results show that our proposed method - Domain Adaptive Learning (DAL)
significantly surpasses the ViT-based and CNN-based SoTA methods by 1.41% and
6.56% respectively. | Ravi Kant Gupta, Shounak Das, Amit Sethi | 2023-09-29T12:11:16Z | http://arxiv.org/abs/2309.17172v1 | Domain-Adaptive Learning: Unsupervised Adaptation for Histology Images with Improved Loss Function Combination
###### Abstract
This paper presents a novel approach for unsupervised domain adaptation (UDA) targeting H&E stained histology images. Existing adversarial domain adaptation methods may not effectively align different domains of multimodal distributions associated with classification problems. The objective is to enhance domain alignment and reduce domain shifts between these domains by leveraging their unique characteristics. Our approach proposes a novel loss function along with carefully selected existing loss functions tailored to address the challenges specific to histology images. This loss combination not only makes the model accurate and robust but also faster in terms of training convergence. We specifically focus on leveraging histology-specific features, such as tissue structure and cell morphology, to enhance adaptation performance in the histology domain. The proposed method is extensively evaluated in accuracy, robustness, and generalization, surpassing state-of-the-art techniques for histology images. We conducted extensive experiments on the FHIST dataset and the results show that our proposed method - Domain Adaptive Learning (DAL) significantly surpasses the VIT-based and CNN-based SoTA methods by 1.41% and 6.56% respectively.
Adversarial, Deep Learning, Domain Adaptation, Histology
## I Introduction
In traditional supervised learning, a model is trained using labeled data from the same domain as the test data. Obtaining labels for medical data is challenging due to the intricacies of medical expertise, making it costly and time-consuming. The need for specialized knowledge, meticulous review, and ethical considerations contribute to the difficulty in acquiring accurate and reliable annotations for medical datasets. However, when the distribution of the source and target domains differs significantly, the model's performance may suffer due to the domain shift. This domain shift can be because of color variation, data acquisition bias, distributional differences, domain-specific factors, covariate shift, staining techniques in medical histology images, etc. Unsupervised domain adaptation (UDA) techniques aim to mitigate this domain shift by aligning the feature distributions or learning domain-invariant representations by using only unlabeled samples from the target domain. Adversarial-based UDA employs a domain adversarial training framework, often based on Generative Adversarial Networks (GANs) [24] or domain adversarial neural networks (DANN) [1]. By learning domain-invariant representations, adversarial-based UDA models can effectively reduce the domain discrepancy and improve the generalization performance on the target domain. This approach has shown promising results in various domains, such as image classification, object detection, and semantic segmentation. However, while adversarial-based UDA has achieved notable success, challenges still exist. These include addressing the sensitivity to hyper-parameter tuning, handling the high-dimensional feature space, and effectively capturing complex domain shifts.
To address the aforementioned challenge, we develop an unsupervised domain adaptation approach that surpasses the state-of-the-art performance for histological images. We present our findings from developing convolutional neural networks (CNNs) for such tasks based on the CRC-TP dataset (source) and NCT (target) from the FHIST dataset [47], which is composed of several histology datasets, namely CRC-TP [32], LC25000 [33], BreakHis [34], and NCT-CRC-HE-100K [35]. These histology datasets consist of different tissue types and different organs. We consider each tissue type as a class label with one-hot encoding in the classification task. We framed our experiments on CRC-TP and NCT with six classes (Benign, Tumor, Muscle, Stroma, Debris, and Inflammatory). Figure 1 shows the t-distributed stochastic neighbor embedding (t-SNE) [54] plot of the source data distribution (circle shape) and the target data distribution (square shape), where classes are distinguished by light and dark versions of the same color. The sample patches of each class from both domains are shown in Figure 2.
Our research endeavors converge on three key objectives: firstly, to reduce the discordance between source and target domains in histology images; secondly, to harness the distinctive attributes of histology, such as cellular morphology and tissue structure, to elevate adaptation performance specifically within the histology domain; and finally, to transcend the limitations of current UDA techniques, to achieve state-of-the-art accuracy, resilience, and generalization capabilities compared to
the previous methods. This comprehensive approach is crafted to encompass and intricately tackle the complexities presented by histology images.
Our adoption of deep learning for unsupervised domain adaptation in histology images is driven by its potential to enhance model generalization, extract optimal features, enable versatile cross-domain applications, and achieve field-advancing progress. By tailoring the combination of loss functions, which leads to improved convergence and robustness, and by leveraging the power of deep learning, we aim to surpass current methods, benefiting various applications. Inspired by the conditional domain adversarial network (CDAN) [3], the core idea is to simultaneously train a feature extractor (typically a deep neural network) and a domain classifier (discriminator) to distinguish between source and target domains. We have examined different CNN- and transformer-based feature extractors, namely ResNet-50 [2], ResNet-101 [2], ResNet-152 [2], ViT [44], and ConvMixer [46], to extract meaningful features. The feature extractor aims to learn domain-invariant representations, while the domain classifier tries to classify the domain of the extracted features correctly. During training, the feature extractor and domain classifier are optimized in an adversarial manner. The feature extractor aims to fool the domain classifier by generating indistinguishable features across domains, while the domain classifier tries to classify the domains correctly. This adversarial training process encourages the feature extractor to learn domain-invariant and transferable representations between the source and target domains. To achieve this, we propose a novel loss function, pseudo label maximum mean discrepancy (PLMMD), along with carefully selected existing losses such as maximum information loss (entropy loss) [41, 42], maximum mean discrepancy (MMD) loss [8, 9, 10, 28], and minimum class confusion (MCC) loss [43]. This combination of loss functions has the following specific advantages: employing MCC loss enhances classification models by minimizing class confusion, particularly in scenarios with imbalanced class distributions; with maximum information loss, our model is encouraged to learn tightly clustered target features with a uniform distribution, such that the discriminative information in the target domain is retained; and MMD loss measures the difference between the mean embeddings of two distributions, helping to quantify the dissimilarity between domains and facilitating domain adaptation. Our proposed PLMMD loss enhances unsupervised domain adaptation by selectively emphasizing domain-invariant features through weight assignments, with the added benefit that training converges faster than in the other scenarios. With the help of this novel combination of loss functions, our method surpasses not only the CNN-based but also the transformer-based state of the art for histology images.
Our stated goals were achieved by proposing an improved combination of loss functions tailored to address the unique challenges of H&E stained histology images. The performance evaluation was focused on accuracy, robustness, and generalization, to surpass state-of-the-art techniques in both domains. Furthermore, the research explored potential cross-domain applications in medical image analysis and computer vision, offering promising advancements in practical unsupervised domain adaptation with the help of various combinations of loss functions with different existing models.
## II Background and Related Work
In unsupervised domain adaptation, we have a source domain \(D_{s}=\{(x_{s_{i}},y_{s_{i}})\}_{i=1}^{n_{s}}\) of \(n_{s}\) labeled examples and a target domain \(D_{t}=\{x_{t_{j}}\}_{j=1}^{n_{t}}\) of \(n_{t}\) unlabeled examples. The source domain and target domain are sampled from joint distributions \(P(x_{s},y_{s})\) and \(Q(x_{t},y_{t})\) respectively. Notably, the two distributions are initially not aligned, that is, \(P\neq Q\).
Domain adversarial neural network (DANN) [1] is a framework of choice for UDA. It is a two-player game between domain discriminator \(D\), which is trained to distinguish the source domain from the target domain, and the feature representation \(F\) trained to confuse the domain discriminator \(D\) as well as classify the source domain samples. The error function of the domain discriminator corresponds well to the discrepancy between the feature distributions \(P(f)\) and \(Q(f)\)[40], a key to bound the target risk in the domain adaptation theory [55].
Alignment-based domain adaptation is another typical line of work that leverages a domain-adversarial task to align the source and target domains as a whole so that class labels can be transferred from the source domain to the unlabeled target one [4, 5, 6, 7]. Another typical line of work directly minimizes the domain shift measured by various metrics, e.g., maximum mean discrepancy (MMD) [8, 9, 10]. These methods are based on domain-level domain alignment. To achieve class-level domain alignment, the works of [11, 12] utilize the multiplicative interaction of feature representations and class predictions so that the domain discriminator can be aware of the classification boundary. Based on the integrated task and domain classifier, [13] encourages a mutually inhibitory
Fig. 1: Snapshot of t-SNE plot of source (CRC-TP) (Circle shape) and target (NCT) (Square shape), clearly shows significant difference between source and target data distribution.
relation between category and domain predictions for any input instance. The works of [14, 15] align the labeled source centroid and pseudo-labeled target centroid of each shared class in the feature space. Some work uses individual task classifiers for the two domains to detect non-discriminative features and reversely learn a discriminative feature extractor [16, 17, 18]. Certain other works focus attention on transferable regions to derive a domain-invariant classification model [19, 20, 21]. To help achieve target-discriminative features, [22, 23] generate synthetic images from the raw input data of the two domains via GANs [24]. The recent work of [25] improves adversarial feature adaptation, where the discriminative structures of target data may be deteriorated [26]. The work of [27] adapts the feature norms of the two domains to a large range of values so that the learned features are not only task-discriminative but also domain-invariant.
## III Proposed Method
To address the challenge of domain shift in cross-domain classification, unsupervised domain adaptation leverages knowledge from a labeled source domain to improve the performance of a classifier on an unlabeled target domain. We propose a novel loss function that minimizes the domain discrepancy and aligns feature distributions across domains. Our datasets even differ in patch sizes: 150\(\times\)150 pixels for the source domain and 224\(\times\)224 for the target domain. Before training, the patches were subjected to data augmentation such as horizontal flip, vertical flip, and normalization to ensure consistency. To facilitate domain adaptation, we introduce a structure-preserving colour normalization technique to normalize the stain appearance of histopathology images across domains [45]. The normalization process aims to preserve the local structure while removing domain-specific variations. Therefore, the patches were colour normalized [29, 30].
From the color-normalized patches, we extracted features using ResNet152 trained on ImageNet [31]. Our proposed model architecture is based on a deep neural network with convolutional and fully connected layers, specifically tailored for domain adaptation.
In this work, we design a method to train a deep network \(N:x\to y\) which reduces the shifts in the data distributions across domains, such that the target risk \(r_{t}=E_{(x_{t},y_{t})\sim Q}[N(x_{t})\neq y_{t}]\) can be bounded by the source risk \(r_{s}=E_{(x_{s},y_{s})\sim P}[N(x_{s})\neq y_{s}]\) plus the distribution discrepancy disc(P, Q) quantified by a novel conditional domain discriminator. To minimize cross-domain discrepancy [1] in adversarial learning, Generative Adversarial Networks (GANs) [24] play a vital role. Features are represented by \(f=F(x)\), and the classifier prediction \(g=N(x)\) is generated by the deep network \(N\).
We improve existing adversarial domain adaptation methods in two directions. First, when the joint distributions of feature and class, i.e. \(P(x_{s},y_{s})\) and \(Q(x_{t},y_{t})\), are non-identical across domains, adapting only the feature representation \(f\) may be insufficient. A quantitative study [48] shows that deep representations eventually transition from general to specific along deep networks, with transferability decreased remarkably in the domain-specific feature layer \(f\) and classifier layer \(g\). Second, due to the nature of multi-class classification, the feature distribution is multimodal, and hence adapting feature distribution may be challenging for adversarial networks.
By conditioning, domain variances in feature representation \(f\) and classifier prediction \(g\) can be modeled simultaneously. This joint conditioning allows us to bridge the domain gap more effectively, enabling the adapted model to capture and align the underlying data distributions between the source and target domains. Consequently, incorporating classifier prediction as a conditioning factor in domain adaptation holds great potential for achieving improved transferability and generating domain-invariant representations in challenging cross-domain scenarios.
We formulate Conditional Domain Adversarial Network (CDAN) [3] as a minimax optimization problem with two competitive error terms: (a) \(E(N)\) on the source classifier N, which is minimized to guarantee lower source risk; (b) \(E(D,N)\) on the source classifier \(N\) and the domain discriminator \(D\) across the source and target domains, which is minimized over \(D\) but maximized over \(f=F(x)\) and \(g=N(x)\):
\[L_{clc}(x_{s_{i}},y_{s_{i}})=\mathbb{E}_{(x_{s_{i}},y_{s_{i}})\sim D_{s}}L(N(x_{s_{i}}),y_{s_{i}}) \tag{1}\]
Fig. 2: Snapshot of sample images of each class from CRC-TP (top row) and NCT (bottom row) of FHIST dataset.
\[L_{dis}(x_{s},x_{t})= -\mathbb{E}_{x_{s_{i}}\sim D_{s}}\log[D(f_{s_{i}},g_{s_{i}})] \tag{2}\] \[-\mathbb{E}_{x_{t_{j}}\sim D_{t}}\log[1-D(f_{t_{j}},g_{t_{j}})],\]
where \(L\) is the cross-entropy loss, and \(h=(f,g)\) is the joint variable of feature representation \(f\) and classifier prediction \(g\). The minimax game of CDAN is
\[\min_{N}L_{clc}(x_{s_{i}},y_{s_{i}})-\lambda L_{dis}(x_{s},x_{t}) \tag{3}\] \[\min_{D}L_{dis}(x_{s},x_{t}),\]
where \(\lambda\) is a hyper-parameter between the two objectives to trade off source risk and domain adversary.
We condition domain discriminator \(D\) on the classifier prediction \(g\) through the joint variable \(h=(f,g)\) to potentially tackle the two aforementioned challenges of adversarial domain adaptation. A simple conditioning of \(D\) is \(D(f\oplus g)\), where we concatenate the feature representation and classifier prediction in the vector \(f\oplus g\) and feed it to the conditional domain discriminator \(D\). This conditioning strategy is widely adopted by existing conditional GANs [24]. However, with the concatenation strategy, \(f\) and \(g\) are independent of each other, thus failing to fully capture the multiplicative interactions between feature representation and classifier prediction, which are crucial to domain adaptation. As a result, the multimodal information conveyed in the classifier prediction cannot be fully exploited to match the multimodal distributions of complex domains [49]. The multilinear map is defined as the outer product of multiple random vectors, and the multilinear map of infinite-dimensional nonlinear feature maps has been successfully applied to embed joint or conditional distributions into reproducing kernel Hilbert spaces [49, 50, 51, 52]. Beyond its theoretical benefit over the concatenation \(x\oplus y\) [49, 53], the multilinear map \(x\otimes y\) can fully capture the multimodal structures behind complex data distributions. Taking advantage of this, in this paper we condition \(D\) on \(g\) with the multilinear map. A disadvantage of the multilinear map is dimension explosion.
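A minimal PyTorch sketch of this conditioning and of the discriminator loss in Eq. (2) is given below; it is illustrative rather than the exact implementation used in this work. The hidden size, the use of softmax probabilities for \(g\), and the module names are assumptions, and for very large \(d\times c\) outer products a lower-dimensional approximation of the multilinear map is commonly used to avoid the dimension explosion mentioned above.

```python
import torch
import torch.nn as nn


class ConditionalDomainDiscriminator(nn.Module):
    """Discriminator D conditioned on classifier predictions via the
    multilinear (outer-product) map f (x) g described above."""

    def __init__(self, feat_dim: int, num_classes: int, hidden: int = 1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim * num_classes, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),
        )

    def forward(self, f: torch.Tensor, g: torch.Tensor) -> torch.Tensor:
        # Per-sample outer product: (B, d) x (B, c) -> (B, d, c), then flatten.
        h = torch.bmm(f.unsqueeze(2), g.unsqueeze(1)).flatten(1)
        return self.net(h)


def discriminator_loss(disc, f_s, g_s, f_t, g_t, eps=1e-6):
    """L_dis of Eq. (2); g_s and g_t are assumed to be softmax probabilities."""
    d_s = disc(f_s, g_s)
    d_t = disc(f_t, g_t)
    return -(torch.log(d_s + eps).mean() + torch.log(1.0 - d_t + eps).mean())
```

In the minimax game, the discriminator is updated to decrease this loss while the feature extractor and classifier are updated to increase it (scaled by \(\lambda\)), e.g. via alternating optimizers or a gradient reversal layer.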
We enable conditional adversarial domain adaptation over domain-specific feature representation \(f\) and classifier prediction \(g\). We jointly minimize with respect to (1) source classifier \(N\) and feature extractor \(F\), minimize (2) domain discriminator \(D\), and maximize (2) feature extractor \(F\) and source classifier \(N\). This yields the mini-max problem of Domain Adversarial Networks:
\[\min_{G} \mathbb{E}_{(x^{i}_{s},y^{i}_{s})\sim D_{s}}L(G(x^{i}_{s}),y^{i}_{s}) \tag{4}\] \[+\lambda\left(\mathbb{E}_{x^{i}_{s}\sim D_{s}}\log[D(T(h^{i}_{s}))]\right.\] \[\left.+\mathbb{E}_{x^{j}_{t}\sim D_{t}}\log[1-D(T(h^{j}_{t}))]\right)\] \[\max_{D} \mathbb{E}_{x^{i}_{s}\sim D_{s}}\log[D(T(h^{i}_{s}))]+\mathbb{E}_{x^{j}_{t}\sim D_{t}}\log[1-D(T(h^{j}_{t}))],\]
where \(\lambda\) is a hyper-parameter between the source classifier and conditional domain discriminator, and note that \(h=(f,g)\) is the joint variable of domain-specific feature representation \(f\) and classifier prediction \(g\) for adversarial adaptation.
The general problem of adversarial domain adaptation of the proposed model for classification can be formulated as follows:
\[L=\min_{N}L_{clc}(x_{s_{i}},y_{s_{i}})-\lambda L_{dis}(x_{s},x_{t}) \tag{6}\] \[+\beta L_{IM}+\gamma L_{MCC}+\delta L_{MMD}+\eta L_{PLMMD}\]
where \(\lambda\), \(\beta\), \(\gamma\), \(\delta\), and \(\eta\) are hyper-parameters, \(L_{MCC}\) is the minimum class confusion loss, \(L_{MMD}\) is the maximum mean discrepancy loss, \(L_{PLMMD}\) represents the pseudo label (weighted) maximum mean discrepancy loss, and \(L_{IM}\) represents the information maximization loss. Each individual loss has its own strengths, and this novel combination of losses significantly surpasses the performance of CNN-based models as well as transformer-based models. A detailed description of all the losses is given below in the losses section.
Fig. 3: Architecture of the proposed networks, where domain-specific feature representation \(f\) and classifier prediction \(g\) embody the cross-domain gap to be reduced jointly by the conditional domain discriminator \(D\).
### _Losses_
#### Iv-A1 Maximum Mean Discrepancy
Maximum mean discrepancy (MMD) is a kernel-based statistical test used to determine whether two given distributions are the same [8, 9, 10]. Given a random variable X, a feature map \(\phi\) maps X to another space \(F\) such that \(\phi(X)\in F\). Assuming \(F\) satisfies the necessary conditions, we can benefit from the kernel trick to compute the inner product in \(F\):
\[k(X,Y)=\langle\phi(X),\phi(Y)\rangle_{F}, \tag{7}\]
where \(k\) is the kernel function, whose Gram matrix is computed over the samples. MMD is the distance between feature means. That is, for a given probability measure \(P\) on \(X\), the feature mean is another feature map that takes \(\phi(X)\) and maps it to the means of every coordinate of \(\phi(X)\):
\[\mu_{p}(\phi(X))=[\mathbb{E}[\phi(X_{1})],....,\mathbb{E}[\phi(X_{m})]]^{T} \tag{8}\]
The inner product of feature means of \(X\sim P\) and \(Y\sim Q\) can be written in terms of kernel function such that:
\[\begin{split}\langle\mu_{p}(\phi(X)),\mu_{q}(\phi(Y))\rangle_{F }=\mathbb{E}_{P,Q}[\langle\phi(X),\phi(Y)\rangle_{F}]\\ =\mathbb{E}_{P,Q}[k(X,Y)]\end{split} \tag{9}\]
Given \(X\), \(Y\) maximum mean discrepancy is the distance between feature means of \(X\), \(Y\) :
\[MMD^{2}(P,Q)=||\mu_{P}-\mu_{Q}||_{F}^{2} \tag{10}\]
\[\begin{split} MMD^{2}(P,Q)=\langle\mu_{P}-\mu_{Q},\mu_{P}-\mu_{Q }\rangle\\ =\langle\mu_{P},\mu_{P}\rangle-2\langle\mu_{P},\mu_{Q}\rangle+ \langle\mu_{Q},\mu_{Q}\rangle\end{split} \tag{11}\]
Using the equation (9), finally above expression becomes
\[\begin{split} L_{MMD}=MMD^{2}(P,Q)\\ =\mathbb{E}_{P}[k(X,X)]-2\mathbb{E}_{P,Q}[k(X,Y)]+\mathbb{E}_{Q}[ k(Y,Y)]\end{split} \tag{12}\]
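For reference, the empirical estimate of Eq. (12) can be written compactly as below. The Gaussian (RBF) kernel and its bandwidth are illustrative choices rather than the only admissible ones, and the estimate shown is the simple biased version.

```python
import torch


def rbf_kernel(a: torch.Tensor, b: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    # k(x, y) = exp(-||x - y||^2 / (2 * sigma^2)), evaluated pairwise.
    d2 = torch.cdist(a, b) ** 2
    return torch.exp(-d2 / (2.0 * sigma ** 2))


def mmd2(x: torch.Tensor, y: torch.Tensor, sigma: float = 1.0) -> torch.Tensor:
    """Biased empirical MMD^2(P, Q) of Eq. (12):
    E_P[k(X, X)] - 2 E_{P,Q}[k(X, Y)] + E_Q[k(Y, Y)]."""
    return (rbf_kernel(x, x, sigma).mean()
            - 2.0 * rbf_kernel(x, y, sigma).mean()
            + rbf_kernel(y, y, sigma).mean())
```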
#### Iv-A2 Pseudo Label Maximum Mean Discrepancy
We calculate PLMMD using a procedure similar to that used for the MMD loss in equation (12). However, our proposed loss differs in terms of the weights assigned to each similarity term. Hence we can define the PLMMD loss as:
\[\begin{split} L_{PLMMD}=w_{XX}\mathbb{E}_{P}[k(X,X)]-2w_{XY} \mathbb{E}_{P,Q}[k(X,Y)]\\ +w_{YY}\mathbb{E}_{Q}[k(Y,Y)],\end{split} \tag{13}\]
where \(w_{XX}\) denotes the weights for similarities within the source domain, \(w_{YY}\) the weights for similarities within the target domain, and \(w_{XY}\) the weights for similarities between the source and target domains. To calculate the weights, we first generate pseudo labels for the target using the source classifier. The source labels and target pseudo labels are then normalized to account for class imbalances. For each class common to both datasets, dot products of the normalized vectors are computed to quantify instance relationships. The calculated dot products are normalized by the count of common classes, ensuring fairness. This returns three weight arrays, representing relationships between instances in the source dataset, the target dataset, and source-to-target pairs.
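A corresponding sketch of Eq. (13) is given below. It assumes the pairwise weight matrices have already been built from the normalized source labels and target pseudo labels as just described; that weight construction and the RBF kernel choice should be read as assumptions rather than the exact implementation.

```python
import torch


def rbf_kernel(a, b, sigma: float = 1.0):
    return torch.exp(-torch.cdist(a, b) ** 2 / (2.0 * sigma ** 2))


def plmmd2(x_s, x_t, w_ss, w_st, w_tt, sigma: float = 1.0) -> torch.Tensor:
    """Pseudo label weighted MMD^2 of Eq. (13).

    w_ss, w_st, w_tt: pairwise weight matrices obtained from class-wise dot
    products of normalized source labels and target pseudo labels, divided by
    the number of shared classes (see the description above).
    """
    k_ss = rbf_kernel(x_s, x_s, sigma)
    k_st = rbf_kernel(x_s, x_t, sigma)
    k_tt = rbf_kernel(x_t, x_t, sigma)
    return (w_ss * k_ss).sum() - 2.0 * (w_st * k_st).sum() + (w_tt * k_tt).sum()
```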
#### Iv-A3 Minimum Class Confusion
The minimum class confusion loss \(\mathcal{L}_{MCC}\)[43] seeks to minimize confusion terms between classes \(j\) and \(j^{\prime}\), such that \(j\neq j^{\prime}\) where the indices are exhaustive over the set of classes. On the target domain, the class confusion term between two classes \(j\) and \(j^{\prime}\) is given by:
\[C_{jj^{\prime}}=\mathbf{\hat{y}}_{\cdot j}^{\intercal}\mathbf{\hat{y}}_{\cdot j^{\prime}}\]
A much more nuanced and meaningful formulation of the class confusion would be:
\[C_{jj^{\prime}}=\mathbf{\hat{y}}_{\cdot j}^{\intercal}\mathbf{W}\mathbf{\hat{y}}_{\cdot j^{\prime}}, \tag{14}\]
where the matrix \(\mathbf{W}\) is a diagonal matrix. The diagonal terms \(W_{ii}\) are given as the softmax outputs of the entropies in classifying a sample \(i\). \(\mathbf{\hat{y}}_{ij}\) is given as:
\[\mathbf{\hat{y}}_{ij}=\frac{\exp(Z_{ij}/T)}{\sum_{j^{\prime}=1}^{c}\exp(Z_{ij ^{\prime}}/T)}, \tag{15}\]
where \(c\) is the number of classes, \(T\) is the temperature coefficient, and \(Z_{ij}\) is the logistic output of the classifier layer for the class \(j\) and the sample \(i\).
After normalizing the class confusion terms, the final MCC Loss function is given as:
\[\mathcal{L}_{MCC}=\frac{1}{c}\sum_{j=1}^{c}\sum_{j^{\prime}\neq j}^{c}|C_{jj^{ \prime}}|, \tag{16}\]
which is the sum of all the non-diagonal elements of the class confusion matrix. The diagonal terms represent the "certainty" in the classifier, while the non-diagonal terms represent the "uncertainty" in classification. The MCC loss can be added in conjunction with other domain adaptation methods.
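A compact sketch of Eqs. (14)-(16) on a batch of target logits is shown below. The entropy-based diagonal weighting and the row normalization of the confusion matrix follow the description above and the cited MCC paper; the default temperature and the exact weighting constants should be treated as tunable assumptions.

```python
import torch
import torch.nn.functional as F


def mcc_loss(target_logits: torch.Tensor, temperature: float = 2.5) -> torch.Tensor:
    """Minimum Class Confusion loss, Eqs. (14)-(16)."""
    b, c = target_logits.shape
    y_hat = F.softmax(target_logits / temperature, dim=1)        # Eq. (15)
    # Per-sample prediction entropies -> diagonal weight matrix W (see text).
    ent = -(y_hat * torch.log(y_hat + 1e-8)).sum(dim=1)
    w = torch.diag(b * F.softmax(ent, dim=0))
    confusion = y_hat.t() @ w @ y_hat                            # Eq. (14)
    confusion = confusion / confusion.sum(dim=1, keepdim=True)   # normalize rows
    off_diagonal = confusion.sum() - torch.diagonal(confusion).sum()
    return off_diagonal / c                                      # Eq. (16)
```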
#### Iv-A4 Information Maximization loss
The Information Maximization loss is designed to encourage neural networks to learn more informative representations by maximizing the mutual information between the learned features and the input data [41, 42]. This type of loss aims to guide the model to capture relevant and distinctive patterns in the data, which can be especially valuable in scenarios where unsupervised learning, domain adaptation, or feature learning are important. The underlying assumptions are that \(p_{t}=\text{softmax}(N(f(x_{t})))\) should retain as much information about \(x_{t}\) as possible, and that the decision boundary should not cross high-density regions but instead lie in low-density regions, which is also known as the cluster assumption. These two assumptions can be met by maximizing the mutual information between the empirical distribution of the target inputs and the induced target label distribution, which can be formally defined as:
\[\begin{split} I(p_{t};x_{t})=H(\overline{p}_{t})-\frac{1}{n_{t}} \sum_{j=1}^{n_{t}}H(p_{tj})\\ =-\sum_{k=1}^{K}\overline{p}_{tk}\log(\overline{p}_{tk})+\frac{1}{ n_{t}}\sum_{j=1}^{n_{t}}\sum_{k=1}^{K}p_{tkj}\log(p_{tkj}),\end{split} \tag{17}\]
where \(p_{tj}=\text{softmax}(N(F(x_{tj})))\), \(\overline{p}_{t}=\mathbb{E}_{x_{t}}[p_{t}]\), and K is the number of classes. Maximizing \(-\frac{1}{n_{t}}\sum_{j=1}^{n_{t}}H(p_{tj})\) pushes the target predictions close to one-hot encodings, so the cluster assumption is satisfied. To ensure global diversity, we also maximize \(H(\overline{p}_{t})\) to avoid every target sample being assigned to the same class. With \(I(p_{t};x_{t})\), our model is encouraged to learn tightly clustered target features with uniform distribution, such that the discriminative information in the target domain is retained.
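Minimizing the negative of Eq. (17) gives a direct implementation; a minimal sketch follows, where the small constant added inside the logarithms is purely for numerical stability.

```python
import torch


def information_maximization_loss(target_logits: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Negative of I(p_t; x_t) in Eq. (17): minimizing this maximizes the
    entropy of the mean prediction while minimizing the mean per-sample entropy."""
    p = torch.softmax(target_logits, dim=1)
    mean_entropy = -(p * torch.log(p + eps)).sum(dim=1).mean()   # (1/n_t) sum_j H(p_tj)
    p_bar = p.mean(dim=0)
    marginal_entropy = -(p_bar * torch.log(p_bar + eps)).sum()   # H(p_bar)
    return mean_entropy - marginal_entropy
```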
## IV Experimentation and Results
**Dataset:** To evaluate the proposed method, we use the FHIST dataset, a benchmark proposed for the few-shot classification of histological images [47]. FHIST is composed of several histology datasets, namely CRC-TP [32], LC25000 [33], BreakHis [34], and NCT-CRC-HE-100K [35]. For each class, there are close to 20,000 patches in the CRC-TP domain with a patch size of 150\(\times\)150 pixels and around 10,000 patches of size 224\(\times\)224 pixels in the NCT domain. We performed experiments with CRC-TP as the source and NCT as the target and vice versa. The tSNE plots shown in Figure 4 depict the distribution of the target (NCT) at different stages of training. Different colors map different class types in the tSNE plot. We have plotted five classes in tSNE, which are Benign, Tumor, Debris, Inflammatory, and Muscle + Stroma, with 200 sample points from each of the five classes. We combined the last two classes because of their physiological as well as feature intertwining. The first plot (leftmost) shows the data distribution of NCT (as target) at epoch 0, the second one shows the data distribution of NCT after four epochs, and the last one (rightmost) shows the target (NCT) data distribution after six epochs of domain adaptation. These histology datasets consist of different tissue types and different organs. We consider each tissue type as a class label with one-hot encoding in the classification task. We framed our experiments on CRC-TP and NCT with six classes (Benign, Tumor, Muscle, Stroma, Debris, and Inflammatory).
**Implementation:** All the experiments were conducted on an NVIDIA A100 in PyTorch, using the CNN-based neural network (ResNet-152) pre-trained on ImageNet [31] as the backbone for our proposed model. The base learning rate is 0.00001 with a batch size of 32, and we train models for 20 epochs. The hyper-parameters were \(\beta\)=0.05, \(\gamma\)=1.4, \(\delta\)=0.54 and \(\eta\)=0.54 for the experiments of CRC-TP \(\rightarrow\) NCT and NCT \(\rightarrow\) CRC-TP. We used AdamW [36] with a momentum of 0.9 and a weight decay of 0.001 as the optimizer. We adopt the standard protocol for unsupervised domain adaptation (UDA) where all labeled source samples and unlabeled target samples are utilized for training. To report our results for each transfer task, we use center-crop images from the target domain and report the classification performance. For a fair comparison with prior works, we also conduct experiments with the same backbones as those works, namely the ViT-based [44] TVT [39] and the ResNet-50 [2]-based DANN [1], CDAN [3], GVB-GD [37], and CHATTY+MCC [38], on the FHIST dataset.
### _Results_
Our analysis in Table I depicts results with different methods and feature extractors for the FHIST dataset. The top five methods are CNN models using ResNet-50 as a feature extractor trained on the ImageNet dataset, while TVT uses a ViT-based model pretrained on the ImageNet-21k dataset. Our proposed method is a CNN-based model that utilizes ResNet-152 as a backbone with a novel combination of loss functions. Our model outperforms CNN-based models such as ResNet-50, DANN, CDAN, GVB-GD, and CHATTY+MCC, and surpasses the state-of-the-art (SoTA) CNN results by 6.56%. At the same time, our method also surpasses the transformer-based SoTA by 1.41%. We achieved an accuracy of 87.71% for CRC-TP to NCT domain adaptation and 74.81% for NCT to CRC-TP, with an average accuracy of 81.26% across both tasks, as highlighted in bold in Table I.
## Discussion and Conclusion
In this study, we have demonstrated that utilizing different combinations of loss functions with a CNN such as ResNet-152 can lead to significant improvements in unsupervised domain adaptation (UDA) performance, surpassing the performance of ViTs trained with other UDA methods. By leveraging the strengths of various loss functions tailored to specific domain characteristics, we have surpassed the state-of-the-art (SOTA) performance for histology images. We conducted ablation studies to understand the impact of different feature extractors such as ConvMixer [46] and ResNet-101 [2]. However, the performance in these cases was worse than our reported results. To understand the impact of individual losses and their combinations, we performed extensive experiments. Through comprehensive experiments, we discovered that the Minimum Class Confusion (MCC) loss enhances classification models by mitigating class confusion, particularly when faced with imbalanced class distributions. In parallel, we observed that information maximization losses aid the classifier in selecting the most certain samples for domain alignment. In our proposed approach, the Pseudo Label Maximum Mean Discrepancy (PLMMD) accelerates training convergence (comparison with the CHATTY model shown in Figure 5) and notably enhances domain alignment by incorporating weighted considerations. Additionally, the Maximum Mean Discrepancy (MMD) loss effectively narrows the gap between the mean embeddings of the two distributions. By artfully combining these distinctive loss functions, we not only surpass the current state-of-the-art but also achieve a comprehensive solution that advances the field of classification models in diverse scenarios.
|
2309.08734 | Ensemble Forecasts of Solar Wind Connectivity to 1 Rs using ADAPT-WSA | The solar wind which arrives at any location in the solar system is, in
principle, relatable to the outflow of solar plasma from a single source
location. This source location, itself usually being part of a larger coronal
hole, is traceable to 1 Rs along the Sun's magnetic field, in which the entire
path from 1 Rs to a location in the heliosphere is referred to as the solar
wind connectivity. While not directly measurable, the connectivity between the
near-Earth solar wind is of particular importance to space weather. The solar
wind solar source region can be obtained by leveraging near-sun magnetic field
models and a model of the interplanetary solar wind. In this article we present
a method for making an ensemble forecast of the connectivity presented as a
probability distribution obtained from a weighted collection of individual
forecasts from the combined Air Force Data Assimilative Photospheric Flux
Transport - Wang Sheeley Arge (ADAPT-WSA) model. The ADAPT model derives the
photospheric magnetic field from synchronic magnetogram data, using flux
transport physics and ongoing data assimilation processes. The WSA model uses a
coupled set of potential field type models to derive the coronal magnetic
field, and an empirical relationship to derive the terminal solar wind speed
observed at Earth. Our method produces an arbitrary 2D probability distribution
capable of reflecting complex source configurations with minimal assumptions
about the distribution structure, prepared in a computationally efficient
manner. | D. E. da Silva, S. Wallace, C. N. Arge, S. Jones | 2023-09-15T19:52:40Z | http://arxiv.org/abs/2309.08734v1 | # Ensemble Forecasts of Solar Wind Connectivity to \(1\ R_{s}\) using ADAPT-WSA
###### Abstract
The solar wind which arrives at any location in the solar system is, in principle, relatable to the outflow of solar plasma from a single source location. This source location, itself usually being part of a larger coronal hole, is traceable to \(1\ R_{S}\) along the Sun's magnetic field, in which the entire path from \(1\ R_{S}\) to a location in the heliosphere is referred to as the solar wind connectivity. While not directly measurable, the connectivity between the near-Earth solar wind is of particular importance to space weather. The solar wind solar source region can be obtained by leveraging near-sun magnetic field models and a model of the interplanetary solar wind. In this article we present a method for making an ensemble forecast of the connectivity presented as a probability distribution obtained from a weighted collection of individual forecasts from the combined Air Force Data Assimilative Photospheric Flux Transport - Wang Sheeley Arge (ADAPT-WSA) model. The ADAPT model derives the photospheric magnetic field from synchronic magnetogram data, using flux transport physics and ongoing data assimilation processes. The WSA model uses a coupled set of potential field type models to derive the coronal magnetic field, and an empirical relationship to derive the terminal solar wind speed observed at Earth. Our method produces an arbitrary 2D probability distribution capable of reflecting complex source configurations with minimal assumptions about the distribution structure, prepared in a computationally efficient manner.
## Plain Language Summary
Solar wind which arrives at Earth mostly comes from one place on the sun at a time. There is no measurement of where that place is, but scientific modeling can give us an estimate. In this article, we present one way of making that kind of estimate. Our way gives a probability distribution of where the solar wind is coming from, instead of a single estimate. The primary engine of our method is the ADAPT-WSA model. Applications of knowing where the solar wind is
2309.05210 | Understanding the Impact of Post-Training Quantization on Large Language
Models | Large language models (LLMs) are rapidly increasing in size, with the number
of parameters becoming a key factor in the success of many commercial models,
such as ChatGPT, Claude, and Bard. Even the recently released publicly
accessible models for commercial usage, such as Falcon and Llama2, come
equipped with billions of parameters. This significant increase in the number
of parameters makes deployment and operation very costly. The remarkable
progress in the field of quantization for large neural networks in general and
LLMs in particular, has made these models more accessible by enabling them to
be deployed on consumer-grade GPUs. Quantized models generally demonstrate
comparable performance levels to their unquantized base counterparts.
Nonetheless, there exists a notable gap in our comprehensive understanding of
how these quantized models respond to hyperparameters, such as temperature, max
new tokens, and topk, particularly for next word prediction. The present
analysis reveals that nf4 and fp4 are equally proficient 4-bit quantization
techniques, characterized by similar attributes such as inference speed, memory
consumption, and the quality of generated content. The study identifies nf4 as
displaying greater resilience to temperature variations in the case of the
llama2 series of models at lower temperature, while fp4 and fp4-dq proves to be
a more suitable choice for falcon series of models. It is noteworthy that, in
general, 4-bit quantized models of varying sizes exhibit higher sensitivity to
temperature in the range of 0.5 to 0.8, unlike their unquantized counterparts.
Additionally, int8 quantization is associated with significantly slower
inference speeds, whereas unquantized bfloat16 models consistently yield the
fastest inference speeds across models of all sizes. | Somnath Roy | 2023-09-11T02:58:32Z | http://arxiv.org/abs/2309.05210v3 | # Understanding the Impact of Post-Training Quantization on Large Language Models
###### Abstract
Large language models (LLMs) are rapidly increasing in size, with the number of parameters becoming a key factor in the success of many commercial models, such as ChatGPT, Claude, and Bard. Even the recently released publicly accessible models for commercial usage, such as Falcon and Llama2, come equipped with billions of parameters. This significant increase in the number of parameters makes deployment and operation very costly. The remarkable progress in the field of quantization for large neural networks in general and LLMs in particular, has made these models more accessible by enabling them to be deployed on consumer-grade GPUs. Quantized models generally demonstrate comparable performance levels to their unquantized base counterparts. Nonetheless, there exists a notable gap in our comprehensive understanding of how these quantized models respond to hyperparameters, such as temperature, max new tokens, and top_k, particularly for the next word prediction.
The present analysis reveals that nf4 and fp4 are equally proficient 4-bit quantization techniques, characterized by similar attributes such as inference speed, memory consumption, and the quality of generated content. Nevertheless, these quantization methods exhibit distinct behaviors at varying temperature settings, both in the context of smaller and larger models. Furthermore, the study identifies nf4 as displaying greater resilience to temperature variations in the case of the llama2 series of models at lower temperature, while fp4 and fp4-dq prove to be a more suitable choice for the falcon series of models. It is noteworthy that, in general, 4-bit quantized models of varying sizes exhibit higher sensitivity to temperature in the range of 0.5 to 0.8, unlike their unquantized counterparts. Additionally, int8 quantization is associated with significantly slower inference speeds, whereas unquantized bfloat16 models consistently yield the fastest inference speeds across models of all sizes.
The present analysis reveals that nf4 and fp4-4 are equally proficient 4-bit quantization techniques, characterized by similar attributes such as inference speed, memory consumption, and the quality of generated content. Nevertheless, these quantization methods exhibit distinct behaviors at varying temperature settings, both in the context of smaller and larger models. Furthermore, the study identifies nf4 as displaying greater resilience to temperature variations in the case of the llama_2 series of models at lower temperature, while fp4 and fp4-dq proves to be a more suitable choice for falcon series of models. It is noteworthy that, in general, 4-bit quantized models of varying sizes exhibit higher sensitivity to temperature in the range of 0.5 to 0.8, unlike their unquantized counterparts. Additionally, ints quantization is associated with significantly slower inference speeds, whereas unquantized bfbotal6 models consistently yield the fastest inference speeds across models of all sizes.
Sommath Roy Freshworks Inc [email protected] The widespread adoption of LLMs on a substantial scale gained traction following the successful establishment of ChatGPT (including GPT-3 and subsequent iterations) [2]. The pretraining of large transformer language models with 7 billion parameters and beyond demands a considerable amount of GPU computation, which can translate to costs amounting to millions of dollars. Such level of expenditure is beyond what academic research and small organizations can typically afford. Despite the high cost of deploying and operating large language models (LLMs), the recent release of the Falcon [6] and Llama2 [7] models has sparked optimism among small organizations and has increased their desire to deploy their own custom LLMs.
The efficient deployment of decoder only LLMs are challenging in practice because the generative inference proceeds sequentially, where the computation for each token depends on the previously generated tokens [8]. It is noteworthy that caching the attention key and value tensors of each layer can significantly improve the inference speed of smaller decoder-only models that fit on a single GPU memory. However, this is not possible for models that do not fit into the memory of a single GPU. To address the need for expensive high-end GPUs to support the deployment of these models, diverse forms of quantization have been put forward as potential solutions. The application of quantization methods to transformers emerges as a efficacious approach for mitigating sampling latency, while incurring minimal to negligible impact on overall performance [9]. Quantization techniques can be mainly characterized into three forms namely - i) quantization aware training [10, 11], ii) quantization aware fine-tuning [12, 13, 14], and iii) post training quantization (PTQ) [15, 16, 17]. In [18], the investigation primarily centers on evaluating the impact of diverse post-training quantization methods, employing perplexity scores as a benchmark. The perplexity scores are computed on datasets such as Wiki [19], PTB [20], and C4 [21], which most likely have served as foundational datasets during the training of most of the LLMs. It should be noted that these datasets are predisposed to exhibit favorable perplexity scores across all models, owing to their utilization in model training. Furthermore, it is acknowledged that perplexity, as a metric, may not effectively capture instances of repetitive generation within LLMs. Following outlines the primary contributions of the present study.
1. This study offers a systematic examination of the influence exerted by three pivotal hyper-parameters, namely, max new tokens, temperature, and top_k, on LLMs that have undergone quantization through widely adopted post-training quantization techniques such as [15]1 (hereafter, gptq) and
However, it is crucial to note that pure quantization is a more aggressive approach and can also lead to a greater loss of accuracy. On the other hand, simulated quantization is a conservative approach and can achieve significant speedups without sacrificing too much accuracy. Pure quantization can be further categorized into W8A8 and W4A4, where the weights and activations are quantized to 8-bit integers and 4-bit integers, respectively [29][27].
### Gptq
It is a layer-wise quantization method based on the Optimal Brain Quantization (OBQ) [30]. The goal is to find a quantized weight matrix \(\widetilde{W}\) that minimizes the squared error between the quantized layer output \(\widetilde{W}X\) and the full-precision layer output \(WX\) as shown below.
\[\operatorname*{argmin}_{\widetilde{W}}\lVert WX-\widetilde{W}X\rVert^{2}\]
The OBQ algorithm iteratively quantizes one weight at a time, while the GPTQ algorithm utilizes a vectorized implementation that allows it to efficiently handle multiple rows of the weight matrix in parallel. This makes GPTQ significantly faster than OBQ, especially for large models.
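The layer-wise objective above can be illustrated with a toy round-to-nearest baseline. The sketch below only shows the quantity being minimized; it is not the GPTQ solver itself, and the per-row min-max scaling is an assumption.

```python
import torch


def quantize_rtn(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """Toy per-row round-to-nearest quantization (NOT the GPTQ solver)."""
    qmax = 2 ** bits - 1
    w_min = w.min(dim=1, keepdim=True).values
    scale = (w.max(dim=1, keepdim=True).values - w_min).clamp(min=1e-8) / qmax
    q = torch.clamp(torch.round((w - w_min) / scale), 0, qmax)
    return q * scale + w_min                       # de-quantized weights


def layerwise_error(w: torch.Tensor, x: torch.Tensor, bits: int = 4) -> torch.Tensor:
    """||WX - W_q X||^2: the layer-wise reconstruction error GPTQ minimizes."""
    return torch.norm(w @ x - quantize_rtn(w, bits) @ x) ** 2
```

GPTQ improves on such a baseline by updating the remaining (unquantized) weights of each row to compensate for the error introduced by the weights already quantized.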
#### 2.1.1 GPU Memory Consumption in 4-bit GPTQ Quantization
It is well-established that the goal of quantization is to deploy LLMs on consumer-grade GPUs having at most 24 GB of memory. The distribution of GPU memory utilised by the models during GPTQ 4-bit quantization is shown below in Table 2. GPTQ quantization has the following limitations.
* It is very GPU memory intensive process.
* Even 4-bit quantization of 40B model throws out of memory (OOM) on 80GB A100 GPU machine. Moreover, it is not possible to quantize 7B models on 24GB A10 GPU machines.
#### 2.1.2 Layerwise Error induced by GPTQ
GPTQ 4-bit quantization reduces the size of a model by more than 80%, i.e., a model of 14 GB is reduced to around 2 GB post quantization. It is important to note that the quantization error introduced by GPTQ is different for different models, as shown in Table 3. This is because the models shown in Table 3 have different architectures, including the number of heads, number of layers, embedding dimension, number of query groups in multi-query attention, block size, and hidden dimension.
### bitsandbytes Quantizations
bitsandbytes (bnb) provides implementations of five powerful and state-of-the-art quantization techniques, namely i) int8, ii) fp4, iii) nf4, iv) fp4-dq, and v) nf4-dq. The int8 quantization procedure [14] uses vector-wise quantization with separate normalization constants for each inner product in the matrix multiplication. However, they have found around 0.1% dominant
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline
**Features** & **Simulated Quantization** & **Pure Quantization** \\ \hline Operations & Floating and Fixed point & Fixed-point \\ \hline Need for de-quantization & Yes & No \\ \hline Inference speed & Slower & Comparatively Faster \\ \hline \end{tabular}
\end{table}
Table 1: _General understanding of simulated vs. pure quantization in transformer based LLMs_
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|} \hline
**Model** & **GPU Memory(GB)** \\ \hline stablelm\_3b & 19.54 \\ \hline redpajama\_3b & 9.58 \\ \hline falcon\_7b & 23.64 \\ \hline llama2\_7b & 24.83 \\ \hline llama2\_13b & 40.46 \\ \hline falcon\_40b and llama2\_70b & OOM on single A100 80GB GPU \\ \hline \end{tabular}
\end{table}
Table 2: _Distribution of GPU memory consumed by GPTQ 4-bits quantization for different models evaluated on Nvidia A100 80GB GPU machine_
activation outliers that have the potential to degrade the quality, especially in bigger LLMs. Therefore, the precision for these dominant outliers is kept in float16. This scheme isolates the outlier feature dimensions into a 16-bit matrix multiplication, while still allowing more than 99.9% of the values to be multiplied in 8 bits.
QLoRA [12] introduced a new data type called 4-bit normal-float (nf4), which is optimal for normally distributed weights, double quantization to reduce the memory footprint, and paged optimizers to manage memory spikes. These techniques together yield excellent inference speed without sacrificing the quality of generation. In nf4 quantization, the base model weights are stored in the nf4 data type and computation is performed in bfloat16. However, the model weights are dequantized to bfloat16 in the forward pass for inference [31]. The bnb quantizations compress the model footprint in the range of 40% (int8) to 70% (nf4-dq). It is important to emphasize here that int8 quantization for llama2_70B throws an OOM error on an A100 80GB GPU machine. The rest of the details of the compressed model sizes corresponding to bnb quantizations are described in the following sections.
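For reference, the bnb precision variants discussed above can be requested through the Hugging Face `transformers` integration roughly as follows; the checkpoint name is illustrative, and this is not necessarily the exact loading path used in this study.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit nf4 with double quantization and bfloat16 compute (nf4-dq style).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",         # use "fp4" for the fp4 variants
    bnb_4bit_use_double_quant=True,    # False for plain nf4 / fp4
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",        # illustrative checkpoint
    quantization_config=bnb_config,
    device_map="auto",
)
```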
## 3 Experiment
This section provides a detailed description of the models, prompts, decoding approach, and related hyper-parameters used to generate the data for the analysis.
### Model Description
A total of seven pre-trained models with 3 billion to 70 billion parameters were selected for next-word prediction. These models are decoder-only, and their architecture-specific details are shown in Table 4. As can be seen, these models differ from each other in terms of the number of heads, number of layers, embedding dimension, number of query groups used in multi-query attention, sequence length, and intermediate size.
### Prompts Selection and Proposed Hypothesis
Ten prompts are designed to assess the quality and inference speed of pre-trained models for next word generation. These prompts are selected based on the simple hypotheses proposed below and are shown in Table 5.
**Hypothesis 1:** All pre-trained LLMs trained on billions or trillions of tokens can be ideally conceptualized as a large tree, where each node represents a topic and the text continuations associated with that topic. As we traverse down the tree, the text continuations become more specific and focused. Conversely, as we traverse up the tree, the text continuations become more general and abstract.
**Hypothesis 2:** The quality of a pre-trained model can be assessed based on its ability to accurately identify the correct topic node and then traverse to the sub-topic node for focused next word prediction.
### Decoder Description
The current experiment uses a bare top_k sampling decoder without any additional features, such as a repetition penalty. To assess the models' potential, we use a list of max new tokens, temperature, and top_k values. The max new tokens, temperature and top_k values are [50, 100, 150, 200, 250, 300, 350, 400, 450, 500], [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 1.0] and [1, 5, 10, 20, 50, 100, 200] respectively. The completion text is generated for every quantized model using all the combinations of max new tokens, temperature and top_k. The reason for a high top_k such as 200 is that it might allow models to choose more diverse, less repetitive, and semantically coherent text.
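A bare top-k sampling loop over this hyper-parameter grid can be sketched as follows, assuming a Hugging Face-style causal language model interface; for brevity the sketch omits the KV cache, so it recomputes the full prefix at every step.

```python
import torch

MAX_NEW_TOKENS = [50, 100, 150, 200, 250, 300, 350, 400, 450, 500]
TEMPERATURES = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 1.0]
TOP_KS = [1, 5, 10, 20, 50, 100, 200]


@torch.no_grad()
def sample_completion(model, tokenizer, prompt, max_new_tokens, temperature, top_k):
    """Bare top-k sampling without repetition penalty, as described above."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids.to(model.device)
    for _ in range(max_new_tokens):
        logits = model(ids).logits[:, -1, :] / temperature
        topk_logits, topk_idx = torch.topk(logits, top_k, dim=-1)
        probs = torch.softmax(topk_logits, dim=-1)
        next_id = topk_idx.gather(-1, torch.multinomial(probs, num_samples=1))
        ids = torch.cat([ids, next_id], dim=-1)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

# One completion is generated per (prompt, max_new_tokens, temperature, top_k)
# combination in the grid above.
```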
## 4 Analysis
A total of 6300 (10 prompts \(\times\) 10 max new tokens \(\times\) 9 temperature \(\times\) 7 top_k) completion texts are generated for each quantized model except falcon_40b and llama2_70b for 16bit 5. The evaluation of these completion texts is conducted by counting the duplicate content words, which serves as a metric for assessing the quality of the generated text. The content words are the remaining words after removing the stop words. Additionally, the model's size in gigabytes (GB) serves as a key measure for quantifying GPU memory consumption, while tokens/sec is employed as a metric to gauge the model's inference speed.
Footnote 5: Both falcon_40b and llama2_70b encounter OOM errors on an 80GB GPU machine. falcon_40b throws an OOM error when the max new tokens exceeds 400, while llama2_70b faces an OOM error during the loading process.
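One plausible reading of the duplicate-content-word metric described above is sketched below; the tokenizer, the stop-word list, and the exact way repeats are counted are assumptions rather than the study's exact implementation.

```python
import re
from collections import Counter


def count_duplicate_content_words(text: str, stop_words: set) -> int:
    """Count repeated occurrences of content words (non-stop words)."""
    tokens = re.findall(r"[a-z']+", text.lower())
    content = [t for t in tokens if t not in stop_words]
    counts = Counter(content)
    # Every occurrence beyond the first counts as a duplicate.
    return sum(n - 1 for n in counts.values() if n > 1)
```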
### Memory Consumption and Inference Speed
The utilization of int8 quantization demonstrates a significant reduction in memory consumption, approximately in the range of 40% to 50%, when compared to bfloat16, as illustrated in Table 6. Nonetheless, it is important to note that this enhancement is accompanied by a corresponding trade-off in inference speed, with int8 exhibiting a slowdown of roughly 75% to 80% in comparison to bfloat16, as indicated in Table 7.
When evaluating memory consumption between the fp4 and nf4 quantization approaches for model sizes up to 13 billion, their distinctions are negligible. However, nf4 quantization exhibits a slight advantage over fp4 in terms of memory consumption for larger models such as falcon_40b and llama2_70b. Nevertheless, fp4-dq is found to be better in memory consumption (i.e., takes less memory) across the models compared to its counterpart, as shown in Table 6. It is worth noting that while double quantization offers a clear advantage in memory consumption, it results in an inference speed reduction of approximately 10% to 25% compared to the absence of double quantization, as outlined in Table 7.
In conclusion, among the various quantization methods,
\begin{table}
\begin{tabular}{|c|c|c|c|c|} \hline
**Model** & **mlp.proj** & **att.proj** & **mlp.fc** & **attn.attn** \\ \hline stablelm\_3b & 52850.7 & 12638.9 & 383200.9 & 844806.3 \\ redpajama\_3b & 23448.1 & 1048.9 & 137061.9 & 138947.9 \\ falcon\_7b & 19194.83 & 2362.39 & 149962.4 & 32886.3 \\ llama2\_7b & 22773.0 & 3198.7 & 170837.6 & 248520.0 \\ llama2\_13b & 27829.5 & 5470.5 & 247389.2 & 301002.0 \\ \hline \end{tabular}
\end{table}
Table 3: _Quantization error introduced by GPTQ in mlp projection, attention projection, fully connected and attention layers._
bfloat16 stands out as the least efficient in terms of memory consumption. However, it excels in terms of inference speed, except in the case of stablelm_3b.
### Temperature vs. Quality of Generation
A common pattern emerges within all quantization approaches, wherein an increase in temperature correlates with an increase in the number of duplicate content words, except for bfloat16. However, it is worth noting that some models are more sensitive than others, even at temperatures lower than 0.5.
When comparing the performance of stablelm_3b and redpajama_3b models, it becomes evident that the fp4 and nf4-dq quantization methods exhibit suboptimal results, characterized by an increased occurrence of duplicate words at lower temperature settings. However, the situation varies when considering falcon models, where nf4 quantization consistently demonstrates inferior performance across the entire temperature spectrum in comparison to other quantization methods.
In contrast, when assessing llama2 models, the situation becomes more nuanced, with most quantization approaches contributing significantly to repetitive generation. In this context, determining a clear front-runner among these methods proves to be a challenging task. Nevertheless, it is noteworthy that for the llama2_70b model, both fp4 and fp4-dq quantization methods outshine the others in terms of performance.
The analysis reveals that the int8-quantized models demonstrate effective control over the occurrence of duplicate content words for both llama2_13b and llama2_70b, effectively limiting them to around 40. In contrast, the bfloat16 models exhibit a characteristic independence from temperature scaling, as they consistently generate a comparable number of repetitive words across all temperature settings, except for redpajama_3b.
### Max Returned Tokens vs. Quality of Generation
The term max returned tokens encompasses the combined value of max new tokens and the length of the input prompt in tokens. The analysis reveals that the count of duplicate words generated increases linearly with the increase of max returned tokens across all models and quantization methods.
### Top_k vs. Quality of Generation
The analysis offers a somewhat surprising insight, indicating that setting top_k equal to 1 tends to result in the lowest occurrence of duplicate words across models and quantization methods. Nonetheless, it's noteworthy that this effect reaches a point of saturation and loses distinctiveness when top_k is equal to or greater than 5.
### Overall Comparison
In terms of the average number of duplicate content words6 generated in absolute terms, our analysis reveals the following insights:
Footnote 6: The total number of content words generated for the unquantized model lies in the range of 1.34M to 1.45M, and the maximum number of duplicate words is around 80K.
* For fp4 and fp4-dq compared to nf4 and nf4-dq across various models (except llama2 series), there is a consistent reduction in repetitive generation, typically ranging from 12% to 20% relative.
* In the case of nf4 and nf4-dq for llama2 models of different sizes, there is a more noticeable advantage, with relative reduction of 9% to 11% in repetitive generation.
* Int8 quantization has a more pronounced limitation on the number of generated words, producing approximately 30-50% fewer content words than 4-bit quantization. Additionally, it produces 25-40% more duplicate content words relative to 4-bit quantization at normalized scale.
* When comparing bfloat16 with 4-bit quantization, it's noteworthy that bfloat16 generally produces a greater number of content words, often by approximately 3% to 10%. Nonetheless, bfloat16 tends to generate a marginally higher number of
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline
**Model** & **n\_head** & **n\_layer** & **embed\_dim** & **n\_query\_groups\_mqa** & **block\_size** & **intermediate\_size** \\ \hline stablelm\_3b & 32 & 16 & 4096 & 32 & 4096 & 16384 \\ redpajama\_3b & 32 & 32 & 2560 & 32 & 2048 & 10240 \\ falcon\_7b & 71 & 32 & 4544 & 1 & 2048 & 18176 \\ llama2\_7b & 32 & 32 & 4096 & 32 & 4096 & 11008 \\ llama2\_13b & 40 & 40 & 4096 & 32 & 5120 & 13824 \\ falcon\_40b & 128 & 60 & 8192 & 8 & 2048 & 32768 \\ llama2\_70b & 64 & 80 & 8192 & 8 & 5120 & 28672 \\ \hline \end{tabular}
\end{table}
Table 4: _Description of most relevant architectural specs of the pre-trained models used during the experiment_
\begin{table}
\begin{tabular}{|c|c|} \hline
**Prompt** & **General Expected Continuation** \\ \hline Life in London & Travel/Cultural/Work-Related/London specific stuff \\ It is easy to be a technic & Comparison of techie with other probable roles in tech sector \\ Stock brokers are earning & Stock brokers and their earning style, sources, etc \\ It looks like written by Shakespeare & Shakespeare style text comparison \\ Hello, my name is & Chat or Introduction \\ Global warming and AI & Global Warming and AI in general as well as their +ive and -ive association \\ Current world order & Essay/Discussion/Power and Politics related tow world order \\ Percentage of people adopre actors and singers & Stats on people following their favourite actors/singers and discussion on the related topic \\ Exercise and eating habits for & Eating habits and exercise routine in general (pros and cons) \\ Millennial and genz & Comparison and contrast between millennial and genz \\ \hline \end{tabular}
\end{table}
Table 5: _Prompts Description_
duplicate words, indicating a relative inferiority of 1% to 3.5% compared with 4-bit quantization.
The computation of average perplexity scores, with a token stride of 512, is conducted for all quantization levels across each model. An examination of these scores reveals that the perplexity values for all models reside within a relatively constrained range, typically ranging from 12 to 15. Consequently, it is discerned that perplexity, within this context, may not serve as a suitable metric for assessing the quality of the generated text.
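The strided perplexity computation follows the usual sliding-window recipe; the sketch below assumes a Hugging Face-style causal LM and a window length of 2048 tokens (an assumption), with the token stride of 512 stated above.

```python
import torch


@torch.no_grad()
def strided_perplexity(model, tokenizer, text, stride=512, window=2048):
    """Sliding-window perplexity; each window scores only its new tokens."""
    ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    seq_len = ids.size(1)
    nll_sum, n_tokens, prev_end = 0.0, 0, 0
    for begin in range(0, seq_len, stride):
        end = min(begin + window, seq_len)
        target_len = end - prev_end                 # tokens not yet scored
        input_ids = ids[:, begin:end]
        labels = input_ids.clone()
        labels[:, :-target_len] = -100              # mask already-scored context
        loss = model(input_ids, labels=labels).loss
        nll_sum += loss.item() * target_len
        n_tokens += target_len
        prev_end = end
        if end == seq_len:
            break
    return float(torch.exp(torch.tensor(nll_sum / n_tokens)))
```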
## 5 Conclusions
In scenarios where GPU memory is not a limiting factor and the utmost priority is placed on achieving both high inference speed and accuracy, it is advisable to prioritize the utilization of bfloat16 for models up to 7 billion parameters. This preference arises due to its reduced susceptibility to variations in temperature and max new tokens. Moreover, models of up to 7 billion parameters fit comfortably into a consumer-grade GPU machine. Alternatively, nf4 and fp4 serve as the default choice for individuals seeking a balance between GPU utilization, accuracy, and inference speed, thus offering a middle-ground solution that combines all aspects effectively.
It's worth noting that the adoption of double quantization, such as fp4-dq and nf4-dq, can lead to a marginal reduction in memory footprint. However, it is accompanied by a relatively decreased inference speed. Hence, the recommendation leans toward using quantization without the doubling approach. Additionally, when considering the nf4 and fp4 precision combination, it is recommended to use a temperature of less than 0.5, exactly 1.0, or a combination of these values to achieve optimal performance.
The current evaluation does not consider int8 to be a feasible alternative to other quantization methods. While int8 reduces memory usage, it significantly slows down inference and produces around 30-50% fewer words than other quantization methods.
It is important to note that the current experiment did not achieve satisfactory results in terms of accuracy and inference speed when using gptq 4-bit quantization. Further investigation is needed to replicate the comparable performance that has been reported in other studies7. Therefore, this result is not included in the analysis presented.
Footnote 7: [https://github.com/PanQiWei/AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ)
## 6 Limitations and Future Work
The current study is conducted on 7 models ranging in size from 3 billion to 70 billion parameters, and 10 prompts are used for next-word prediction using various combinations of hyperparameters. Further study with more models (containing \(\leq\) 1 billion parameters) and prompts might provide more insights into the effects of these hyperparameters on relatively smaller quantized LLMs.
Future work will focus on the primary causes of repetitive generation and their relationship to Hypothesis 1 and Hypothesis 2. Moreover, the results show that falcon has a faster inference speed than llama2 in the 7B category. However, falcon has a higher number of overall parameters than llama2. Therefore, future research will focus on model-specific factors that affect inference speed.
|
2309.15017 | Studying the association between Gitcoin's issues and resolving outcomes | The development of open-source software (OSS) projects usually have been
driven through collaborations among contributors and strongly relies on
volunteering. Thus, allocating software practitioners (e.g., contributors) to a
particular task is non-trivial and draws attention away from the development.
Therefore, a number of bug bounty platforms have emerged to address this
problem through bounty rewards. Especially, Gitcoin, a new bounty platform,
introduces a bounty reward mechanism that allows individual issue owners
(backers) to define a reward value using cryptocurrencies rather than using
crowdfunding mechanisms. Although a number of studies have investigated the
phenomenon on bounty platforms, those rely on different bounty reward systems.
Our study thus investigates the association between the Gitcoin bounties and
their outcomes (i.e., success and non-success). We empirically study over 4,000
issues with Gitcoin bounties using statistical analysis and machine learning
techniques. We also conducted a comparative study with the Bountysource
platform to gain insights into the usage of both platforms. Our study
highlights the importance of factors such as the length of the project, issue
description, type of bounty issue, and the bounty value, which are found to be
highly correlated with the outcome of bounty issues. These findings can provide
useful guidance to practitioners. | Morakot Choetkiertikul, Arada Puengmongkolchaikit, Pandaree Chandra, Chaiyong Ragkitwetsakul, Rungroj Maipradit, Hideaki Hata, Thanwadee Sunetnanta, Kenichi Matsumoto | 2023-09-26T15:36:55Z | http://arxiv.org/abs/2309.15017v1 | # Studying the association between Gitcoin's issues and resolving outcomes
###### Abstract
The development of open-source software (OSS) projects has usually been driven through collaborations among contributors and strongly relies on volunteering. Thus, allocating software practitioners (e.g., contributors) to a particular task is non-trivial and draws attention away from the development. Therefore, a number of bug bounty platforms have emerged to address this problem through bounty rewards. Especially, Gitcoin, a new bounty platform, introduces a bounty reward mechanism that allows individual issue owners (backers) to define a reward value using cryptocurrencies rather than using crowdfunding mechanisms. Although a number of studies have investigated the phenomenon on bounty platforms, those rely on different bounty reward systems. Our study thus investigates the association between the Gitcoin bounties and their outcomes (i.e., success and non-success). We empirically study over 4,000 issues with Gitcoin bounties using statistical analysis and machine learning techniques. We also conducted a comparative study with the Bountysource platform to gain insights into the usage of both platforms. Our study highlights the importance of factors such as the length of the project, issue description, type of bounty issue, and the bounty value, which are found to be highly correlated with the outcome of bounty issues. These findings can provide useful guidance to practitioners.
## 1 Introduction
The development of Open-Source Software (OSS) projects is encouraged through collaborations and knowledge sharing among project contributors. It mostly relies on volunteer work. Recruiting an experienced developer for a challenging task becomes a non-trivial task (Choi, Chengalur-Smith and Whitmore, 2010). Thus, bounty platforms are used to support the development of OSS projects through the mechanisms of bounty and crowdsourcing (Zhou, Wang, Zhang, Chen and Hassan, 2021; Kanda, Guo, Hata and Matsumoto, 2017; Zhou, Wang, Bezemer, Zou and Hassan, 2020; Hata, Guo and Babar, 2017). A bounty refers to a reward for developers who can accomplish tasks, such as bug fixing or new feature creation, declared on a bounty platform (Finitter, Akhawe and Wagner, 2013). Those rewards could be in cash, cryptocurrency, or badges that can reflect the developer's reputation (Nakasai, Hata and Matsumoto, 2019). The previous study has shown that monetary incentive is an essential motivation that can attract the developers to contribute to each project on the platforms (Hata et al., 2017).
Recently, there have been a number of bounty platforms allowing contributors to participate and contribute to OSS projects, such as Bountysource1 and HackerOne.2 Specifically, the crowdsourcing bounty platform allows backers to support any specific tasks of projects, such as implementing new features or fixing bugs by providing rewards on resolving tasks as bounties. Several empirical studies aim to investigate the impact on software development projects using this bounty mechanism. Kanda et al. (2017) report that bounties tend to attract developers to work on the projects more than those without bounties. Several existing empirical studies on the Bountysource platform found that the amount of bounties is a factor that affects the outcome of the project (Kanda et al., 2017; Zhou et al., 2020).
Footnote 1: [https://www.bountysource.com](https://www.bountysource.com)
Footnote 2: [https://www.hackerone.com](https://www.hackerone.com)
Footnote 3: [http://gitcoin.co](http://gitcoin.co)
Footnote 4: [https://gitcoin.co/bounties/funder](https://gitcoin.co/bounties/funder)
Gitcoin3 is a new bounty platform established in 2017. It plays a role as a bounty-based collaboration community for funders (i.e., bounty issue owners) and developers (i.e., contributors) to work on the issues easily on GitHub.4 Gitcoin purely uses the Ethereum blockchain and cryptocurrency, such as Bitcoin, for their reward system. Funders provide a bounty in cryptocurrency-based rewards. In particular, Gitcoin allows a funder to solely support a bounty task, such as a project development task and bug resolving task, which is different from traditional bounty platforms that rely on crowdfunding mechanisms that require a number of funders to support a bounty. Currently, Gitcoin is gaining
more attention from funders and developers because of its unique reward system, as the number of bounties created on Gitcoin and the number of active developers are increasing. Over 6,000 bounty issues have been created on Gitcoin, and over 10,000 active developers participate in those bounties.5 While previous studies have focused on traditional bounty platforms such as Bountysource, it is important to note that these platforms share some common characteristics while also exhibiting differences. However, there remains a gap in understanding the dynamics and characteristics specific to cryptocurrency-based bounty platforms. To address this gap, a study on cryptocurrency-based bounty platforms and a comparative study between the two platforms are required.
Footnote 5: The data were collected on 20 January 2021.
In this paper, we aim to investigate the factors that impact the success of bounty issue resolution on Gitcoin. We thus adopt the study approach from Zhou et al. (2020, 2021). In our study, we empirically study over 4,000 bounty issues (i.e., issue reports) in Gitcoin created between September 2017 and December 2020, which involve 1,096 software project repositories hosted in GitHub with a total bounty value of over 18,000 ETH and 11,000,000 USDT. Our study extracted 28 features characterizing the Gitcoin bounty issues (e.g., developer's experience level, bounty proposing time, and estimated development time). The study conducted by Zhou et al. (2020, 2021) on Bountysource highlighted the significant relationship between the likelihood of issue resolution and factors such as the timing of proposing bounties, the bounty value, and the duration of the issue. Our study, conducted on a different bounty platform, confirms these findings as we observe similar factors that impact issues on Gitcoin. However, our study also uncovers additional insights. We found that the description of tasks, the experience level of contributors, and the type of contribution are crucial in determining the success rate of issues. Furthermore, our comparative analysis reveals differences in terms of project types and focused topics on these platforms, which can serve as a guide for platform selection and creating issues with a higher chance of success.
This paper is organized as follows. Section 2 provides the background of bounty platforms, explains the Gitcoin platform, and discusses related work on studies of bounty platforms. Section 3 discusses our study approach, including data collection, preprocessing and labeling, and data analysis. We then discuss our extracted features of the Gitcoin bounties in Section 4. In Section 5, we discuss the results from our feature and correlation analysis. We then report and discuss our findings in identifying important features in Section 6. In Section 7, we discuss the threats to the validity of our study, and we conclude our study in Section 8.
## 2 Background and related work
### _Bounty platforms_
Using bounties to accomplish tasks in software development projects has become a popular approach (Atiq and Tripathi, 2014). Since an OSS project allows developers to participate in the development of the projects, a bounty can thus broaden the engagement of developers working on the projects (Zhou, Wang, Bezemer and Hassan, 2020). A bounty could be in various forms, such as cash, cryptocurrency, and a badge. A badge is used to enhance the reputation of developers (Nakasai et al., 2019). Funding the project as a bounty is a popular way to drive OSS projects, which allows any developer to work on the projects freely. A funder can be either an individual or an organization that needs to accomplish a specific task (Coelho, Valente, Silva and Hora, 2018). For example, a funder may be a project user who needs a new feature implemented in the project.
Several studies were conducted to understand both intrinsic and extrinsic incentives of developer contributions. An intrinsic incentive refers to the enjoyment of contribution, the ability to work on the projects, or even the satisfaction of solutions (Krishnamurthy, Ou and Tripathi, 2014). In contrast, an extrinsic incentive can be motivated by the desire to get rewards. Studies reported that the developers are likely to complete tasks faster if an amount of bounty is provided (Kanda et al., 2017; Hata et al., 2017). The Bountysource platform is among the most popular and widely used (Zhou et al., 2020). Bountysource allows several funders to fund a bounty issue. Then, the developers can work on the projects through an issue tracking system such as GitHub and Bugzilla. In contrast, a Gitcoin bounty platform allows only one funder to fund a bounty issue.
### _Gitcoin_
Gitcoin is a bounty platform that allows developers to contribute to OSS projects with bounties. Gitcoin purely uses cryptocurrencies as a bounty reward for those who can resolve issues. This rewarding approach provides a high level of security on decentralized systems and no transaction cost (Titov, Lundykova, Litvishko, Kalmykova, Prosekov and Senjyu, 2021). Gitcoin supports several types of tokens in cryptocurrency networks, such as Ethereum (ETH) and Bitcoin (BTC) (Li, Lu, Chen, Liu and Xu, 2019). Gitcoin particularly supports ERC20 tokens in its ecosystem. ERC20 tokens use smart contracts to organize and formalize agreements among users on the network (e.g., companies and entrepreneurs) (Chen, Zhang, Chen, Zheng and Lu, 2020). Therefore, supporting ERC20 on the Gitcoin platform could ensure authenticity and credibility between funders and contributors. With this intention, funders can use well-known cryptocurrencies to fund issues on Gitcoin safely in any of the token types granted on the platform.
Figure 1 shows the lifecycle of the Gitcoin bounty issue. Gitcoin uses GitHub's issue-tracking system to track issue-resolving progress. A GitHub issue corresponding to the Gitcoin bounty issue must exist in the GitHub issue tracking system. The GitHub issue is usually initiated by an issue owner (i.e., funder/backer) of a GitHub repository in Step 1. The issue owner then initiates the issue on the Gitcoin platform and fills in the issue's details, such as title,
contribution type, required experience level, expected task duration, and bounty value, as well as the corresponding GitHub issue link (Step 2). In Step 3, developers (i.e., contributors) can find Gitcoin issues that they would like to contribute to on the Gitcoin platform. In addition, the issue owner can select whether developers must be approved before contributing or whether any developers can contribute to the issue (i.e., permissionless mode). In the former case, the prospective developer must send a request to contribute to the issue owner and wait for acceptance before starting the work (Step 4). Once the developer has been approved to work on the issue (Step 5), they can access the source code and related resources from the provided GitHub issue and can also communicate with the issue owner from there. The developer resolves the issue in Step 7, and the issue owner can review the issue and decide whether to accept the issue resolution in Step 8. The issue owner then confirms the completion of the task on the Gitcoin platform to close the Gitcoin issue (Step 9). The developer can then claim the bounty reward in the last step (Step 10). Figure 2 shows an example of a Gitcoin issue.
### Studies of bounty platforms
Several bounty programs were launched in the past. For example, vendors of proprietary software like Google or Mozilla Firefox allowed internal employees to work on vulnerability rewards programs (Luna, Allodi and Cremonini, 2019; Finifter et al., 2013). The empirical study from Finifter et al. (2013) described two vulnerability rewards programs (VRPs), those of Chrome and Mozilla Firefox. They indicated that the monetary incentive could also help prevent researchers from selling the results of their security research in a gray market. In addition, Zhao, Grossklags and Chen (2014) studied platforms that act as middlemen between the contributors and the software vendors, such as BugCrowd,6 Synack,7 and CrowdCurity (now known as Cobalt).8 In particular, bounties in these platforms focus on security and penetration testing.
Footnote 6: [https://www.bugcrowd.com/](https://www.bugcrowd.com/)
Footnote 7: [https://www.synack.com/](https://www.synack.com/)
Footnote 8: [https://cobalt.io/](https://cobalt.io/)
Footnote 9: [https://gitton.co/issue/octopus-network/oct-tokens-eth/1/100226469](https://gitton.co/issue/octopus-network/oct-tokens-eth/1/100226469)
Footnote 10: [https://stackoverflow.com/](https://stackoverflow.com/)
Apart from the vulnerability rewards programs, the empirical studies of bounty platforms for open-source software projects are also ubiquitous in the software engineering research community. A number of projects cannot be accomplished without collaboration among the contributors in a community. According to the study from Zhou et al. (2020), a question-and-answer forum such as Stack Overflow11 is one type of bounty mechanism that allows contributors (i.e., answerers) to get the forum's reputation points. Moreover, Wang, Chen and Hassan (2018) described that the incentive systems like gamification were developed to encourage users to provide answers to questions. Thus, it can help motivate users to engage with the questions.
Footnote 11: [https://stackoverflow.com/](https://stackoverflow.com/)
As the number of bounty hunters (i.e., the ones who mainly work on resolving issues with a bounty reward) is
Figure 1: Gitcoin issue’s lifecycle
increasing in software development communities, a crowdsourcing bounty platform allows multiple funders to fund a bounty. The empirical study of Bountysource by Kanda et al. (2017) demonstrated that issues with bounties are more likely to be solved than those without bounties. Further empirical studies from Zhou et al. (2020) and Zhou et al. (2021) indicate that, on the Bountysource bounty platform, funders are likely to offer bounties of higher value and more frequently than individual backers (Zhou et al., 2021). They also reported that the bounty value is not the most important factor that attracts contributors to address the issues. They indicated that some contributors are not motivated only by rewards or monetization. Instead, they could be driven by their own interests or desires to commit to the work (Zhou et al., 2020). We thus adopt their study approach to understand the factors derived from the Gitcoin bounty platform.
## 3 Methodology
In this section, we describe the approach of our study, including data collection, preprocessing, and labeling. The overview of our study is shown in Figure 3.
### Overview
We first collected bounty issues from the Gitcoin platform. We performed data preprocessing and labeling to identify issues that were successfully resolved, and the bounty rewards were claimed. We then perform feature extraction to characterize the Gitcoin issues. We extracted four groups of features from the issues, which are 1. primitive attributes, 2. bounty value-related features, 3. activity-related features, and 4. duration-related features. We then analyze the extracted features to investigate the factors that correlate and impact the outcome of Gitcoin's issues. In the first approach, we apply correlation analysis techniques, using the Spearman Rank Correlation Coefficient (Liu, 2010), to explore the correlation among each feature. To control for the risk of false positive results due to multiple comparisons, we use the Bonferroni correction in this analysis (Arcuri and Briand, 2014) (see Section 5).We then employed machine learning techniques, including Random forests (Lan, 2019) and logistic regression (Harrell, 2015), to construct classification models to study the relationship between the extracted features and the outcome of the issues (see Section 6). Specifically, we aimed to identify the features that have a strong correlation with the outcomes of the issues. We then applied the Point Biserial Correlation Coefficient (Bonett, 2020) with Bonferroni adjustment (Arcuri and Briand, 2014) to observe the correlation between important features and the outcomes. These approaches complement each other and help to conclude our study. The data used in our study, including raw data of collected Gitcoin issues and extracted features and the source code used in the study, are made publicly available at [https://doi.org/10.5281/zenodo.831355](https://doi.org/10.5281/zenodo.831355). Furthermore, to gain a comprehensive understanding of the differences and similarities in the issues, we conducted a comparison study between Gitcoin and Bountysource. This study is described in detail in Section 6.3. To perform the study, we used the Bountysource dataset provided by Zhou et al. (Zhou et al., 2020) as well as additional data that we collected.
### Data collection and preprocessing
We collected bounty issues that were created from September 2017 to December 2020 on the Gitcoin bounty
Figure 2: An example of the Gitcoin issue10
platform via Gitcoin's Application Program Interface (API).12 Specifically, we used the _bounties_ API to retrieve a list of bounty issues. The API responds with files in JSON format that contains all the information we need for our analysis. To prepare the data for our study, we apply various techniques such as data preprocessing, filtering, and feature extraction to the raw JSON files. Table 1 shows the number of issues in our dataset. We collected a total of 6,638 bounty issues from Gitcoin. In Gitcoin, an issue can be created on two Ethereum networks: Mainnet and Rinkeby. The latter is used by developers for testing the platform, while the former is used for the actual transactions. Therefore, our study only uses the issues created on the Mainnet network. In total, our dataset contains 4,584 issues in Mainnet (69% of the collected issues) as shown in Table 1.
Footnote 12: [https://docs.gitcoin.co/ikse/netmask/netmask-mobile/2954/100026352](https://docs.gitcoin.co/ikse/netmask/netmask-mobile/2954/100026352)
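To illustrate this collection step, a minimal Python sketch of paging through the bounties endpoint is shown below; the endpoint path, query parameters, and pagination scheme are assumptions for illustration and may differ from the actual Gitcoin API.

```python
import json
import requests

# Assumed endpoint and parameters; the real Gitcoin bounties API may differ.
API_URL = "https://gitcoin.co/api/v0.1/bounties/"

def fetch_bounties(limit=100, max_pages=100):
    """Page through the bounties endpoint and collect raw JSON records."""
    bounties, offset = [], 0
    for _ in range(max_pages):
        resp = requests.get(API_URL, params={"limit": limit, "offset": offset}, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        if not page:
            break
        bounties.extend(page)
        offset += limit
    return bounties

if __name__ == "__main__":
    records = fetch_bounties()
    with open("gitcoin_bounties_raw.json", "w") as fh:
        json.dump(records, fh)
    print(f"Collected {len(records)} bounty records")
```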
In addition, our study focuses on the Gitcoin issue-resolving outcomes. The collected issues were thus classified into two classes which are _success issues_ and _non-success issues_. A success bounty refers to an issue that is successfully solved, and a bounty reward has been paid out. On the other hand, a non-success bounty implies an unsuccessfully resolved and unpaid issue. These labels were determined based on the status of the issues and the bounty-paid status of the Gitcoin issues. Among those collected issues, 2,662 issues (58.1%) were marked as success issues.
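The filtering and labeling step could look like the following sketch with pandas; the field names (e.g., `network`, `status`, `paid`) are assumed stand-ins for the actual JSON schema of the collected records.

```python
import json
import pandas as pd

# Load the raw records collected from the API into a flat table.
with open("gitcoin_bounties_raw.json") as fh:
    records = json.load(fh)
issues = pd.json_normalize(records)

# Keep only issues created on the Mainnet network (Rinkeby is for testing).
mainnet = issues[issues["network"] == "mainnet"].copy()

# Label outcomes: a success issue is resolved and its bounty has been paid out.
# The column names "status" and "paid" are assumptions about the schema.
mainnet["outcome"] = (
    (mainnet["status"] == "done") & (mainnet["paid"])
).map({True: "success", False: "non-success"})

print(mainnet["outcome"].value_counts())
```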
## 4 Features
Table 2 shows the list of extracted features. In this section, we discuss each feature in detail.
### Primitive attribute of the Gitcoin issue
The primitive attributes indicate the basic information of an issue, e.g., bounty types, contribution types, and length of projects.
#### 4.1.1 Bounty types
The bounty type feature (_bounty_type_) explains the types of bounty provided in Gitcoin. This informs contributors what the bounty type of an issue is. There are 9 bounty types of an issue as depicted in Table 3. The first type is the _Feature_ type. We found 1,851 issues (40.4%) from Mainnet which were assigned to this type. This is the largest type, which reflects that an issue requires the contributors to develop or create new features of a system. The second most popular bounty type is _Improvement_, which requires improvements to existing features, functions, or the system that the issue owners own. We found 679 issues (14.8%) assigned to an improvement type. The third type is _Bug_ (333 issues - 7.3%). The _Bug_ type indicates a bug-fixing task. For example, the issue Not Able To Login Into WordPress From Metamask Mobile Browser13 is a bug type requiring someone to fix an error when logging into WordPress from the Metamask mobile browser. The next type is the _Documentation_. This type has only 224 issues (4.9% of the total number of issues). The _Documentation_ type could be related to information documentation such as programming guides, tutorials, descriptions of technologies, and translations.
Footnote 13: [https://gitcoin.co/issue/metanask/metanask-mobile/2954/100026352](https://gitcoin.co/issue/metanask/metanask-mobile/2954/100026352)
The _Security_ type refers to security-related implementation
\begin{table}
\begin{tabular}{l c c} \hline \hline
**Total numbers of** & **Numbers** & **Percentage (Mainnet)** \\ \hline Issues in Gitcoin & 6,638 & \\ Issues in Mainnet network & 4,584 & \\ Closed issues & 3,744 & 81.7\% \\ Opened issues & 840 & 18.3\% \\ Success issues & 2,662 & 58.1\% \\ Non-success issues & 1,922 & 41.9\% \\ \hline \hline \end{tabular}
\end{table}
Table 1: Dataset description
Figure 3: Overview of our study
\begin{table}
\begin{tabular}{l l l l} \hline \hline
**Group** & **Feature Names** & **Descriptions** & **Rationale** \\ \hline
1. Primitive attributes & bound\_type & Type of bounty, e.g., Bug, Feature, and Security & The primitive attributes provide \\ & project\_length & Relative length of the project, e.g., hours, days, & essential information about the bounty \\ & weeks, months & & and serve as foundational details that \\ & experience\_level & Recommended experience level & can significantly correlate with the \\ & contribution\_type & Type of contribution, e.g., Traditional, Context, & outcome of the bounty by providing \\ & cooperative & Cooperative & primitive information to contributors. \\ & github\_comments* & The number of comments in an issue & \\ & description\_length* & Length of issue description & \\ \hline
2. Bounty value-related & token\_name & Type of token, e.g., ETH, GIT & The bounty value-related features focus \\ & value\_in\_eth* & Value of the bounty in Ehretown & on the value of the bounty and its \\ & value\_in\_ust* & Approximation of value in US Dollars at bounty & changes. These features reflect the \\ & web\_created timestamp & & incentives and rewards associated with \\ & value\_in\_not\_now* & Approximation of current value in US Dollars & the bounty, which can impact \\ & token\_value\_in\_usd & The actual value of the token associated with the & contributors’ motivation and willingness \\ & value\_in\_token* & & \\ & increased\_bounty\_times & & \\ & changed\_bounty\_value & The value that the bounty has been increased & \\ & & from created to the latest increasing bounty & \\ \hline
3. Activity-related & number\_of\_fulliments* & The number of participants that submitted work & The activity-related features are the \\ & number\_of\_interest* & The number of participants that are interested in & values that relate to the activities that \\ & number\_of\_interest* & An issue & happened to the bounty by other \\ & number\_of\_activities & The number of activities that occur in an issue & develops besides the creator of the \\ & number\_of\_user\_in\_activities & The number of usernames that shows in the & twenty. These features reflect the \\ & firstAct\_activity\_type & The activity type of the first activity occurs & indicating a funder. \\
4. Duration-related & duration\_create\_to\_copire & The number of days between the creation of an & indicating a level of collaboration and \\ & duration\_create\_to\_new\_bounty* & The number of days between the creation of an & \\ & duration\_create\_to\_worker\_applied & The number of days between the creation of an & various time-related values associated \\ & duration\_create\_to\_start & The number of days between the creation of an & with issues and their bounty*. These \\ & duration\_create\_to\_stop & The number of days between the creation of an & with issues and when the participant starts working \\ & duration\_create\_to\_stop & The number of days between the creation of an & insights into critical factors such as the \\ & duration\_create\_to\_done & The number of days between the creation of an & time sensitivity of a bounty, the ability \\ & duration\_create\_to\_submitted & The number of days between the creation of an & to set realistic deadlines, and the \\ & duration\_create\_to\_killed & The number of days between the creation of an & assessment of timeliness. \\ \hline \hline \end{tabular}
* adopted from Zhou et al. (2020b) and Zhou et al. (2021)
\end{table}
Table 2Feature descriptions
Figure 4: Bounty types
tasks such as vulnerability protection, software penetration, and system audit. The next bounty type is the _Design_ type, which has 46 issues (1%). The issue owners could require any design for their work in terms of flow design, wireframes, and architecture design, such as logos, schway products, and POAP badges used for NFT events. The last bounty issue type is the _Code Review_ type. We found only 20 issues (0.4%). This bounty type requires contributors to review source code, such as defect identification, bug localization, and code quality analysis. Note that two more issue types do not have specific tasks. They are the _NA_ type, which has 1,015 issues (22.1%), and the _Other_ type, which has 346 issues (7.6%).

Figure 4 shows the success issues of each bounty type. We found that most of the bounty types have a higher number of success issues compared to non-success issues, which are _Feature_ (1,031 success issues - 55.7%), _NA_ (633 success issues - 62.4%), _Improvement_ (411 success issues - 60.5%), _Other_ (198 success issues - 57.2%), _Documentation_ (162 success issues - 72.3%), and Design (25 success issues - 54.4%). In contrast, there are two bounty types (_Bug_ and _Security_) that have a higher number of non-success issues than success issues. Note that we handled the minority of the issues (7.6%) that were assigned to undefinable types such as \(0\), _Andere_, and _Funkcja_ by grouping them into the _Other_ type.
#### 4.1.2 Project lengths
The project length feature (_project_length_) describes the relative time duration within which an issue owner expects an issue to be completed. The issue owner can select four types of project length, which are _Hours_, _Days_, _Weeks_, and _Months_, to provide an estimation of the task duration for contributors before committing to an issue. As shown in Table 4, the top two project lengths are _Hours_ - 2,249 issues (49.1%) and _Days_ - 1,164 issues (25.4%). However, we found that 536 issues (11.7%) were assigned to _Unknown_ and 337 issues (7.4%) were assigned to _NA_. According to Figure 5, most issues in the project lengths of Hours, Days, and Weeks are success issues. However, 70% of the issues with a project length of Months are non-success.
#### 4.1.3 Experience levels
The required experience level feature (_experience_level_) is a feature that declares the required experience level of contributors who potentially participate in an issue. An issue owner can specify the experience level to ensure that a contributor can resolve an issue. In Gitcoin, we found three experience level values, including _Beginner_, _Intermediate_, and _Advanced_. We can see from Table 5 and Figure 6 that the top required experience level in Gitcoin is the _Intermediate_ level (2,415 issues - 52.7%)14. It also shows that the _Intermediate_ level has a large number of success issues (58.2%) followed by the _Beginner_ level (18.7%). Interestingly, two experience levels have a higher non-success rate than the success ones: _Advanced_ and the _Other_ levels.
Footnote 14: Note that we also found the experience level named in German (_Mittlere_) and Polish (_Postedni_) which means ‘Intermediate’ in English. Thus, we then grouped this data into an Intermediate level as well.
#### 4.1.4 Contribution types
The contribution type feature (_contribution_type_) refers to the types of contribution provided in Gitcoin: _Traditional_, _Contest_, and _Cooperative_. The _Traditional_ type means that there is only one contributor who can be approved to contribute and get a bounty reward. This is the most popular contribution type, containing 3,307 issues (72.1%). In contrast, the _Contest_ type allows a number of contributors to work on an issue, but only one can be paid. We found 862 issues (18.8%) of the _Contest_ type. The least popular one is the _Cooperative_ type, which contains 415 issues (9.1%). It allows several contributors to work on the issue, and the issue owners can decide to pay the bounty to more than one contributor.

Figure 7 shows that the _Traditional_ type has the highest number of success issues (1,921 success issues - 58.1%) and also has a higher number of success issues than non-success issues. Similarly, the _Contest_ type has a higher number of success issues than non-success issues (571 success issues - 66.2%). However, the _Cooperative_ type is the only one with a higher number of non-success issues than success issues (245 non-success issues - 59%).
\begin{table}
\begin{tabular}{l r} \hline \hline Length & Issues \\ \hline Hours & 2,249 \\ Days & 1,164 \\ Weeks & 276 \\ Months & 22 \\ Unknown & 536 \\ NA & 337 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Project lengths
\begin{table}
\begin{tabular}{l l r} \hline \hline Bounty Type & Description & Issues \\ \hline Feature & Creating new features of a system. & 1,851 \\ Improvement & Improving existing features, functions, or the system. & 679 \\ Bug & Creating a bug fix. & 333 \\ Documentation & Creating documentation for a system. & 224 \\ Security & Performing security-related activities, e.g., penetration test, system audit. & 70 \\ Design & Creating system or artistic design. & 46 \\ Code Review & Reviewing code. & 20 \\ \hline \hline \end{tabular}
\end{table}
Table 3: Bounty Types
\begin{table}
\begin{tabular}{l r} \hline \hline Exp. Level & Issues \\ \hline Beginner & 2,415 \\ Intermediate & 859 \\ Advanced & 515 \\ \hline \hline \end{tabular}
\end{table}
Table 5: Experience levels
#### 4.1.5 GitHub comments
The GitHub comment feature (_github_comments_) indicates the number of comments from a corresponding GitHub issue. The GitHub issue tracking system is a platform used for tracking the contributors' work and addressing an issue they are working on. It can also be used as an intermediary channel of communication between issue owners and contributors. Comments can be the issue description, work discussion, or additional information on an issue. Figure 8 shows the number of comments posted on the GitHub issues. It shows that the maximum number of comments reaches over 25, and the minimum is 0. However, we found that issues with any number of comments have a higher success rate than non-success rate, with 6 comments on average.
#### 4.1.6 Length of description
The length of the description feature (_description_length_) describes the relative length of the issue description (i.e., the number of characters). This feature is extracted to observe whether the description length correlates with the outcome of an issue. Figure 9 shows the length of the issue descriptions of the collected issues. We have noticed that the numbers of success issues and non-success issues are almost equal for issues with longer descriptions, whereas issues with shorter descriptions have a higher number of success issues.

Figure 5: Project lengths

Figure 6: Experience levels

Figure 7: Contribution types
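As a rough illustration of how the primitive attributes above can be derived from the collected records, the following sketch uses assumed column names (e.g., `project_type`, `github_comment_count`, `issue_description`) that stand in for the actual fields of the Gitcoin JSON.

```python
import pandas as pd

def extract_primitive_features(mainnet: pd.DataFrame) -> pd.DataFrame:
    """Derive the primitive attributes of Section 4.1 (column names are assumed)."""
    feats = pd.DataFrame(index=mainnet.index)
    feats["bounty_type"] = mainnet["bounty_type"].fillna("NA")
    feats["project_length"] = mainnet["project_length"].fillna("NA")
    feats["experience_level"] = mainnet["experience_level"].fillna("NA")
    # Assumed source column holding Traditional / Contest / Cooperative.
    feats["contribution_type"] = mainnet["project_type"]
    feats["github_comments"] = mainnet["github_comment_count"].fillna(0)
    # Description length in characters.
    feats["description_length"] = mainnet["issue_description"].fillna("").str.len()
    return feats
```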
### Bounty related features
The bounty value-related features reflect the value of the bounty reward of an issue, e.g., value in ETH, increasing bounty value, and token names.
#### 4.2.1 Token names
The token names feature (_token_name_) indicates the types of cryptocurrency tokens provided as a bounty reward given to the contributors who can resolve bounty issues in Gitcoin. Over one hundred token types are found on the platform, and we then picked the 5 most popular token types (Figure 10). The _Ethereum_ token (ETH) is the most popular token type used in Gitcoin projects with 2,416 issues, accounting for 52.7% of all the issues. The _Dai_ token (DAI) is the second most popular token, with 1,208 issues (26.4%). The third is the _Sai_ token (SAI), found in 435 issues (9.5%), followed by the _USD Coin_ token (USDC), which consists of 66 issues (1.4%). The last one is the _MyBit_ token (MYB), which has 25 issues or only 0.6% of the total issues. Most of these token types are operated under the Ethereum blockchain, as explained in Section 2.2. We found that 56.1% of the issues with the USDC token are non-success.
#### 4.2.2 Bounty value
For the features related to the bounty value in Gitcoin, we particularly focus on the bounty value in Ether or ETH (_value_in_eth_) since it is a token operated under the Ethereum blockchain and is mostly used as a bounty reward on the Gitcoin platform. Figure 11 shows the success of the issues with ETH tokens. The maximum value of ETH that has been proposed is 1.14 (0.2 on average). Additionally, we found that the issues are mostly successful in the range of bounty values between 0.1 and 0.4 ETH. On the other hand, the higher values have a higher non-success rate than success rate. Since the ETH tokens are a digital currency, we also observe those values in U.S. Dollars (USD). Therefore, we extracted the Tether or USDT value (_value_in_usdt_) feature from Gitcoin issues to refer to the bounty value in USD, since USDT is a stablecoin pegged to the U.S. Dollar (Grobys, Junttila, Kolari and Sapkota, 2021). Figure 12 shows the success of the issues with the USDT token, in which we found that those values correspond to the values in ETH. Moreover, Figure 13 shows the average token value in USD (_token_value_in_usd_) over time across all token names. It reflects the market values of various cryptocurrencies. It is observed that the majority of tokens experienced a decline in value during the year 2019. This downturn was followed by a notable recovery and upward trend in token values during the year 2021.

Figure 8: GitHub comments

Figure 9: Length of description
#### 4.2.3 The changes of bounty values
The issue owners can determine whether to increase or decrease a bounty value for their issues, for example, to attract contributors to work on issues. We extract two features related to the changes in bounty values: the number of times that the bounty has been increased (_increased_bounty_times_) and the total bounty value that has been changed (_changed_bounty_value_). The former explains how frequently an issue's bounty has been increased by the issue owner. The latter is calculated as the difference between the most recent bounty value and the original value, i.e., the value when the bounty was first proposed. This feature is used to observe how much issue owners have to increase their bounty values to attract contributors. We found that 63% of those issues having their bounty value increased were success issues. However, the bounty values have never been changed in most of the issues.

Figure 11: ETH bounty value

Figure 10: Token names
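A small sketch of how these two features could be derived from an issue's activity log is given below; the activity record layout (`activity_type`, `value_in_token`) is an assumption for illustration.

```python
def bounty_change_features(activities):
    """Compute increased_bounty_times and changed_bounty_value from an activity log.

    `activities` is assumed to be a chronologically ordered list of dicts with an
    'activity_type' field and the bounty 'value_in_token' at that point in time.
    """
    bounty_values = [
        a["value_in_token"]
        for a in activities
        if a["activity_type"] in ("new_bounty", "increased_bounty")
    ]
    if not bounty_values:
        return {"increased_bounty_times": 0, "changed_bounty_value": 0.0}
    return {
        # How many times the owner increased the bounty.
        "increased_bounty_times": sum(
            1 for a in activities if a["activity_type"] == "increased_bounty"
        ),
        # Difference between the latest value and the value first proposed.
        "changed_bounty_value": bounty_values[-1] - bounty_values[0],
    }
```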
### Activity related features
The activity-related features indicate activities that occur with an issue, e.g., the number of interests and the number of activities.
#### 4.3.1 Number of fulfillments
The number of fulfillments (_number_of_fulfillments_) indicates the number of participants who submitted the work to issue owners. We found that 67.8% of total issues contain at least one fulfillment, and the highest number is 156 participants. Moreover, issues with a large number of fulfillments also tend to have a higher number of success issues.
#### 4.3.2 Number of interests
The number of interests (_number_of_interests_) indicates the number of participants who are interested in working on
Figure 12: USDT bounty value
Figure 13: Average token value in USD over time across all token names
issues. From our investigation, only 19.3% of issues have zero interests, while 3,689 issues (80.7%) have at least one interest from contributors.
#### 4.3.3 Number of activities
The number of activities (_number_of_activities_) indicates the total number of activities that occurred in an issue. We extracted the activities from Gitcoin's issue change log. Those activities include proposing bounties, increasing bounty values, approving candidates, and making submissions.
#### 4.3.4 Number of users who interact with an issue
This feature (_number_of_user_in_activities_) is the total number of users who perform actions on an issue. On average, there are 5 users who interact with an issue. The highest number of users that interact with one issue is 170 users.
#### 4.3.5 The first activity type occurred on an issue
This feature _firstAct_activity_type_ indicates the activity type that occurred in an issue after issue creation. We found that proposing bounty, start working, and worker applied are the top three types that appeared as the first activity.
#### 4.3.6 The last activity type occurred on an issue
This feature _lastAct_activity_type_ captures the last activity type that occurred on an issue. Although we found that the majority type is the work submission activity (_work_submitted_) which indicates the resolving of issues, we have noted that the bounty may not be paid, which causes a non-success issue. We found that over 300 issues have been marked as submitted, but the bounties were not paid.
### Duration-related features
We determine the duration from the issue creation time to each stage of issues (i.e., issue status) to investigate the relationships between durations and the issue-addressing outcome. We extract eight duration-related features.
Figure 14 shows the extracted duration-related features. We found that issue owners usually add bounty rewards 10 minutes after the issue creation (_duration_create_to_new_bounty_). In addition, contributors mostly apply to work on an issue around six days after it was created (_duration_create_to_worker_applied_) and spend, on average, eight days to resolve issues.
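The duration features can be computed as the day difference between the issue creation timestamp and the first occurrence of each activity type; a sketch with assumed column names (`activity_type`, `created_on`) follows.

```python
import pandas as pd

# Assumed activity types corresponding to the eight duration-related features.
STAGES = [
    "new_bounty", "worker_applied", "start_work", "stop_work",
    "work_submitted", "work_done", "killed_bounty",
]

def duration_features(issue_created_on, activities: pd.DataFrame) -> dict:
    """Days from issue creation to the first occurrence of each activity type.

    `activities` is assumed to have 'activity_type' and 'created_on' columns.
    """
    created = pd.to_datetime(issue_created_on)
    feats = {}
    for stage in STAGES:
        stage_times = pd.to_datetime(
            activities.loc[activities["activity_type"] == stage, "created_on"]
        )
        feats[f"duration_create_to_{stage}"] = (
            (stage_times.min() - created).total_seconds() / 86400.0
            if not stage_times.empty else None
        )
    return feats
```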
## 5 Feature and correlation analysis
Figure 15 shows the number of issues in each experience level and each bounty type. We found that issues requiring the _Intermediate_ experience level form the majority in several bounty types such as Improvement, Bug, and Feature. Therefore, this experience level is appropriate for various contributors. In addition, we found that the intermediate-level issues mostly relate to language translation, data migration/integration, and developing features in open-source projects. For the _Advanced_ level, the issues are mostly related to complex development tasks that require contributors proficient in applying technical knowledge, since they need a deep understanding of the work's complexity to implement it successfully. In addition, we investigate the relationship between the different experience levels and their durations. We found that, on average, the _Intermediate_ level issues took five days to find a contributor while the other levels took six days (Figure 16). The resolution duration of the majority of issues at all levels is between six and eight days.
We then apply the Spearman Rank Correlation technique (Liu, 2010) to determine the correlation among the features. To account for multiple comparisons, we adjust the significance level using the Bonferroni correction. In this study, our focus is to investigate the correlation between each pair of features. The null hypothesis to be rejected is that there is no correlation between the two features being compared. If the adjusted p-value is less than the significance level, we reject the null hypothesis and conclude that there is a statistically significant correlation between the two features. The pairs of features with a strong correlation and a statistically significant p-value (\(p<0.05\)/number of comparisons) are reported as follows.
**Positive correlation between Project length and Experience level**
The correlation between _Project length_ and _Experience level_ is a moderately positive correlation (\(p<0.05\)/number of comparisons). From our investigation, we found that the majority of issues that require the beginner and intermediate experience levels are expected to be resolved within hours and days, while the issues that require the advanced level are mostly expected to be resolved in the length of months.
**Positive correlation between the number of times that bounty has been increased and the total bounty value that has been changed**

The number of times that the bounty has been increased and the total bounty value that has been changed are positively correlated. This indicates that the more times an issue owner increases the bounty, the higher the bounty value added.
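A minimal sketch of this pairwise screening with SciPy is shown below, assuming the extracted features are held in a numeric pandas DataFrame; it illustrates the procedure rather than the exact analysis script.

```python
from itertools import combinations

import pandas as pd
from scipy.stats import spearmanr

def spearman_with_bonferroni(features: pd.DataFrame, alpha=0.05):
    """Report feature pairs whose Spearman correlation is significant after Bonferroni."""
    pairs = list(combinations(features.columns, 2))
    threshold = alpha / len(pairs)  # Bonferroni-adjusted significance level
    results = []
    for a, b in pairs:
        rho, p = spearmanr(features[a], features[b], nan_policy="omit")
        if p < threshold:
            results.append((a, b, rho, p))
    # Strongest significant correlations first.
    return sorted(results, key=lambda r: -abs(r[2]))

# Example usage (feature_table is an assumed numeric DataFrame of extracted features):
# for a, b, rho, p in spearman_with_bonferroni(feature_table)[:10]:
#     print(f"{a} ~ {b}: rho={rho:.2f}, p={p:.4g}")
```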
## 6 Identifying features correlated with the outcome of the Gitcoin bounty issue
To identify a set of features that strongly correlate with the outcome of the Gitcoin bounty issue, we adopt two machine learning techniques to train classifiers to determine whether an issue will be a success or a non-success issue. Firstly, we adopt Random Forests (RF) (Breiman, 2001) to train a classifier. By using RF, we can also identify feature importance, which can determine the strength of association between features and the target variable (i.e., successful issue resolution). RF uses the _Gini_ impurity measure to calculate feature importance, which determines the significance of each feature in predicting the output. Since RF provides an interpretable classification in terms of the impact of each feature on the outcome, this method has been applied in various empirical studies, e.g., Laaber, Basmaci and Salza (2021); Tagra, Zhang, Rajbahadur and
Hassan (2022). In addition to Random Forests, this study employs Logistic Regression modeling (LG) with multicollinearity removal (Harrell, 2015), as seen in previous empirical studies in software engineering, e.g., Thongtanunam, McIntosh, Hassan and Iida (2017). In LG, a model derives a set of coefficients that represent the relationship between each feature and the binary outcomes (i.e., success or non-success). The coefficients can be interpreted as the change in the log-odds of the outcome per unit change in the feature variable. In terms of interpretability, we then apply the Wald test statistic to measure the strength and direction of the relationship between each feature and the outcomes.
We split the dataset into two parts: 70% for the training set and 30% for the test set. We also applied the stratified sampling technique (Ye, Wu, Zhexue Huang, Ng and Li, 2013) to preserve the proportion of the target variable in the training and test data sets and applied the bootstrap sampling technique to overcome the over-fitting problem. Table 6 shows the number of issues in our dataset. In our study, we conduct two different experiments that aim to identify the importance of features in two scenarios by varying the set of features used in model training.
In the first experiment (Setting 1), we used all extracted features (Table 2) in model training to identify important features among all features that characterize the bounty issues. Since we also aim to provide suggestions to issue owners when they create a bounty issue, in the second experiment (Setting 2) we used only the features that can be manipulated by the issue owners at issue creation time. We can then identify a set of important features at the issue creation time. We then discuss the top important features. Note that we replace the values of _project_length_ and _experience_level_ with ordinal values to reflect their meaning. For example, the _experience_level_ was mapped as follows: 1 for _Beginner_, 2 for _Intermediate_, and 3 for _Advanced_.
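A sketch of the two experimental settings with scikit-learn is shown below. The ordinal mappings follow the description above, while the Zero-Rule baseline is implemented here with scikit-learn's `DummyClassifier`; the feature preparation details are assumptions, and the study's exact pipeline may differ.

```python
import pandas as pd
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Ordinal encodings for the categorical levels, as described above.
EXPERIENCE_CODE = {"Beginner": 1, "Intermediate": 2, "Advanced": 3}
PROJECT_LENGTH_CODE = {"Hours": 1, "Days": 2, "Weeks": 3, "Months": 4}

def run_setting(features: pd.DataFrame, outcome: pd.Series):
    """Train RF, LG, and a Zero-Rule baseline on one feature setting (numeric features assumed)."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, outcome, test_size=0.3, stratify=outcome, random_state=42
    )
    models = {
        "RF": RandomForestClassifier(n_estimators=500, random_state=42),
        "LG": LogisticRegression(max_iter=1000),
        "Baseline": DummyClassifier(strategy="most_frequent"),  # Zero-Rule
    }
    for name, model in models.items():
        model.fit(X_train, y_train)
        print(name)
        print(classification_report(y_test, model.predict(X_test)))
    # Gini-based feature importance from the Random Forest.
    importance = pd.Series(
        models["RF"].feature_importances_, index=features.columns
    ).sort_values(ascending=False)
    print(importance.head(10))
```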
Table 7 shows the performance evaluation in terms of precision, recall, F1-score, and accuracy of the models trained from the two settings using Random Forest (RF) and Logistic Regression (LG). The results show that, in both settings from the two models, it can accurately classify the issue outcomes by achieving over 70% in all measurements.
\begin{table}
\begin{tabular}{l r r r} \hline \hline & **Success Issue** & **Non-success Issue** \\ \hline Training Set & 1,861 & 1,348 \\ Test Set & 797 & 578 \\ \hline \hline \end{tabular}
\end{table}
Table 6: Experimental setting
\begin{table}
\begin{tabular}{l r r r r} \hline \hline & **Precision** & **Recall** & **F1** & **Acc.** \\ \hline RF:Setting 1 & 0.99 & 0.98 & 0.99 & 0.99 \\ RF:Setting 2 & 0.85 & 0.91 & 0.88 & 0.86 \\ \hline LG:Setting 1 & 0.77 & 0.77 & 0.77 & 0.77 \\ LG:Setting 2 & 0.70 & 0.70 & 0.70 & 0.70 \\ \hline Baseline & 0.51 & 0.50 & 0.50 & 0.58 \\ \hline \hline \end{tabular}
\end{table}
Table 7: Evaluation Results of each experimental setting using Random Forests (RF), Logistic Regression (LG), and Baseline
Figure 14: The duration from issue creation to each stage
We acknowledge the lower performance of logistic regression (LG) compared to Random Forests (RF). However, it is important to consider that LG and RF offer different advantages in terms of model interpretability and feature importance identification. While the importance of features in a Random Forest model helps identify which features have the most predictive power, LG models provide greater interpretability due to their linear nature. The coefficients in LG allow us to analyze the impact of each feature on the output, providing valuable insights into the relationship between features and the target variable. By combining insights from both models, we can achieve a more comprehensive understanding of the data and uncover underlying patterns. Moreover, both RF and LG also identify a common set of important features such as project length and experience level. In addition, to address concerns about the models' acceptance, we have included a baseline method using Zero-Rule as a sanity check. Zero-Rule predicts the most frequent label in the training set, serving as a simplistic benchmark. Both LG and RF outperform this baseline method, indicating their superiority in predictive performance. Therefore, it is reasonable to use these models for our interpretation and analysis. By considering the interpretability strengths of LG and the feature importance of RF, and validating their performance against a baseline, we ensure a robust evaluation of the models' effectiveness and reliability for our study.
Figure 16: The duration to find a contributor
Figure 15: The number of issues in each experience level and each bounty type
### Analysis of the feature importance from Random forests
Table 8 shows the list of the top ten important features from the first and second settings, respectively. The top three most important features of the first setting are _duration_create_to_done_, _duration_create_to_submitted_, and _lastAct_activity_type_work_done_. As can be seen, those three most important features reflect a common scenario in software development: a proper time duration spent on an issue is critical to the success of issue resolution. However, it is worth noting that these features can only be gathered when the issues are closed.
In the second setting, we use only the features that can be manipulated or controlled by the issue owner. Here, _description_length_, _duration_create_to_new_bounty_, and _value_in_usdt_ are the top three most important features, which must be taken into account when creating issues. In particular, we found that the number of days from issue creation until the first proposed bounty is the second most important feature that potentially determines the issue outcome. This finding corresponds with the result reported in (Zhou et al., 2020), which also found that the earlier a bounty is proposed, the higher the likelihood of being addressed. We then further investigate the correlation between these features.
We apply the Point Biserial Correlation Coefficient (Bonett, 2020) with Bonferroni correction to measure the correlation between the numerical features and the binary outcomes of issues (i.e., success and non-success issues). Table 9 lists the features used in Setting 2 along with their correlation coefficient and p-value. Significant correlations at a statistically strong level (\(p<0.05/\text{number of features}\)) are indicated in the table. We used the features in Setting 2 for our analysis because they are manipulable and more useful to practitioners.
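This screening could be reproduced along the following lines, assuming a binary 0/1 outcome and numeric (or one-hot encoded) Setting 2 features; the sketch is illustrative rather than the exact analysis code.

```python
import pandas as pd
from scipy.stats import pointbiserialr

def point_biserial_screen(features: pd.DataFrame, outcome: pd.Series, alpha=0.05):
    """Correlate each feature with the binary outcome, Bonferroni-adjusted."""
    threshold = alpha / features.shape[1]  # Bonferroni-adjusted significance level
    rows = []
    for col in features.columns:
        r, p = pointbiserialr(outcome, features[col])
        rows.append({"feature": col, "coefficient": r, "p_value": p,
                     "significant": p < threshold})
    # Strongest correlations (by absolute value) first.
    return (pd.DataFrame(rows)
              .sort_values("coefficient", key=lambda s: s.abs(), ascending=False))
```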
The results suggest that the features "token_name_DAI" and "bounty_type_Documentation" have the strongest positive correlation with issue outcomes, while the features "contribution_type_cooperative" and "token_name_DOT" have the strongest negative correlation with issue outcomes. The results are statistically significant, with all p-values falling below the significance threshold of 0.05 after the Bonferroni correction. This shows that the type of token used and the type of bounty offered could be strongly correlated with issue success. The results suggest that the token "DAI" is positively correlated with issue success, potentially due to the stability of the coin. Regarding the bounty types, "Documentation" bounties are positively correlated with issue success, while "Security" bounties are negatively correlated. This may reflect the fact that documentation bounties are easier to define and assess, while security bounties require more expertise to evaluate and address.
### Analysis of the feature coefficients from Logistic regression
Table 10 presents the results of a logistic regression analysis of issue success. The coefficients of each feature indicate the direction of the correlation with issue success, while the p-values provide information on the statistical significance of each correlation. The analysis shows that issues associated with the tokens "DAI" and "ETH" are
\begin{table}
\begin{tabular}{l r r} \hline \hline Feature & Coefficient & p-value \\ \hline token \_name\_DAI & 0.619 \(\uparrow\) & \(<0.001\) \\ token \_name\_ETH & 0.130 \(\uparrow\) & 0.015 \\ project\_length\_code & -0.008 \(\downarrow\) & 0.015 \\ bounty\_type\_Feature & -0.039 \(\downarrow\) & 0.589 \\ bounty\_type\_NA & -0.068 \(\downarrow\) & 0.582 \\ contribution\_type\_traditional & -0.132 \(\downarrow\) & 0.009 \\ experience\_level\_code & -0.159 \(\downarrow\) & \(<0.001\) \\ \hline \hline \end{tabular}
\end{table}
Table 10: Coefficient and p-value estimates from logistic regression analysis of issue success.
\begin{table}
\begin{tabular}{l r} \hline \hline Features & Importance Values \\ \hline \hline
**Setting 1:** & \\ duration\_create\_to\_done & 0.229 \\ duration\_create\_to\_submitted & 0.076 \\ lastAct\_activity\_type\_work\_done & 0.062 \\ number\_of\_fulfillments & 0.048 \\ duration\_create\_to\_killed & 0.044 \\ token\_value\_in\_usdt & 0.042 \\ number\_of\_activities & 0.035 \\ lastAct\_activity\_type\_killed\_bounty & 0.034 \\ number\_of\_interests & 0.034 \\ \hline \hline
**Setting 2:** & \\ description\_length & 0.143 \\ duration\_create\_to\_new\_bounty & 0.139 \\ value\_in\_usdt & 0.126 \\ value\_in\_usdt\_now & 0.102 \\ duration\_create\_to\_expire & 0.097 \\ value\_in\_eth & 0.089 \\ value\_in\_token & 0.052 \\ experience\_level\_code & 0.037 \\ project\_length\_code & 0.037 \\ bounty\_type\_Feature & 0.016 \\ \hline \hline
\end{table}
Table 8: Feature importance from the two experimental settings using Random Forests
\begin{table}
\begin{tabular}{l r r} \hline \hline Feature & Coefficient & p-value \\ \hline \hline token \_name\_DAI & 0.619 \(\uparrow\) & \(<0.001\) \\ token \_name\_ETH & 0.130 \(\uparrow\) & 0.015 \\ project\_length\_code & -0.008 \(\downarrow\) & 0.015 \\ bounty\_type\_Feature & -0.039 \(\downarrow\) & 0.589 \\ bounty\_type\_NA & -0.068 \(\downarrow\) & 0.582 \\ contribution\_type\_traditional & -0.132 \(\downarrow\) & 0.009 \\ experience\_level\_code & -0.159 \(\downarrow\) & \(<0.001\) \\ \hline \hline \end{tabular}
\end{table}
Table 9: The correlation of the features with the issue outcomes using the Point Biserial Correlation Coefficient
more likely to be successful. The strong correlation between "token_name_DAI" and issue success is further supported by the results of the point biserial correlation analysis. Additionally, longer project length, the traditional contribution type, and a higher required experience level are all negatively correlated with issue success. This suggests that shorter projects may increase the chance of issue success, and the issues that require less experienced contributors may be more successful. Furthermore, the contest contribution type is preferred over cooperative and traditional.
### The comparison study between Gitcoin and Bountysource
This study compares some characteristics of bounty issues from Gitcoin and Bountysource, with the aim of providing useful recommendations to bounty issue funders and contributors on selecting the most appropriate platform to meet their needs. Specifically, we examine three aspects of these platforms: the programming languages used, the topics of issues posted, and the value of bounties offered. To conduct this study, we utilized the Bountysource dataset provided by Zhou et al. (2020), which includes valuable information on bounty issues in the Bountysource platform. In addition, we collected supplementary data, such as the topics of each issue corresponding to the issue key provided in the dataset.
Our first comparison investigates the programming languages used in bounty issues across both platforms. To identify them, we collected language-related tags from the GitHub repositories associated with each bounty issue in both datasets. Table 11 shows the percentage distribution of programming languages used in bounty issues on the Gitcoin and Bountysource platforms. The results show differences in the distribution of programming languages between the two platforms. JavaScript is the most commonly used language on both platforms, with a distribution of 43.90% on Gitcoin and 17.50% on Bountysource. This could be attributed to the popularity of JavaScript in web development, which may be a primary focus of the bounty issues on both platforms. In addition, Bountysource has a higher percentage of C++ and Python, which are also widely used in general software development. However, the difference in language distribution may also reflect the types of projects and bounties available on each platform. Gitcoin, for example, has a focus on blockchain applications, which may explain the higher percentage of TypeScript and Solidity, which are both used in the development of smart contracts and other blockchain-related applications.
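The distribution in Table 11 can be obtained by counting the collected language tags per platform; a small sketch is given below, where the per-issue tag lists are assumed inputs.

```python
from collections import Counter

def language_distribution(language_tags_per_issue):
    """Percentage distribution of language tags over a list of per-issue tag lists."""
    counts = Counter(tag for tags in language_tags_per_issue for tag in tags)
    total = sum(counts.values())
    return {lang: 100.0 * n / total for lang, n in counts.most_common()}
```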
In the second comparison aspect, we focus on issue topics across the two platforms. Specifically, we analyzed the tags associated with bounty issues to identify the most commonly used topics on each platform. Table 12 shows a comparison of the top 30 topics identified from the bounty issues from Gitcoin and Bountysource platforms. The results show that Gitcoin focuses on blockchain-related topics such as Ethereum, Blockchain, ETH, Solidity, and DeFi, which are not represented in Bountysource. On the other hand, Bountysource shows more diversity in terms of topics, with programming languages, game-related topics, and backup-related topics appearing more frequently. This difference in focus may reflect the different goals and priorities of the two platforms and suggests that issue funders and contributors may need to consider these differences when selecting a platform.
The last comparison focuses on the bounty value of issues on the two platforms. We analyze the distribution of bounty value for the successful issues of both platforms. Figure 17 shows the cumulative distribution of USDT value for successful issues on the Gitcoin and Bountysource platforms. Comparing the curves reveals differences in the distribution of USDT value for successful bounty issues on each platform. We notice that the cumulative distribution for Gitcoin is generally lower than that for Bountysource. This suggests that successful bounty issues on Gitcoin tend to have lower USDT values compared to successful bounty issues on Bountysource.
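Curves like those in Figure 17 can be reproduced with an empirical cumulative distribution over the USDT values of successful issues on each platform; a sketch with NumPy and Matplotlib follows, where the value arrays are assumed inputs.

```python
import matplotlib.pyplot as plt
import numpy as np

def plot_ecdf(values, label):
    """Plot the empirical cumulative distribution of bounty values in USDT."""
    x = np.sort(np.asarray(values, dtype=float))
    y = np.arange(1, len(x) + 1) / len(x)
    plt.step(x, y, where="post", label=label)

# gitcoin_usdt and bountysource_usdt are assumed arrays of successful-issue values.
# plot_ecdf(gitcoin_usdt, "Gitcoin")
# plot_ecdf(bountysource_usdt, "Bountysource")
# plt.xlabel("Bounty value (USDT)"); plt.ylabel("Cumulative proportion"); plt.legend(); plt.show()
```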
Moreover, since the dataset from Gitcoin (from 2017 to 2020) is newer than the Bountysource dataset from Zhou et al. (2020) (from 2012 to 2017), there might be differences in terms of the topics, bounty values, and languages used
\begin{table}
\begin{tabular}{l c c c} \hline \hline Bountysource & \% & Gitcoin & \% \\ \hline JavaScript & 17.50 & JavaScript & 43.90 \\ C++ & 14.20 & TypeScript & 12.50 \\ Python & 12.40 & Go & 9.50 \\ PHP & 9.10 & Python & 7.70 \\ C\# & 7.60 & Clojure & 4.00 \\ C & 6.60 & Rust & 3.30 \\ Java & 6.10 & Solidity & 3.10 \\ Ruby & 5.40 & C++ & 2.70 \\ TypeScript & 3.80 & HTML & 2.00 \\ Go & 2.40 & CSS & 1.60 \\ \hline \hline \end{tabular}
\end{table}
Table 11: Percentage distribution of programming languages in Bountysource and Gitcoin
\begin{table}
\begin{tabular}{l c} \hline \hline Bountysource’s top 30 topics \\ \hline Hacktoberfest, JavaScript, Python, Game, C, Linux, PHP, Game engine, C++, Cross-platform, RTS, Real-time strategy, OpenRA, Command and Conquer, Strategy game engine, Backup, Encryption, C\#, NET, Python 3, SSH, Engine, Deduplication, Dedupe, Cython, Compression, BorgBackup, Tiberian Dawn, Red Alert, Java. \\ \hline \hline \end{tabular}
\end{table}
Table 12: Top 30 Topics on Gitcoin and Bountysource Platforms.
in the bounties. We have performed an additional study to compare the topics of the latest issues of Gitcoin and BountySource. Since the API of Bountysource is no longer available15, we performed a manual comparison by going through the 20 latest issues on Gitcoin and Bountysource. To get the topic of the issues, we relied on multiple approaches. First, we checked the topics or labels assigned to Gitcoin or Bountysource bounties. Second, we checked the labels given to the bounties' associated GitHub issues. Third, we also checked the topics given to the GitHub project containing the issue. After analyzing the collected topics, we observed that 10 Gitcoin bounties contain blockchain-related issues compared to 3 bounties in Bountysource.
Footnote 15: We checked on 20th June 2023.
### Discussion
Based on our feature analysis, correlation analysis, and feature importance analysis, we discuss our findings as follows.
* The duration in each state of an issue potentially determines the issue outcome. We found that the duration-related features, especially the duration between issue creation and the work being done, are highly correlated with the success of issues. On average, success issues usually take fourteen days to be resolved, while it is only one day for non-success issues. In addition, the correlation analysis with the issue outcome using Point Biserial shows a strong positive correlation, which indicates that a longer time spent working on issues can potentially lead to the issue being resolved and the bounty successfully paid. This suggests that issue funders should carefully determine the appropriate project length when creating an issue. This should reflect the estimated duration required for issue resolution. By providing an accurate estimate, funders enable contributors to assess their ability to deliver as expected.
* The length of the issue description appears to have a significant impact on the success of an issue. Figure 18 shows the number of characters of issue descriptions. As can be seen, on average, the descriptions of success issues are slightly longer than those of non-success issues. In addition, issues that require advanced-level experience usually have longer issue descriptions. Therefore, practitioners may benefit from spending the time and effort to write clear and detailed issue descriptions that clearly communicate the expectations and requirements for issue resolution.
* We found that contribution type, experience level, and bounty type strongly correlate with the outcome of issues. We then provide the bounty value of success issues in each group in Table 13, which can help issue owners initiate the bounty value of their issues. In addition, our analysis suggests that certain token types, such as DAI and ETH, are more commonly used in successful bounty issues, indicating that these tokens may be more attractive to contributors.
* We found that there is a correlation between the experience level of contributors and the outcome of issues. Issues that require beginner and intermediate experience levels have a higher percentage of success compared to non-success issues. However, those issues with an advanced experience level have a higher number of non-successful outcomes. The correlation between experience level and issue outcome is negative, suggesting that the scope of work for an issue should be clear and not too complicated for the contributor to complete.
* Our study found that although the actual token value was identified as an important feature in predicting the outcomes of bounty issues in RF, it did not show a strong statistical correlation with the success issues. It is important to note that the cryptocurrency market is known for its high volatility, and the value of cryptocurrencies can fluctuate significantly. To gain further insights, we specifically focused on Ethereum
\begin{table}
\begin{tabular}{l l c c c} \hline \hline \multicolumn{2}{c}{} & \multicolumn{3}{c}{Value in USD} \\ \multicolumn{2}{c}{} & \multicolumn{1}{c}{Min} & Mean & Max \\ \hline \multirow{3}{*}{Project types} & Traditional & 0.00 & 148.96 & 736.48 \\ & Cooperative & 0.01 & 170.97 & 579.20 \\ & Contest & 0.00 & 118.45 & 737.85 \\ \hline \multirow{3}{*}{Experience levels} & Beginner & 0.00 & 79.64 & 500.00 \\ & Intermediate & 0.00 & 155.50 & 736.14 \\ & Advanced & 0.10 & 200.51 & 736.48 \\ \hline \multirow{3}{*}{Bounty types} & Feature & 0.00 & 146.03 & 700.00 \\ & Improvement & 0.00 & 146.14 & 691.35 \\ \cline{1-1} & Bug & 0.00 & 94.73 & 688.72 \\ \hline \hline \end{tabular}
\end{table}
Table 13: The bounty value (in USD) of success issues in each feature
Figure 17: Comparison of Cumulative Distribution of USDT Value for Gitcoin and Bountysource bounty issues
(ETH), as it was the most commonly assigned token to the bounty issues in our dataset. Figure 19 shows the actual value of ETH and the cumulative count of success issues, grouped by month and year according to the timeline. The figure shows the fluctuation in ETH value over time. Notably, during 2019, the value of ETH experienced a decline. We observed that the ETH value was particularly high in early 2018, coinciding with a high number of successful issues. However, as time progressed, both the value of ETH and the count of success issues exhibited a declining trend. These findings suggest a potential relationship between the value of cryptocurrency and the success of bounty issues.
* Our comparative study confirms that Gitcoin's topics primarily revolve around blockchain-related issues. Consequently, the variations in programming languages can be considered a natural consequence of this focus, while the value of the bounties is comparatively lower than that of Bountysource. This suggests that issue owners should select a platform that fits their requirements. In addition, this observation suggests that a bounty platform dedicated to a specific type of project may effectively attract attention from contributors.
## 7 Threats to validity
In this section, we discuss the threats to the validity of our study.
### External Validity
Regarding the generalizability of the work, our study focuses only on bounty issues from the Gitcoin platform. The results from our study might not be generalizable to other platforms. Therefore, our future work relates to studying issue reports from other platforms and different project settings whose bounty rewards are paid in cryptocurrencies.
### Internal validity
We consider closed issues and examine only those that have been created and subsequently resolved. The data used in this analysis is sourced from the issues in Mainnet. These choices may pose threats to the internal validity of the study. Nonetheless, we mitigate the threats by ensuring the validity of all extracted features through a validation process (e.g., a manual validation process). The dataset from Gitcoin and the Bountysource dataset from Zhou et al. (2020) overlap only in 2017. Thus, there might be a threat to validity when comparing them. We mitigate this threat by performing an additional manual comparison of the 20 latest issues in Gitcoin and Bountysource.
### Construct Validity
In order to mitigate threats to construct validity, we apply the Bonferroni correction (i.e., p-value adjustment) to our correlation analysis among the extracted features and between the features and outcome of the issues. This technique helps to eliminate bias and improve the reliability of our results. We thus used statistical testing to determine the significance of any correlations found, further mitigating any potential threats to the construct validity of our experiment. In addition, to ensure the robustness of our experiments, we use a combination of techniques to analyze the correlation between the extracted features and the outcome of the issues. Specifically, we apply the Point Biserial Correlation Coefficient, which is a statistical method specifically designed for the correlation analysis of binary variables.
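As a concrete illustration of this analysis step (the variable names below are hypothetical and the snippet is not the authors' code), the Point Biserial correlation between a binary issue outcome and a numeric feature, followed by a Bonferroni adjustment over the number of tested features, can be computed as follows:

```python
from scipy import stats

outcome = [1, 0, 1, 1, 0, 1, 0, 0]             # 1 = success, 0 = non-success
days_until_done = [14, 1, 10, 21, 2, 9, 1, 3]  # hypothetical duration feature
n_tests = 32                                   # number of features examined

r, p_value = stats.pointbiserialr(outcome, days_until_done)
p_adjusted = min(p_value * n_tests, 1.0)       # Bonferroni correction
print(f"r = {r:.3f}, adjusted p = {p_adjusted:.4f}")
```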
### Conclusion Validity
We mitigate threats to conclusion validity by taking a rigorous and cautious approach when drawing conclusions based on the extracted features from the studied platform. This includes carefully limiting our conclusions to only those observations and insights that can be directly supported by the data extracted from the platform.
## 8 Conclusion and Future Work
The development of open source software (OSS) projects is heavily driven by contributions from volunteering developers. Bounty rewards in different schemes have been used to motivate participation in OSS development. Gitcoin proposes a platform that allows issue owners to create a bounty reward using cryptocurrencies solely. Understanding the phenomena related to the use of bounty rewards in OSS projects promotes the benefits yielded from this reward mechanism. We thus perform a study on over 4,000 Gitcoin issues by categorizing those issues into four main aspects, including primitive attributes, bounty value-related,
Figure 18: The distribution of description length grouped by the success and non-success issues
activity-related, and duration-related features. Using statistical and machine learning techniques, we identify factors that influence the outcome of the issues. These findings can serve as a guide for issue owners to increase the efficiency of their bounty rewards. We acknowledge the high volatility of the cryptocurrency market, and it is possible that fluctuations in cryptocurrency value may influence the success or failure of bounty issues. Our additional investigation into the actual value of cryptocurrency and its correlation with bounty success demonstrates the potential for a relationship. However, further study is required as part of our future work to better understand this relationship.
## 9 Acknowledgement
This work (Grant No. RGNS 64-164) was financially supported by the Office of the Permanent Secretary, Ministry of Higher Education, Science, Research and Innovation.
|
2309.05204 | Accelerated Proximal Iterative re-Weighted $\ell_1$ Alternating
Minimization for Image Deblurring | The quadratic penalty alternating minimization (AM) method is widely used for
solving the convex $\ell_1$ total variation (TV) image deblurring problem.
However, quadratic penalty AM for solving the nonconvex nonsmooth $\ell_p$, $0
< p < 1$ TV image deblurring problems is less studied. In this paper, we
propose two algorithms, namely proximal iterative re-weighted $\ell_1$ AM
(PIRL1-AM) and its accelerated version, accelerated proximal iterative
re-weighted $\ell_1$ AM (APIRL1-AM) for solving the nonconvex nonsmooth
$\ell_p$ TV image deblurring problem. The proposed algorithms are derived from
the proximal iterative re-weighted $\ell_1$ (IRL1) algorithm and the proximal
gradient algorithm. Numerical results show that PIRL1-AM is effective in
retaining sharp edges in image deblurring while APIRL1-AM can further provide
convergence speed up in terms of the number of algorithm iterations and
computational time. | Tarmizi Adam, Alexander Malyshev, Mohd Fikree Hassan, Nur Syarafina Mohamed, Md Sah Hj Salam | 2023-09-11T02:45:39Z | http://arxiv.org/abs/2309.05204v1 | Accelerated Proximal Iterative re-Weighted \(\ell_{1}\) Alternating Minimization for Image Deblurring
###### Abstract
The quadratic penalty alternating minimization (AM) method is widely used for solving the convex \(\ell_{1}\) total variation (TV) image deblurring problem. However, quadratic penalty AM for solving the nonconvex nonsmooth \(\ell_{p}\), \(0<p<1\) TV image deblurring problems is less studied. In this paper, we propose two algorithms, namely proximal iterative re-weighted \(\ell_{1}\) AM (PIRL1-AM) and its accelerated version, accelerated proximal iterative re-weighted \(\ell_{1}\) AM (APIRL1-AM) for solving the nonconvex nonsmooth \(\ell_{p}\) TV image deblurring problem. The proposed algorithms are derived from the proximal iterative re-weighted \(\ell_{1}\) (IRL1) algorithm and the proximal gradient algorithm. Numerical results show that PIRL1-AM is effective in retaining sharp edges in image deblurring while APIRL1-AM can further provide convergence speed up in terms of the number of algorithm iterations and computational time.
total variation, convex optimization, deblurring, nonconvex optimization, alternating minimization
## I Introduction
In this paper, we are interested in the image deblurring problem obtained from the following image degradation model
\[\mathbf{f}=\mathbf{K}\mathbf{u}+\mathbf{n}, \tag{1}\]
where \(\mathbf{f}\in\mathbb{R}^{n}\) is the observed noisy and blurred image, \(\mathbf{K}\in\mathbb{R}^{n\times n}\) is a blur kernel, \(\mathbf{u}\in\mathbb{R}^{n}\) is the uncorrupted image to be estimated, and \(\mathbf{n}\in\mathbb{R}^{n}\) is additive Gaussian noise.
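To make the degradation model concrete, the following is a minimal numpy/scipy sketch (not the authors' code) of generating \(\mathbf{f}=\mathbf{K}\mathbf{u}+\mathbf{n}\) with a \(17\times 17\) Gaussian blur kernel (\(\sigma=7\), the setting used later in Section IV) applied with periodic boundary conditions, plus Gaussian noise whose level is set from a target BSNR; the BSNR-to-noise-variance conversion used here is one common definition and is an assumption:

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(size=17, sigma=7.0):
    # normalized 2-D Gaussian point-spread function
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def degrade(u, bsnr=30.0, seed=0):
    # f = K u + n, with K a periodic convolution by the Gaussian kernel
    k = gaussian_kernel()
    Ku = convolve(u, k, mode='wrap')
    # noise variance from BSNR = 10*log10(var(Ku) / var(n))  (assumed definition)
    noise_var = np.var(Ku - Ku.mean()) / 10.0 ** (bsnr / 10.0)
    rng = np.random.default_rng(seed)
    return Ku + rng.normal(0.0, np.sqrt(noise_var), u.shape)
```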
One way to solve (1) for the image \(\mathbf{u}\) is by minimizing the nonconvex nonsmooth composite optimization problem
\[\min_{\mathbf{u}\in\mathbb{R}^{n}}\frac{1}{2}\|\mathbf{K}\mathbf{u}-\mathbf{ f}\|_{2}^{2}+\mu\|\nabla\mathbf{u}\|_{p}^{p}, \tag{2}\]
where \(0<p<1\) and \(\nabla\in\mathbb{R}^{n\times n}\) is the discrete difference operator [1]. If \(p=1\), problem (2) results in the convex \(\ell_{1}\)-norm total variation (TV) image restoration [2].
Operator splitting methods such as the alternating direction method of multipliers (ADMM) and the alternating minimization (AM) [3, 4] for solving the convex \(\ell_{1}\)-norm TV problem produce the sublinear convergence rate of \(\mathcal{O}\left(\frac{1}{k}\right)\), which is quite slow in practice [5, 6]. Therefore, this has motivated researchers to accelerate these methods to improve the convergence rate to \(\mathcal{O}\left(\frac{1}{k^{2}}\right)\). However, the majority of the work in this direction is focused on convex optimization.
This paper focuses on the model (2) when \(0<p<1\), i.e., the \(\ell_{p}\) quasi-norm, hence the nonconvex nonsmooth \(\ell_{p}\)-norm TV. Our motivation mainly stems from the advantage of nonconvex nonsmooth penalties in restoring even sharper image quality compared to the \(\ell_{1}\)-norm TV [7]. Furthermore, this paper is motivated by applying the quadratic penalty AM, which is an operator splitting method, to solving nonconvex nonsmooth image deblurring problems (2).
Motivated by this, we propose a proximal iterative re-weighted \(\ell_{1}\) alternating minimization (AM) algorithm along with its accelerated version. The proposed algorithm uses the ideas of proximal operators and the iterative re-weighted \(\ell_{1}\) (IRL1) method [8] in combination with the alternating minimization method [3]. To accelerate the convergence of the proposed method, Nesterov acceleration is also used [9, 10].
## II Related Works
The alternating minimization (AM) method for solving the \(\ell_{1}\) TV image deblurring problem was initially proposed in [3]. An accelerated version employing acceleration techniques from [10] was further investigated in [11].
Specifically, it was shown that the \(\ell_{1}\)-norm sub-minimization problem of the AM can be seen as a proximal gradient step, and is hence amenable to acceleration via the proximal gradient method [10]. However, the accelerated AM assumes the minimization problem to be convex.
The interesting link between AM and the accelerated proximal gradient method shown in [11] suggests that AM may have links with nonconvex proximal gradient type methods.
One example is the re-weighted \(\ell_{1}\) method originally proposed in [8] for nonconvex \(\ell_{p}\) sparse recovery problems, along with its proximal version [12, 13].
The proximal re-weighted \(\ell_{1}\) algorithm with acceleration is relatively new and has been studied in [14, 15, 16], showing promising results in minimizing problems of the form (2). However, its applications have been mainly restricted to sparse and low-rank matrix recovery problems. Furthermore, its connections with operator splitting methods such as AM for image deblurring have, to our knowledge, not been explored.
## III Problem Formulation and Algorithm
In this paper, we focus on image deblurring by minimizing the nonconvex nonsmooth optimization problem (2). However, we restrict our results and discussion to a value of \(p=0.1\). By adding a quadratic penalty term to (2), we have
\[\underset{\mathbf{u}}{\text{min}}\,\frac{1}{2}\|\mathbf{K}\mathbf{u}-\mathbf{ f}\|_{2}^{2}+\mu\|\mathbf{z}\|_{p}^{p}+\frac{\beta}{2}\|\mathbf{z}-\nabla \mathbf{u}\|_{2}^{2}. \tag{3}\]
Indeed, problem (3) is equivalent to problem (2) with constraint \(\mathbf{z}=\nabla\mathbf{u}\)[17]. To minimize (3), we can fix \(\mathbf{u}\) with the current value and minimize with respect to \(\mathbf{z}\) and vice-versa
\[\begin{cases}\mathbf{z}_{k+1}=\underset{\mathbf{z}}{\text{arg\,min}}\,\frac{\beta}{2}\|\mathbf{z}-\nabla\mathbf{u}\|_{2}^{2}+\mu\|\mathbf{z}\|_{p}^{p},\\ \mathbf{u}_{k+1}=\underset{\mathbf{u}}{\text{arg\,min}}\,\frac{1}{2}\|\mathbf{K}\mathbf{u}-\mathbf{f}\|_{2}^{2}+\frac{\beta}{2}\|\mathbf{z}-\nabla\mathbf{u}\|_{2}^{2}.\end{cases} \tag{4}\]
The quadratic penalty AM scheme (4) is a classic method that is commonly used in the image and signal processing literature [3, 17].
Note that the minimization problem concerning \(\mathbf{z}\) in (4) is a nonconvex nonsmooth \(\ell_{p}\) minimization problem and is hence amenable to the iterative re-weighted \(\ell_{1}\) minimization algorithm [8]. If \(p=1\) in (4), this minimization problem is convex and solvable using soft thresholding [18]. Moreover, it was shown that it is equivalent to the proximal gradient update [11]
\[\begin{split}\mathbf{z}_{k+1}=\underset{\mathbf{z}\in\mathbb{R}^{n}}{\text{arg\,min}}\,f\left(\mathbf{y}_{k}\right)&+\langle\nabla f\left(\mathbf{y}_{k}\right),\,\mathbf{z}-\mathbf{y}_{k}\rangle\\ &+\frac{L}{2}\|\mathbf{z}-\mathbf{y}_{k}\|_{2}^{2}+\mu\|\mathbf{z}\|_{1},\end{split} \tag{5}\]
where \(\mathbf{y}_{k}=\nabla\mathbf{u}_{k}\), \(L\) is the Lipschitz constant of \(f\left(\mathbf{y}_{k}\right)=\frac{\beta}{2}\|\mathbf{z}-\mathbf{y}_{k}\|_{2}^ {2}\), and \(\nabla f\left(\mathbf{y}_{k}\right)=\beta\left(\mathbf{z}-\nabla\mathbf{u}\right)\). Due to this equivalence, the \(\mathbf{z}\) sub problem can be accelerated by the fast iterative shrinkage and thresholding algorithm (FISTA) [10].
### _Proximal Iterative re-Weighted \(\ell_{1}\) Alternating Minimization_
With these foundations in place, returning to the nonconvex nonsmooth \(\ell_{p}\) minimization subproblem for \(\mathbf{z}\) in (4), we have
\[\mathbf{z}_{k+1}=\underset{\mathbf{z}}{\text{arg\,min}}\,\frac{1}{2}\|\mathbf{z}-\nabla\mathbf{u}_{k}\|_{2}^{2}+\frac{\mu}{\beta}\|\mathbf{z}\|_{p}^{p}, \tag{6}\]
after a simple re-arrangement of \(\beta\). The IRL1 approximately solves (6) by [8, 19]
\[\mathbf{z}_{k+1}=\underset{\mathbf{z}\in\mathbb{R}^{n}}{\text{arg\,min}}\,\frac{1}{2}\|\mathbf{z}-\nabla\mathbf{u}_{k}\|_{2}^{2}+\frac{\mu}{\beta}\sum_{i}w^{i}|z^{i}|, \tag{7}\]
where \(w^{i}=p\left(\left|z_{k}^{i}\right|+\epsilon\right)^{p-1}\), and \(z^{i}\) are the weights and entries of vector \(\mathbf{z}\) respectively. Note that in (7) the nonconvex nonsmooth \(\ell_{p}\) minimization problem is approximated into a convex weighted \(\ell_{1}\)-norm minimization hence, IRL1 is a convex relaxation method for nonconvex optimization problems.
By introducing a diagonal weight matrix \(\mathbf{W}_{k}=\text{diag}\left(w_{k}^{1},\cdots,w_{k}^{n}\right)\), the IRL1 problem (7) can be written as
\[\mathbf{z}_{k+1}=\underset{\mathbf{z}\in\mathbb{R}^{n}}{\text{arg\,min}}\,\frac{1}{2}\|\mathbf{z}-\nabla\mathbf{u}_{k}\|_{2}^{2}+\frac{\mu}{\beta}\|\mathbf{W}_{k}\mathbf{z}\|_{1}. \tag{8}\]
Problem (8) can be interpreted as solving a proximal linearization of the term \(f\left(\mathbf{z}\right)=\frac{1}{2}\|\mathbf{z}-\bar{\mathbf{z}}_{k}\|_{2}^ {2}\) at \(\bar{\mathbf{z}}_{k}=\nabla\mathbf{u}_{k}\) i.e.,
\[\begin{split}\mathbf{z}_{k+1}=\underset{\mathbf{z}\in\mathbb{R}^{n}}{\text{arg\,min}}\,f\left(\bar{\mathbf{z}}_{k}\right)&+\langle\nabla f\left(\bar{\mathbf{z}}_{k}\right),\,\mathbf{z}-\bar{\mathbf{z}}_{k}\rangle\\ &+\frac{L}{2}\|\mathbf{z}-\bar{\mathbf{z}}_{k}\|_{2}^{2}+\frac{\mu}{\beta}\|\mathbf{W}_{k}\mathbf{z}\|_{1},\end{split} \tag{9}\]
hence, by ignoring constant terms, it can be written as [10, 20]
\[\mathbf{z}_{k+1}=\underset{\mathbf{z}\in\mathbb{R}^{n}}{\text{arg\,min}}\,\frac{1}{2}\|\mathbf{z}-\mathbf{v}_{k}\|_{2}^{2}+\lambda\|\mathbf{W}_{k}\mathbf{z}\|_{1}, \tag{10}\]
with \(\mathbf{v}_{k}=\bar{\mathbf{z}}_{k}-\frac{1}{L}\nabla f\left(\bar{\mathbf{z}}_ {k}\right)\) and \(\lambda=\frac{\mu}{L\beta}\). Furthermore, by combining the weight entries with \(\lambda\) in the weight matrix \(\mathbf{W}_{k}\) we finally arrive at
\[\mathbf{z}_{k+1}=\underset{\mathbf{z}\in\mathbb{R}^{n}}{\text{arg\,min}}\,\frac{1}{2}\|\mathbf{z}-\mathbf{v}_{k}\|_{2}^{2}+\frac{p\lambda}{\left(\left|\bar{z}_{k}^{i}\right|+\epsilon\right)^{1-p}}\|\mathbf{z}\|_{1}, \tag{11}\]
where \(\bar{z}_{k}^{i}\) is the \(i^{\text{th}}\) entry of \(\bar{\mathbf{z}}_{k}\) at the \(k^{\text{th}}\) iteration and \(\epsilon\) a very small number to avoid division by zero. Equation (11) can be solved in a closed form via the soft-thresholding operation as
\[\mathbf{z}_{k+1}=\text{sgn}\left(\bar{z}_{k}^{i}\right)\text{max}\left(0,\, \left|\bar{z}_{k}^{i}\right|-\frac{p\lambda}{\left(\left|\bar{z}_{k}^{i}\right|+ \epsilon\right)^{1-p}}\right). \tag{12}\]
From (12), it can be seen that solving the original nonconvex subproblem (6) boils down to a series of proximal weighted \(\ell_{1}\) minimization problem.
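In practice, the closed-form update (12) is a single element-wise operation. A minimal numpy sketch (illustrative variable names, not the authors' released code) is:

```python
import numpy as np

def weighted_soft_threshold(z_bar, lam, p=0.1, eps=1e-8):
    # Eq. (12): proximal step of the re-weighted l1 term, where the threshold
    # of each entry depends on the current value of z_bar = grad(u)
    tau = p * lam / (np.abs(z_bar) + eps) ** (1.0 - p)
    return np.sign(z_bar) * np.maximum(np.abs(z_bar) - tau, 0.0)
```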
The idea of proximal re-weighted \(\ell_{1}\) minimization has been proposed in [12] and its convergence behavior analyzed in [13]. However, the applications are mainly confined to sparse signal recovery and its use as a sub-minimization problem in the AM scheme (4) for image deblurring to our knowledge has not been studied.
Next, the sub-minimization problem for \(\mathbf{u}\)
\[\mathbf{u}_{k+1}=\underset{\mathbf{u}}{\text{arg\,min}}\,\frac{1}{2}\|\mathbf{K}\mathbf{u}-\mathbf{f}\|_{2}^{2}+\frac{\beta}{2}\|\mathbf{z}_{k+1}-\nabla\mathbf{u}\|_{2}^{2}, \tag{13}\]
is a convex quadratic problem that can be solved by solving the following linear system
\[\mathbf{u}_{k+1}=\left(\mathbf{K}^{\top}\mathbf{K}+\beta\nabla^{\top}\nabla \right)^{-1}\left(\mathbf{K}^{\top}\mathbf{f}+\beta\nabla^{\top}\mathbf{z}_{k +1}\right). \tag{14}\]
Taking into account equations (12) and (14), the complete listing for the proximal iterative re-weighted \(\ell_{1}\) alternating minimization (PIRL1-AM) is listed as Algorithm 1.
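Under periodic boundary conditions, \(\mathbf{K}\) and \(\nabla\) are diagonalised by the 2-D FFT, so the \(\mathbf{u}\)-update (14) reduces to element-wise divisions in the Fourier domain. The following sketch puts the two steps together into one possible PIRL1-AM loop; it assumes periodic boundaries, horizontal/vertical forward differences for \(\nabla\), and reuses `weighted_soft_threshold` from the sketch after Eq. (12). It is an illustration of Algorithm 1, not the authors' MATLAB implementation:

```python
import numpy as np

def psf2otf(psf, shape):
    # zero-pad the point-spread function to the image size, circularly shift
    # so its centre sits at pixel (0, 0), then take the 2-D FFT
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    for axis, size in enumerate(psf.shape):
        pad = np.roll(pad, -(size // 2), axis=axis)
    return np.fft.fft2(pad)

def dx(u):  # forward horizontal difference, periodic boundary
    return np.roll(u, -1, axis=1) - u

def dy(u):  # forward vertical difference, periodic boundary
    return np.roll(u, -1, axis=0) - u

def solve_u(f, z_x, z_y, K_otf, Dx_otf, Dy_otf, beta):
    # closed-form Fourier-domain solution of the linear system (14)
    num = (np.conj(K_otf) * np.fft.fft2(f)
           + beta * (np.conj(Dx_otf) * np.fft.fft2(z_x)
                     + np.conj(Dy_otf) * np.fft.fft2(z_y)))
    den = np.abs(K_otf) ** 2 + beta * (np.abs(Dx_otf) ** 2 + np.abs(Dy_otf) ** 2)
    return np.real(np.fft.ifft2(num / den))

def pirl1_am(f, psf, mu=30.0, beta=0.01, p=0.1, eps=1e-8, iters=300):
    # Algorithm 1: alternate the weighted soft-thresholding z-step (12) and
    # the FFT-based u-step (14)
    K_otf = psf2otf(psf, f.shape)
    Dx_otf = psf2otf(np.array([[1.0, -1.0]]), f.shape)
    Dy_otf = psf2otf(np.array([[1.0], [-1.0]]), f.shape)
    lam = mu / beta                      # lambda = mu/(L*beta) with L = 1
    u = f.copy()
    for _ in range(iters):
        zx = weighted_soft_threshold(dx(u), lam, p, eps)     # z-step
        zy = weighted_soft_threshold(dy(u), lam, p, eps)
        u = solve_u(f, zx, zy, K_otf, Dx_otf, Dy_otf, beta)  # u-step
    return u
```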
### _Accelerated Proximal Iterative re-Weighted \(\ell_{1}\) Alternating Minimization_
To accelerate the proximal iterative re-weighted \(\ell_{1}\) AM for the nonconvex nonsmooth \(\ell_{p}\) TV image deblurring problem, accelerated techniques from [9, 10] can be used. Consider the sub-minimization problem (11): due to its convexity and its equivalence to minimizing a proximal linearization of \(\frac{1}{2}\|\mathbf{z}-\mathbf{v}_{k}\|_{2}^{2}\), as discussed earlier, we have
\[\begin{split}\mathbf{z}_{k+1}=\underset{\mathbf{z}\in\mathbb{R}^ {n}}{\text{argmin}}\,f\left(\mathbf{v}_{k}\right)+&\langle \nabla f\left(\mathbf{v}_{k}\right),\,\mathbf{z}-\mathbf{v}_{k}\rangle\\ &+\frac{L}{2}\|\mathbf{z}-\mathbf{v}_{k}\|_{2}^{2}+\tau\|\mathbf{ z}\|_{1},\end{split} \tag{15}\]
where \(\tau=\frac{p\lambda}{\left(|\bar{z}_{k}^{i}|+\epsilon\right)^{1-p}}\). Then, the acceleration strategies along the lines of [9, 10] can be employed, which involves the following iterative scheme
\[\begin{cases}\mathbf{z}_{k+1}=\underset{\mathbf{z}\in\mathbb{R}^{n}}{\text{ argmin}}\,f\left(\mathbf{y}_{k}\right)+\langle\nabla f\left(\mathbf{y}_{k} \right),\,\mathbf{z}-\mathbf{y}_{k}\rangle\\ +\frac{L}{2}\|\mathbf{z}-\mathbf{y}_{k}\|_{2}^{2}+\tau\|\mathbf{z}\|_{1}, \\ t_{k}=\frac{k-1}{k+2},\\ \mathbf{y}_{k+1}=\mathbf{z}_{k+1}+t_{k}\left(\mathbf{z}_{k+1}- \mathbf{z}_{k}\right).\end{cases} \tag{16}\]
The \(\mathbf{z}\) minimization in (16) as discussed previously has a closed form solution of
\[\mathbf{z}_{k+1}=\text{sgn}\left(\bar{z}_{k}^{i}\right)\text{max}\left(0,\,| \bar{z}_{k}^{i}|-\tau\right). \tag{17}\]
In the iterative scheme (16), the scalar \(t_{k}\) is known as the Nesterov momentum coefficient and changes in each iteration \(k\). The step \(\mathbf{y}_{k}\) is called the extrapolation step. Also, recall that \(\bar{\mathbf{z}}_{k}=\nabla\mathbf{u}_{k}\). Acceleration techniques of this form are shown to match the theoretical lower bound of \(\mathcal{O}\left(\frac{1}{k^{2}}\right)\) for smooth first-order convex optimization.
```
Initialize: \(\mathbf{u}_{0}\), \(\mathbf{z}_{0}\), \(\mu>0\), \(\beta>0\), \(p=0.1\), \(L=1\), and \(k=0\)
while not converged do
    \(\mathbf{z}_{k+1}=\text{sgn}\left(\bar{z}_{k}^{i}\right)\text{max}\left(0,\,|\bar{z}_{k}^{i}|-\frac{p\lambda}{\left(|\bar{z}_{k}^{i}|+\epsilon\right)^{1-p}}\right)\)
    \(\mathbf{u}_{k+1}=\left(\mathbf{K}^{\top}\mathbf{K}+\beta\nabla^{\top}\nabla\right)^{-1}\left(\mathbf{K}^{\top}\mathbf{f}+\beta\nabla^{\top}\mathbf{z}_{k+1}\right)\)
    \(k=k+1\)
end while
```
**Algorithm 1** Proximal iterative re-weighted \(\ell_{1}\) alternating minimization (PIRL1-AM)
Having shown the equivalence between (11) and the proximal linearization step (15), along with its acceleration, the iterative scheme of the accelerated proximal iterative re-weighted \(\ell_{1}\) AM (APIRL1-AM), taking into account the sub-minimization problem for \(\mathbf{u}\), is as follows
\[\begin{cases}\mathbf{z}_{k+1}=\text{sgn}\left(\bar{z}_{k}^{i}\right)\text{max }\left(0,\,|\bar{z}_{k}^{i}|-\tau\right),\\ t_{k}=\frac{k-1}{k+2},\\ \mathbf{y}_{k+1}=\mathbf{z}_{k+1}+t_{k}\left(\mathbf{z}_{k+1}-\mathbf{z}_{k} \right),\\ \mathbf{u}_{k+1}=\left(\mathbf{K}^{\top}\mathbf{K}+\beta\nabla^{\top}\nabla \right)^{-1}\left(\mathbf{K}^{\top}\mathbf{f}+\beta\nabla^{\top}\mathbf{z}_{k +1}\right).\end{cases} \tag{18}\]
The complete algorithm is listed as Algorithm 2.
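A sketch of the corresponding accelerated loop is shown below, reusing `psf2otf`, `dx`, `dy`, `solve_u` and `weighted_soft_threshold` from the sketches above. Because the extrapolated sequence \(\mathbf{y}_{k}\) only influences the iteration through how it is fed back into the other updates, the sketch adopts one consistent reading of (18) in which the extrapolated variable drives the \(\mathbf{u}\)-step; this wiring is an interpretation for illustration, not a statement of the authors' implementation:

```python
import numpy as np

def apirl1_am(f, psf, mu=30.0, beta=0.01, p=0.1, eps=1e-8, iters=300):
    K_otf = psf2otf(psf, f.shape)
    Dx_otf = psf2otf(np.array([[1.0, -1.0]]), f.shape)
    Dy_otf = psf2otf(np.array([[1.0], [-1.0]]), f.shape)
    lam = mu / beta                          # lambda = mu/(L*beta), L = 1
    u = f.copy()
    zx, zy = dx(u), dy(u)
    yx, yy = zx.copy(), zy.copy()
    for k in range(1, iters + 1):
        # z-step (17): threshold z_bar_k = grad(u_k)
        zx_new = weighted_soft_threshold(dx(u), lam, p, eps)
        zy_new = weighted_soft_threshold(dy(u), lam, p, eps)
        # Nesterov extrapolation (16)/(18) with t_k = (k - 1)/(k + 2)
        t = (k - 1.0) / (k + 2.0)
        yx = zx_new + t * (zx_new - zx)
        yy = zy_new + t * (zy_new - zy)
        zx, zy = zx_new, zy_new
        # u-step (14), driven here by the extrapolated variable (see above)
        u = solve_u(f, yx, yy, K_otf, Dx_otf, Dy_otf, beta)
    return u
```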
In Algorithms 1 and 2, the Lipschitz constant \(L\) is fixed to 1. In real applications, the value of \(L\) is usually unknown. The values of \(\mu\) and \(\beta\) are used for computing \(\lambda=\frac{\mu}{L\beta}\). Additionally, the linear system for solving \(\mathbf{u}\) can be solved very fast using the two-dimensional fast Fourier transform (FFT) with complexity \(\mathcal{O}\left(n\log n\right)\) [3], while the \(\mathbf{z}\) problem is only of linear complexity \(\mathcal{O}\left(n\right)\).
## IV Results
In this section, we apply the two proposed algorithms on the nonconvex nonsmooth image deblurring problem (2) and discuss the results of the proposed algorithms without and with acceleration.
### _Experimental Setup_
For the experiments, the blurred and noisy images are obtained from the model (1). The blur kernel used is a Gaussian blur of size \(17\times 17\) pixels with \(\sigma=7\). Two noise levels were used, namely blurred signal-to-noise ratios (BSNR) of 30 and 20 [21]. The values for \(\beta\) are \(\beta=0.01\) and \(\beta=0.009\) for BSNR 20 and 30 respectively. The deblurring of each image was run 10 times and the average was taken.
All images are of size \(512\times 512\) pixels. Experiments were conducted using MATLAB1 on an Intel Core i3-10105 CPU operating at 3.70 GHz with 4 GB of memory.
Footnote 1: Codes at [https://github.com/tarmiziAdam2005/PIRL1-AM](https://github.com/tarmiziAdam2005/PIRL1-AM)
### _Results and discussion_
Table I shows the results of the two proposed algorithms. In terms of image quality measured by PSNR and SSIM [22], the two algorithms gave very similar results. Some2 of the deblurred images are shown in Figures 2 and 3. For BSNR 30 (less noise corruption), Figure 2 shows the ability of the methods to preserve sharp images.
Footnote 2: Only deblurred images of APIRL1-AM are shown due to both algorithms giving very similar PSNR and SSIM values.
However, there is a difference in terms of the number of iterations to converge to the required relative error value. For both BSNR levels tested, the acceleration technique used in APIRL1-AM manages to decrease the number of iterations to converge. Additionally, the time to converge also improves for APIRL1-AM compared to PIRL1-AM.
Figure 1 compares the convergence between APIRL1-AM and PIRL1-AM. It can be seen that APIRL1-AM converges faster than PIRL1-AM and exhibits acceleration ripples akin to first-order accelerated techniques [23]. Given the similar PSNR and SSIM results of the two algorithms, acceleration gives the additional advantage of arriving at similar deblurring quality in fewer iterations and less CPU time.
## V Conclusion
In this paper, two algorithms, PIRL1-AM and APIRL1-AM, were proposed for nonconvex nonsmooth \(\ell_{p}\) TV image deblurring. The algorithms were derived by showing the links between the proximal gradient method and the proximal iterative re-weighted \(\ell_{1}\) algorithm. Both algorithms were able to retain sharp images in image deblurring. Regarding convergence, APIRL1-AM exhibits the optimal \(\mathcal{O}\left(\frac{1}{k^{2}}\right)\) rate and improves the CPU time and the number of iterations to converge. Future work includes establishing the convergence rate of both algorithms and applying them to different problems.
\begin{table}
\begin{tabular}{c l c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{APIRL1-AM} & \multicolumn{4}{c}{PIRL1-AM} \\ \hline BSNR lvl & Image & PSNR & SSIM & Itr & T & PSNR & SSIM & Itr & T \\ \hline \multirow{2}{*}{30} & _Peppers_ & 27.39 & 0.759 & 269 & 11.20 & 27.39 & 0.759 & 364 & 14.71 \\ & _Cameraman_ & 26.03 & 0.763 & 228 & 9.50 & 26.04 & 0.762 & 324 & 13.12 \\ \hline \multirow{2}{*}{20} & _Peppers_ & 25.52 & 0.625 & 141 & 6.03 & 25.52 & 0.625 & 186 & 7.63 \\ & _Cameraman_ & 24.49 & 0.570 & 225 & 9.64 & 24.49 & 0.570 & 244 & 10.02 \\ \hline \hline \end{tabular}
\end{table} TABLE I: Image deblurring results. Values of \(\mu=30\) and \(60\) were used for Peppers and Cameraman respectively for BSNR 30. For BSNR 20, \(\mu=100\) was used for both Peppers and Cameraman.
Fig. 3: Deblurred image _Cameraman_ BSNR\(=20\). (a) Original. (b) Blurred (PSNR = 21.65, SSIM = 0.499). (c) Zoomed. (d) APIRL1-AM deblurred.
Fig. 2: Deblurred image _Peppers_ BSNR\(=30\). (a) Original. (b) Blurred (PSNR = 22.97, SSIM = 0.659). (c) Zoomed. (d) APIRL1-AM deblurred. |
2303.17883 | Single-ended Recovery of Optical fiber Transmission Matrices using
Neural Networks | Ultra-thin multimode optical fiber imaging promises next-generation medical
endoscopes reaching high image resolution for deep tissues. However, current
technology suffers from severe optical distortion, as the fiber's calibration
is sensitive to bending and temperature and thus requires in vivo
re-measurement with access to a single end only. We present a neural network
(NN)-based approach to reconstruct the fiber's transmission matrix (TM) based
on multi-wavelength reflection-mode measurements. We train two different NN
architectures via a custom loss function insensitive to global
phase-degeneracy: a fully connected NN and convolutional U-Net. We reconstruct
the 64 $\times$ 64 complex-valued fiber TMs through a simulated single-ended
optical fiber with $\leq$ 4\% error and cross-validate on experimentally
measured TMs, demonstrating both wide-field and confocal scanning image
reconstruction with small error. Our TM recovery approach is 4500 times faster,
is more robust to fiber perturbation during characterization, and operates with
non-square TMs. | Yijie Zheng, George S. D. Gordon | 2023-03-31T08:35:22Z | http://arxiv.org/abs/2303.17883v2 | # Single-ended Recovery of Optical fiber Transmission Matrices using Neural Networks
###### Abstract
Ultra-thin multimode optical fiber imaging technology promises next-generation medical endoscopes that provide high image resolution deep in the body (e.g. blood vessels, brain). However, this technology suffers from severe optical distortion. The fiber's transmission matrix (TM) calibrates for this distortion but is sensitive to bending and temperature so must be measured immediately prior to imaging, i.e. _in vivo_ and thus with access to a single end only. We present a neural network (NN)-based approach that quickly reconstructs transmission matrices based on multi-wavelength reflection-mode measurements. We introduce a custom loss function insensitive to global phase-degeneracy that enables effective NN training. We then train two different NN architectures, a fully connected NN and convolutional U-Net, to reconstruct \(64\times 64\) complex-valued fiber TMs through a simulated single-ended optical fiber with \(\leq 4\%\) error. This enables image reconstruction with \(\leq 8\%\) error. This TM recovery approach shows advantages compared to conventional TM recovery methods: 4500 times faster; robustness to 6% fiber perturbation during characterization; operation with non-square TMs and no requirement for prior characterization of reflectors.
Optical fiber imaging Transmission matrix reconstruction Custom loss function Neural network
## 1 Introduction
Ultra-thin endoscopes are a promising technique for enabling cell-scale imaging in difficult-to-reach parts of the body, with the potential to improve disease detection in organs such as the pancreas and ovaries. Commercial products using imaging fiber bundles around 1mm diameter are used in bile ducts [1] and flexible and full-color imaging has been demonstrated using distal scanning mechanisms that are typically around 2mm diameter [2; 3; 4]. To further reduce the size of endoscopes, recent work has focused on imaging through ultra-thin multimode fibers with diameters of 0.125mm and has achieved _in vivo_ fluorescence imaging in brains of immobilized mice [5]. However, there are some key limitations of these imaging systems that use ultra-thin optical fiber. First, the thinnest such imaging devices are made using multimode fiber (MMF), which suffers from significant optical distortion that changes whenever the fiber is perturbed, particularly for longer fibers (\(>\)1m) required to reach deep inside the human body [6]. Second, to calibrate this distortion, practical fiber bundle endoscopes require measurement of their transmission matrix (TM) which requires optical components at the distal end to focus the light onto the distal facet. If calibration is required immediately before use, such components would be required on the distal tip for _in vivo_ use, and would thus compromise the ultra-thin form factor of the endoscopes [7].
A number of methods have been proposed to calibrate fiber TMs without distal access including guidestars [8; 9; 10], a virtual beacon source [11], or reflective structures on the fiber tips [7; 12; 13]. Gordon et al. [13] proposed a single-ended method of TM recovery based on the fiber system shown in Figure 1, with a specially designed reflector stack. This approach avoids the need for measurements at both the proximal and distal ends of the fiber, yet still works for non-unitary TMs. The reflection matrix, \(\mathbf{C}_{\lambda}\in\mathbb{C}^{M^{2}\times M^{2}}\), describes how an incident field \(\mathbf{E}_{\mathbf{in}}\in\mathbb{C}^{M^{2}}\) is transformed via propagation through the optical fiber, reflected by the reflector stack and finally transferred back through the fiber into
an output field \(\mathbf{E_{out}}\in\mathbb{C}^{M^{2}}\) at a wavelength of \(\lambda\):
\[\mathbf{C}_{\lambda}=\mathbf{E_{out}}_{\lambda}\mathbf{E_{in\lambda}^{-1}} \tag{1}\]
Theoretically, the forward TM, \(\mathbf{A}_{\lambda}\in\mathbb{C}^{M^{2}\times M^{2}}\), can be unambiguously reconstructed at a fourth wavelength based on the measured reflection matrices at 3 different wavelengths. Specifically, the reconstruction of TM is achieved by solving a set of three quadratic matrix exponential equations:
\[\mathbf{C}_{\lambda_{1}}=\mathbf{A_{\lambda_{1}}^{T}}\mathbf{R}_{\lambda_{1}} \mathbf{A}_{\lambda_{1}} \tag{2}\]
\[\mathbf{C}_{\lambda_{2}}=(e^{(\log\mathbf{A}_{\lambda_{1}}\frac{\lambda_{1}}{ \lambda_{2}})})^{T}\mathbf{R}_{\lambda_{2}}(e^{(\log\mathbf{A}_{\lambda_{1}} \frac{\lambda_{1}}{\lambda_{2}})}) \tag{3}\]
\[\mathbf{C}_{\lambda_{3}}=(e^{(\log\mathbf{A}_{\lambda_{1}}\frac{\lambda_{1}}{ \lambda_{3}})})^{T}\mathbf{R}_{\lambda_{3}}(e^{(\log\mathbf{A}_{\lambda_{1}} \frac{\lambda_{1}}{\lambda_{3}})}) \tag{4}\]
where, \(\mathbf{A}_{\lambda}\in\mathbb{C}^{M^{2}\times M^{2}}\) is the transmission matrix at wavelength \(\lambda\), \(\mathbf{R}_{\lambda}\in\mathbb{C}^{M^{2}\times M^{2}}\) is the reflector matrix and \(\mathbf{e}^{(\log\mathbf{A}_{\lambda_{1}}\frac{\lambda_{1}}{\lambda_{2}})}\) is the transmission matrix adjusted for a wavelength \(\lambda_{2}\).
Currently, these equations can be solved by using an iterative approach which relies on optimization of the entire TM [13]. This therefore scales in complexity with the square of the matrix dimension, incurring significant computational time, especially for large matrices. In practice, the transmission matrix shows high sensitivity to bending and temperature so in a practical usage scenario would need to be measured very frequently and immediately prior to imaging. Large computational times are therefore not practical.
Considering this, there are several methods that have been developed in order to reduce the computational time for fiber imaging. These methods typically exploit prior knowledge about the fibers to improve or speed up TM reconstruction. For example, Li et al. [14] proposed a compressed sampling method based on the optical transmission matrix to reconstruct full-size TM of a multimode fiber supporting 754 modes at compression ratios down to 5% with good fidelity. Additionally, Huang et al. [15] retrieved the optical transmission matrix of a multimode fiber using the extended Kalman filter, enabling faster reconstruction.
Recently, there has been work on using deep learning approaches, involving convolutional neural networks, to reconstruct images via multimode fibers both in transmission and reflection modes [16; 17; 18]. These methods have the advantage of being fast, and also learning and utilizing important prior information about the fiber properties and the objects being imaged. However, their performance typically degrades significantly under fiber perturbation because they do not have access to reflection calibration measurements required to unambiguously resolve a TM. Further, because such approaches seek to approximate the forward propagation of light and often only consider amplitude image recovery, they often rely on classical mean-squared error loss functions for training.
In order to incorporate reflection calibration measurements following fiber perturbation, it may instead be advantageous to use AI approaches to reconstruct a transmission matrix rather than an image, though there has been relatively little work in this area. When reconstructing a transmission matrix comprising complex numbers, a particular type of degeneracy arises that is not well handled by conventional AI loss functions: a global phase factor. In many physical problems, including the recovery of transmission matrices for the purposes of image reconstruction and phase-hologram generation, global phase factors are not relevant as they do not affect the perceived performance of the system: it is the _relative_ phase between pixels that must be preserved. Global phase may have a physical interpretation related to the physical length of the fiber, but in practice it is often arbitrary unless great care is taken. For example, in interferometric systems the global phase is likely to be arbitrary unless the optical path lengths of the reference and sample arms are perfectly matched, which is very challenging for multimode fibers. Further, the global phase often drifts significantly during practical experiments [19], and approaches using phase-retrieval produce entirely arbitrary global phase values [20]. Therefore, in many important practical situations, conventional loss functions will convert arbitrary shifts in the global phase to large changes in value, which can confound minimization algorithms used to fit AI models. In such cases, models may arbitrarily learn a global phase factor (a type of 'overfitting') and may thus not be generalisable.
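To make this degeneracy concrete, one simple global-phase-insensitive error between two complex matrices is the mean squared error minimised over a single global phase rotation, which has a closed form. The numpy sketch below is purely illustrative and is not necessarily the weighted or unweighted custom loss (Eqs. 9 and 10) used later in this paper:

```python
import numpy as np

def phase_insensitive_mse(A_hat, A):
    # optimal global phase: phi = angle( sum( conj(A_hat) * A ) )
    phi = np.angle(np.vdot(A_hat, A))
    aligned = A_hat * np.exp(1j * phi)   # rotate A_hat onto A before comparing
    return np.mean(np.abs(aligned - A) ** 2)

# Rotating A_hat by any global phase leaves this value unchanged, e.g.
# phase_insensitive_mse(A_hat * np.exp(0.7j), A) == phase_insensitive_mse(A_hat, A),
# whereas a conventional MSE or MAE would change arbitrarily.
```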
In this paper, we therefore propose a new method of implementing single-ended recovery of an optical fiber TM by solving equations 2-4 based on three reflection matrix measurements at three different wavelengths. Specifically, we present two different neural network architectures, fully connected neural network (FCNN) and convolutional U-net based neural networks, and demonstrate the performance of both. As a necessary step, a custom global phase insensitive loss function is developed to eliminate the effect of global phase factor during the model training process. We first validate our model by recovering \(64\times 64\) complex-valued fiber transmission matrices through a simulated single-ended optical fiber system (shown in Figure 1) with \(\leq 4\%\) error for both FCNN and convolutional U-net architectures. We
then demonstrate reconstructing \(8\times 8\) images through the fiber based on the recovered TM with \(\leq 8\%\) error. We highlight several advantages of this TM recovery approach compared to conventional TM recovery methods. Firstly, once the model is trained (\(\sim\)100 hours), it only requires \(\sim\)1 second for reconstruction, which is 4500 times faster than the conventional iterative approach. Secondly, the conventional method [13] can only reconstruct square TM problems, whereas this method is compatible with non-square-shaped TMs with \(\leq 8\%\) error, which is important for many practical cases where optical systems may have different mode bases at the proximal and distal ends. Thirdly, no prior measurements of the reflectors are required for this model, removing a significant experimental challenge.
## 2 Results
### TM recovery
This TM recovery model was trained on a simulated dataset comprising 800,000 sets of simulated reflection matrices, \(\mathbf{C}_{\lambda}\) at 3 wavelengths, \(\lambda=\lambda_{1},\ \lambda_{2},\ \lambda_{3}\), as input and a complex-valued non-unitary transmission matrix at wavelength \(\lambda_{1}\), \(\mathbf{A}_{\lambda_{1}}\), as output. It was then validated using 200,000 such sets not used in training. Figure 2(a) shows the training and validation loss in training the FCNN model over 2500 epochs using different loss functions, namely conventional mean absolute error (MAE), and weighted and unweighted versions of our global-phase insensitive custom loss function (Eq. 9 and 10 respectively). Both global-phase insensitive loss functions show a decreasing loss in both training and validation in the first 2000 epochs and converge after 2500 epochs, whereas the MAE loss function exhibits fluctuating non-converging loss values for both training (green line) and validation (pink line). Comparison between the two versions of our global-phase insensitive loss function shows that the weighted version reduces loss compared to the unweighted version by \(\sim\)10% in both training (blue line) and validation (red line). An example of a reconstructed TM predicted by the FCNN model using the weighted loss function at different epochs is shown inset in Figure 2(a). It can be seen that the predicted TM gets progressively closer to the target TM from 300 epochs to 2500 epochs. This indicates that using the custom loss function can successfully avoid the global phase degeneracy which would otherwise prevent the model from learning.
Figure 2(b) compares the TM results predicted by our two different neural network architectures using different loss functions. Neither the FCNN nor the convolutional U-net-based neural network can recover the TM when using the MAE loss function, but both are capable of recovering the TM using either version of the global phase insensitive loss function, with a loss of \(\leq 4\%\) over 200,000 validation TMs. Compared to the unweighted version, there is a \(\sim 0.3\%\) reduction in error when using the weighted loss function in either the FCNN or the convolutional U-net architecture. Furthermore, we also
Figure 1: Single-ended optical fiber imaging system for TM recovery. The optical image, \(\mathbf{X}\in\mathbb{C}^{M\times M}\) is placed at far end of distal facet. Light with a field of \(\mathbf{E_{in}}\in\mathbb{C}^{M^{2}}\) propagates from the proximal facet through the optical fiber, with the forward transmission matrix of the optical fiber defined as \(\mathbf{A}_{\lambda}\in\mathbb{C}^{M^{2}\times M^{2}}\) at the wavelength, \(\lambda\). A reflector stack with a three-layer structure is placed at the distal facet, with its reflector matrix defined as \(\mathbf{R}_{\lambda}\in\mathbb{C}^{M^{2}\times M^{2}}\) at the wavelength \(\lambda\). Reflection matrix, \(\mathbf{C}_{\lambda}\in\mathbb{C}^{M^{2}\times M^{2}}\) can be repeatedly measured at three different wavelengths to recover the TM
evaluated the computational resource usage of the two different neural network architectures as shown in Figure 2(c). We implemented the training process using Tensorflow 2.0 running on an NVIDIA Tesla V100 GPU. Compared to the FCNN, the convolutional U-net shows significant advantages in memory usage because it requires 1000 times fewer trainable parameters, and its convergence time is reduced by \(20\%\). However, it shows a 0.7% larger loss on the validation TMs. Both the FCNN and the convolutional U-net can recover the TM with a loss \(\leq 4\%\), and both enable \(\sim\)1 s prediction times.
### Image reconstruction
To evaluate the performance of recovered TMs for image reconstruction, we considered 3 example images denoted \(\mathbf{x}\in\mathbb{C}^{8\times 8}\): an amplitude-only image with a 'space invader' pattern, a phase-only digit with a uniform amplitude and a random complex-valued image. Figure 3 shows the image reconstruction results based on the TMs recovered using the FCNN and convolutional U-net networks. It can be seen that all three types of images can be successfully reconstructed based on the recovered TMs using both neural network models, with all image losses \(\leq 8\%\).
### Fiber perturbation
We then evaluate the robustness of our TM recovery model by swapping rows between different reflection matrices, simulating the effect of the TM changing mid-way through characterization. We simulated 10 sets of 64\(\times\)64 reflection matrices with five different perturbation rates indicating the numbers of rows swapped (1/64, 4/64, 8/64, 16/64, and 32/64). Figure 4 shows the TM recovery results and its corresponding image reconstructions for different fiber
Figure 2: (a) Training and validation loss using MAE, unweighted custom loss function and weighted loss function. TM recovery results over different epochs. (b) TM results recovered using two different neural network architectures (i.e. FCNN and convolutional U-net networks), with three different loss functions, namely MAE, unweighted loss function in Eq.9, and weighted loss function in Eq.10. (c) Comparison between FCNN and convolutional U-net architecture in aspects of loss, training time, prediction time, number of converging epochs and number of trainable parameters.
perturbation rates based on our pre-trained TM recovery FCNN model. It can be seen that our TM recovery model is compatible with optical fibers with a small perturbation rate (below 6%) with TM loss \(\leq 8\%\) and image loss \(\leq 15\%\) but performance degrades significantly above this.
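For reference, the row-swapping perturbation can be emulated with a few lines of numpy; the helper below is hypothetical and only illustrates the procedure of exchanging a chosen number of rows between two measured reflection matrices:

```python
import numpy as np

def swap_rows(C_a, C_b, n_swap, seed=0):
    # exchange n_swap randomly chosen rows between two reflection matrices,
    # emulating a TM change part-way through the characterisation measurements
    rng = np.random.default_rng(seed)
    rows = rng.choice(C_a.shape[0], size=n_swap, replace=False)
    C_a, C_b = C_a.copy(), C_b.copy()
    C_a[rows], C_b[rows] = C_b[rows].copy(), C_a[rows].copy()
    return C_a, C_b

# e.g. a 4/64 perturbation rate on 64 x 64 reflection matrices:
# C1_pert, C2_pert = swap_rows(C1, C2, n_swap=4)
```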
### Recovery of non-square TMs
We next examine the important practical case of non-square TMs, e.g. where the desired representation at the distal end of a fiber might be different from that used at the proximal end and may have more elements. To recover a TM \(\mathbf{A}\in\mathbb{C}^{M_{p}\times M_{d}}\), we require that the reflection matrix \(\mathbf{C}\in\mathbb{C}^{M_{p}\times M_{p}}\) and that the reflector matrix \(\mathbf{R}\in\mathbb{C}^{M_{d}\times M_{d}}\). \(M_{p}\) and \(M_{d}\) represent the number of elements used for the basis representation at the proximal and distal ends of the fiber respectively. Figure 5 shows one example of a recovered non-square-shaped TM \(\in\mathbb{C}^{12\times 6}\) using the FCNN and convolutional U-net networks, with losses of 5.95% and 9.3% respectively. Theoretically, a tall-matrix structured TM, \(\mathbf{A}\in\mathbb{C}^{M_{p}\times M_{d}}\) (where \(M_{p}>M_{d}\)), can be recovered from reflection matrices with a larger total number of elements, thus producing better recovery performance with lower loss and less training data.
### Computational resource usage
As the dimension of recovered images increases, we expect an increase in the TM dimension thus requiring more computational resources. Empirically measured computational resources are plotted in log-scale in Figure 6 (a)-(c): minimum training data, minimum memory usage, and converging time respectively. All indicate a quadratic relationship to the image dimension \(M\) for both FCNN and convolutional U-net models. For practical imaging applications we would desire at least \(32\times 32\) image resolution, giving a \(1024\times 1024\) TM, which would require training with \(>\)10 million examples, leading to memory consumption \(>\)1.5TB for the FCNN. By comparison, the convolutional U-net would require only 1.1TB of memory consumption. Compared to FCNN, convolutional U-net shows potential advantages in
Figure 3: Image reconstruction based on recovered TM. (a) and (b) show the TM and its error respectively. We consider three example images: (c) an amplitude-only ‘space invader’ pattern, (d) a phase-only digit with uniform amplitude, and (e) a random complex-valued image. We compare the reconstruction result recovered by FCNN (second column) and U-net architectures (third column) with the target result shown in the first column.
using 25% less memory and 20% less training data with 15% less training time. Figure 6(d) compares the prediction time of our neural network model with the conventional method using an iterative optimization approach [13]: our FCNN model shows a significantly shorter reconstruction time (\(\sim\)1 s vs. 1920 s for a 12 \(\times\) 12 transmission matrix) even for large images, whereas the conventional method requires increasing time as the image size grows.
Figure 4: Effect of perturbations of fiber TM during reflection-mode characterization. (a) and (b) show the TM and its error respectively. We consider three example images: (c) an amplitude-only ‘space invader’, (d) a phase-only digit and (e) a random complex-valued image. We compare the reconstruction result recovered by FCNN for increasing levels of fiber perturbation.
Figure 5: Non-square-shaped TM \(\in\mathbb{C}^{12\times 6}\) recovered by our TM recovery model using FCNN architecture and convolutional U-net model.
## 3 Discussion
We have demonstrated the successful reconstruction of forward fiber TMs based on reflection-mode measurements at multiple wavelengths using a novel neural network based approach encompassing two architectures: a fully-connected neural network and a convolutional U-Net. Previous work applying neural networks to fibers has focussed on image reconstruction as the end goal, but we instead focus on transmission matrix reconstruction. Such an approach is more flexible as the inputs to the network are calibration measurements that reflect a fiber's deformation state at any given time - previous image reconstruction approaches have instead learned a static representation of the fiber TM encoded in the neural network weights. Using our approach, the recovered TM will be accurate for the most recent calibration measurements and can be used for high-speed image recovery via conventional matrix operations. Indeed, we demonstrate error values \(\leq\)8% for reconstructing complex-valued images. However, one major challenge of recovering the TM in this way is the need to recover a complex-valued TM with a degenerate global phase shift. Previous work on image reconstruction has addressed this problem by training separate networks for amplitude and phase recovery in purely real space and accepting relatively poor performance for phase recovery [16]. Here, we present a novel loss function that is insensitive to this global phase degeneracy and show a high degree of convergence compared to conventional MAE metrics. We believe this metric in itself could find applications in computer-generated holography via neural networks, phase retrieval problems and indeed in image-reconstruction-based neural networks for fiber imaging. Applying this loss function to our single-ended TM recovery problem, we demonstrated the model for reconstructing \(64\times 64\) complex-valued fiber transmission matrices through a single-ended optical fiber system with \(\leq 4\%\) error using either the FCNN or the U-net-based neural network architecture, which is also capable of reconstructing \(8\times 8\) images through the fiber based on the recovered TM with \(\leq 8\%\) error.
There are several major advantages to our neural network approach compared to previous iterative reconstruction approaches [13]. First, the prediction time is very fast, typically \(\sim\)1s, which makes this a feasible approach for future real-time imaging, over 4500x faster than the existing iterative approach. Training the network is significantly slower, but this would only need to be performed once per fiber for a fixed reflector so could be performed as a one-off initial calibration step. Second, our approach shows robustness to instances where the fiber TM might change part way through characterization measurements, as is likely to happen during real _in vivo_ usage, and can tolerate up to 6% of row swaps between different reflection matrices. This performance could likely be further improved by re-training the network with perturbed examples as input, thus also learning an 'error correction' strategy. Third, the approach can reconstruct non-square transmission matrices. This is important because due to experimental constraints, the sampling basis of the light on the proximal facet is often the pixel basis of the camera used. However, this basis may not be appropriate for imaging at the end of the fiber as it may contain many more elements than modes are supported in the fiber: multimode fiber may only support a few hundred modes depending on wavelength and core diameter. Therefore, to optimize speed and imaging performance it is often desirable to retrieve a TM in the mode basis of the fiber that can easily be addressed
Figure 6: (a) Minimum training data versus the number of image dimensions, plotting in log-scale.(b) Minimum memory usage versus the number of image dimensions, plotting in log-scale. (c) Converging time versus the number of image dimensions. (d) Prediction time of using our TM recovery model and conventional method, plotting in log-scale.
using our camera coordinates: hence a non-square TM. Finally, it is not required to characterize the reflector in advance as this can effectively be inferred based on measurements of the fiber.
However, there are also some trade-offs with our approach. The first trade-off is that the lack of need for pre-characterization of the reflector means that the reflector matrix is effectively encoded in the network weights. Without careful thought about implementation, this could mean that a separate model would have to be trained for each different reflector, and since this may require millions of transmission matrices to be measured, it may be experimentally infeasible. One possible solution to this problem is to either pre-characterize reflectors as proposed previously, or else devise a method of reliably manufacturing reflectors with consistent and highly reproducible properties. Most of the different fiber bending conditions could then be simulated using our approach here and so the network could be trained with relatively few experimental measurements. The second trade-off that follows from this is the need for large amounts of experimental transmission and reflection measurements. This could also be alleviated somewhat by forward simulations of the fiber, as we have previously found a high degree of alignment between simulated and experimental matrices [13]. Further, experimental and simulated datasets could be combined in a domain-transfer approach [21, 22]. The use of adaptive loss functions, such as in generative-adversarial networks, may further enable convergence on relatively small datasets, or else generate further training data. Third, the training process is very memory-intensive for the large TM sizes that are typically encountered in imaging applications (e.g. \(1024\times 1024\)), which requires over 1TB for training the recovery model. One possible solution is to develop matrix compression techniques such as Auto-encoder models to reduce the size of our input matrices by extracting the core features into a latent space. Reducing the batch size is one way to reduce memory usage, but too small a batch size leads to wider fluctuations and thus a larger converged loss and longer training time.
We anticipate that this neural-network-based TM recovery model, together with the new loss function we have designed, will lead to new machine-learning models that deal with phase information, for example in imaging through optical fiber and in holographic imaging and projection, where both phase control and speed are required.
## 4 Methods
We present a new TM recovery method that uses neural networks, instead of iterative approaches [13], to solve Equations 2-4. Figure 7 shows the schematic of this TM recovery model. Specifically, we first simulated \(N\) optical fiber TMs, \(\mathbf{A}_{\lambda_{1}}\in\mathbb{C}^{M^{2}\times M^{2}}\), at a wavelength of \(\lambda_{1}\) as the ground truth. Then we randomly generated three complex-valued matrices as our reflector matrices, \(\mathbf{R}_{\lambda_{1}}\in\mathbb{C}^{M^{2}\times M^{2}}\), \(\mathbf{R}_{\lambda_{2}}\in\mathbb{C}^{M^{2}\times M^{2}}\), and \(\mathbf{R}_{\lambda_{3}}\in\mathbb{C}^{M^{2}\times M^{2}}\) at wavelengths \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) respectively. Finally, we generated three reflection matrices, \(\mathbf{C}_{\lambda_{1}}\in\mathbb{C}^{M^{2}\times M^{2}}\), \(\mathbf{C}_{\lambda_{2}}\in\mathbb{C}^{M^{2}\times M^{2}}\), and \(\mathbf{C}_{\lambda_{3}}\in\mathbb{C}^{M^{2}\times M^{2}}\), at these wavelengths, which can be calculated according to Equations 2-4. Here, we use wavelengths \(\lambda_{1}=850\)nm, \(\lambda_{2}=852\)nm and \(\lambda_{3}=854\)nm as physically realistic values within the TM bandwidth of a typical endoscopic length fiber (\(\sim\)2m) [13]. Each set of 3 reflection matrices, \(\mathbf{C}_{\lambda_{1..3}}\), then forms a single input to our neural network model.
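The forward simulation of this training data follows directly from Equations 2-4; a minimal numpy/scipy sketch (with small, purely illustrative matrix sizes) is shown below. The helper name and the random reflector matrices are assumptions for illustration only:

```python
import numpy as np
from scipy.linalg import expm, logm

def reflection_matrix(A_l1, R, l1, l):
    # Eqs. (2)-(4): re-scale the TM from wavelength l1 to l via its matrix
    # logarithm, then propagate forward, reflect, and propagate back
    A_l = expm(logm(A_l1) * (l1 / l))
    return A_l.T @ R @ A_l

rng = np.random.default_rng(0)
M2 = 16                                     # illustrative size (M^2 = 16)
A_l1 = rng.standard_normal((M2, M2)) + 1j * rng.standard_normal((M2, M2))
wavelengths = [850e-9, 852e-9, 854e-9]
C = []
for lam in wavelengths:
    R = rng.standard_normal((M2, M2)) + 1j * rng.standard_normal((M2, M2))
    C.append(reflection_matrix(A_l1, R, wavelengths[0], lam))
```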
To feed the neural network, we pre-processed both the input data (i.e. reflection matrices) and the ground truth (i.e. TMs), converting them from complex-valued to real-valued form. We then split the \(N\) data into training and validation sets in a 3:1 ratio before training the neural network model with the ADAM optimizer using our custom-defined loss function. Python was used for model training and MATLAB was used for data pre-processing and post-processing because of its ease of use for complex matrix computations.
To gauge the accuracy of our TM reconstruction, we define a loss metric that evaluates the performance of TM recovery by calculating the mean MSE of each validated TM over the number of validation samples:
\[Loss=\frac{1}{0.25N}MSE(\hat{A}_{t},A_{t}) \tag{5}\]
where \(\hat{A}_{t}\) is the recovered TM, \(A_{t}\) is the target TM, and \(0.25N\) is the number of data used for validation.
To evaluate the performance of TM recovery in a context relevant to practical endoscopy applications, we passed an image \(\mathbf{X}\in\mathbb{C}^{M^{2}}\) through the simulated optical fiber TM and compared the result with the image produced using the recovered TM. In this paper, we ignore any loss within the space from the image plane to the distal end of the fiber, and also the loss in transferring through the reflector stack. Theoretically, the reconstructed image, \(\mathbf{\hat{X}}\in\mathbb{C}^{M\times M}\), can be calculated by:
\[\mathbf{\hat{X}}=(\mathbf{\hat{A_{t}}^{\ T}})^{-1}\mathbf{A_{t}}^{\ T} \mathbf{X} \tag{6}\]
where, \(\mathbf{\hat{X}}\) is the reconstructed image, \(\mathbf{X}\) is the target image, \(\mathbf{\hat{A_{t}}}\) and \(\mathbf{A_{t}}\) are the recovered TM and target TM respectively.
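As an illustration of Equation 6, a minimal NumPy sketch of this reconstruction step is shown below. It treats the image as a flattened vector of length \(M^{2}\) and is not the authors' implementation; in practice a pseudo-inverse may be preferable when the recovered TM is poorly conditioned.

```python
# Minimal sketch of Equation 6: propagate an image through the true TM, then
# invert with the recovered TM. All function and variable names are illustrative.
import numpy as np

def reconstruct_image(a_recovered, a_true, x):
    # x: flattened complex image of length M^2.
    y = a_true.T @ x                            # transmission through the true TM
    return np.linalg.inv(a_recovered.T) @ y     # (A_hat^T)^{-1} A^T X; use np.linalg.pinv if ill-conditioned
```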
### Network architectures
We defined two neural network models: a fully-connected neural network (FCNN) and a convolutional U-net based neural network, as shown in Figure 8. The FCNN is a ten-layer densely connected neural network (eight hidden layers), with 32,768 neurons in the first and last hidden layers and 8192 neurons in the other layers, all with the LeakyReLU activation function. Figure 8 shows the FCNN architecture, where the reflection process matrices \(\mathbf{C}_{\lambda_{1}}\in\mathbb{R}^{128\times 128}\), \(\mathbf{C}_{\lambda_{2}}\in\mathbb{R}^{128\times 128}\), \(\mathbf{C}_{\lambda_{3}}\in\mathbb{R}^{128\times 128}\) are first flattened into \(1D\) arrays and then concatenated as the input of the model (with a size of \(49152\times 1\)), and the transmission matrix \(\mathbf{A}_{\lambda_{1}}\in\mathbb{R}^{128\times 128}\) is flattened into a \(1D\) array as the output (with a size of \(16384\times 1\)). Batch normalization layers were defined between every dense layer, and dropout layers with a rate of \(0.2\) were defined after the first two dense layers. Two skip connections were also added in order to prevent the model from overfitting. The model was trained iteratively with the weighted 'global phase insensitive' custom loss function. The training dataset for recovering the \(64\times 64\) TM consisted of 500,000 matrices, and the model was run for 2500 epochs, taking 182.5 hours using Tensorflow 2.0 running on an NVIDIA Tesla V100 GPU. The Adam optimizer was used with a learning rate of 0.004 and a decay rate of \(1e^{-4}\).
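A minimal TensorFlow/Keras sketch of this fully-connected architecture is given below. The layer sizes, dropout placement, and optimizer settings follow the description above, but the exact positions of the skip connections and the placeholder loss are assumptions rather than the authors' implementation (the custom loss is described later in the Methods).

```python
# Minimal sketch (not the authors' code) of the fully-connected recovery network.
import tensorflow as tf
from tensorflow.keras import layers, Model

def dense_block(x, units, dropout=None):
    # Dense -> LeakyReLU -> BatchNorm, with optional dropout, as described above.
    x = layers.Dense(units)(x)
    x = layers.LeakyReLU()(x)
    x = layers.BatchNormalization()(x)
    if dropout:
        x = layers.Dropout(dropout)(x)
    return x

def build_fcnn(input_dim=3 * 128 * 128, output_dim=128 * 128):
    inputs = layers.Input(shape=(input_dim,))     # concatenated, flattened C_lambda_1..3
    h = dense_block(inputs, 32768, dropout=0.2)   # hidden layer 1
    skip = dense_block(h, 8192, dropout=0.2)      # hidden layer 2 (dropout after first two layers)
    h = dense_block(skip, 8192)                   # hidden layer 3
    h = dense_block(h, 8192)                      # hidden layer 4
    h = layers.Add()([h, skip])                   # first skip connection (placement assumed)
    h = dense_block(h, 8192)                      # hidden layer 5
    h = dense_block(h, 8192)                      # hidden layer 6
    h = layers.Add()([h, skip])                   # second skip connection (placement assumed)
    h = dense_block(h, 8192)                      # hidden layer 7
    h = dense_block(h, 32768)                     # hidden layer 8
    outputs = layers.Dense(output_dim)(h)         # flattened real-valued A_lambda_1
    return Model(inputs, outputs)

model = build_fcnn()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.004),
              loss="mae")  # placeholder; the paper trains with the custom phase-insensitive loss
```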
Next, we developed a U-net-based model with an encoder-decoder architecture, including seven Conv2D and seven DeConv2D layers, and two MaxPooling and two UpSampling layers, with the LeakyReLU activation function in each layer. Figure 8 shows this architecture, where the reflection process matrices \(\mathbf{C}_{\lambda_{1}}\in\mathbb{R}^{128\times 128}\), \(\mathbf{C}_{\lambda_{2}}\in\mathbb{R}^{128\times 128}\), \(\mathbf{C}_{\lambda_{3}}\in\mathbb{R}^{128\times 128}\) are defined as three channels forming the input of the model (with a size of \(128\times 128\times 3\)) and the transmission matrix \(\mathbf{A}_{\lambda_{1}}\in\mathbb{R}^{128\times 128}\) is the output (with a size of \(128\times 128\times 1\)). Batch normalization layers were defined between every layer, and dropout layers with a rate of \(0.2\) were defined after the second and second-to-last Conv layers. Three skip connections were also added in order to prevent the model from overfitting. The model was trained iteratively with the weighted 'global phase insensitive' custom loss function, for 2200 epochs on a 400,000-sample training dataset, taking 143 hours. The Adam optimizer was used with a learning rate of 0.004 and a decay rate of 1e-4.
### Data Preparation
In terms of data preparation, we first simulated \(N\) pairs of complex-valued transmission matrices \(\mathbf{A}_{\lambda_{1}}\in\mathbb{C}^{64\times 64}\) at a wavelength of \(\lambda_{1}=850nm\) as the ground truth of the model. To simulate these we devised a model that recreates some characteristic properties found in fiber TMs.

Figure 7: Schematic of TM recovery model, including (a) data generation, (b) data pre-processing and (c) model training. \(N\) pairs of TM are firstly simulated as the ground truth. \(\lambda_{1}\), \(\lambda_{2}\) and \(\lambda_{3}\) represent three different wavelengths (in our case, 850nm, 852nm and 854nm). The input of the model is all real-valued matrices concatenated with reflection matrices at three different wavelengths. \(L\) represents custom loss function and \(w\) represents the weight updated by the optimizer.

First, TMs are sparse in some commonly used basis, e.g. LP modes for multimode fibers or the pixel basis for multicore fibers [23]. Second, TMs can be arranged such that the majority of power intensity lies along the main diagonal with additional power spread along sub-diagonals, which is also typically observed when using bases that match relatively well to the fiber eigenbasis [24]. Third, TMs should be slightly non-unitary in realistic situations, with mode-dependent loss values (i.e. condition numbers) in the range of 3-5. To meet these requirements, we first generate a uniformly distributed random tri-diagonal matrix, \(\mathbf{B}\in\mathbb{C}^{64\times 64}\), which has non-zero elements only at the main diagonal and the diagonals below and above it. We then compute the left singular matrix \(\mathbf{U}\in\mathbb{C}^{64\times 64}\) and right singular matrix \(\mathbf{V}\in\mathbb{C}^{64\times 64}\) via singular value decomposition (SVD). To make it a non-unitary matrix, we apply a new singular value distribution, \(\mathbf{S_{new}}\in\mathbb{R}^{64\times 64}\), a diagonal matrix that contains random values at its diagonal ranging from 0.5 to 2.5, to simulate our expected TM, which matches the TMs measured during the experiments [24]:
\[\mathbf{A}=\mathbf{U}*\mathbf{S_{new}}*\mathbf{V}^{T} \tag{7}\]
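A minimal NumPy sketch of this simulation step is shown below; the particular random distributions, and the use of NumPy's conjugate transpose \(V^{H}\) in place of the \(V^{T}\) written in Equation 7, are assumptions made for illustration.

```python
# Minimal sketch of one simulated fiber TM following Equation 7 (illustrative only).
import numpy as np

def simulate_tm(m=64, s_min=0.5, s_max=2.5, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    rand_c = lambda n: rng.uniform(-1, 1, n) + 1j * rng.uniform(-1, 1, n)
    # Tri-diagonal complex matrix: non-zero entries on the main, sub- and super-diagonals.
    B = np.diag(rand_c(m)) + np.diag(rand_c(m - 1), k=1) + np.diag(rand_c(m - 1), k=-1)
    # Keep the singular vectors of B, but impose a new singular-value spectrum in
    # [0.5, 2.5] so the matrix is slightly non-unitary (condition number ~3-5).
    U, _, Vh = np.linalg.svd(B)
    S_new = np.diag(rng.uniform(s_min, s_max, m))
    return U @ S_new @ Vh   # A = U * S_new * V^T in the notation of Equation 7

A = simulate_tm()
print(np.linalg.cond(A))    # mode-dependent loss (condition number)
```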
We next simulated three complex-valued reflector matrices with random uniformly distributed complex entries, \(\mathbf{R}_{\lambda_{1}}\in\mathbb{C}^{64\times 64}\), \(\mathbf{R}_{\lambda_{2}}\in\mathbb{C}^{64\times 64}\), and \(\mathbf{R}_{\lambda_{3}}\in\mathbb{C}^{64\times 64}\). Based on these, we generate \(N\) pairs of complex-valued reflection process matrices \(\mathbf{C}_{\lambda_{1}}\in\mathbb{C}^{64\times 64}\), \(\mathbf{C}_{\lambda_{2}}\in\mathbb{C}^{64\times 64}\), and \(\mathbf{C}_{\lambda_{3}}\in\mathbb{C}^{64\times 64}\) at the three different wavelengths \(\lambda_{1}=850nm\), \(\lambda_{2}=852nm\) and \(\lambda_{3}=854nm\), corresponding to the previously simulated \(\mathbf{A}_{\lambda_{1}}\in\mathbb{C}^{64\times 64}\), using Equations 2 - 4.
In order to feed the neural network, both the input data (reflection matrices) and the ground truth (TMs) are required to be real-valued matrices. A \(2\times 2\) complex-valued matrix can be represented as a \(4\times 4\) real-valued matrix, as shown in Equation 8:
\[\begin{bmatrix}a+bi&c+di\\ e+fi&g+hi\end{bmatrix}=\begin{bmatrix}a&-b&c&-d\\ b&a&d&c\\ e&-f&g&-h\\ f&e&h&g\end{bmatrix} \tag{8}\]
Finally, the inputs of the model, the three \(\mathbf{C}_{\lambda}\in\mathbb{R}^{128\times 128}\) at different wavelengths, are normalized to the range from -1 to 1.
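For concreteness, a minimal NumPy sketch of this pre-processing step (the block expansion of Equation 8 followed by normalization) is given below; the original pre-processing used MATLAB, so this is an illustration rather than the authors' code.

```python
# Minimal sketch of the complex-to-real block expansion (Equation 8) and input
# normalization; illustrative only.
import numpy as np

def complex_to_real(C):
    # Each complex entry a+bi becomes the 2x2 real block [[a, -b], [b, a]],
    # so an m x m complex matrix becomes a 2m x 2m real matrix.
    a, b = C.real, C.imag
    blocks = np.stack([np.stack([a, -b], axis=-1),
                       np.stack([b,  a], axis=-1)], axis=-2)   # shape (m, m, 2, 2)
    m = C.shape[0]
    return blocks.transpose(0, 2, 1, 3).reshape(2 * m, 2 * m)

def normalize(X):
    # Scale a real-valued input matrix to the range [-1, 1].
    return 2 * (X - X.min()) / (X.max() - X.min()) - 1
```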
### Weighted global phase insensitive loss function
Widely-used conventional loss functions such as mean absolute error (MAE) or mean squared error (MSE) calculate the absolute difference between predicted and target output values. However, there is a class of problems whose solutions, as trained by deep learning models, are degenerate up to a global phase factor, but whose relative phase between pixels must be preserved. This class includes problems where complex transmission matrices are reconstructed and relative phase, but not global phase, is important, and could extend to phase-hologram generation algorithms where replay-field phase is important. This is depicted visually in Figure 9(a), which shows one example of a pair of predicted and target matrices with complex entries depicted as vectors. Figure 9(b) shows the complex error between these two matrices when using MAE as the loss function. Due to the global phase shift, we observe that the vectors have large magnitudes, which will lead to an overall very large MAE when their magnitudes are summed. In the limiting case (e.g. when the phase shift is \(\pi\)) where the predicted and target matrices are otherwise identical, this global phase shift can result in a normalized MAE of 100% when the true value should be 0%.

Figure 8: Architectures of two different neural network models used for TM recovery. (a) Fully-connected neural network, (b) Convolutional U-net

To avoid this problem, we propose a custom loss function termed a 'global phase insensitive' loss function:
\[L(\widehat{\mathbf{A_{t}}}(w),\mathbf{A_{t}}(w))=\sum_{t=1}^{4M^{2}}\left| \widehat{\mathbf{A_{t}}}(w)-\mathbf{A_{t}}(w)e^{1i\phi(\sum\mathbf{A_{t}}(w) \odot(\widehat{\mathbf{A_{t}}}(w)+\beta))}\right| \tag{9}\]
where \(\widehat{\mathbf{A_{t}}}(w)\in\mathbb{C}^{M^{2}\times M^{2}}\) and \(\mathbf{A_{t}}(w)\in\mathbb{C}^{M^{2}\times M^{2}}\) represent the predicted and target output values with respect to the weights, \(w\), respectively, \(\sum\) represents the sum over all matrix elements, \(\phi\) represents the argument function for a complex number input, \(\odot\) represents element-wise division, and \(\beta=0.001\) is a constant added to avoid divide-by-zero errors.
We also developed an alternative custom loss function that weights phase entries by power intensity, achieved by multiplying by the complex conjugate of \(\widehat{\mathbf{A_{t}}}(w)\), denoted \(\widehat{\mathbf{A_{t}}}^{*}(w)\). We also add an \(\ell_{2}\) regularization term to improve the generalization of the model.
\[WL(\widehat{\mathbf{A_{t}}}(w),\mathbf{A_{t}}(w))=\sum_{t=1}^{4M^{2}}\left| \widehat{\mathbf{A_{t}}}(w)-\mathbf{A_{t}}(w)e^{1i\phi(\sum\mathbf{A_{t}}(w) \odot\widehat{\mathbf{A_{t}}}^{*}(w))}\right|+\frac{\alpha}{2}\|w\|^{2} \tag{10}\]
where \(\odot\) here represents element-wise multiplication (in contrast to Equation 9) and \(\alpha\) = \(1e^{-4}\) is the regularization parameter. This implicitly weights the phase contributions by the product of magnitudes of the respective elements in \(\widehat{\mathbf{A_{t}}}(w)\) and \(\mathbf{A_{t}}(w)\), which upon convergence will approximately equal the squared magnitude of the target.
Specifically, the global phase factor, estimated by the term \(e^{1i\phi(\sum\mathbf{A_{t}}(w)\odot\widehat{\mathbf{A_{t}}}^{*}(w))}\) is the phase of a complex number representing the weighted sum of the elements of the complex difference matrix between predicted and target matrices. The rationale for this is that when the optimization algorithm has reached a minimum, in the ideal case the remaining error for each complex element will be entirely due to aleatoric uncertainty and can thus be modelled using a circularly symmetric complex Gaussian distribution [25]. The element-wise phase errors should therefore be uniformly distributed from \(0\) to \(2\pi\). If this is not the case, then there is likely some contribution to the phase error from an arbitrary global phase, as shown in Figure 9(c). Correcting for this factor should produce the desired uniform phase distribution.
To estimate the correction factor, the element-wise complex errors can be summed, as shown in Figure 9(c). This will produce an overall complex factor that has the desired global phase shift, shown in Figure 9(d). The predicted output value can be corrected by multiplying by this phase factor as shown in Figure 9(e), the result of which is then used to compute further parameter updates in the gradient descent algorithm. It can be seen that the complex error in Figure 9(f) between the predicted and target output value is reduced to a minimum after removing the phase factor compared to that calculated by MAE. We then compared the absolute values of the complex error calculated by MAE (green bar) and our customized weighted 'global phase insensitive' loss function (blue bar) respectively over 100,000 pairs of predicted and desired TM as shown in Figure 9(g). The error using the custom loss function is more than two times smaller than that of the conventional loss function (MAE), which suggests the potential for this custom loss function in eliminating the effect of global phase.
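As a sketch of how such a loss can be implemented, a minimal TensorFlow version of the weighted global-phase-insensitive loss is shown below. It assumes the predicted and target TMs are available as complex-valued tensors; the ordering of the conjugation is chosen so that the residual vanishes when the prediction differs from the target only by a global phase, and the \(\ell_{2}\) weight regularization of Equation 10 is left to Keras kernel regularizers. This is an illustration, not the authors' implementation.

```python
# Minimal TensorFlow sketch of a weighted global-phase-insensitive loss
# (cf. Equation 10); illustrative only.
import tensorflow as tf

def global_phase_insensitive_loss(a_target, a_pred):
    # a_target, a_pred: complex tensors of shape (batch, n, n).
    # Magnitude-weighted estimate of the global phase offset between target and prediction.
    weighted_sum = tf.reduce_sum(tf.math.conj(a_target) * a_pred, axis=[-2, -1])
    global_phase = tf.math.angle(weighted_sum)
    phase_factor = tf.exp(tf.complex(tf.zeros_like(global_phase), global_phase))
    # Rotate the target by the estimated global phase, then sum element-wise absolute errors.
    diff = a_pred - a_target * phase_factor[:, tf.newaxis, tf.newaxis]
    return tf.reduce_sum(tf.abs(diff), axis=[-2, -1])
```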
## Data Availability
The data presented in this study are available from the following source: [DOI to be inserted later].
## Code Availability
The code for this study is available from the following source: [DOI to be inserted later].
## Author Contributions
## Acknowledgement
The authors acknowledge support from a UKRI Future Leaders Fellowship (MR/T041951/1).
|
2309.13466 | Rethinking Social Robot Navigation: Leveraging the Best of Two Worlds | Empowering robots to navigate in a socially compliant manner is essential for
the acceptance of robots moving in human-inhabited environments. Previously,
roboticists have developed geometric navigation systems with decades of
empirical validation to achieve safety and efficiency. However, the many
complex factors of social compliance make geometric navigation systems hard to
adapt to social situations, where no amount of tuning enables them to be both
safe (people are too unpredictable) and efficient (the frozen robot problem).
With recent advances in deep learning approaches, the common reaction has been
to entirely discard these classical navigation systems and start from scratch,
building a completely new learning-based social navigation planner. In this
work, we find that this reaction is unnecessarily extreme: using a large-scale
real-world social navigation dataset, SCAND, we find that geometric systems can
produce trajectory plans that align with the human demonstrations in a large
number of social situations. We, therefore, ask if we can rethink the social
robot navigation problem by leveraging the advantages of both geometric and
learning-based methods. We validate this hybrid paradigm through a
proof-of-concept experiment, in which we develop a hybrid planner that switches
between geometric and learning-based planning. Our experiments on both SCAND
and two physical robots show that the hybrid planner can achieve better social
compliance compared to using either the geometric or learning-based approach
alone. | Amir Hossain Raj, Zichao Hu, Haresh Karnan, Rohan Chandra, Amirreza Payandeh, Luisa Mao, Peter Stone, Joydeep Biswas, Xuesu Xiao | 2023-09-23T19:36:54Z | http://arxiv.org/abs/2309.13466v2 | # Targeted Learning: A Hybrid Approach to Social Robot Navigation
###### Abstract
Empowering robots to navigate in a socially compliant manner is essential for the acceptance of robots moving in human-inhabited environments. Previously, roboticists have developed classical navigation systems with decades of empirical validation to achieve safety and efficiency. However, the many complex factors of social compliance make classical navigation systems hard to adapt to social situations, where no amount of tuning enables them to be both safe (people are too unpredictable) and efficient (the frozen robot problem). With recent advances in deep learning approaches, the common reaction has been to entirely discard classical navigation systems and start from scratch, building a completely new learning-based social navigation planner. In this work, we find that this reaction is unnecessarily extreme: using a large-scale real-world social navigation dataset, scand, we find that _classical systems can be used safely and efficiently in a large number of social situations (up to 80%)_. We therefore ask if we can rethink this problem by leveraging the advantages of both classical and learning-based approaches. We propose a hybrid strategy in which we learn to switch between a classical geometric planner and a data-driven method. Our experiments on both scand and two physical robots show that the hybrid planner can achieve better social compliance in terms of a variety of metrics, compared to using either the classical or learning-based approach alone.
## I Introduction
Decades of research into autonomous mobile robot navigation allows robots to reliably move from one point to another without collision with (mostly static) obstacles [1, 2, 3, 4, 5]. Recently, there has been a growing interest in bringing robots out of academic labs and into common public spaces in the wild [6, 7, 8, 9]. On their way to deliver packages [6], takeouts [7], and medical supplies [8], those robots need to navigate in a way such that they not only avoid static obstacles and move towards their goals, but also take other pedestrians' objectives into account. Therefore, enabling robots to navigate by social norms, known as the social robot navigation problem, has emerged as an important research topic.
To address social robot navigation, researchers have collected large-scale demonstration datasets [10, 11, 12, 13, 14], created protocols to validate social navigation systems [15, 13, 14, 16], and developed social navigation techniques using classical [17, 18, 19, 20, 21, 22, 23] or learning-based [24, 25, 26, 27, 28, 29, 30, 31, 32] approaches (or a combination of both [33]) to move robots in a safe and socially compliant manner. While classical approaches enjoy safety, explainability, and certifiability, they require extensive engineering effort and are not scalable to complex and diverse social scenarios. On the other hand, learning-based approaches conveniently enable social navigation behaviors in a data-driven manner, but forfeit most of the benefits of their classical counterparts. Most of these approaches have achieved improvement in social compliance primarily in experiments conducted in controlled lab environments.
Despite such academic successes, robotics practitioners are still reluctant in deploying those state-of-the-art social navigation systems on their robot fleet in the real world, especially data-driven approaches, due to their lack of safety, explainability, and testability. To the best of our knowledge, most of the navigation stacks running on real-world service robots are still classical systems, which can be rigorously tested, confidently deployed, and easily debugged from a large-scale, real-world, software engineering perspective.
Considering the stark contrast between (i) our decade-long research and the current public resistance to mobile robots in public spaces and (ii) the active academic research in social robot navigation and the industrial reluctance in using them in real-world practice, we present a case study on social compliance of different existing robot navigation systems using a state-of-the-art Socially CompliAnt Navigation Dataset (scand) [10, 11] as a benchmark, which urges us to rethink the social robot navigation problem and propose a new practical hybrid paradigm with a targeted learning scope. Our two main contributions are:
* We show that classical navigation systems are sufficient for social robot navigation in a large number of social situations (up to \(80\%\)) of all social navigation scenarios in scand (Fig. 1 left).
* We propose a hybrid approach that switches between a classical geometric and a data-driven planner and achieves better social compliance in terms of a variety of metrics, compared to using either classical or learning-based approach alone (Fig. 1 right), in both scand and a human study with two robots on two campuses.

Fig. 1: Comparison of the navigation behavior from scand demonstration, move_base, and our approach at the same time step in two different social scenarios: Classical move_base (red) aligns with (left) or deviates from (right) the scand demonstration, while our approach is always close to the socially compliant demonstration.
## II Background
In this section, we review classical and learning-based approaches to mobile robot navigation.
### _Classical Navigation_
Mobile robot navigation has been a research topic for decades, and roboticists have developed a plethora of classical navigation systems to move robots from one point to another without collision with obstacles. Most classical systems take a global path from a high-level global planner, such as Dijkstra's [34], A* [35], or D* [36] algorithm, and seek help from a local planner [1, 2] to produce fine-grained motion commands to drive robots along the global plan and avoid obstacles. Most classical navigation systems require a pre-defined cost function [37] for both global and local planning and trade off different aspects of the navigation problem, such as path length, obstacle clearance, energy consumption, and in recent years, social compliance. They then use sampling-based [1, 38, 39], optimization-based [2, 40], or potential-field-based [41] methods to generate motion commands. These approaches enjoy benefits such as safety, explainability, and testability, which can be provably or asymptotically optimal. Such benefits are important when deploying physical robots in the real world with humans around, and therefore these classical navigation systems are still widely favored by practitioners in the robot industry [6, 7, 8, 9]. Implementing classical navigation approaches in socially challenging environments, however, requires substantial engineering effort such as manually designing cost functions [37] or fine-tuning navigation parameters [42]. These drawbacks motivate the use of learning-based approaches for the social robot navigation problem.
### _Learning-Based Navigation_
Learning-based approaches [27, 43] may be either end-to-end [44], i.e., producing actions directly from perception, or in a structured fashion, e.g., learning local planners [45, 46, 47, 48, 49, 50, 51, 52], cost functions [53, 54, 28, 55], kinodynamics models [56, 57, 58], and planner parameters [59, 60, 61, 62, 63, 64, 65, 66]. From the learning perspective, most approaches fall under either imitation learning [62, 62, 28, 44, 54, 63] from expert demonstrations or reinforcement learning [45, 46, 47, 52, 65, 66] from trial and error. Despite the convenience of learning emergent navigational behaviors purely from data in social scenarios, these systems suffer from the lack of safety and explainability, and cannot easily go through rigorous software testing and be debugged and fixed to avoid future failure cases. Therefore, robot practitioners rarely use learning-based navigation systems in their robot fleets deployed in the real world.
## III Social Compliance Case Study
In this section, we present our case study on social compliance of a set of classical navigation systems on scand.
### _scand_
scand[10, 11] contains \(8.7\) hours, \(138\) trajectories, and \(25\) miles of socially compliant, human tele-operated robot navigation demonstrations on the busy campus of The University of Texas at Austin, USA. scand includes socially challenging scenarios such as following, intersection, and overtake, making it an ideal dataset to test social robot navigation methods. Additionally, scand provides multi-modal information, including 3D LiDAR, RGB images, joystick commands, odometry, and inertia readings, collected on two morphologically different mobile robots--a Boston Dynamics Spot and a Clearpath Jackal--controlled by four different human demonstrators in both indoor and outdoor environments.
### _Defining Social Compliance on scand_
During social robot navigation in scand, at each time step \(t\), we denote a navigation scenario \(\mathcal{S}_{t}\) as the on-board robot perception, which includes a sequence of 3D LiDAR scans \(L\), RGB-D images \(I\), odometry \(O\), and IMU readings \(U\), and a navigation goal \(G\), i.e., \(\mathcal{S}_{t}^{D}=\{L_{k}^{D},I_{k}^{D},O_{k}^{D},U_{k}^{D},G_{t}^{D}\}_{k=t- N+1}^{t}\), where \(N\) denotes the history length included in the scenario at \(t\) and \(D\) denotes that the data is from the scand demonstrations.
We further define the navigation behavior at \(t\) as \(\mathcal{B}_{t}\), which can take the form of either a global or local plan (\(P_{t}\) or \(A_{t}\)). A demonstrated global plan in scand, computed as the human-driven trajectory starting from time \(t\), takes the form of a sequence of waypoints \(P_{t}^{D}=\{(x_{i}^{D},y_{i}^{D})\}_{i=t}^{t+M-1}\). A demonstrated local plan is represented as a sequence of joystick action commands \(A_{t}^{D}=\{(v_{i}^{D},\omega_{i}^{D})\}_{i=t}^{t+K-1}\), where \(v\) and \(\omega\) is the linear and angular velocity respectively. \(M\) and \(K\) denote the length of the navigation behavior on the global and local plan level.
Producing the navigation behavior \(\mathcal{B}_{t}\) (i.e., \(P_{t}\) or \(A_{t}\)) based on \(\mathcal{S}_{t}\) as input, a navigation system is defined as a function \(\mathcal{F}\) (i.e., \(\mathcal{F}_{g}\) or \(\mathcal{F}_{l}\) on the global or local level): \(\mathcal{B}_{t}=\mathcal{F}(\mathcal{S}_{t})\). We use the difference between \(\mathcal{B}_{t}\) and \(\mathcal{B}_{t}^{D}\), i.e., \(d=\left\|\mathcal{B}_{t}-\mathcal{B}_{t}^{D}\right\|=\left\|\mathcal{F}( \mathcal{S}_{t}^{D})-\mathcal{B}_{t}^{D}\right\|\), to quantify the social compliance of the navigation system \(\mathcal{F}\) on the demonstrated navigation scenarios in scand. In particular, we use the Hausdorff distance between \(P_{t}\) and \(P_{t}^{D}\) and L2-norm between \(A_{t}\) and \(A_{t}^{D}\) to evaluate global and local planning systems respectively.
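For concreteness, a minimal sketch of these two comparison metrics is shown below, using SciPy's directed Hausdorff distance; the function names and interfaces are illustrative rather than the authors' implementation.

```python
# Minimal sketch of the metrics described above: the (symmetric) Hausdorff
# distance between a planner's global plan and the SCAND demonstration, and the
# L2 norm between local-plan velocity commands.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def global_plan_distance(plan, demo):
    # plan, demo: arrays of shape (M, 2) containing (x, y) waypoints.
    d_pd, _, _ = directed_hausdorff(plan, demo)
    d_dp, _, _ = directed_hausdorff(demo, plan)
    return max(d_pd, d_dp)

def local_plan_distance(action, demo_action):
    # action, demo_action: (v, omega) linear and angular velocity commands.
    return np.linalg.norm(np.asarray(action) - np.asarray(demo_action))
```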
Given different social scenarios \(\mathcal{S}_{t}^{D}\), a socially compliant navigation planner is expected to generate navigation behaviors that are similar to the expert demonstrations \(\mathcal{B}_{t}^{D}\) in scand. In the context of this case study, we assume that the expert demonstrations \(\mathcal{B}_{t}^{D}\) in scand is the "ground truth" socially compliant behavior when facing \(\mathcal{S}_{t}^{D}\), although such an assumption may not always hold: as pointed out by Karnan et al. [10], sometimes there may exist more than one strategy for socially compliant navigation in the same social scenario, which motivates us to further conduct a human study as an evaluation of whether the navigation is socially compliant independent of scand (Sec. V). But under such an assumption, we have:
**Definition 1**: _Facing a navigation scenario \(\mathcal{S}_{t}^{D}\), a navigation behavior \(\mathcal{B}_{t}\) is **socially compliant** if \(d=\|\mathcal{B}_{t}-\mathcal{B}_{t}^{D}\|=\|\mathcal{F}(\mathcal{S}_{t}^{D})- \mathcal{B}_{t}^{D}\|<\epsilon\), where \(\epsilon\) is a small threshold value. Further, let \(\alpha\in[0,1]\) be the fraction of the total scand time steps \(T\), in which \(\mathcal{B}_{t}\) is socially compliant. We will use \(\alpha\) to indicate how socially compliant a navigation system \(\mathcal{F}\) is._
### _Classical Navigation Systems_
We study four classical navigation systems (with available open-source implementations) on the scand social navigation scenarios \(\mathcal{S}_{t}^{D}\) and compare their navigation behavior \(\mathcal{B}_{t}=\mathcal{F}(\mathcal{S}_{t}^{D})\) against the scand demonstrations \(\mathcal{B}_{t}^{D}\).
#### Iii-C1 move_base
The Robot Operating System (ros) move_base[67] global planner utilizes a static costmap representation of the environment and Dijkstra's algorithm to generate an optimal path from the robot's current pose to the goal. The resulting path is smoothed and interpolated so that the local planner can follow. The move_base default local planner, the Dynamic Window Approach (dwa)[1], operates reactively, considering the robot state, sensor information, kinematic constraints, and global path. It generates real-time linear and angular velocity commands by evaluating various trajectories within the robot's dynamic window, ensuring progress towards the goal while avoiding obstacles.
#### Iii-C2 move_base with social layer
The static costmap of move_base can be augmented with a social layer with added emphasis on social factors [68], while both the global and local planners function similarly to the standard move_base. The social layer employs LiDAR scans to detect people and adjusts the costmap by introducing Gaussian distributions around them, thereby incorporating their presence into the planning process.
#### Iii-C3 Human-Aware Planner
The Human-Aware Planner[20] aims to achieve polite, obedient, and comfortable robot navigation behavior that gives priority to humans. It introduces social constraints to path planning to satisfy social space preferences and prevent the robot from operating too closely to people. The approach uses time-dependent, deterministic planning to consider the spatial relationship of the robot and humans over time. A social cost model and layered cost map efficiently combine social and static environment constraints. The algorithm optimizes the path based on social comfort, path length, execution time, and environment constraints.
#### Iii-C4 CoHAN
CoHAN[19] is a human-aware navigation planner designed to handle complex and crowded indoor scenarios. It uses an extension of the Human-Aware Timed Elastic Band (HATEB) planner [69] as local planner to handle large crowds and improve navigation legibility and acceptability. This system is developed over the ros navigation stack by introducing human safety and human visibility costmap layers into both global and local costmap.
### _Case Study Results_
All four studied classical navigation systems are kept in their default parameterizations and configurations, and we observe that, in general, all of them can produce socially compliant navigation behaviors, as defined in Definition 1. To assess their social compliance, at every time step \(t\), we set the navigation goal \(10\)m ahead of the robot on the human demonstrated path. We employ Hausdorff distance as the error metric \(d=\|\mathcal{B}_{t}-\mathcal{B}_{t}^{D}\|\) to compare global plan \(P_{t}=\{x_{i},y_{i}\}_{i=1}^{200}\) at each scand navigation scenario \(\mathcal{S}_{t}^{D}\) against scand demonstration \(P_{t}^{D}\). Fig. 2 shows what different Hausdorff distances look like visually. We choose Hausdorff distance because most global planners in existing navigation systems only plan 2D trajectories, without robot orientation and precise temporal information. As depicted in Fig. 3 middle, with \(\epsilon=1.0\), the vanilla move_base with default costmap yields the highest compliance with respect to the human demonstrations in more than \(\alpha=80\%\) of scand navigation scenarios; move_base with social layer achieves social compliance in the lowest percentage, \(\alpha=60\%\); the Human-Aware Planner and CoHAN fall in between, producing socially compliant behaviors in approximately \(\alpha=75\%\) of scand navigation scenarios. When increasing \(\epsilon\) from \(1.0\) to \(3.0\), all planners are able to achieve social compliance in a larger percentage of scand scenarios. In the case of local planning (Fig. 3 left), where the L-2 norm is utilized as \(d\) to compare local plans \(A_{t}=(v_{t},\omega_{t})\) against \(A_{t}^{D}\), vanilla move_base achieves \(\alpha=60\%\), followed by Human-Aware Planner, CoHAN and move_base with social layer with marginal differences. The jump of the Cumulative Distribution Function (CDF) curves at around 1.6 is due to the fact that the maximal linear velocity \(v\) in scand is roughly 1.6m/s. Note that the difference in global plans will directly affect the social compliance of the local plans.
## IV A New Approach to Social Robot Navigation
Our social compliance case study shows that classical planners perform well in a majority of social scenarios in scand. However, as shown in Fig. 3, the classical planners deviate significantly from the human demonstrations in a small percentage of the dataset. On the other hand, learning-based approaches have been shown to enable emergent navigation behaviors [70]. Yet, they tend to overfit and do not generalize well when facing out-of-distribution scenarios.
This observation motivates us to rethink how to take advantage of both classical and learning-based approaches. In this section, we first compare a widely used learning approach, Behavior Cloning (BC), with the best classical planner in our study, move_base, and show that building a completely new learning-based social navigation planner may not work as well as the classical planner. We then propose our new hybrid approach to leverage the best of both worlds.

Fig. 2: Different Hausdorff distances between the green and red global plan. White dots denote nearby humans and obstacles.
### _Comparing BC with move_base_
A commonly known issue for learning-based approaches is the lack of generalizability to out-of-distribution data. Therefore, in addition to train and test on the original scand, we follow scand's procedure and collect extra demonstrations in manually curated social scenarios (in contrast to in the wild), including intersection encounter, frontal approach, and people following. We also study the social compliance of both move_base and BC on this out-of-distribution test set.
For the original scand, as shown in Fig. 3 middle, the CDF curves show that BC (orange) performs better than move_base (blue) on the in-distribution scand test set throughout the entire \(\epsilon\) range, indicating that learning-based approaches can efficiently capture social compliance in in-distribution scenarios. For the out-of-distribution dataset collected to test generalizability, as shown in Fig. 3 right, while move_base achieves similar performance compared to the in-distribution test data in the original scand, BC's performance significantly deteriorates, seriously suffering from the commonly known distribution-shift problem of learning-based approaches. Although BC's overall performance drops significantly, we can observe a trend that BC's performance overtakes move_base's performance at more challenging social scenarios, indicating that learning-based approaches have the potential to solve what classical approaches cannot solve.
### _Leveraging the Best of both Worlds_
Our case study results suggest (1) classical navigation systems can produce socially compliant navigation behaviors in a majority of social scenarios and (2) learning-based approaches (in our case, BC) have the potential to solve challenging social scenarios where classical approaches fail to be socially compliant. Therefore, we propose a new framework where the classical navigation system acts as a backbone, which is complemented by a learning-based approach for handling difficult social navigation scenarios. To be specific, we instantiate our hybrid navigation planner \(\mathcal{F}(\cdot)\) based on a classical navigation planner \(\mathcal{C}(\cdot)\), a learning-based planner \(\mathcal{L}_{\theta}(\cdot)\) with learnable parameters \(\theta\), and a gating function \(\mathcal{G}_{\phi}(\cdot)\) with learnable parameters \(\phi\) that selects between the output from the classical and learning-based planners:
\[\mathcal{B}_{t}=\mathcal{F}(\mathcal{S}_{t})=\mathcal{G}_{\phi}(\mathcal{C}( \mathcal{S}_{t}),\mathcal{L}_{\theta}(\mathcal{S}_{t}),\mathcal{S}_{t}).\]
The parameters \(\phi\) and \(\theta\) can be learned using supervised learning on the navigation scenario and behavior tuples \(\{\mathcal{S}_{t}^{D},\mathcal{B}_{t}^{D}\}_{t=1}^{T}\) in scand:
\[\operatorname*{argmin}_{\phi,\theta}\sum_{t=1}^{T}d(\mathcal{S}_{ t}^{D}),\] \[d(\mathcal{S}_{t}^{D})=\|\mathcal{B}_{t}-\mathcal{B}_{t}^{D}\|,\] \[\mathcal{B}_{t}=\mathcal{G}_{\phi}(\mathcal{C}(\mathcal{S}_{t}^{ D}),\mathcal{L}_{\theta}(\mathcal{S}_{t}^{D}),\mathcal{S}_{t}^{D}).\]
From among the many ways to learn \(\mathcal{G}_{\phi}(\cdot)\) and \(\mathcal{L}_{\theta}(\cdot)\) (either jointly or separately), in this work, we present a simple implementation that first learns a classifier \(\mathcal{M}_{\phi}(\mathcal{S}_{t}^{D})\) based on the difference \(d\) between \(\mathcal{B}_{t}^{D}\) and \(\mathcal{C}(\mathcal{S}_{t}^{D})\) to choose between \(\mathcal{C}(\mathcal{S}_{t}^{D})\) and \(\mathcal{L}_{\theta}(\mathcal{S}_{t}^{D})\):
\[\mathcal{B}_{t}=\begin{cases}\mathcal{C}(\mathcal{S}_{t}^{D}),&\text{if } \mathcal{M}_{\phi}(\mathcal{S}_{t}^{D})=1,\\ \mathcal{L}_{\theta}(\mathcal{S}_{t}^{D}),&\text{if }\mathcal{M}_{\phi}( \mathcal{S}_{t}^{D})=0.\end{cases}\]
\(\mathcal{C}(\cdot)\) can already produce socially compliant behaviors when \(d\leq\epsilon\), while \(\mathcal{L}_{\theta}(\cdot)\) only learns to address navigation scenarios where \(d>\epsilon\) (\(\epsilon\) is a manually defined threshold).
Fig. 3: Case Study Results: Cumulative Distribution Function (CDF) curves of different navigation planners compared against human demonstrations (Hausdorff distance for global planners and L-2 norm for local planners) with in-distribution (scand) and out-of-distribution data.
To be specific, by comparing \(\mathcal{C}(\mathcal{S}_{t}^{D})\) against \(\mathcal{B}_{t}^{D}\), i.e., \(d=||\mathcal{C}(\mathcal{S}_{t}^{D})-\mathcal{B}_{t}^{D}||\), we separate the original scand\(\mathcal{D}\) into a socially compliant \(\mathcal{D}^{C}\) and a socially non-compliant \(\mathcal{D}^{N}\) subset with respect to \(\mathcal{C}(\cdot)\), and form a supervised dataset \(\{\mathcal{S}_{t}^{D},c_{t}\}_{t=1}^{T}\), in which \(c_{t}=1\) if \(\mathcal{S}_{t}^{D}\in\mathcal{D}^{C}\) (\(d\leq\epsilon\)) and \(c_{t}=0\) if \(\mathcal{S}_{t}^{D}\in\mathcal{D}^{N}\) (\(d>\epsilon\)). Then, \(\mathcal{M}_{\phi}(\cdot)\) is learned via supervised learning with a cross-entropy loss to classify whether \(\mathcal{C}(\mathcal{S}_{t}^{D})\) is socially compliant or not:
\[\phi^{*}=\operatorname*{argmax}_{\phi}\sum_{t=1}^{T}\log\frac{\exp\left( \mathcal{M}_{\phi}(\mathcal{S}_{t}^{D})[c_{t}]\right)}{\exp\left(\mathcal{M}_ {\phi}(\mathcal{S}_{t}^{D})[0]\right)+\exp\left(\mathcal{M}_{\phi}(\mathcal{ S}_{t}^{D})[1]\right)}.\]
The learning-based planner \(\mathcal{L}_{\theta}\) is then learned to minimize the difference between its outputs and demonstrations in \(\mathcal{D}^{N}\):
\[\theta^{*}=\operatorname*{argmin}_{\theta}\sum_{(\mathcal{S}_{t}^{D}, \mathcal{B}_{t}^{D})\in\mathcal{D}^{N}}||\mathcal{L}_{\theta}(\mathcal{S}_{t} ^{D})-\mathcal{B}_{t}^{D}||.\]
During deployment, \(\mathcal{F}(\cdot)\) first uses \(\mathcal{M}_{\phi^{*}}(\cdot)\) to classify if \(\mathcal{C}(\mathcal{S}_{t})\) is socially compliant or not, and then executes \(\mathcal{C}(\mathcal{S}_{t})\) if compliant or \(\mathcal{L}_{\theta^{*}}(\mathcal{S}_{t})\) if not.
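A minimal sketch of this deployment-time logic is shown below; the classifier and planner interfaces are placeholders assumed for illustration, not the authors' implementation.

```python
# Minimal sketch of the hybrid planner's gating logic at deployment time.
def hybrid_plan(scenario, classifier, classical_planner, learned_planner):
    # classifier(scenario) -> 1 if the classical plan is predicted to be
    # socially compliant for this scenario, 0 otherwise.
    if classifier(scenario) == 1:
        return classical_planner(scenario)   # e.g. a move_base global plan
    return learned_planner(scenario)         # e.g. the BC policy trained on D^N
```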
### _In- and Out-of-Distribution Experiment Results_
We instantiate \(\mathcal{C}(\cdot)\) as a global planner using the move_base planner and generate the corresponding \(\mathcal{D}^{N}\) to train a BC planner. We apply our hybrid planner on both the original scand and the out-of-distribution test set. As shown in Fig. 3 middle, our hybrid approach (green) imitates a larger percentage of scand social scenarios with smaller Hausdorff distance in contrast to move_base (blue) and BC (orange). For the out-of-distribution test set, as shown in Fig. 3 right, our hybrid approach does not suffer from the significant performance degradation experienced by BC and achieves similar performance as move_base with small Hausdorff distance. At large distance, our hybrid approach is able to improve upon move_base and approach BC. Our results in Fig. 3 verify our hypothesis that the hybrid approach can take advantage of the best of both worlds facing both in-distribution and out-of-distribution social navigation scenarios.
## V Physical Experiments
We conduct a human study in a series of physical experiments to assess the social compliance of our proposed hybrid approach, in comparison to an existing classical planner, i.e., move_base, and an end-to-end learning-based method, i.e., BC trained on scand. The experiments are conducted using a wheeled Clearpath Jackal and a legged Boston Dynamics Spot to show the generalizability of our proposed hybrid approach to robots with different morphologies on two university campuses, George Mason University (GMU) and The University of Texas at Austin (UTA), respectively. We test the robots' social compliance within three distinct social scenarios, i.e., Frontal Approach, Intersection, and Narrow Doorway. We keep the same setup of our hybrid approach among all three scenarios. The three methods are randomly shuffled and repeated five times, and human participants are requested to respond to a questionnaire with 4-5 questions using Likert scales [15, 28] following each run. Each scenario is tested on ten different individuals (fifteen interactions per individual).
### _Social Compliance Questionnaire_
For Frontal Approach, the five questions are1:
Footnote 1: \({}^{*}\) denotes negatively formulated questions, for which we reverse-code the ratings to make them comparable to the positively formulated ones.
1. _The robot moved to avoid me._
2. _The robot obstructed my path_\({}^{*}\)_._
3. _The robot maintained a safe and comfortable distance at all times._
4. _The robot nearly collided with me_\({}^{*}\)_._
5. _It was clear what the robot wanted to do._
For Intersection, the four questions are:
1. _The robot let me cross the intersection by maintaining a safe and comfortable distance._
2. _The robot changed course to let me pass._
3. _The robot paid attention to what I was doing._
4. _The robot slowed down and stopped to let me pass._
For Narrow Doorway, the four questions are:
1. _The robot got in my way_\({}^{*}\)_._
2. _The robot moved to avoid me._
3. _The robot made room for me to enter or exit._
4. _It was clear what the robot wanted to do._
The quantitative results of our experiments are shown in Fig 4, where we plot the per-question average along with error bars for the three methods in each of the scenarios.
Fig. 4: Human Study Average Scores Per Question.
\begin{table}
\begin{tabular}{c c c c} \hline \hline Jackal & Frontal & Intersection & Doorway \\ \hline Classical & \(2.66\pm 0.64\) & \(3.98\pm 0.10\) & \(\mathbf{4.08\pm 0.38}\) \\ Hybrid & \(\mathbf{4.04\pm 0.39}\) & \(\mathbf{4.06\pm 0.20}\) & \(3.89\pm 0.36\) \\ BC & \(3.63\pm 0.40\) & \(2.49\pm 0.11\) & \(2.84\pm 0.25\) \\ \hline Spot & Frontal & Intersection & Doorway \\ \hline Classical & \(\mathbf{3.73\pm 0.22}\) & \(2.72\pm 0.17\) & \(3.29\pm 0.19\) \\ Hybrid & \(3.70\pm 0.26\) & \(\mathbf{3.48\pm 0.15}\) & \(\mathbf{3.82\pm 0.14}\) \\ BC & \(3.41\pm 0.19\) & \(3.13\pm 0.12\) & \(3.54\pm 0.49\) \\ \hline \hline \end{tabular}
\end{table} TABLE I: Human Study Average Scores Per Method and Scenario: Participants generally prefer the robot with the hybrid approach to the pure classical or the pure BC approach.
### _Jackal Experiments at GMU_
For the experiments at GMU with the wheeled Jackal, our hybrid method shows the most distinguishable social behavior for frontal approach, maintaining a safe distance while passing by another person coming straight from the opposite side. For the other two scenarios, the classifier of our hybrid method mostly commands to stick to the classical planner and therefore we see similar performance of the classical and our hybrid approach. On the other hand, BC is inconsistent in the majority of the runs: Either it cannot reach the goal successfully, which is evident in the low BC scores for Jackal Frontal Approach Q5 (Fig. 4 top left) and Jackal Narrow Doorway Q4 (Fig. 4 top right), or we have to manually intervene to avoid an imminent collision with the human subject or surroundings. Across the different scenarios, we observe that our approach remains consistent by maintaining the highest average in most of the questions. With both GMU and UT experiments, we run a one-way ANOVA test on the data from each question with three groups, and the test confirms the statistical significance of the comparison at a 95% confidence level. We show the Jackal Frontal Approach experiment in Fig. 5 left as an example: Classical approach follows a trajectory which passes very close to the human; BC avoids the human but it cannot recover back to the correct trajectory and gets too close to the wall before we manually intervene; Hybrid approach reacts early by maintaining a safe distance to the human and successfully reaches the goal.
### _Spot Experiments at UT Austin_
We also conduct experiments with a legged Spot at The University of Texas at Austin (UTA). In general, we observe our hybrid approach still performs the most consistently across all three scenarios. However, the classical planner's performance is slightly worse compared to the GMU Jackal experiments. It does not perform well on Spot Intersection (Fig. 4 lower middle) and Spot Narrow Doorway (Fig. 4 lower right, except Q4 since unlike BC, the classical approach can always reach the goal). We posit this is caused by the different motion morphology of the Spot: legged robots are holonomic, and like humans it is possible for them to side-step during a social interaction. Not being able to do so due to the limitation of move_base may cause its movement to be perceived unnatural. BC performs slightly better for Spot in Intersection (Fig. 4 lower middle) and Narrow Doorway (Fig. 4 lower right). We show the Spot Narrow Doorway experiment in Fig. 5 right as an example: Classical approach first follows the shortest path until it gets close to the human and avoids the human; BC avoids the human but gets lost thereafter; Hybrid approach can slow down and avoid the human in the beginning and successfully pass the narrow doorway in the end.
## VI Conclusions, Limitations, and Future Work
This study rethinks social robot navigation by revealing that classical navigation systems can safely and efficiently produce socially compliant navigation behaviors in a large number (up to \(80\%\)) of social scenarios in a large-scale social robot navigation dataset, scand. Such a finding demonstrates the surprising capabilities of existing classical navigation planners in addressing the challenges of socially compliant navigation, based on which we propose a hybrid approach as a potential solution to handle scenarios where the classical planners fall short. The hybrid approach incorporates a classifier to identify such scenarios and switch to a learning-based planner, specifically a BC model. We show experiment results of our proposed hybrid approach on both the scand dataset and in a human study conducted in three social scenarios, with two robots, and on two university campuses. Although the proposed solution shows promising results on scand and in curated social scenarios, deployment on a robot in the real world will introduce complexities. Ensuring a reliable classification network in the wild and managing smooth switching without abrupt movements pose engineering challenges. Additionally, addressing the domain shift issue requires rigorous training of the BC model and potentially additional human interventions to correct suboptimal navigation behaviors in out-of-distribution scenarios. Future work aims to address these limitations and develop a robust, deployable system that respects social norms and meets human expectations during navigation in real-world human environments.
|
2309.03675 | $f(T)$ cosmology in the regime of quasar observations | The open problems related to cosmological tensions in current times have
opened new paths to study new probes to constrain cosmological parameters in
standard and extended cosmologies, in particular, to determine at a local level
the value of the Hubble constant $H_0$, through independent techniques.
However, while standard Cosmological Constant Cold Dark Matter ($\Lambda$CDM)
model has been well constrained and parts of extended cosmology have been
intensively studied, the physics behind them aspects restrains our
possibilities of selecting the best cosmological model that can show a
significant difference from the first model. Therefore, to explore a possible
deviation from a such model that can explain the current discrepancy on the
$H_0$ value, in this work we consider adding the current local observables,
e.g. Supernovae Type Ia (SNIa), $H(z)$ measurements, and Baryon Acoustic
Observations (BAO) combined with two new calibrated Quasars (QSO) datasets
using ultraviolet, x-ray and optical plane techniques. While these can be
identified as part of the high-redshift standard candle objects, the main
characteristics of these are based on fluxes distributions calibrated up to $z
\sim 7 $. We consider five $H_0$ prior scenarios to develop these calibrations.
Furthermore, we found that our estimations provide the possibility to relax the
$H_0$ tension at 2$\sigma$ using a QSO ultraviolet sample in combination with
late measurements showing higher values of $H_0$. Our results can be an initial
start for more serious treatments in the quasars physics from ultraviolet,
x-ray, and optical plane techniques behind the local observations as
cosmological probes to relax the cosmological tensions problems. | Rodrigo Sandoval-Orozco, Celia Escamilla-Rivera, Rebecca Briffa, Jackson Levi Said | 2023-09-07T12:26:52Z | http://arxiv.org/abs/2309.03675v2 | # \(f(T)\) cosmology in the regime of quasar observations
###### Abstract
The open problems related to cosmological tensions in current times have opened new paths to study new probes to constrain cosmological parameters in standard and extended cosmologies, in particular, to determine at a local level the value of the Hubble constant \(H_{0}\), through independent techniques. However, while the standard Cosmological Constant Cold Dark Matter (\(\Lambda\)CDM) model has been well constrained and parts of extended cosmology have been intensively studied, the physics behind these aspects restrains our ability to select the best cosmological model that can show a significant difference from the first model. Therefore, to explore a possible deviation from such a model that can explain the current discrepancy on the \(H_{0}\) value, in this work we consider adding the current local observables, e.g. Supernovae Type Ia (SNIa), \(H(z)\) measurements, and Baryon Acoustic Observations (BAO) combined with two new calibrated Quasars (QSO) datasets using ultraviolet, x-ray and optical plane techniques. While these can be identified as part of the high-redshift standard candle objects, the main characteristics of these are based on flux distributions calibrated up to \(z\sim 7\). We consider five \(H_{0}\) prior scenarios to develop these calibrations. Furthermore, we found that our estimations provide the possibility to relax the \(H_{0}\) tension at \(2\sigma\) using a QSO ultraviolet sample in combination with late measurements showing higher values of \(H_{0}\). Our results can be an initial start for more serious treatments in the quasar physics from ultraviolet, x-ray, and optical plane techniques behind the local observations as cosmological probes to relax the cosmological tension problems.
+
Footnote †: institutetext: Department of Physics, University of California, Berkeley, CA 94720-119, USA
## 1 Introduction
The \(\Lambda\) Cold Dark Matter (\(\Lambda\)CDM) model [1] has been the cosmological backbone and the most successful model up to the last few years. Its theoretical simplicity and phenomenological character allow it to be constrained successfully with early [2] and late time [3; 4] observations. Moreover, its theoretical structure, in a spatially flat, homogeneous, and isotropic description, relies on the existence of cold dark matter [5; 6], which should allow structure formation, and on a Cosmological Constant (\(\Lambda\)) associated with dark energy [7], which should drive the late time cosmic acceleration.
However, the fundamental nature and the properties of the dark sector are still unknown. From exploring a possible particle (or particles) to explain dark matter effects, up to explaining the fine-tuning issue on the Cosmological Constant \(\Lambda\), have made the \(\Lambda\)CDM model a scenario that needs to be tested with several observational surveys.
Recent studies related to the flat geometry assumed in this model have raised the possibility of considering a spatially non-flat Universe [8; 9; 10]. While this standard cosmological model can accept non-flat schemes, its theoretical simplicity is then compromised. Additionally, the constraint analyses using CMB data can yield statistically significant interpretations of the geometry, which would imply relevant consequences for the understanding of cosmic evolution. Furthermore, the effectiveness of this model has come into question with the appearance of statistical tensions between early and late time surveys. The so-called \(S_{8}\) and \(H_{0}\) tensions [11] have opened a wide door to propose new gravity theories and cosmological models to alleviate all the mentioned issues, along with a good agreement with the observational methodology employed to constrain the cosmological parameters of interest.
From the theoretical perspective, there have been proposals for alternative theories of gravity from fundamental setups [12; 13]. Their main characteristics are based on extra terms in the Einstein-Hilbert action associated with the curvature and its Levi-Civita connection. Moreover, another setup in the direction of extended theories of gravity considers teleparallel
torsion rather than curvature as a mechanism to communicate with the gravitational field [14; 15]. In the latter case, Teleparallel gravity (TG) has been studied since it produces a scalar \(T\) which is equal to the Ricci scalar. Under this relation, we can have the action in a linear form of the \(T\), which is dynamically equivalent to General Relativity (GR). This is the so-called Teleparallel Equivalent of General Relativity (TEGR). Furthermore, TEGR can be generalised to a form of \(f(T)\) gravity [16; 17; 18; 19; 20; 21; 22; 23; 24], which has been well-constrained at astrophysical [25] and cosmological level [26].
From the observational analysis perspective, statistically significant deviations from the standard \(\Lambda\)CDM have already been analysed using Supernovae Type Ia (SNIa), \(H(z)\) measurements, and Baryon Acoustic Observations (BAO) [27]. However, there is a strong restriction on the priors considered for \(H_{0}\), giving lower values of the matter density parameter \(\Omega_{m}\). At high redshifts, \(f(T)\) cosmologies have been constrained through cosmography by considering non-flat and flat geometries [28] with gamma-ray burst (GRB) observables, and using quasar (QSO) objects [29] detected through high-quality UV and X-ray fluxes up to \(z\sim 5.1\)[30]. Also, some studies use quasars as standard rulers [29] through their angular size-luminosity relation using very-long-baseline interferometry [31]. However, these efforts show results analogous to the standard \(\Lambda\)CDM predictions and are well consistent with the observational data from the Hubble diagrams of each observable. A relevant point to discuss in this analysis is the method behind the use of quasars as standard candles. This requirement fixes the information on \(H_{0}\) priors in each scenario.
As a step further, in this paper, we discuss the treatment of two different quasar samples: the xA sample [32] and the nUVX sample [33]. On one hand, the first quasar sample is based on the 4D Eigenvector 1 (4DE1) empirical formalism to locate \(\sim 250\) extreme accretors, i.e. Active Galactic Nuclei (AGN) with the highest accretion rates. These objects can be considered standard candles using the Eddington luminosity and could bring extra information to the SNIa trend [34].
On the other hand, the nUVX sample is based on the empirical relation between the UV and X-ray emission of \(\sim 2500\) quasars fitted through different redshifts \(z\) to obtain a relation that could allow us to determine their distance in a model-independent manner. As we can notice, the two QSO samples have different formalisms, therefore we will analyse them separately for the selected cosmological models along with standard baseline data sets. For example, for the standardization of these sources as cosmological candles in a joint analysis, e.g. SNIa + QSO, we need to consider the observed non-linear relation between the ultraviolet and the X-ray luminosity in QSOs [35]. This method extends the empirical distance ladder used for the SNIa to have access to higher redshifts.
According to the latter characteristics, we will describe key points on how the methodology employed for quasars can be improved to constrain and find _true_ deviations from standard cosmologies.
We discuss each step comprehensively to show how these observables could play a significant role in the statistical credibility of the constraints. Furthermore, we consider adding to the current local observables (which we denote as the _baseline_ sample) two newly calibrated QSO datasets using ultraviolet, X-ray, and optical plane techniques up to \(z\sim 7\). We extend the analysis by considering five \(H_{0}\) prior scenarios to develop the calibrations of the QSO. Our methodology presents new constraints using a baseline constructed with SN + \(H(z)\) & BAO, together with the two newly calibrated QSO datasets. Both baselines (SN + \(H(z)\) + BAO, SN + \(H(z)\) + QSO) are employed to analyse the \(f(T)\) cosmologies at higher redshifts. The impact of including QSO objects will be fundamental to studying whether there is any possible deviation from the standard \(\Lambda\)CDM model, and whether such a deviation can relax the current statistical tension on \(H_{0}\).
This paper is organised as follows: In Sec. 2 we summarise the TG background theory and the most promising \(f(T)\) cosmologies available in the literature. All of these models are described through their normalised \(E(z)\) Friedmann evolution equation. Furthermore, we consider the standard \(\Lambda\)CDM model in addition to the four \(f(T)\) cosmologies in order to compare them. In Sec. 3 we present the methodology employed for the baseline datasets. We divide this discussion into local and quasar measurements, the latter including two samples based on ultraviolet, X-ray, and optical plane techniques. Our results on the new constraints are developed in Sec. 4. Finally, our conclusions are presented in Sec. 5.
## 2 Teleparallel Cosmology and its \(f(T)\) models
We can characterise TG as the exchange of the curvature-based Levi-Civita connection \(\mathring{\Gamma}^{\sigma}_{\ \mu\nu}\) for the teleparallel connection \(\Gamma^{\sigma}_{\ \mu\nu}\) [14, 36]. Notice that over-circle quantities denote objects determined using the Levi-Civita connection. Under this definition, all curvature-based geometric objects vanish identically when computed with the teleparallel connection [14, 15, 37]. In this scheme, TG can be expressed through the tetrad \(e^{A}_{\ \mu}\), its inverse \(E_{A}^{\ \ \mu}\), and a spin connection defined by \(\omega^{A}_{\ B\mu}\), where Latin indices label coordinates on the tangent space and Greek indices denote coordinates on the manifold. Furthermore, the metric can be constructed from the tetrad through \(g_{\mu\nu}=e^{A}_{\ \mu}e^{B}_{\ \nu}\eta_{AB}\), where \(\eta_{AB}=E_{A}^{\ \ \mu}E_{B}^{\ \ \nu}g_{\mu\nu}\). Notice that, as with the metric, the tetrad fulfils the orthogonality conditions \(e^{A}_{\ \mu}E_{B}^{\ \ \mu}=\delta^{A}_{B}\) and \(e^{A}_{\ \mu}E_{A}^{\ \ \nu}=\delta^{\nu}_{\mu}\).
To have local Lorentz transformation invariance in the field equations, the spin connection \(\omega^{A}_{\ B\mu}\) must be flat. The connection between the tetrad and this spin connection can be defined through [38]
\[\Gamma^{\sigma}_{\ \nu\mu}:=E_{A}^{\ \ \sigma}\left(\partial_{\mu}e^{A}_{\ \ \nu}+ \omega^{A}_{\ B\mu}e^{B}_{\ \nu}\right)\,, \tag{1}\]
where the torsion tensor can be written from the teleparallel connection and its antisymmetric operator as \(T^{\sigma}_{\ \ \mu\nu}\equiv 2\Gamma^{\sigma}_{\ \ [\nu\mu]}\)[36], where its contraction can be expressed as [15, 37]
\[T=\frac{1}{4}T^{\alpha}_{\ \ \mu\nu}T^{\ \ \mu\nu}_{\alpha}+\frac{1}{2}T^{ \alpha}_{\ \ \mu\nu}T^{\nu\mu}_{\ \ \alpha}-T^{\alpha}_{\ \ \mu\alpha}T^{\beta\mu}_{\ \ \beta}\,, \tag{2}\]
and the teleparallel equivalent (TEGR) action is given by a Lagrangian linear in the torsion scalar, considering the identity \(R=\mathring{R}+T-B=0\) [39, 40], where \(R\equiv 0\) reflects the curvature-less teleparallel connection while \(\mathring{R}\neq 0\). The boundary term \(B\) is a total divergence term.
If we parametrise an arbitrary \(\tilde{f}(T)=-T+f(T)\) gravity, we can write a modified version of the TEGR action [19, 20, 22] as
\[\mathcal{S}_{f(T)}=\frac{1}{2\kappa^{2}}\int\mathrm{d}^{4}x\ e\left[-T+f(T) \right]+\int\mathrm{d}^{4}x\ e\mathcal{L}_{\mathrm{m}}\,, \tag{3}\]
with \(e=\det\left(e^{a}_{\ \mu}\right)=\sqrt{-g}\) the tetrad determinant, \(\kappa^{2}=8\pi G\), and \(\mathcal{L}_{\mathrm{m}}\) the matter Lagrangian contribution. The limit \(f(T)\to 0\) recovers the TEGR (GR) scenario, while the standard \(\Lambda\)CDM model is recovered when this function is equal to a constant, e.g. proportional to \(\Lambda\).
To analyse a flat homogeneous and isotropic cosmology we should consider the tetrad [41, 42]
\[e^{A}_{\ \ \mu}=\text{diag}\left(1,\,a(t),\,a(t),\,a(t)\right)\,, \tag{4}\]
where \(a(t)\) is the scale factor in cosmic time \(t\). Notice that a standard flat Friedmann-Lemaitre-Robertson-Walker (FLRW) metric is recovered using the relation between the metric and the tetrad so that the line element in cartesian coordinates is [43]
\[\text{d}s^{2}=\text{d}t^{2}-a^{2}(t)\left(\text{d}x^{2}+\text{d}y^{2}+\text{d}z ^{2}\right)\,, \tag{5}\]
where \(T=-6H^{2}\) and \(B=-6\left(3H^{2}+\dot{H}\right)\). Finally, the \(f(T)\) gravity Friedmann equations are given by [15]
\[H^{2}+\frac{T}{3}f_{T}-\frac{f}{6} =\frac{\kappa^{2}}{3}\rho\,, \tag{6}\] \[\dot{H}\left(1-f_{T}-2Tf_{TT}\right) =-\frac{\kappa^{2}}{2}\left(\rho+p\right)\,, \tag{7}\]
where the Hubble parameter is defined as \(H=\dot{a}/a\), over-dots denote derivatives with respect to \(t\), and the energy density and pressure of the total matter contribution are denoted by \(\rho\) and \(p\), respectively.
Now that we have the Friedmann equations in this scheme, we can consider \(f(T)\) models suitable for late-time cosmological constraints and test them with our observables. In this work, we consider the following models:
* \(\Lambda\)CDM model. This is the simplest model, which we use as a reference to compare with the TG-inspired models. By defining the normalised Hubble parameter \(E(z)\equiv H(z)/H_{0}\), the Friedmann equation for this case is of the form \[E^{2}(z)=\Omega_{m}(1+z)^{3}+(1-\Omega_{m}),\] (8) where \(\Omega_{m}\) denotes the fractional density of matter evaluated at the current time.
* \(f_{1}(T)\) Model. This model was first studied in [18] due to its ability to reproduce the late-time cosmic acceleration behaviour in the \(f(T)\) scheme. We can describe it through \[f_{1}(T)=\left(-T\right)^{b_{1}}\,,\] (9) where \(b_{1}\) is a constant. We can write the Friedmann equation for this model as \[E^{2}(z)=\Omega_{m}\left(1+z\right)^{3}+\Omega_{r}\left(1+z\right)^{4}+\left( 1-\Omega_{m}-\Omega_{r}\right)E^{2b_{1}}(z)\,,\] (10) which recovers \(\Lambda\)CDM model for \(b_{1}=0\). For \(b_{1}=1\), the extra component in the Friedmann equation gives a re-scaled gravitational constant term in the density parameters, i.e. the GR limit. Also, we can obtain an upper bound such that \(b_{1}<1\) for an accelerating Universe.
* Linder Model. This model was proposed to produce late-time accelerated expansion through [19] \[f_{2}(T)=T_{0}\left(1-\text{Exp}\left[-b_{2}\sqrt{T/T_{0}}\right]\right)\,,\] (11)
where \(b_{2}\) is a constant and \(T_{0}=T|_{t=t_{0}}=-6H_{0}^{2}\). The corresponding Friedmann equation for this model can be written as \[E^{2}\left(z\right)=\Omega_{m}\left(1+z\right)^{3}+\Omega_{r}\left(1+z\right)^{4 }+\frac{1-\Omega_{m}-\Omega_{r}}{(b_{2}+1)e^{-b_{2}}-1}\left[\left(1+b_{2}E(z) \right)\text{Exp}\left[-b_{2}E(z)\right]-1\right]\,,\] (12) which reduces to \(\Lambda\)CDM as \(b_{2}\rightarrow+\infty\).
* \(f_{3}(T)\) Model. A variant version of the latter model can be described by [44] \[f_{3}(T)=T_{0}\left(1-\text{Exp}\left[-b_{3}T/T_{0}\right]\right)\,,\] (13) where \(b_{3}\) is constant. The Friedmann equation for this model can be written as \[E^{2}\left(z\right)=\Omega_{m}\left(1+z\right)^{3}+\Omega_{r}\left(1+z\right) ^{4}+\frac{1-\Omega_{m}-\Omega_{r}}{(1+2b_{3})e^{-b_{3}}-1}\left[\left(1+2b_{ 3}E^{2}(z)\right)\text{Exp}\left[-b_{3}E^{2}(z)\right]-1\right]\,,\] (14) which goes to \(\Lambda\)CDM as \(b_{3}\rightarrow+\infty\) similar to \(f_{2}\)CDM.
* \(f_{4}(T)\) Model. Proposed in [45], this model is described by \[f_{4}(T)=T_{0}\sqrt{\frac{T}{b_{4}T_{0}}}\log\left[\frac{b_{4}T_{0}}{T}\right]\,,\] (15) where \(b_{4}\) is a constant. The Friedmann equation for this form is \[E^{2}\left(z\right)=\Omega_{m}\left(1+z\right)^{3}+\Omega_{r}\left(1+z\right)^{4}+\left(1-\Omega_{m}-\Omega_{r}\right)E(z)\,.\] (16) Notice that this expression does not feature \(b_{4}\); the background behaviour of this model is therefore intriguing, since it contains no extra free parameter through which a bias with respect to the standard \(\Lambda\)CDM model could show up. (A numerical sketch of how the \(E(z)\) relations of this list, several of which are implicit in \(E\), can be evaluated is given right after this list.)
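Several of the \(E(z)\) relations above, e.g. Eq. (10) and Eq. (12), are implicit in \(E\) and must be solved numerically at every redshift before they can be compared with data. The following minimal Python sketch, which is our own illustration rather than the pipeline used in this work, shows one way to do this for the \(f_{1}(T)\) model with a simple root finder; the parameter values are purely illustrative.

```python
import numpy as np
from scipy.optimize import brentq

def E_lcdm(z, Om):
    """Normalised Hubble rate for flat LCDM, Eq. (8) (radiation neglected)."""
    return np.sqrt(Om * (1 + z)**3 + (1 - Om))

def E_f1(z, Om, b1, Or=9.0e-5):
    """Solve the implicit relation of Eq. (10):
    E^2 = Om (1+z)^3 + Or (1+z)^4 + (1 - Om - Or) E^(2 b1)."""
    rhs_fixed = Om * (1 + z)**3 + Or * (1 + z)**4
    def residual(E):
        return E**2 - rhs_fixed - (1 - Om - Or) * E**(2 * b1)
    # The physical root is bracketed between a very small and a very large E.
    return brentq(residual, 1e-3, 1e3)

if __name__ == "__main__":
    for z in (0.0, 0.5, 1.0, 2.0, 5.0):
        print(f"z={z:3.1f}  E_LCDM={E_lcdm(z, 0.3):6.3f}  "
              f"E_f1(b1=-0.2)={E_f1(z, 0.3, -0.2):6.3f}")
```

For \(b_{1}=0\) the routine reproduces the \(\Lambda\)CDM limit (up to the small radiation term kept here), which provides a quick consistency check of the solver.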
## 3 Observational data treatment
In this analysis, we consider the four \(f(T)\) models described above, which have been analysed in previous works [27] and give well-consistent constraints using local Universe measurements. Each \(f(T)\) cosmological model will be tested through an MCMC (Markov Chain Monte Carlo) parameter-constraining analysis using the publicly available emcee sampler1 for our cosmology and the baseline (or base for further reference), with the extraction of constraints performed using GetDist2. The baseline contains the Cosmic Chronometers (CC) data, the Supernovae Type Ia (SNIa) data set and BAO measurements. As a step forward, we will focus on using two kinds of quasar datasets based on ultraviolet, X-ray, and optical plane techniques.
Footnote 1: emcee.readthedocs.io
Footnote 2: getdist.readthedocs.io
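To make the sampling step explicit, a minimal sketch of how a log-posterior could be passed to emcee and summarised with GetDist is shown below. The likelihood here is a placeholder standing in for the \(\chi^{2}\) terms defined in the remainder of this section, and the parameter ranges are assumptions of ours, not the exact settings used for the constraints reported later.

```python
import numpy as np
import emcee
from getdist import MCSamples, plots

# Flat parameter ranges (assumed for illustration): H0 [km/s/Mpc] and Omega_m.
PRIOR_RANGES = [(50.0, 90.0), (0.05, 0.6)]

def log_like(theta):
    """Placeholder for the total log-likelihood, i.e. -0.5 * (chi2_CC + chi2_SN + ...)."""
    H0, Om = theta
    return -0.5 * (((H0 - 70.0) / 2.0) ** 2 + ((Om - 0.3) / 0.02) ** 2)

def log_posterior(theta):
    for value, (lo, hi) in zip(theta, PRIOR_RANGES):
        if not lo < value < hi:
            return -np.inf
    return log_like(theta)

ndim, nwalkers, nsteps = 2, 32, 3000
p0 = np.array([70.0, 0.3]) + 1e-2 * np.random.randn(nwalkers, ndim)

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior)
sampler.run_mcmc(p0, nsteps, progress=False)
chain = sampler.get_chain(discard=500, thin=10, flat=True)

samples = MCSamples(samples=chain, names=["H0", "Om"], labels=[r"H_0", r"\Omega_m"])
print("posterior means:", chain.mean(axis=0))

g = plots.get_subplot_plotter()        # 1-2 sigma confidence-level contours
g.triangle_plot([samples], filled=True)
g.export("triangle_example.pdf")
```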
For our analyses, we used different priors on \(H_{0}\) to analyse the behaviour of the different models and data sets. These are reported in Table 1. The following priors are used: the estimation of the Hubble constant by the SH0ES team, \(H_{0}=73.3\pm 1.04\) km s\({}^{-1}\) Mpc\({}^{-1}\), using SNIa and Cepheid calibrations [46], which we call R21 in our analysis; the GAIA Early Data Release 3 calibration using Cepheid stars, \(H_{0}=74.03\pm 1.42\) km s\({}^{-1}\) Mpc\({}^{-1}\) [11]; and the calibration using the Tip of the Red Giant Branch (TRGB) as a standard candle, \(H_{0}=69.8\pm 0.8\) km s\({}^{-1}\) Mpc\({}^{-1}\) [47], denoted by F20. We should mention that a higher value of \(H_{0}=71.8\pm 1.5\) km s\({}^{-1}\) Mpc\({}^{-1}\) was reported in [48]; however, we keep the F20 value in our current study.
The indirect measurement of \(H_{0}=67.36\pm 0.54\) km s\({}^{-1}\) Mpc\({}^{-1}\) by the Planck Collaboration [49] using TT+TE+EE+lowE+Lensing is denoted by P18. Finally, we use the indirect estimation of the Hubble constant from the independent Atacama Cosmology Telescope (ACT) probe, \(H_{0}=67.9\pm 1.5\) km s\({}^{-1}\) Mpc\({}^{-1}\) [50].
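Each of the priors in Table 1 enters the analysis as a Gaussian term on \(H_{0}\) that is simply added to the total \(\chi^{2}\) (the prior-free case omits it). A small sketch of this bookkeeping, under our assumption of how the prior is implemented, is:

```python
# Gaussian H0 priors of Table 1: (mean, 1-sigma) in km/s/Mpc.
H0_PRIORS = {
    "R21":  (73.3, 1.04),
    "GAIA": (74.03, 1.42),
    "F20":  (69.8, 0.8),
    "P18":  (67.36, 0.54),
    "ACT":  (67.9, 1.5),
}

def chi2_H0_prior(H0, name=None):
    """Extra chi^2 from the chosen H0 prior; returns 0 for the prior-free case."""
    if name is None:
        return 0.0
    mean, sigma = H0_PRIORS[name]
    return ((H0 - mean) / sigma) ** 2

# Example: chi2_total = chi2_data + chi2_H0_prior(71.0, "R21")
```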
### Local measurements
* Cosmic Chronometers \(H(z)\). We consider the sample inferred with the Cosmic Chronometers (CC) technique, in which the Hubble function is calculated through the analysis of different galactic spectra in the redshift range from \(z=0\) to \(z\sim 2\); the technique relies on detecting very small redshift differences between pairs of galaxies in a cluster that formed at the same time [51]. These estimations can be used to calculate \(\Delta z/\Delta t\), which gives an estimate of \(H(z)\). _In this case we tested a newly calculated covariance matrix for the data3_. (A numerical sketch of the likelihood ingredients described in this subsection is given after the BAO item below.) The corresponding \(\chi^{2}_{H(z)}\) is given by: Footnote 3: gitlab.com/mmoresco/CCcovariance
\[\chi^{2}_{H(z)}=\Delta H(z_{i},\Theta)^{T}C^{-1}_{H(z)}\Delta H(z_{i},\Theta),\] (1) where \(\Delta H(z_{i},\Theta)=H(z_{i},\Theta)-H_{obs}(z_{i})\) and \(C^{-1}_{H(z)}\) is the covariance matrix generated.
* Pantheon SNIa dataset. We use the 1048 data points provided by the _Pantheon_ collaboration [52] that measure the apparent distance for several Supernovae Ia (SNIa) events in the redshift range \(0.01<z<2.3\). The Pantheon sample provides SN magnitudes corrected for the stretch and colour effects along with the maximum brightness, the mass of the host galaxy, and sky position bias, so to obtain a cosmologically useful quantity we need to calculate the distance modulus \(\mu=m-M\), where \(M\) is the absolute magnitude that is considered a fixed value for our analyses [53]. Thus, the \(\chi^{2}_{\rm SN}\) for the Pantheon sample is given by \[\chi^{2}_{\rm SN}=\Delta\mu(z_{i},\Theta)^{T}C^{-1}_{\rm SN}\Delta\mu(z_{i},\Theta)+\ln\left(\frac{S}{2\pi}\right)-\frac{k^{2}(\Theta)}{S},\] (2) where \(C^{-1}_{\rm SN}\) is the inverse of the total covariance matrix for the data, \(S\) is the sum of all components of \(C^{-1}_{\rm SN}\), and \(k(\Theta)=\Delta\mu(z_{i},\Theta)^{T}C^{-1}_{\rm SN}\mathbf{1}\) with \(\mathbf{1}\) a vector of ones, using \(\Delta\mu(z_{i},\Theta)=\mu(z_{i},\Theta)-\mu_{\rm obs}(z_{i})\). In this case, the distance modulus \(\mu(z)\) can be calculated as: \[\mu(z_{i},\Theta)=5\log\left[D_{L}(z_{i},\Theta)\right]+M,\] (3) where \(D_{L}(z_{i},\Theta)\) is the luminosity distance given by: \[D_{L}(z_{i},\Theta)=c(1+z_{i})\int_{0}^{z_{i}}\frac{dz^{\prime}}{H(z^{\prime},\Theta)},\] (4) with \(c\) the speed of light and \(H(z,\Theta)\) the Hubble parameter. We will refer to the Pantheon SNIa sample simply as SNIa from now on.
* Baryon acoustic oscillation (BAO). For this observable, we will consider the following independent surveys:
1. The result obtained from the six-degree Field Galaxy Survey (6dFGS) measurement at \(z=0.106\) [54].
2. The result from the Sloan Digital Sky Survey (SDSS) Main Galaxy Sample (MGS) measurement from Data Release 7 (DR7) at \(z=0.15\) [55].
3. The data points from the Baryon Oscillation Spectroscopic Survey (BOSS) DR12 at \(z=0.38,0.51,0.61,1.52\) [56].
4. The data points from the BAO measurements from SDSS DR14 at \(z=0.978,1.23,1.526,1.944\) [57].
To use these datasets we need to compute different cosmological quantities such as the averaged distance
\[D_{V}(z)=\left[\frac{cz}{H(z)}\frac{D_{L}^{2}(z)}{(1+z)^{2}}\right]^{1/3}, \tag{10}\]
the comoving sound horizon at the baryon drag epoch
\[r_{s}(z_{d})=\int_{z_{d}}^{\infty}\frac{c_{s}(z^{\prime})}{H(z^{\prime})}dz^{ \prime}, \tag{11}\]
and the fixed value for a fiducial cosmology through \(r_{s,\rm fid}\). Also, the sound speed \(c_{s}(z)\) is given by
\[c_{s}(z)=\frac{c}{\sqrt{3\left[1+\frac{3\Omega_{b}}{4\Omega_{\gamma}}\frac{1} {1+z}\right]}}. \tag{12}\]
Therefore, we require additional calculations like the redshift of the baryon drag epoch \(z_{d}\)[58]:
\[z_{d}=\frac{1291(\Omega_{M}h^{2})^{0.251}}{1+0.659(\Omega_{M}h^{2})^{0.828}} \left[1+b_{1}(\Omega_{b}h^{2})^{b_{2}}\right], \tag{13}\]
using for that
\[b_{1}=0.313(\Omega_{M}h^{2})^{-0.419}\left[1+0.607(\Omega_{M}h^{2})^{0.674} \right],\quad\text{and}\quad b_{2}=0.238(\Omega_{M}h^{2})^{0.223}. \tag{14}\]
\(D_{M}(z)\) is the comoving angular-diameter distance, related to the luminosity distance by \(D_{M}(z)=D_{L}(z)(1+z)^{-1}\). The calculations also require the dimensionless Hubble constant \(h=H_{0}/100\) km/s/Mpc, the baryon density parameter \(\Omega_{b}\), and its photon counterpart \(\Omega_{\gamma}\); for the purpose of this work, the quantities \(\Omega_{b}h^{2}=0.0224\) and \(\Omega_{\gamma}h^{2}=2.469\times 10^{-5}\) are fixed in agreement with [49]. So, for the fit of the BAO datasets the corresponding \(\chi^{2}_{\rm BAO}(\Theta)\) is defined as:
\[\chi^{2}_{\rm BAO}=\Delta\Xi(z_{i},\Theta)^{T}C^{-1}_{\rm BAO}\Delta\Xi(z_{i}, \Theta), \tag{15}\]
where \(\Delta\Xi(z_{i},\Theta)=\Xi(z_{i},\Theta)-\Xi_{\rm obs}(z_{i})\) and \(C_{\rm BAO}\) is the covariance matrix for the considered observations; this includes the SDSS DR12 and SDSS DR14 points, which are correlated among themselves and therefore require a correlation matrix [56; 57].
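The pieces above can be assembled into a single likelihood for the baseline. The sketch below, an illustration under our own assumptions (flat \(\Lambda\)CDM background, photons-only radiation term, illustrative data arrays) rather than the exact code behind the reported constraints, collects the \(\chi^{2}_{H(z)}\), the offset-marginalised \(\chi^{2}_{\rm SN}\), and the BAO quantities \(z_{d}\), \(r_{s}\) and \(D_{V}\) defined in this subsection.

```python
import numpy as np
from scipy.integrate import quad

C_KMS = 299792.458                      # speed of light in km/s
OBH2, OGH2 = 0.0224, 2.469e-5           # Omega_b h^2 and Omega_gamma h^2, fixed as above

def H_of_z(z, H0, Om):
    """Flat-LCDM H(z); photons only in the radiation term (a simplification)."""
    Or = OGH2 / (H0 / 100.0) ** 2
    return H0 * np.sqrt(Om * (1 + z) ** 3 + Or * (1 + z) ** 4 + 1 - Om - Or)

def D_L(z, H0, Om):
    """Luminosity distance in Mpc from the integral defined above."""
    integral, _ = quad(lambda zp: 1.0 / H_of_z(zp, H0, Om), 0.0, z)
    return C_KMS * (1 + z) * integral

def chi2_Hz(z, H_obs, cov, H0, Om):
    """chi^2_{H(z)} with the full CC covariance matrix."""
    delta = np.array([H_of_z(zi, H0, Om) for zi in z]) - H_obs
    return delta @ np.linalg.solve(cov, delta)

def chi2_SN_marginalised(z, mu_obs, cov_inv, H0, Om, M=-19.3):
    """Pantheon chi^2 with the constant offset marginalised analytically;
    the value of M is irrelevant here because the offset is marginalised."""
    mu_model = np.array([5 * np.log10(D_L(zi, H0, Om)) + M for zi in z])
    delta, ones = mu_model - mu_obs, np.ones_like(mu_obs)
    a, k, S = delta @ cov_inv @ delta, delta @ cov_inv @ ones, ones @ cov_inv @ ones
    return a + np.log(S / (2 * np.pi)) - k ** 2 / S

def z_drag(H0, Om):
    """Drag-epoch redshift from the fitting formula quoted above."""
    om_m, om_b = Om * (H0 / 100.0) ** 2, OBH2
    b1 = 0.313 * om_m ** -0.419 * (1 + 0.607 * om_m ** 0.674)
    b2 = 0.238 * om_m ** 0.223
    return 1291 * om_m ** 0.251 / (1 + 0.659 * om_m ** 0.828) * (1 + b1 * om_b ** b2)

def r_s(H0, Om):
    """Comoving sound horizon at the drag epoch."""
    cs = lambda z: C_KMS / np.sqrt(3 * (1 + 3 * OBH2 / (4 * OGH2) / (1 + z)))
    val, _ = quad(lambda z: cs(z) / H_of_z(z, H0, Om), z_drag(H0, Om), 3e4, limit=200)
    return val

def D_V(z, H0, Om):
    """Volume-averaged BAO distance defined above."""
    dl = D_L(z, H0, Om)
    return (C_KMS * z / H_of_z(z, H0, Om) * dl ** 2 / (1 + z) ** 2) ** (1.0 / 3.0)

if __name__ == "__main__":
    print(f"z_d = {z_drag(68.0, 0.31):.1f}, r_s = {r_s(68.0, 0.31):.1f} Mpc, "
          f"D_V(0.15) = {D_V(68.0, Om=0.31, z=0.15):.1f} Mpc")
```

The full analysis also needs the correlation matrices for the DR12 and DR14 points; here we only show the background quantities that enter \(\Delta\Xi\).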
### Quasars measurements
Quasars are among the most luminous energy sources in the Universe; therefore their use at cosmological scales is most valuable for studies at higher redshifts, e.g. up to \(z\sim 7\) [33]. This is a key aspect for exploring different models that can be indistinguishable at low \(z\). There are some examples of their cosmological use, such as the reverberation mapping technique [59] or the relationship between the X-ray variability amplitude and the black hole mass scatter of these objects [60]. However, such analyses remain challenging at very high \(z\), and the samples are only applicable over a certain redshift range due to their strong dependence on the technique used to determine the data points. In this line of thought, there is also a lack of a clear definition of a quasar to organise the diversity of AGN objects [61].
In particular, and to tackle the latter issues, in this work, we use two different quasar samples:
* **QSO non-linear UV/X-ray sample** [33]. For this sample, we denote the non-linear UV and X-ray sample as nUVX. This data sample includes 2421 selected objects from the Sloan Digital Sky Survey Data Release 14 (SDSS-DR14) [62] together with several other surveys, covering the range \(0<z\leq 7.54\). The detailed procedures used to create the sample are described in [33], and references therein; here we only describe the cosmological essentials needed to use it. The method to treat quasars as cosmological candles is based on the relation between the flux in the ultraviolet at 2500 A and the X-ray flux at 2 keV (\(F_{\rm UV}\)-\(F_{\rm X}\)). Although the nature of this relation is not yet fully explained in the literature, we can obtain the distance modulus as \(\mu=5\log(d_{L})+25\), with the luminosity distance written as \[\log(d_{L})=\frac{\left[\log F_{\rm X}-\gamma\log F_{\rm UV}\right]}{2(\gamma-1)}+\beta^{\prime},\] (21) where both fluxes are observable quantities and \(\gamma\) is the slope of the relation between the fluxes. \(\beta^{\prime}\) is related to the intercept of that relation and we consider it as a parameter to be fitted. The flux relation shows no correlation with redshift and has \(\gamma=0.702\) and \(\delta=0.21\) fixed for the complete redshift range [33]. In this direction, we can calculate the distance modulus as \[\mu=\frac{5}{2(\gamma-1)}(\log F_{\rm X}-\gamma\log F_{\rm UV})+5\beta^{\prime},\] (22) so that for this sample the following \(\chi_{\rm F}^{2}\) is used: \[\chi_{\rm F}^{2}=-\frac{1}{2}\sum_{i}\Bigg{(}\frac{\left[\mu_{i}-\mu(\Theta)\right]^{2}}{s_{i}^{2}}-\ln s_{i}^{2}\Bigg{)},\] (23) where \(s_{i}^{2}=dy_{i}^{2}+\gamma^{2}dx_{i}^{2}+\exp(2\ln\delta)\) takes into account the uncertainties of both the UV (\(x_{i}\)) and X-ray (\(y_{i}\)) fluxes. \(\mu(\Theta)\) is the modelled theoretical distance modulus obtained from the constructed cosmological distance. The data set for this sample contains the UV and X-ray fluxes and both their uncertainties for all the selected objects, so that the \(\chi^{2}\) can be reconstructed (a numerical sketch of this construction is given after this list).
* **QSO xA sample** [34]. This data sample is a selection of \(\sim 250\) objects from the SDSS DR-14 [62] survey up to \(z\sim 2.5\). The reason why this sample is smaller is that the selection of these objects requires certain features to appear in the quasar's spectrum, and this is not possible at every redshift [63]. However, to treat them we use the so-called Eigenvector 1 technique for quasars [34], which is based on the detection of common spectral characteristics. To employ it we require the intensity ratio
between the Fe ii blend at \(\lambda 4570\) and H\(\beta\) called \(R_{\rm Fe\,{\textsc{ii}}}\), and the full-width half maximum (FWHM) of H\(\beta\) to create a relationship between these two parameters to determine quasar types for redshifts up to \(z\sim 0.8\). In scenarios where the previous elements do not appear on the spectrum, the lines C iv\(\lambda 1549\), Al iii\(\lambda 1860\) and Si iii\(\lambda 1892\) are useful for \(z\sim 2\) because they behave as substitutes for the \(R_{\rm Fe\,{\textsc{ii}}}\)[64]. This creates the _optical plane_ in which our analysis goes to the so-called xA group; the quasars that have a measured \(R_{\rm Fe\,{\textsc{ii}}}>1\), and FWHM(H\(\beta\)) \(<4000\) km/s. The xA quasar type has characteristics that make them candidates for standard candles [34]:
1. These quasars radiate near the Eddington limit and are therefore very close to a physical limit for the luminosity, which establishes a relation between this limit and the black hole mass.
2. The black hole mass can be obtained through the virialized relation which depends on the observed FWHM of H\(\beta\) in the spectra.
3. Using a determined ionization parameter [34; 64] we can reach an expression for the luminosity, and the distance modulus will depend solely on the FWHM and the continuum value \(f_{\lambda}\lambda\).
A complete review of the method used for this sample is presented in [34], and the references therein show different possible approaches. We start by writing the virial luminosity equation [64] as
\[L({\rm FWHM})=7.88\times 10^{44}({\rm FWHM})_{1000}^{4}, \tag{20}\]
where the FWHM is expressed in units of 1000 km/s. Using this expression and the H\(\beta\) line we can calculate the distance modulus as[32]:
\[\mu=2.5[\log L-\log(f_{\lambda}\lambda)]-100.19+5\log(1+z), \tag{21}\]
where \(f_{\lambda}\lambda\) is the rest-frame flux measured in 5100 A [65]. In this work, we combined both samples [63; 32; 65] to build a model-independent baseline with \(\sim 250\) objects containing a measure for \(\mu\) and \(\delta\mu\). The total quasar sample can be denoted using a \(\chi^{2}_{\rm xA}\) given by
\[\chi^{2}_{\rm xA}=-\frac{1}{2}\sum_{i}\Bigg{[}\frac{\left(\mu_{i}-\mu(z_{i}, \Theta)\right)^{2}}{\delta\mu_{i}^{2}}+\ln\!\left(\delta\mu_{i}^{2}\right) \Bigg{]}, \tag{22}\]
where \(\mu_{i}\) and \(\delta\mu_{i}\) are the measured distance moduli and their uncertainties, respectively, and \(\mu(z_{i},\Theta)\) is the corresponding theoretical distance modulus. In Figure 1 we show the two quasar samples that will complement our Hubble diagram.
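For concreteness, the two quasar distance-modulus constructions and their likelihoods can be sketched as below. This is a minimal illustration under our assumptions (variable names, base-10 logarithms of the observed fluxes, and a standard Gaussian normalisation term are ours); it is not the exact code used to produce the constraints that follow.

```python
import numpy as np

GAMMA, DELTA = 0.702, 0.21        # fixed slope and dispersion of the F_UV-F_X relation

def mu_nuvx(log_fuv, log_fx, beta_prime, gamma=GAMMA):
    """nUVX distance modulus from the UV (2500 A) and X-ray (2 keV) fluxes."""
    return 5.0 / (2.0 * (gamma - 1.0)) * (log_fx - gamma * log_fuv) + 5.0 * beta_prime

def loglike_nuvx(mu_model, mu_data, dx, dy, delta=DELTA, gamma=GAMMA):
    """Gaussian log-likelihood with flux uncertainties and intrinsic scatter delta
    (normalisation written up to a constant)."""
    s2 = dy ** 2 + gamma ** 2 * dx ** 2 + delta ** 2   # delta**2 = exp(2 ln delta)
    return -0.5 * np.sum((mu_data - mu_model) ** 2 / s2 + np.log(s2))

def mu_xA(fwhm_kms, lam_flam_5100, z):
    """xA distance modulus from the virial luminosity and the rest-frame 5100 A flux."""
    logL = np.log10(7.88e44) + 4.0 * np.log10(fwhm_kms / 1000.0)
    return 2.5 * (logL - np.log10(lam_flam_5100)) - 100.19 + 5.0 * np.log10(1.0 + z)

def loglike_xA(mu_model, mu_data, dmu):
    """Gaussian log-likelihood for the xA sample, as in chi^2_xA above."""
    return -0.5 * np.sum((mu_data - mu_model) ** 2 / dmu ** 2 + np.log(dmu ** 2))
```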
## 4 Model constraints using the baselines: SNIa+\(H(z)\)+BAO & QSO
This section presents the results for the different baseline configurations described above and the constraints derived for each of the four \(f(T)\) models. We divide the analyses by using first the xA quasar sample and afterwards the nUVX quasar sample.
As mentioned, the BAO measurements depend strongly on the physics assumed in the early Universe. Due to this fact, there can be a bias in the estimation of late-time parameters such as \(H_{0}\). For this reason, we describe separately the results for the samples with and without BAO measurements, i.e. one group of data uses only \(H(z)\) and SNIa measurements, while the other includes \(H(z)\), SNIa, and BAO.
\begin{table}
\begin{tabular}{|l|l|l|} \hline
**Measurement** & \(H_{0}\) **[km/s/Mpc]** & **Reference** \\ \hline SH0ES (R21) & \(73.3\pm 1.04\) & [46] \\ \hline GAIA & \(74.03\pm 1.42\) & [11] \\ \hline F20 & \(69.8\pm 0.8\) & [47] \\ \hline Planck 2018 (P18) & \(67.36\pm 0.54\) & [49] \\ \hline ACT & \(67.9\pm 1.5\) & [50] \\ \hline \end{tabular}
\end{table}
Table 1: Priors used to calibrate baseline and QSO samples. The first column denotes the measurements. The second column indicates the \(H_{0}\) values in km/s/Mpc. References for each data are indicated in the last column.
Figure 1: Hubble diagram for the QSO samples described in Sec. 3.2. The dark blue dots represent the Pantheon data using \(M=-19.3\). The green points denote the xA sample and the coral points the observed results for the nUVX sample. The \(x\)-axis shows the redshift \(z\) and the \(y\)-axis the distance modulus \(\mu(z)\).
Furthermore, we present the results and their discussions on the cases with a \(H(z)\)+SNIa+BAO baseline and the two described QSO samples. We divided the \(H(z)\) +SNIa +BAO +QSO analysis into two parts in order not to produce a bias due to the physical assumptions on the calibration performed in the QSO samples.
Additionally, the QSO-nUVX sample has an extra nuisance parameter, \(\beta^{\prime}\), that will be reported in the tables for each case. Throughout the analysis, we will notice that the QSO-nUVX measurements can significantly reduce the \(H_{0}\) tension at the 2-\(\sigma\) level in all the models, including the standard \(\Lambda\)CDM model.
### \(\Lambda\)CDM model
The 1-2\(\sigma\) C.L. constraints for this model are given in Figure 2. The results for each of the constrained parameters are given in Table 2 for the proposed \(H(z)\)+SNIa baseline with and without BAO measurements. We also include the computed absolute magnitude \(M\) to assess the possible degeneracy between this parameter and \(H_{0}\) for each prior consideration.
The 1-2-\(\sigma\) constraints of this model including the quasar samples are reported in Figure 3, with the corresponding values listed in Table 3 for the QSO-xA sample and in Table 4 for the QSO-nUVX sample.
Notice that using BAO data the estimation for \(H_{0}\) has a lower value in comparison with the estimations obtained for the \(H(z)\)+SNIa measurements. As seen in Table 2, the estimation without prior is \(H_{0}=69.6^{+3.0}_{-4.1}\) km s\({}^{-1}\) Mpc\({}^{-1}\), with a significant uncertainty value.
When using the R21 prior we recover a higher value, \(H_{0}=73.1^{+0.7}_{-0.8}\) km s\({}^{-1}\) Mpc\({}^{-1}\), which is consistent with the expected results using high Hubble priors [46]. According to this analysis, the \(\Lambda\)CDM model does not seem to relax the \(H_{0}\) tension - as expected using the baseline-only data sets - although the use of the BAO sample has the effect of lowering this value to \(H_{0}=70.2\pm 0.5\) km s\({}^{-1}\) Mpc\({}^{-1}\) using the same R21 prior. For every combination of the different baselines, the minimum estimated uncertainty is obtained using the P18 prior, \(H_{0}=67.4\pm 0.4\) km s\({}^{-1}\) Mpc\({}^{-1}\) for both data sets with and without BAO. This is another indicator that this measurement relies strongly on a lower \(H_{0}\) value. Regarding the fractional matter density \(\Omega_{m}\), the highest values are obtained using the P18 and ACT priors.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Dataset & \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] & \(\Omega_{m}\) & \(M\) \\ \hline \(H(z)\) + SNIa & \(69.6^{+3.0}_{-4.1}\) & \(0.299\pm 0.021\) & \(-19.37^{+0.10}_{-0.12}\) \\ \(H(z)\) + SNIa + R21 & \(73.1^{+0.7}_{-0.8}\) & \(0.289^{+0.020}_{-0.017}\) & \(-19.26\pm 0.02\) \\ \(H(z)\) + SNIa + P18 & \(67.4\pm 0.4\) & \(0.305^{+0.018}_{-0.020}\) & \(-19.43^{+0.02}_{-0.01}\) \\ \(H(z)\) + SNIa + F20 & \(69.8^{+0.6}_{-0.5}\) & \(0.298^{+0.020}_{-0.018}\) & \(-19.36\pm 0.02\) \\ \(H(z)\) + SNIa + GAIA & \(73.6^{+1.0}_{-0.9}\) & \(0.289^{+0.018}_{-0.02}\) & \(-19.24\pm 0.03\) \\ \(H(z)\) + SNIa + ACT & \(68.0^{+1.0}_{-1.1}\) & \(0.301^{+0.021}_{-0.019}\) & \(-19.42^{+0.04}_{-0.03}\) \\ \hline \hline \(H(z)\) + SNIa + BAO & \(67.3^{+0.6}_{-0.6}\) & \(0.318\pm 0.013\) & \(-19.43\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + R21 & \(70.2\pm 0.5\) & \(\left(293.5^{+10.4}_{-9.6}\right)\times 10^{-3}\) & \(-19.35\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + P18 & \(67.4^{+0.3}_{-0.4}\) & \(0.317\pm 0.010\) & \(-19.43\pm 0.01\) \\ \(H(z)\) + SNIa + BAO + F20 & \(68.8^{+0.4}_{-0.5}\) & \(\left(305.7^{+9.6}_{-10.3}\right)\times 10^{-3}\) & \(-19.38^{+0.01}_{-0.02}\) \\ \(H(z)\) + SNIa + BAO + GAIA & \(69.5\pm 0.6\) & \(\left(299.2^{+10.9}_{-10.0}\right)\times 10^{-3}\) & \(-19.37\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + ACT & \(67.5\pm 0.6\) & \(\left(315.3^{+11.9}_{-9.8}\right)\times 10^{-3}\) & \(-19.42\pm 0.02\) \\ \hline \end{tabular}
\end{table}
Table 2: _Top line:_\(\Lambda\)CDM model results using \(H(z)\) and SNIa datasets. _Below line_: \(\Lambda\)CDM model results using \(H(z)\), SNIa and BAO datasets. We include the analysis with the priors described in Table 1.
Figure 2: 1-2\(\sigma\) C.L results for the \(\Lambda\)CDM model using: _Left:_\(H(z)\) and Pantheon data sets. _Right:_ including BAO. Blue color denotes R21, purple color for GAIA, red color for P18, green color for F20, and yellow color for ACT priors. Additionally, the model was constrained with the sample baselines without prior, here denoted in black color.
Figure 3: 1-2\(\sigma\) C.L results for the \(\Lambda\)CDM model using: _Top left:_\(H(z)\)+SNIa and including the xA sample. _Top right:_\(H(z)\)+SNIa+BAO and including the xA sample. _Bottom left:_\(H(z)\)+SNIa and including the nUVX sample. _Bottom right:_\(H(z)\)+SNIa+BAO and including the nUVX sample. Blue color denotes R21, purple color for GAIA, red color for P18, green color for F20, and yellow color for ACT priors. Additionally, the model was constrained with the sample baselines without prior, here denoted in black color.
### Power Law Model - \(f_{1}(T)\) model
The 1-2\(\sigma\) C.L. constraints for this model are given in Figure 4. We show the results for each of the constrained cosmological parameters in Table 5, where the nuisance parameter \(M\) is also given for each case.
The 1-2-\(\sigma\) constraints of this model including the quasar samples are reported in Figure 5, with the corresponding values listed in Table 6 for the QSO-xA sample and in Table 7 for the QSO-nUVX sample.
We notice that the highest value for the Hubble constant in this analysis was obtained using only \(H(z)\) and SNIa measurements with the GAIA prior \(H_{0}=73.8^{+0.9}_{-1.1}\) km s\({}^{-1}\) Mpc\({}^{-1}\). In comparison to the \(\Lambda\)CDM model, when we consider the BAO sample, the priors associated with early Universe physics, such as ACT and P18, prefer a value of \(b_{1}\to 0\). The latter case
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Dataset & \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] & \(\Omega_{m}\) & \(M\) \\ \hline \(H(z)\) + SNIa + nUVX & \(69.0^{+3.6}_{-3.5}\) & \(0.300^{+0.020}_{-0.022}\) & \(-19.37\pm 0.11\) & \(-11.42^{+6.6}_{-7.7}\) \\ \(H(z)\) + SNIa + nUVX + R21 & \(73.2^{+0.7}_{-0.8}\) & \(0.308\pm 0.011\) & \(-19.25\pm 0.02\) & \(-11.429^{+0.095}_{-0.051}\) \\ \(H(z)\) + SNIa + nUVX + P18 & \(67.4\pm 0.4\) & \(0.309\pm 0.011\) & \(-19.43\pm 0.01\) & \(-11.39^{+0.16}_{-0.16}\) \\ \(H(z)\) + SNIa + nUVX + F20 & \(69.7^{+0.7}_{-0.5}\) & \(\left(308.5^{+10.0}_{-9.9}\right)\times 10^{-3}\) & \(-19.35\pm 0.02\) & \(-11.40\pm 0.14\) \\ \(H(z)\) + SNIa + nUVX + GAIA & \(73.5^{+0.2}_{-2.0}\) & \(0.307\pm 0.012\) & \(-19.24\pm 0.06\) & \(-11.42\pm 0.12\) \\ \(H(z)\) + SNIa + nUVX + ACT & \(67.9^{+1.6}_{-1.4}\) & \(0.308\pm 0.011\) & \(-19.41\pm 0.05\) & \(-11.40\pm 0.12\) \\ \hline \(H(z)\) + SNIa + BAO + nUVX & \(67.3\pm 0.7\) & \(0.317^{+0.012}_{-0.011}\) & \(-19.43\pm 0.02\) & \(-11.03^{+1.69}_{-1.75}\) \\ \(H(z)\) + SNIa + BAO + nUVX + R21 & \(70.2\pm 0.5\) & \(\left(293.8^{+9.9}_{-9.8}\right)\times 10^{-3}\) & \(-19.35\pm 0.02\) & \(-11.46\pm 1.78\) \\ \(H(z)\) + SNIa + BAO + nUVX + P18 & \(67.3\pm 0.3\) & \(\left(318.1^{+9.7}_{-1.9}\right)\times 10^{-3}\) & \(-19.43\pm 0.01\) & \(-11.12^{+1.54}_{-1.91}\) \\ \(H(z)\) + SNIa + BAO + nUVX + F20 & \(68.7^{+0.48}_{-0.41}\) & \(\left(305.4^{+10.7}_{-10.3}\right)\times 10^{-3}\) & \(-19.384^{+0.03}_{-0.041}\) & \(-11.46^{+1.62}_{-1.77}\) \\ \(H(z)\) + SNIa + BAO + nUVX + GAIA & \(69.5\pm 0.6\) & \(0.299\pm 0.010\) & \(-19.37\pm 0.02\) & \(-11.49^{+1.77}_{-1.65}\) \\ \(H(z)\) + SNIa + BAO + nUVX + ACT & \(67.5\pm 0.6\) & \(0.317^{+0.010}_{-0.012}\) & \(-19.42\pm 0.02\) & \(-11.47^{+1.64}_{-1.86}\) \\ \hline \end{tabular}
\end{table}
Table 4: \(\Lambda\)CDM model constraints using the: _Top line:_\(H(z)\)+SNIa sample (in the first block), _Below line:_ and with BAO sample, both using QSO-nUVX sample.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Dataset & \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] & \(\Omega_{m}\) & \(M\) \\ \hline \(H(z)\) + SNIa + xA & \(67.4\pm 1.0\) & \(0.299\pm 0.021\) & \(-19.43\pm 0.03\) \\ \(H(z)\) + SNIa + xA + R21 & \(71.2\pm 0.6\) & \(0.265^{+0.016}_{-0.015}\) & \(-19.33\pm 0.02\) \\ \(H(z)\) + SNIa + xA + P18 & \(67.4^{+0.3}_{-0.4}\) & \(0.299^{+0.018}_{-0.017}\) & \(-19.43\pm 0.01\) \\ \(H(z)\) + SNIa + xA + F20 & \(69.2\pm 0.5\) & \(0.283^{+0.016}_{-0.017}\) & \(-19.38\pm 0.02\) \\ \(H(z)\) + SNIa + xA + GAIA & \(70.6\pm 0.7\) & \(0.269^{+0.019}_{-0.017}\) & \(-19.34\pm 0.02\) \\ \(H(z)\) + SNIa + xA + ACT & \(67.6\pm 0.7\) & \(0.297^{+0.018}_{-0.019}\) & \(-19.42\pm 0.02\) \\ \hline \(H(z)\) + SNIa + BAO + xA & \(67.2^{+0.6}_{-0.5}\) & \(0.316^{+0.012}_{-0.010}\) & \(-19.43\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + xA + R21 & \(69.6\pm 0.5\) & \(\left(292.5^{+11.2}_{-9.7}\right)\times 10^{-3}\) & \(-19.37^{+0.02}_{-0.01}\) \\ \(H(z)\) + SNIa + BAO + xA + P18 & \(67.3^{+0.4}_{-0.3}\) & \(\left(315.2^{+10.0}_{-9.8}\right)\times 10^{-3}\) & \(-19.43\pm 0.01\) \\ \(H(z)\) + SNIa + BAO + xA + F20 & \(68.6^{+0.4}_{-0.5}\) & \(\left(303.7^{+9.2}_{-10.3}\right)\times 10^{-3}\) & \(-19.39^{+0.01}_{-0.02}\) \\ \(H(z)\) + SNIa + BAO + xA + GAIA & \(69.0^{+0.5}_{-0.6}\) & \(\left(298.1^{+11.4}_{-9.1}\right)\times 10^{-3}\) & \(-19.38\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + xA + ACT & \(67.4\pm 0.5\) & \(0.314^{+0.011}_{-0.010}\) & \(-19.43\pm 0.02\) \\ \hline \end{tabular}
\end{table}
Table 3: \(\Lambda\)CDM model constraints using the: _Top line:_\(H(z)\)+SNIa sample (in the first block), _Below line:_ and with BAO sample, both using QSO-xA sample.
gives \(b_{1}=-0.05^{+0.10}_{-0.11}\), which recovers the \(\Lambda\)CDM case, as predicted from the model of Eq.(9).
Furthermore, using the R21 prior we obtain \(H_{0}=71.3\pm 0.6\) km s\({}^{-1}\) Mpc\({}^{-1}\), and \(b_{1}=-0.52^{+0.16}_{-0.21}\), denoting a deviation from \(\Lambda\)CDM of more than \(2\sigma\).
Figure 5: 1-2\(\sigma\) C.L results for the \(f_{1}(T)\) model using: _Top left:_\(H(z)\)+SNIa and including the xA sample. _Top right:_\(H(z)\)+SNIa+BAO and including the xA sample. _Bottom left:_\(H(z)\)+SNIa and including the nUVX sample. _Bottom right:_\(H(z)\)+SNIa+BAO and including the nUVX sample. Blue color denotes R21, purple color for GAIA, red color for P18, green color for F20, and yellow color for ACT priors. Additionally, the model was constrained with the sample baselines without prior, here denoted in black color.
### Linder Model - \(f_{2}(T)\) model
The 1-2\(\sigma\) C.L. constraints for this model are given in Figure 6, and the cosmological constraints in Table 8. In this case, we write the free parameter of the model as \(1/b_{2}\) to avoid the divergence problem previously mentioned in recovering the \(\Lambda\)CDM limit (i.e. \(b_{2}\rightarrow\infty\)) in Eq.(11); that is, \(1/b_{2}\to 0\) recovers the \(\Lambda\)CDM case. The 1-2-\(\sigma\) constraints of this model including the quasar samples are reported in Figure 7, with the corresponding values in Table 9 for the QSO-xA sample and in Table 10 for the QSO-nUVX sample.
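The \(1/b_{2}\) reparametrisation can be handled numerically by treating \(1/b_{2}=0\) as the exact \(\Lambda\)CDM limit and solving the implicit Eq. (12) otherwise; the short sketch below (our own illustration, with arbitrary parameter values) makes this explicit.

```python
import numpy as np
from scipy.optimize import brentq

def E_f2(z, Om, inv_b2, Or=9.0e-5):
    """Solve Eq. (12) for E(z) in the 1/b2 parametrisation; 1/b2 -> 0 is the LCDM limit."""
    rhs_fixed = Om * (1 + z) ** 3 + Or * (1 + z) ** 4
    if inv_b2 < 1e-6:                       # numerically indistinguishable from LCDM
        return np.sqrt(rhs_fixed + 1 - Om - Or)
    b2 = 1.0 / inv_b2
    amp = (1 - Om - Or) / ((b2 + 1) * np.exp(-b2) - 1)
    residual = lambda E: E**2 - rhs_fixed - amp * ((1 + b2 * E) * np.exp(-b2 * E) - 1)
    return brentq(residual, 1e-3, 1e3)

# E(z=1) approaches the LCDM value as 1/b2 -> 0:
for inv_b2 in (0.3, 0.1, 0.01, 0.0):
    print(f"1/b2 = {inv_b2:4.2f}  ->  E(1) = {E_f2(1.0, 0.3, inv_b2):.4f}")
```

The same strategy applies to the \(f_{3}(T)\) model, sampled through \(1/b_{3}\) below.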
As expected, the constraints for this model using the described baseline prefer values of \(1/b_{2}\) near 0, recovering the \(\Lambda\)CDM case. Regarding \(H_{0}\), the lower value obtained when the BAO sample is included is also found here. On one hand, the highest \(H_{0}\) estimation for this model is the one using the GAIA prior with the combined \(H(z)\) and SNIa baseline, \(H_{0}=73.7\pm 1.0\) km s\({}^{-1}\) Mpc\({}^{-1}\), with the lowest matter density \(\Omega_{m}=0.281^{+0.028}_{-0.037}\), and \(1/b_{2}\sim 0\). On the other hand,
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Dataset & \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] & \(\Omega_{m}\) & \(b_{1}\) & \(M\) \\ \hline \(H(z)\) + SNIa + xA & \(68.1\pm 1.0\) & \(0.405^{+0.027}_{-0.028}\) & \(-1.43^{+0.49}_{-0.77}\) & \(-19.44\pm 0.03\) \\ \(H(z)\) + SNIa + xA + R21 & \(71.4\pm 0.6\) & \(0.387^{+0.024}_{-0.026}\) & \(-1.97^{+0.69}_{-0.79}\) & \(-19.35\pm 0.02\) \\ \(H(z)\) + SNIa + xA + P18 & \(67.5^{+0.3}_{-0.4}\) & \(0.411^{+0.024}_{-0.030}\) & \(-1.45^{+0.54}_{-0.69}\) & \(-19.453^{+0.02}_{-0.01}\) \\ \(H(z)\) + SNIa + xA + F20 & \(69.4\pm 0.5\) & \(0.399^{+0.025}_{-0.027}\) & \(-1.73^{+0.64}_{-0.71}\) & \(-19.40\pm 0.02\) \\ \(H(z)\) + SNIa + xA + GAIA & \(71.0\pm 0.7\) & \(0.389\pm 0.025\) & \(-1.88^{+0.62}_{-0.82}\) & \(-1.36\pm 0.02\) \\ \(H(z)\) + SNIa + xA + ACT & \(67.96^{+0.73}_{-0.74}\) & \(0.407^{+0.026}_{-0.029}\) & \(-1.58^{+0.62}_{-0.65}\) & \(-19.440\pm 0.023\) \\ \hline \(H(z)\) + SNIa + BAO + xA & \(68.3^{+0.8}_{-0.9}\) & \(0.324^{+0.014}_{-0.013}\) & \(-0.23^{+0.15}_{-0.21}\) & \(-19.41\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + xA + R21 & \(71.0\pm 0.6\) & \(0.320\pm 0.011\) & \(-0.72^{+0.19}_{-0.26}\) & \(-19.35\pm 0.01\) \\ \(H(z)\) + SNIa + BAO + xA + P18 & \(67.5^{+0.3}_{-0.4}\) & \(0.325^{+0.013}_{-0.012}\) & \(-0.15^{+0.12}_{-0.13}\) & \(-19.43\pm 0.01\) \\ \(H(z)\) + SNIa + BAO + xA + F20 & \(69.3\pm 0.5\) & \(0.322^{+0.016}_{-0.014}\) & \(-0.41^{+0.16}_{-0.19}\) & \(-19.39\pm 0.01\) \\ \(H(z)\) + SNIa + BAO + xA + GAIA & \(70.5^{+0.6}_{-0.7}\) & \(0.322^{+0.011}_{-0.012}\) & \(-0.65^{+0.22}_{-0.23}\) & \(-19.36\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + xA + ACT & \(68.1\pm 0.7\) & \(0.326^{+0.011}_{-0.013}\) & \(-0.23^{+0.15}_{-0.17}\) & \(-19.412\pm 0.02\) \\ \hline \end{tabular}
\end{table}
Table 6: \(f_{1}(T)\) model constraints using the: _Top line:_\(H(z)\)+SNIa sample (in the first block), _Below line:_ and with BAO sample, both using QSO-xA sample.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Data Set & \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] & \(\Omega_{m}\) & \(b_{1}\) & \(M\) & \(\beta^{\prime}\) \\ \hline \(H(z)\) + SNIa + xA & \(67.2^{+3.5}_{-3.5}\) & \(0.369^{+0.03,0.02}_{-0.009}\) & \(-0.41^{+0.63}_{-0.64}\) & \(-19.42^{+0.12}_{-0.13}\) & \(-11.74^{+1.31}_{-1.82}\) \\ \(H(z)\) + SNIa + nUVX + R21 & \(73.2^{+0.6}_{-0.8}\) & \(0.327^{+0.051}_{-0.067}\) & \(-0.11^{+0.47}_{-0.55}\) & \(-19.26^{+0.03}_{-0.02}\) & \(-10.57^{+1.13}_{-1.94}\) \\ \(H(z)\) + SNIa + nUVX + P18 & \(67.5^{+0.3}_{-0.4}\) & \(0.322^{+0.012}_{-0.013}\) & \(-0.05^{+0.11}_{-0.19}\) & \(-19.43\pm 0.01\) & \(-11.54^{+1.28}_{-1.84}\) \\ \(H(z)\) + SNIa + nUVX + F20 & \(69.8^{+0.5}_{-0.6}\) & \(0.351^{+0.045}_{-0.062}\) & \(-0.25^{+0.47}_{-0.57}\) & \(-19.36\pm 0.02\) & \(-10.9^{+1.06}_{-1.99}\) \\ \(H(z)\) + SNIa + nUVX + GAIA & \(70.8\pm 0.7\) & \(0.317^{+0.012}_{-0.012}\) & \(-0.45^{+0.16}_{-0.25}\) & \(-19.35\pm 0.02\) & \(-11.09^{+1.42}_{-1.69}\) \\ \(H(z)\) + SNIa + nUVX + ACT & \(67.9\pm 0.7\) & \(0.321^{+0.013}_{-0.012}\) & \(-0.10.03\pm 0.13\) & \(-19.42\pm 0.02\) & \(-10.12^{+1.14}_{-1.37}\) \\ \hline \(H(z)\) + SNIa + BAO + nUVX & \(67.8^{+1.0}_{-0.9}\) & \(0.321\pm 0.012\) & \(-0.08^{+0.14}_{-0.15}\) & \(-19.42^{+0.03}_{-0.02}\) & \(-11.82^{+1.14}_{-1.80}\) \\ \(H(z)\) + SNIa + BAO + nUVX + R21 & \(71.3\pm 0.6\) & \(0.315^{+0.011}_{-0.012}\) & \(-0.53^{+0.17}_{-0.20}\) & \(-19.34\pm 0.02\) & \(-11.36^{+1.27}_{-1.86}\) \\ \(H(z)\) + SNIa + BAO + nUVX + P18 & \(67.4^{+0.4}_{-0.3}\) & \(0.322^{+0.014}_{-0.015}\) & \(-0.05^{+0.17}_{-0.13}\) & \(-19.4
the lowest \(H_{0}\) estimations are the ones obtained with ACT and P18 priors using the BAO sample.
Figure 7: 1-2\(\sigma\) C.L results for the \(f_{2}(T)\) model using: _Top left:_\(H(z)\)+SNIa and including the xA sample. _Top right:_\(H(z)\)+SNIa+BAO and including the xA sample. _Bottom left:_\(H(z)\)+SNIa and including the nUVX sample. _Bottom right:_\(H(z)\)+SNIa+BAO and including the nUVX sample. Blue color denotes R21, purple color for GAIA, red color for P18, green color for F20, and yellow color for ACT priors. Additionally, the model was constrained with the sample baselines without prior, here denoted in black color.
### Variant Linder Model - \(f_{3}(T)\) model
The 1-2\(\sigma\) C.L. constraints for this model are given in Figure 8, and the cosmological constraints in Table 11. The 1-2-\(\sigma\) constraints including the quasar samples are reported in Figure 9, with the corresponding values in Table 12 for the QSO-xA sample and in Table 13 for the QSO-nUVX sample.
We notice that the model goes to \(\Lambda\)CDM as \(b_{3}\rightarrow+\infty\), analogously to the \(f_{2}\)CDM model, see Eq.(13). We report this quantity as its inverse to avoid divergences in the model. The results obtained using the \(H(z)\)+SNIa sample tend to raise the \(H_{0}\) value in comparison to the other cases. However, as expected, the introduction of the BAO sample tends to lower the \(H_{0}\) values for the priors considered.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Data Set & \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] & \(\Omega_{m}\) & \(1/b_{2}\) & \(M\) & \(\beta^{\prime}\) \\ \hline \(H(z)\) + SNIa + nUVX & \(69.8^{+3.0}_{-4.2}\) & \(0.296^{+0.022}_{-0.023}\) & \(0.00^{+0.13}_{-0.06}\) & \(-19.36^{+0.10}_{-0.12}\) & \(-12.64^{+0.14}_{-0.73}\) \\ \(H(z)\) + SNIa + nUVX + R21 & \(73.2^{+0.7}_{-0.8}\) & \(0.288^{+0.019}_{-0.021}\) & \(0.059^{+0.009}_{-0.085}\) & \(-19.27\pm 0.02\) & \(-12.21^{+0.13}_{-0.70}\) \\ \(H(z)\) + SNIa + nUVX + P18 & \(67.4\pm 0.4\) & \(0.301^{+0.022}_{-0.023}\) & \(0.021^{+0.124}_{-0.028}\) & \(-19.43\pm 0.02\) & \(-12.62^{+0.61}_{-0.60}\) \\ \(H(z)\) + SNIa + nUVX + F20 & \(69.8\pm 0.6\) & \(0.295\pm 0.021\) & \(0.077^{+0.005}_{-0.076}\) & \(-19.36\pm 0.02\) & \(-12.51^{+0.60}_{-0.60}\) \\ \(H(z)\) + SNIa + nUVX + GAIA & \(73.8^{+0.9}_{-1.1}\) & \(0.285\pm 0.021\) & \(0.018^{+0.113}_{-0.017}\) & \(-19.24\pm 0.03\) & \(-12.63^{+0.48}_{-0.63}\) \\ \(H(z)\) + SNIa + nUVX + ACT & \(68.0\pm 1.0\) & \(0.299^{+0.021}_{-0.022}\) & \(0.00^{+0.14}_{-0.06}\) & \(-19.41\pm 0.03\) & \(-12.52^{+0.68}_{-0.47}\) \\ \hline \(H(z)\) + SNIa + BAO + nUVX & \(67.2^{+0.8}_{-0.7}\) & \(0.317^{+0.013}_{-0.012}\) & \(0.071^{+0.007}_{-0.007}\) & \(-19.43\pm 0.02\) & \(-10.83^{+1.50}_{-1.51}\) \\ \(H(z)\) + SNIa + BAO + nUVX + R21 & \(70.2^{+0.6}_{-0.6}\) & \(\left(294.8^{+8.9}_{-1.0}\right)^{-3}\times 1^{-3}\) & \(0.025^{+0.092}_{-0.024}\) & \(-19.35\pm 0.02\) & \(-10.85^{+1.02}_{-1.26}\) \\ \(H(z)\) + SNIa + BAO + nUVX + P18 & \(67.3\pm 0.3\) & \(0.317^{+0.011}_{-0.010}\) & \(0.030^{+0.126}_{-0.029}\) & \(-19.43\pm 0.01\) & \(-10.52^{+1.15}_{-1.01}\) \\ \(H(z)\) + SNIa + BAO + nUVX + F20 & \(68.8^{+0.4}_{-0.5}\) & \(0.305\pm 0.010\) & \(0.075^{+0.022}_{-0.074}\) & \(-19.39\pm 0.01\) & \(-10.13^{+1.11}_{-1.98}\) \\ \(H(z)\) + SNIa + BAO + nUVX + GAIA & \(69.4\pm 0.6\) & \(0.300^{+0.010}_{-0.011}\) & \(0.054^{+0.075}_{-0.022}\) & \(-19.37\pm 0.02\) & \(-10.34^{+1.14}_{-1.96}\) \\ \(H(z)\) + SNIa + BAO + nUVX + ACT & \(67.5\pm 0.6\) & \(0.316\pm 0.011\) & \(0.055^{+0.102}_{-0.054}\) & \(-19.42\pm 0.02\) & \(-10.34^{+1.17}_{-1.56}\) \\ \hline \end{tabular}
\end{table}
Table 10: \(f_{2}(T)\) model constraints using the: _Top line:_\(H(z)\)+SNIa sample (in the first block), _Below line:_ and with BAO sample, both using QSO-nUVX sample.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Data Set & \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] & \(\Omega_{m}\) & \(1/b_{2}\) & \(M\) \\ \hline \(H(z)\) + SNIa + nAVX & \(67.4^{+1.0}_{-0.9}\) & \(0.296\pm 0.019\) & \(0.00^{+0.15}_{-0.00}\) & \(-19.43\pm 0.03\) \\ \(H(z)\) + SNIa + xA + R21 & \(71.2\pm 0.6\) & \(0.264\pm 0.019\) & \(0.00^{+0.14}_{-0.00}\) & \(-19.33\pm 0.02\) \\ \(H(z)\) + SNIa + xA + P18 & \(67.4\pm 0.4\) & \(0.297\pm 0.019\) & \(0.094^{+0.066}_{-0.091}\) & \(-19.43^{+0.01}_{-0.02}\) \\ \(H(z)\) + SNIa + xA + F20 & \(69.2^{+0.6}_{-0.5}\) & \(0.281\pm 0.017\) & \(0.053^{+0.093}_{-0.051}\) & \(-19.4\pm 0.02\) \\ \(H(z)\) + SNIa + xA + GAIA & \(70.7^{+0.7}_{-0.8}\) & \(0.269\pm 0.020\) & \(0.00^{+0.14}_{-0.00}\) & \(-19.34\pm 0.02\) \\ \(H(z)\) + SNIa + xA + ACT & \(67.7^{+0.7}_{-0.8}\) & \(0.294^{+0.020}_{-0.018}\) & \(0.018^{+0.141}_{-0.014}\) & \(-19.42\pm 0.02\) \\ \hline \(H(z)\) + SNIa + BAO + xA & \(67.9\pm 0.5\) & \(0.298^{+0.012}_{-0.011}\) & \(0.025^{+0.134}_{-0.023}\) & \(-19.41\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + xA + R21 & \(69.55\pm 0.5\) & \(0.293\pm 0.011\) & \(0.00^{+0.10}_{-0.00}\) & \(-19.37^{+0.01}_{-0.02}\) \\ \(H(z)\) + SNIa + BAO + xA + P18 & \(67.3\pm 0.3\) & \(0.314^{+0.012}_{-0.011}\) & \(0.022^{+0.107}_{-0.021}\) & \(-19.43\pm 0.01\) \\ \(H(z)\) + SNIa + BAO + xA + F20 & \(68.6^{+0.4}_{-0.3}\) & \(\left(295.0^{+9.0}_{-9.3}\right)\times
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Dataset & \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] & \(\Omega_{m}\) & \(1/b_{3}\) & \(M\) \\ \hline \(H(z)\) + SNIa & \(69.5^{+3.5}_{-3.7}\) & \(0.294^{+0.024}_{-0.022}\) & \(0.015^{+0.130}_{-0.014}\) & \(-19.37\pm 0.11\) \\ \(H(z)\) + SNIa + R21 & \(73.1\pm 0.8\) & \(0.286^{+0.024}_{-0.023}\) & & \(-19.26\pm 0.03\) \\ \(H(z)\) + SNIa + P18 & \(67.4\pm 0.4\) & \(0.302^{+0.019}_{-0.022}\) & \(0.00^{+0.14}_{-0.00}\) & \(-19.43\pm 0.02\) \\ \(H(z)\) + SNIa + F20 & \(69.8^{+0.6}_{-0.5}\) & \(0.294^{+0.021}_{-0.020}\) & \(0.00^{+0.14}_{-0.00}\) & \(-19.36\pm 0.02\) \\ \(H(z)\) + SNIa + GAIA & \(73.7\pm 1.0\) & \(0.285^{+0.022}_{-0.020}\) & \(0.050^{+0.106}_{-0.049}\) & \(-19.235\pm 0.03\) \\ \(H(z)\) + SNIa + ACT & \(67.7^{+1.2}_{-0.7}\) & \(0.299^{+0.023}_{-0.024}\) & & \(-19.41\pm 0.04\) \\ \hline \hline \(H(z)\) + SNIa + BAO & \(68.1^{+0.5}_{-0.6}\) & \(0.300^{+0.012}_{-0.011}\) & \(0.093^{+0.053}_{-0.091}\) & \(-19.41\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + R21 & \(70.2^{+0.5}_{-0.6}\) & \(0.294^{+0.011}_{-0.010}\) & \(0.001^{+0.091}_{-0.000}\) & \(-19.35\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + P18 & \(67.3^{+0.3}_{-0.4}\) & \(0.318\pm 0.010\) & \(0.030^{+0.087}_{-0.029}\) & \(-19.42\pm 0.01\) \\ \(H(z)\) + SNIa + BAO + F20 & \(68.8^{+0.5}_{-0.4}\) & \(\left(305.5^{+9.8}_{-10.6}\right)\times 10^{-3}\) & \(\left(7.3^{+5.5}_{-6.2}\right)\times 10^{-3}\) & \(-19.38^{+0.01}_{-0.02}\) \\ \(H(z)\) + SNIa + BAO + GAIA & \(69.4\pm 0.5\) & \(\left(294.8^{+10.79}_{-9.7}\right)\times 10^{-3}\) & \(0.062^{+0.042}_{-0.061}\) & \(-19.37\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + ACT & \(67.4^{+0.5}_{-0.6}\) & \(\left(322.3^{+9.9}_{-9.8}\right)\times 10^{-3}\) & \(0.077^{+0.037}_{-0.076}\) & \(-19.42\pm 0.02\) \\ \hline \end{tabular}
\end{table}
Table 11: \(f_{3}(T)\) model results using \(H(z)\) and SNIa datasets. _Below line_: \(f_{3}(T)\) model results using \(H(z)\), SNIa and BAO datasets. We include the analysis with the priors described in Table 1.
Figure 8: 1-2\(\sigma\) C.L results for the \(f_{3}(T)\) model using: _Left:_\(H(z)\) and Pantheon data sets. _Right:_ including BAO. Blue color denotes R21, purple color for GAIA, red color for P18, green color for F20, and yellow color for ACT priors. Additionally, the model was constrained with the sample baselines without prior, here denoted in black color.
Figure 9: 1-2\(\sigma\) C.L results for the \(f_{3}(T)\) model using: _Top left:_\(H(z)\)+SNIa and including the xA sample. _Top right:_\(H(z)\)+SNIa+BAO and including the xA sample. _Bottom left:_\(H(z)\)+SNIa and including the nUVX sample. _Bottom right:_\(H(z)\)+SNIa+BAO and including the nUVX sample. Blue color denotes R21, purple color for GAIA, red color for P18, green color for F20, and yellow color for ACT priors. Additionally, the model was constrained with the sample baselines without prior, here denoted in black color.
### Logarithmic Model - \(f_{4}(T)\) model
The 1-2\(\sigma\) C.L. constraints for this model are given in Figure 10, and the cosmological constraints in Table 14. The 1-2-\(\sigma\) constraints including the quasar samples are reported in Figure 11, with the corresponding values in Table 15 for the QSO-xA sample and in Table 16 for the QSO-nUVX sample.
Notice that, due to the nature of this model, from Eq.(2.15) we do not have any free parameter related to \(b_{4}\). As a consequence, the background behaviour of this model cannot by itself display any bias with respect to the \(\Lambda\)CDM model. The case with the \(H(z)\)+SNIa+GAIA prior is the only one that shows a high \(H_{0}\) value.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Dataset & \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] & \(\Omega_{m}\) & \(1/b_{3}\) & \(M\) \\ \hline \(H(z)\) + SNIa + nUVX & \(69.8^{+3.0}_{-4.2}\) & \(0.296^{+0.022}_{-0.023}\) & \(0.00^{+0.14}_{-0.00}\) & \(-19.36^{+0.10}_{-0.12}\) & \(-12.64^{+0.14}_{-0.73}\) \\ \(H(z)\) + SNIa + nUVX + R21 & \(73.2^{+0.7}_{-0.8}\) & \(0.288^{+0.019}_{-0.021}\) & \(0.059^{+0.009}_{-0.038}\) & \(-19.27\pm 0.02\) & \(-12.21^{+0.13}_{-0.70}\) \\ \(H(z)\) + SNIa + nUVX + P18 & \(67.4\pm 0.4\) & \(0.301^{+0.022}_{-0.022}\) & \(0.021^{+0.022}_{-0.028}\) & \(-19.43\pm 0.02\) & \(-12.62^{+0.51}_{-0.65}\) \\ \(H(z)\) + SNIa + nUVX + F20 & \(69.8\pm 0.6\) & \(0.295^{+0.021}_{-0.021}\) & \(0.077^{+0.065}_{-0.075}\) & \(-19.36\pm 0.02\) & \(-12.51^{+0.66}_{-0.49}\) \\ \(H(z)\) + SNIa + nUVX + GAIA & \(73.8^{+9}_{-1.1}\) & \(0.285\pm 0.021\) & \(0.018^{+0.133}_{-0.017}\) & \(-19.24\pm 0.03\) & \(-12.63^{+0.48}_{-0.63}\) \\ \(H(z)\) + SNIa + nUVX + ACT & \(68.0\pm 1.0\) & \(0.299^{+0.021}_{-0.022}\) & \(0.00^{+0.14}_{-0.00}\) & \(-19.41\pm 0.03\) & \(-12.52^{+0.68}_{-0.47}\) \\ \hline \(H(z)\) + SNIa + BAO + nUVX & \(67.7^{+0.6}_{-0.8}\) & \(0.318\pm 0.012\) & \(0.061^{+0.058}_{-0.060}\) & \(-19.43\pm 0.02\) & \(-11.48^{+0.19}_{-0.11}\) \\ \(H(z)\) + SNIa + BAO + nUVX + R21 & \(70.1\pm 0.2\) & \(0.294\pm 0.010\) & \(0.067^{+0.022}_{-0.066}\) & \(-19.39\pm 0.02\) & \(-12.61^{+0.45}_{-0.73}\) \\ \(H(z)\) + SNIa + BAO + nUVX + P18 & \(67.3^{+0.4}_{-0.3}\) & \(0.318\pm 0.012\) & \(0.00^{+0.12}_{-0.00}\) & \(-19.43\pm 0.01\) & \(-11.39^{+0.77}_{-0.36}\) \\ \(H(z)\) + SNIa + BAO + nUVX + F20 & \(68.8\pm 0.4\) & \(0.304^{+0.012}_{-0.011}\) & \(0.020^{+0.085}_{-0.018}\) & \(-19.39\pm 0.02\) & \(-11.77^{+0.77}_{-0.32}\) \\ \(H(z)\) + SNIa + BAO + nUVX + GAIA & \(69.5\pm 0.6\) & \(0.299^{+0.011}_{-0.010}\) & \(0.018^{+0.017}_{-0.12}\) & \(-19.37\pm 0.02\) & \(-11.37^{+0.88}_{-0.88}\) \\ \(H(z)\) + SNIa + BAO + nUVX + ACT & \(67.4^{+0.6}_{-0.5}\) & \(0.316\pm 0.011\) & \(0.00^{+0.12}_{-0.00}\) & \(-19.42\pm 0.02\) & \(-11.45^{+0.11}_{-0.10}\) \\ \hline \end{tabular}
\end{table}
Table 13: \(f_{3}(T)\) model constraints using the: _Top line:_\(H(z)\)+SNIa sample (in the first block), _Below line:_ and with BAO sample, both using QSO-nUVX sample.
\begin{table}
\begin{tabular}{c|c c c c} \hline \hline Dataset & \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] & \(\Omega_{m}\) & \(1/b_{3}\) & \(M\) \\ \hline \(H(z)\) + SNIa + xA & \(67.4\pm 1.0\) & \(0.297^{+0.020}_{-0.018}\) & \((6.9^{+108.4}_{-6.1})\times 10^{-3}\) & \(-19.43\pm 0.03\) \\ \(H(z)\) + SNIa + xA + R21 & \(71.2\pm 0.6\) & \(0.265^{+0.015}_{-0.017}\) & \(0.028^{+0.077}_{-0.028}\) & \(-19.33\pm 0.02\) \\ \(H(z)\) + SNIa + xA + P18 & \(67.4^{+0.3}_{-0.4}\) & \(0.299^{+0.017}_{-0.020}\) & \(0.033^{+0.038}_{-0.030}\) & \(-19.43\pm 0.02\) \\ \(H(z)\) + SNIa + xA + F20 & \(69.2\pm 0.5\) & \(0.281^{+0.016}_{-0.016}\) & \(0.083^{+0.020}_{-0.080}\) & \(-19.38\pm 0.02\) \\ \(H(z)\) + SNIa + xA + GAIA & \(70.7\pm 0.7\) & \(0.269^{+0.016}_{-0.016}\) & \(0.055^{+0.051}_{-0.035}\) & \(-19.34\pm 0.02\) \\ \(H(z)\) + SNIa + xA + ACT & \(67.7^{+0.6}_{-0.8}\) & \(0.294^{+0.020}_{-0.021}\) & \(0.032^{+0.079}_{-0.031}\) & \(-19.42^{+0.02}_{-0.03}\) \\ \hline \(H(z)\) + SNIa + BAO + xA + R21 & \(69.6\pm 0.5\) & \((293.0^{+10.1}_{-0.7})\times 10^{-3}\) & \(0.052^{+0.043}_{-0.051}\) & \(-19.37\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + xA + P18 & \(67.3\pm 0.3\) & \(0.315^{+0.011}_{-0.010}\) & \(0.064^{+0.044}_{-0.063}\) & \(-19.43\pm 0.01\) \\ \(H(z)\) + SNIa + BAO + xA + F20 & \(68.6^{+0.4}_{-0.5}\) & \(0.302^{+0.012}_{-0.011}\) & \(0.038^{+0.062}_{-0.037}\) & \(-19.39\pm 0.01\) \\ \(H(z)\) + SNIa + BAO + xA + GAIA & \(68.9^{+0.5}_{-0.6}\) & \((298.9^{+107.0}_{-10.6})\times 10^{-3}\) & \(0.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Dataset & \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] & \(\Omega_{m}\) & \(M\) \\ \hline \(H(z)\) + SNIa & \(70.1^{+3.9}_{-3.3}\) & \(0.202^{+0.014}_{-0.019}\) & \(-19.30^{+0.08}_{-0.13}\) \\ \(H(z)\) + SNIa + R21 & \(73.2^{+0.7}_{-0.8}\) & \(0.195^{+0.016}_{-0.015}\) & \(-19.25^{+0.03}_{-0.02}\) \\ \(H(z)\) + SNIa + P18 & \(67.4\pm 0.04\) & \(0.205\pm 0.016\) & \(-19.42^{+0.01}_{-0.02}\) \\ \(H(z)\) + SNIa + F20 & \(69.8^{+0.6}_{-0.5}\) & \(0.200^{+0.016}_{-0.015}\) & \(-19.35\pm 0.02\) \\ \(H(z)\) + SNIa + GAIA & \(73.9^{+0.9}_{-1.0}\) & \(0.194^{+0.016}_{-0.014}\) & \(-19.23\pm 0.03\) \\ \(H(z)\) + SNIa + ACT & \(68.0^{+1.1}_{-1.0}\) & \(0.203^{+0.017}_{-0.016}\) & \(-19.40^{+0.04}_{-0.03}\) \\ \hline \hline \(H(z)\) + SNIa + BAO & \(64.8\pm 0.5\) & \(\left(264.0^{+9.7}_{-9.9}\right)\times 10^{-3}\) & \(-19.48\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + R21 & \(68.2\pm 0.6\) & \(\left(258.6^{+10.3}_{-8.6}\right)\times 10^{-3}\) & \(-19.37\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + P18 & \(66.4^{+0.4}_{-0.3}\) & \(\left(273.8^{+8.8}_{-10.6}\right)\times 10^{-3}\) & \(-19.42\pm 0.01\) \\ \(H(z)\) + SNIa + BAO + F20 & \(67.4^{+0.4}_{-0.5}\) & \(\left(265.3^{+10.2}_{-8.8}\right)\times 10^{-3}\) & \(-19.39\pm 0.01\) \\ \(H(z)\) + SNIa + BAO + GAIA & \(66.2^{+0.6}_{-0.5}\) & \(\left(289.9^{+9.3}_{-8.8}\right)\times 10^{-3}\) & \(-19.42\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + ACT & \(64.9^{+0.5}_{-0.7}\) & \(\left(284.4^{+12.0}_{-9.1}\right)\times 10^{-3}\) & \(-19.47\pm 0.02\) \\ \hline \end{tabular}
\end{table}
Table 14: \(f_{4}(T)\) model results using \(H(z)\) and SNIa datasets. _Below line_: \(f_{4}(T)\) model results using \(H(z)\), SNIa and BAO datasets. We include the analysis with the priors described in Table 1.
Figure 10: 1-2\(\sigma\) C.L results for the \(f_{4}(T)\) model using: _Left:_\(H(z)\) and Pantheon data sets. _Right:_ including BAO. Blue color denotes R21, purple color for GAIA, red color for P18, green color for F20, and yellow color for ACT priors. Additionally, the model was constrained with the sample baselines without prior, here denoted in black color.
Figure 11: 1-2\(\sigma\) C.L results for the \(f_{4}(T)\) model using: _Top left:_\(H(z)\)+SNIa and including the xA sample. _Top right:_\(H(z)\)+SNIa+BAO and including the xA sample. _Bottom left:_\(H(z)\)+SNIa and including the nUVX sample. _Bottom right:_\(H(z)\)+SNIa+BAO and including the nUVX sample. Blue color denotes R21, purple color for GAIA, red color for P18, green color for F20, and yellow color for ACT priors. Additionally, the model was constrained with the sample baselines without prior, here denoted in black color.
## 5 Conclusions
In this paper, we presented new constraints on \(f(T)\) models by adding to the current local observables two calibrated quasar datasets on the ultraviolet, X-ray, and optical planes. Our goal was to implement these high-redshift QSO samples, based on flux distributions calibrated up to \(z\sim 7\), since the observed non-linear relation between the UV and the X-ray luminosity allows us to extend the distance ladder method to the higher-redshift QSO regime. The calibrations were performed using five \(H_{0}\) prior scenarios; in addition, we included an analysis without an \(H_{0}\) prior.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Dataset & \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] & \(\Omega_{m}\) & \(M\) & \(\beta^{\prime}\) \\ \hline \(H(z)\) + SNIa + nUVX & \(71.4^{+2.4}_{-2.0}\) & \(\left(196.3^{+6.5}_{-6.1}\right)\times 10^{-3}\) & \(-19.31\pm 0.03\) & \(-10.7^{+8.5}_{-7.2}\) \\ \(H(z)\) + SNIa + nUVX + R21 & \(73.3^{+0.7}_{-0.8}\) & \(\left(197.3\pm 2.7\right)\times 10^{-3}\) & \(-19.25\pm 0.02\) & \(-10.7^{+0.7}_{-0.9}\) \\ \(H(z)\) + SNIa + nUVX + P18 & \(67.4\pm 0.4\) & \(\left(198.0^{+6.3}_{-6.4}\right)\times 10^{-3}\) & \(-19.43\pm 0.01\) & \(-11.3^{+0.9}_{-0.6}\) \\ \(H(z)\) + SNIa + nUVX + F20 & \(69.8^{+0.5}_{-0.6}\) & \(\left(1972.1^{+8.7}_{-9.3}\right)\times 10^{-4}\) & \(-19.35\pm 0.02\) & \(-10.8^{+0.7}_{-0.8}\) \\ \(H(z)\) + SNIa + nUVX + GAIA & \(73.8^{+0.9}_{-1.0}\) & \(\left(195.9^{+3.0}_{-0.0}\right)\times 10^{-3}\) & \(-19.23\pm 0.03\) & \(-10.4^{+0.7}_{-0.9}\) \\ \(H(z)\) + SNIa + nUVX + ACT & \(68.2^{+0.9}_{-1.2}\) & \(\left(197.6^{+4.5}_{-5.3}\right)\times 10^{-3}\) & \(-19.40^{+0.03}_{-0.04}\) & \(-10.9\pm 0.8\) \\ \hline \(H(z)\) + SNIa + BAO + nUVX & \(63.7\pm 0.7\) & \(\phantom{-}0.295\pm 0.012\) & \(-19.50\pm 0.02\) & \(-10.7^{+1.61}_{-1.87}\) \\ \(H(z)\) + SNIa + BAO + nUVX + R21 & \(68.2^{+0.5}_{-0.6}\) & \(\left(260.0\pm 9.5\right)\times 10^{-3}\) & \(-19.37\pm 0.02\) & \(-10.7^{+1.25}_{-1.24}\) \\ \(H(z)\) + SNIa + BAO + nUVX + P18 & \(66.5^{+0.3}_{-0.4}\) & \(\left(272.9^{+9.2}_{-9.5}\right)\times 10^{-3}\) & \(-19.42\pm 0.01\) & \(-10.65^{+1.52}_{-1.87}\) \\ \(H(z)\) + SNIa + BAO + nUVX + F20 & \(67.3^{+0.5}_{-0.4}\) & \(\left(266.5^{+9.8}_{-0.8}\right)\times 10^{-3}\) & \(-19.39\pm 0.02\) & \(-11.12^{+1.67}_{-1.75}\) \\ \(H(z)\) + SNIa + BAO + nUVX + GAIA & \(66.9\pm 0.6\) & \(\left(268.9^{+10.2}_{-0.6}\right)\times 10^{-3}\) & \(-19.40\pm 0.02\) & \(-10.24^{+1.62}_{-1.71}\) \\ \(H(z)\) + SNIa + BAO + nUVX + ACT & \(64.8^{+0.6}_{-0.5}\) & \(\phantom{-}0.285^{+0.011}_{-0.010}\) & \(-19.46\pm 0.02\) & \(-12.11^{+1.86}_{-1.50}\) \\ \hline \end{tabular}
\end{table}
Table 16: \(f_{4}(T)\) model constraints using the \(H(z)\)+SNIa sample (first block) and adding the BAO sample (second block), both combined with the QSO-nUVX sample.
\begin{table}
\begin{tabular}{c|c c c} \hline \hline Dataset & \(H_{0}\) [km s\({}^{-1}\) Mpc\({}^{-1}\)] & \(\Omega_{m}\) & \(M\) \\ \hline \(H(z)\) + SNIa + xA & \(67.3\pm 1.0\) & \(0.201^{+0.017}_{-0.014}\) & \(-19.43\pm 0.03\) \\ \(H(z)\) + SNIa + xA + R21 & \(71.1^{+0.7}_{-0.5}\) & \(0.175^{+0.012}_{-0.013}\) & \(-19.32\pm 0.02\) \\ \(H(z)\) + SNIa + xA + P18 & \(67.4\pm 0.4\) & \(0.202^{+0.014}_{-0.013}\) & \(-19.42\pm 0.01\) \\ \(H(z)\) + SNIa + xA + F20 & \(69.1\pm 0.5\) & \(0.188^{+0.014}_{-0.012}\) & \(-19.375\pm 0.02\) \\ \(H(z)\) + SNIa + xA + GAIA & \(70.6^{+0.7}_{-0.8}\) & \(0.178^{+0.014}_{-0.013}\) & \(-19.34\pm 0.02\) \\ \(H(z)\) + SNIa + xA + ACT & \(67.5^{+0.8}_{-0.7}\) & \(0.199^{+0.016}_{-0.013}\) & \(-19.42\pm 0.02\) \\ \hline \(H(z)\) + SNIa + BAO + xA & \(64.2\pm 0.6\) & \(0.291^{+0.011}_{-0.013}\) & \(-19.49\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + xA + R21 & \(67.6\pm 0.5\) & \(\phantom{-}\left(257.7^{+9.1}_{-10.0}\right)\times 10^{-3}\) & \(-19.39\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + xA + P18 & \(66.4^{+0.3}_{-0.4}\) & \(\left(269.0^{+9.7}_{-8.9}\right)\times 10^{-3}\) & \(-19.42\pm 0.01\) \\ \(H(z)\) + SNIa + BAO + xA + F20 & \(67.0\pm 0.4\) & \(\left(262.7^{+9.6}_{-9.0}\right)\times 10^{-3}\) & \(-19.41\pm 0.01\) \\ \(H(z)\) + SNIa + BAO + xA + GAIA & \(66.5^{+0.6}_{-0.5}\) & \(0.267\pm 0.010\) & \(-19.42\pm 0.02\) \\ \(H(z)\) + SNIa + BAO + xA + ACT & \(65.0\pm 0.5\) & \(0.282^{+0.011}_{-0.010}\) & \(-19.46\pm 0.02\) \\ \hline \end{tabular}
\end{table}
Table 15: \(f_{4}(T)\) model constraints using the \(H(z)\)+SNIa sample (first block) and adding the BAO sample (second block), both combined with the QSO-xA sample.
The methodology discussed presents new constraints on the \(\Lambda\)CDM model and four TEGR-inspired \(f(T)\) models using a baseline constructed with SN + CC & BAO, together with the calibrated QSO datasets. Both baselines (SN + CC + BAO and SN + CC + QSO) were employed to analyse the impact of including the QSO samples, as a study aimed at relaxing the current statistical tension on \(H_{0}\). In this matter, we found that our estimations provide the possibility to raise the \(H_{0}\) value and alleviate the tension at the \(2\sigma\) level by using QSO ultraviolet measurements.
We show that the \(f(T)\) models under investigation generically produce high values of the Hubble constant, in closer agreement with the more recent values reported in survey publications. Moreover, there is better consistency for the matter density parameter in all cases as compared with the \(\Lambda\)CDM analog. This makes a stronger case for these models and their potential ability to agree with observational data.
RS is supported by the CONACyT National Grant. CE-R acknowledges the Royal Astronomical Society as FRAS 10147 and is supported by PAPIIT UNAM Project TA100122. The authors would like to acknowledge networking support by the COST Action CA18108 and funding support from Cosmology@MALTA which is supported by the University of Malta. This research has been carried out using computational facilities procured through the Cosmostatistics National Group project and the European Regional Development Fund, Project No. ERDF-080 "A supercomputing laboratory for the University of Malta". The authors also acknowledge funding from "The Malta Council for Science and Technology" in project IPAS-2020-007. This article is based upon work from COST Action CA21136 Addressing observational tensions in cosmology with systematics and fundamental physics (CosmoVerse) supported by COST (European Cooperation in Science and Technology).
|
2309.14309 | Multiple Different Black Box Explanations for Image Classifiers | Existing explanation tools for image classifiers usually give only a single
explanation for an image's classification. For many images, however, both
humans and image classifiers accept more than one explanation for the image
label. Thus, restricting the number of explanations to just one is arbitrary
and severely limits the insight into the behavior of the classifier. In this
paper, we describe an algorithm and a tool, MultiReX, for computing multiple
explanations of the output of a black-box image classifier for a given image.
Our algorithm uses a principled approach based on causal theory. We analyse its
theoretical complexity and provide experimental results showing that MultiReX
finds multiple explanations on 96% of the images in the ImageNet-mini
benchmark, whereas previous work finds multiple explanations only on 11%. | Hana Chockler, David A. Kelly, Daniel Kroening | 2023-09-25T17:28:28Z | http://arxiv.org/abs/2309.14309v3 | # Multiple Different Explanations for Image Classifiers
###### Abstract
Existing explanation tools for image classifiers usually give only a single explanation for an image. For many images, however, both humans and image classifiers accept more than one explanation for the image label. Thus, restricting the number of explanations to just one severely limits the insight into the behavior of the classifier. In this paper, we describe an algorithm and a tool, ReX, for computing multiple explanations of the output of a black-box image classifier for a given image. Our algorithm uses a principled approach based on causal theory. We analyse its theoretical complexity and provide experimental results showing that ReX finds multiple explanations on \(7\) times more images than the previous work on the ImageNet-mini benchmark.
\({}^{1}\) King's College London, UK
hana.chockler,[email protected]
\({}^{2}\)Amazon, UK
[email protected]
\({}^{*}\)This work was done prior to joining Amazon.
\({}^{*}\)_If one is investigating things that are not directly perceptible, and if one sees that several explanations are possible, it is reckless to make a dogmatic pronouncement concerning any single one; such a procedure is characteristic of a seer rather than a wise man."_
Diogenes
## 1 Introduction
Neural networks (NN) are now a primary building block of most computer vision systems. The opacity of NNs creates demand for explainability techniques, which attempt to provide insight into why a particular input yields a particular observed output. Beyond increasing a user's confidence in the output, and hence also their trust in the AI system, these insights help to uncover subtle classification errors that are not detectable from the output alone (Chockler, Kroening, and Sun 2021).
The most commonly used definition of an explanation is a part of the input image that is sufficient for the classifier to yield the same label as the original image. Explanations, according to this definition, are obviously not unique. Images often have several explanations for their classification as illustrated in Figure 1. Initial results in the study of multiple explanations by Shitole et al. (2021) demonstrate that images have multiple explanations in all but degenerate cases.
There is a growing body of work analysing the human perception of images (Fan et al., 2020; van Dyck et al., 2021) and how this differs from what NNs do. Roughly speaking, humans do not detect small differences. In particular, there is little sense in checking the effect of changing one pixel or any small number of pixels, as a new explanation would be indistinguishable to the human eye from the previous one. Therefore, we focus our effort on the search for _sufficiently different_ explanations (see Section 4.1).
The prevalence of multiple explanations suggests that algorithms for computing more than one explanation are essential for understanding image classifiers and uncovering subtle classification errors. The image in Figure 2 is classified by VGG19 as a tennis racket, with the first explanation being indeed a part of the racket. However, the second explanation is the player's shorts, uncovering a misclassification. Yet, existing techniques provide only one explanation of an output of the classifier. The one notable exception is the tool SAG (Shitole et al., 2021), outlined in Section 2, which constructs multiple explanations by using a beam search over a fixed grid. However, as SAG searches an exponential space (the number of combinations of cells of the grid is exponential), it either runs into the exponential explosion problem or drops a part of the state space. This is hardly surprising; as
we prove in Section 3.3, the problem of computing multiple explanations is intractable. Specifically, we present an exponential upper bound on the number of possible explanations and demonstrate that this bound is tight.
In view of these theoretical results, we present ReX, an approximation algorithm and a tool for computing multiple explanations for black-box image classifiers. Using the formal mathematical theory of actual causality, ReX computes a ranking of the pixels of the image. This ranking is used to construct a refined search landscape (Figure 5), which ReX explores in order to generate multiple, different explanations. Unlike SAG, our search is not limited to exploration from highly ranked parts of the image and allows even unlikely (low-ranking) regions to be fruitfully exploited for explanations. Whereas SAG uses a fixed square and beam width for its search, ReX expands and contracts its search width to minimise explanation size. For instance, for the image in Figure 1, ReX produces \(4\) explanations whereas Sag produces only \(2\).
In Section 5 we experimentally compare ReX with SAG and with DeepCover on standard benchmarks. The results demonstrate that ReX produces finer-grained explanations and is superior to SAG with respect to the number of sufficiently different explanations it produces. We provide the details of the benchmark sets, the models, and the main results in the paper. The tool, all datasets, and the full set of results are submitted as the supplementary material.
## 2 Related Work
There is a large body of work on algorithms for computing one explanation for a given output of an image classifier. They can be largely grouped into two categories: propagation and perturbation. Propagation-based explanation methods back-propagate a model's decision to the input layer to determine the weight of each input feature for the decision [12, 1, 1, 13, 14]. Grad-Cam only needs one backward pass and propagates the class-specific gradient into the final convolutional layer of a DNN to coarsely highlight important regions of an input image [10].
Perturbation-based explanation approaches introduce perturbations to the input space directly in search for an explanation. Shap (SHapley Additive exPlanations) computes Shapley values of different parts of the input and uses them to rank the features of the input according to their importance [10]. Lime constructs a small neural network to label the original input and its neighborhood of perturbed images and uses this network to estimate the importance of different parts of the input [15, 16, 17, 18]. Anchors uses a similar approach to find parts of the inputs sufficient for the classification, regardless of the values of other parts [15]. Finally, DeepCover ranks elements of the image according to their importance for the classification and uses this ranking to greedily construct a small explanation. The DeepCover ranking procedure in [14] uses SFL, and is replaced in [16] by the approximate computation of causal responsibility.
Work on calculating more than one explanation for a given classification outcome is in its infancy. To the best of our knowledge, there is only one algorithm and tool that computes multiple explanations of image classifiers - Sag, described in [17, 18]. The motivation for Sag is the same as ours: increasing human confidence and trust as well as our understanding of image classification algorithms.
Sag partitions the input image into a fixed grid (by default \(7\times 7\)). A beam search algorithm is used to search for the initial \(w\) (i.e., the beam width) root nodes in the graph. The search starts with the \(w\) distinct highest-weighted image regions. Their children nodes are perturbed until the resulting mask causes an unacceptable drop in the label's probability. Explanations are identified from the SAG as multiple minimal regions of the input image sufficient for the correct classification with high confidence. Explanations are presented in the form of a directed acyclic graph, or Structured Attention Graph (SAG). Diversity among the multiple explanations is enforced by bounding the maximal overlap in terms of the number of regions shared between explanations.
Figure 2: The image (a) is classified as ‘tennis racket’. Its disjoint explanations found by ReX are in (b) and (c). The first explanation is a part of a racket, and the second explanation uncovers a misclassification, as it is the player’s shorts. (d) is the saliency landscape.
## 3 Theoretical Results
In this section we describe the theoretical foundations of our approach.
### Background on Actual Causality
Our definitions are based on the framework of _actual causality_ introduced by Halpern and Pearl (2005). The reader is referred to that paper and to Halpern (2019) for an updated overview and more information on actual causality. Due to the lack of space, we omit formal definitions and instead discuss the intuition informally. This is sufficient for our purposes, as we explain below.
The definition of an _actual cause_ is based on the concept of _causal models_, which consist of a set of variables, a range of each variable, and structural equations describing the dependencies between the variables. Actual causes are defined with respect to a given causal model, a given assignment to the variables of the model (a context), and a propositional logic formula that holds in the model in this context.
_Actual causality_ extends the simple counterfactual reasoning (Hume 1739) by considering the effect of _interventions_, which are changes of the current setting. Roughly speaking, a subset of variables \(X\) and their values in a given context is an actual cause of a Boolean formula \(\varphi\) being True if there exists a change in the values of other variables that creates a counterfactual dependency between the values of \(X\) and \(\varphi\) (that is, if we change the values of variables in \(X\), \(\varphi\) would be falsified). The formal definition by Halpern and Pearl (2005) and its modifications, the latest of which is by Halpern (2015), are far more complex due to potential dependencies between the variables and to considering causes of more than one element. In our setup, where we are only interested in singleton causes and in interventions only on the input variables, all versions of the definition of (a part of) an actual cause are equivalent to our definition under the assumption of independence between the input variables.
_Responsibility_, as defined by Chockler and Halpern (2004) and adapted to the modified definition of causality by Halpern (2015), is a quantification of causality, attributing to each actual cause its _degree of responsibility_, which is derived from the size of a smallest contingency required to create a counterfactual dependence. The degree of responsibility is defined as \(1/(k+1)\), where \(k\) is the size of a smallest contingency. The degree of responsibility of counterfactual causes is therefore \(1\) (as \(k=0\)), and the degree of responsibility of variables that have no causal influence on \(\varphi\) is \(0\), as \(k\) is taken to be \(\infty\). In general, the degree of responsibility is always between \(0\) and \(1\), with higher values indicating a stronger causal dependence.
### Causes and explanations in image classification
We follow the approach by Chockler, Kroening, and Sun (2021) (CKS from now on) to defining causes and constructing explanations in image classification. We view the NN as a black-box causal model in the Halpern and Pearl (2005)-sense of the word, with its inputs being the individual pixels of an input image. The variables are defined as Boolean, with the values being the original color and the masking color (as shown by CKS, a specific masking color has almost no effect on the results). Following Halpern (2019), we further augment the model by limiting the allowed interventions to masking the colors of the input's pixels. Moreover, we are only interested in singleton causes (recall that we assume independence between the input variables).
**Definition 1** (Singleton cause for image classification, CKS).: _For an image \(x\) classified by the NN as \(f(x)=o\), a pixel \(p_{i}\) of \(x\) is a cause of \(o\) iff there exists a subset \(P_{j}\) of pixels of \(x\) such that the following conditions hold:_
1. \(p_{i}\not\in P_{j}\)_;_
2. _changing the color of any subset_ \(P^{\prime}_{j}\subseteq P_{j}\) _to the masking color does not change the classification;_
3. _changing the color of_ \(P_{j}\) _and the color of_ \(p_{i}\) _to the masking color changes the classification._
_We call such \(P_{j}\) a witness to the fact that \(p_{i}\) is a cause of \(x\) being classified as \(o\)._
**Definition 2** (Simplified responsibility, CKS).: _The degree of responsibility \(r(p_{i},x,o)\) of \(p_{i}\) for \(x\) being classified as \(o\) is defined as \(1/(k+1)\), where \(k\) is the size of the smallest witness set \(P_{j}\) for \(p_{i}\). If \(p_{i}\) is not a cause, \(k\) is defined as \(\infty\), and hence \(r(p_{i},x,o)=0\). If changing the color of \(p_{i}\) alone to the masking color results in a change in the classification, we have \(P_{j}=\emptyset\), and hence \(r(p_{i},x,o)=1\)._
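To make Definitions 1 and 2 concrete, the following sketch brute-forces them for a tiny image and an arbitrary black-box classifier. The dictionary encoding of an image, the helper names, and the exhaustive search over witness sets are illustrative assumptions only (the tool itself relies on the approximate ranking discussed below); the search is exponential, in line with Corollary 1.

```python
from itertools import combinations

def mask(image, pixels, masking_color=0):
    """Copy of `image` (a dict pixel -> color) with the given pixels set to the masking color."""
    out = dict(image)
    for p in pixels:
        out[p] = masking_color
    return out

def responsibility(classify, image, p, masking_color=0):
    """Degree of responsibility of pixel p (Definition 2), via exhaustive search for a smallest witness."""
    label = classify(image)
    others = [q for q in image if q != p]
    for k in range(len(others) + 1):                      # smallest witness first
        for witness in combinations(others, k):
            # Condition 2: masking any subset of the witness keeps the classification.
            keeps = all(classify(mask(image, s, masking_color)) == label
                        for r in range(k + 1) for s in combinations(witness, r))
            # Condition 3: masking the witness together with p changes the classification.
            if keeps and classify(mask(image, witness + (p,), masking_color)) != label:
                return 1.0 / (k + 1)
    return 0.0                                            # p is not a cause of the classification
```

For instance, a pixel whose masking alone flips the label admits the empty witness and therefore has responsibility \(1\), matching Definition 2.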
**Lemma 1** (Cks).: _Definition 1 is equivalent to the definition of an actual cause when input variables in the model are independent of each other, and we do not consider interventions on internal variables._
**Corollary 1** (Cks).: _The problem of detecting causes in image classification is NP-complete._
**Definition 3** (Explanation for image classification, CKS).: _An explanation in image classification is a minimal subset of pixels of a given input image that is sufficient for the NN to classify the image, where "sufficient" is defined as containing only this subset of pixels from the original image, with the other pixels set to the masking color._
### New Theoretical Results
In this section we prove new complexity bounds for computing multiple explanations.
CKS observe that the precise computation of an explanation in our setting is intractable, as the problem is equivalent to an earlier definition of explanations in binary causal models, which is DP-complete (Eiter and Lukasiewicz 2004). DP is the class of languages that are an intersection of a language in NP and a language in co-NP and contains, in particular, the languages of unique solutions to NP-complete problems (Papadimitriou 1984). The following lemma shows that computing a second (or any subsequent) explanation is not easier than computing the first one. For the purposes of proving theoretical results, a subsequent explanation is one that differs from the previous ones in at least one pixel; the algorithm in Section 4 constructs spatially different explanations, more suitable to the human perception.
**Lemma 2**.: _Given an explanation, constructing a different one is DP-complete._
Proof Sketch.: Membership in DP is straightforward. For the hardness part we show a reduction from the problem of computing an explanation. Given an image \(x\) classified as \(\mathcal{N}(x)\), we construct a _chimera_ image from \(x\) and an existing explanation of \(\mathcal{N}(x)\) (taken from another image) attached to it without obscuring it. Then, our existing explanation is the first explanation to the image being classified as \(\mathcal{N}(x)\), and a second one is an explanation of the classification of the original image.
We note that the chimera image constructed in the reduction does not have a rectangular shape; however the complexity of the explanation problem does not depend on the shape of the input image.
CKS use a greedy approach to constructing approximate explanations, based on scanning the ranked list of pixels \(pixel\_ranking\). We note that the construction of the ranked list is intractable as well (NP-complete), even when the ranking is based on Definition 2, rather than the general definition of responsibility by Chockler and Halpern (2004). Hence, CKS construct an approximate ranked list by partitioning the set in iterations and computing approximate degrees of responsibility for each partition while discarding low-responsibility elements.
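A minimal sketch of this greedy construction, assuming a pixel ranking ordered by decreasing (approximate) responsibility and a black-box classify function; the trailing pruning pass is a simplified stand-in for the minimality requirement of Definition 3.

```python
def greedy_explanation(classify, image, pixel_ranking, masking_color=0):
    """Scan the ranked pixels, keep adding them until the restricted image preserves the label,
    then try to drop low-ranked pixels again while the label is preserved."""
    label = classify(image)

    def restrict(pixels):
        # keep only `pixels` from the original image, mask everything else
        return {p: (image[p] if p in pixels else masking_color) for p in image}

    kept = set()
    for p in pixel_ranking:                               # most responsible pixels first
        kept.add(p)
        if classify(restrict(kept)) == label:
            break
    for p in sorted(kept, key=pixel_ranking.index, reverse=True):
        if classify(restrict(kept - {p})) == label:       # still sufficient without p?
            kept.discard(p)
    return kept
```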
However, this approach does not help in reducing the complexity of computing many explanations, as the number of explanations for a given image can be very large, as proven in the following lemma.
**Lemma 3**.: _The number of explanations for an input image is bounded from above by \(\binom{n}{\lfloor n/2\rfloor}\), and this bound is tight._
Proof.: Since an explanation of the classification of \(x\) is a minimal subset of \(x\) that is sufficient to result in the same classification, the number of explanations is bounded by _Sperner's theorem_, which bounds the size \(S\) of a largest family of finite sets none of which contains any other set in the family (Anderson, 1987). By Sperner's theorem, \(S\leq\binom{n}{\lfloor n/2\rfloor}\), and the bound is reached when all subsets are of size \(\lfloor n/2\rfloor\). The following example demonstrates an input on which this bound is reached.
Consider a binary classifier that determines whether an input image of size \(n\) has at least \(\lfloor n/2\rfloor\) green-coloured pixels and an input image that is completely green. Then, each explanation is of size \(\lfloor n/2\rfloor\), and there are \(\binom{n}{\lfloor n/2\rfloor}\) explanations.
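For small \(n\), the bound in this example can be checked directly by enumeration; in the sketch below an "image" is just the set of surviving green pixel indices, an encoding chosen purely for brevity.

```python
from itertools import combinations
from math import comb

n = 6                                                     # a fully green image with n pixels
classify = lambda kept: len(kept) >= n // 2               # accept iff at least floor(n/2) green pixels remain

explanations = []
for k in range(n + 1):                                    # enumerate candidate subsets by increasing size
    for subset in combinations(range(n), k):
        if classify(subset) and not any(set(e) <= set(subset) for e in explanations):
            explanations.append(subset)                   # minimal: contains no smaller sufficient subset

assert len(explanations) == comb(n, n // 2)               # 20 explanations for n = 6, matching Lemma 3
```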
Finally, we note that given a set of explanations (sets of pixels) and an overlap bound, finding a subset of a given number of explanations in which elements overlap for no more than the bound is NP-hard even assuming that constructing and training a binary classifier is \(O(1)\). Indeed, let \(\mathcal{N}\) be a binary classifier that determines whether an input graph \(G=\langle V,E\rangle\) contains any connected components of size more than \(1\). An explanation would be a node \(v\in V\) with its adjacent edges. Now, \(G\) contains an independent set of size \(n\) iff there exist \(n\) disjoint explanations of the non-empty label of \(G\) given by the classifier, thus proving NP-hardness of the problem.
## 4 Multiple Explanations
In this section we present our algorithm for computing multiple, different explanations. As shown in Section 3, the problem is intractable, motivating the need for efficient and accurate approximation algorithms. Due to the lack of space, some details and algorithms have been moved to the appendix (see the supplementary material).
### What is a "Different Explanation"?
As the goal of constructing different explanations is presenting them to humans for analysis, we need to ensure that the explanations are indeed perceived as different by humans. Consider the explanations in Figure 4. As discussed by Zhang et al. (2015), a human eye fills in the gaps of hazy and low-resolution images. Hence, if we remove a small subset of pixels from a given explanation, it would be sufficiently different from the original one according to many distance measures, yet would likely not be different at all to the human eye (in Figure 4 the gaps are increased for illustrative purposes).
To avoid this problem, we define an _atomic superpixel_, the smallest set of contiguous pixels (a square) that is distinguishable to a human, as a parameter of the algorithm. The concept of a superpixel is used in a number of different explanation tools. Both Sag and Grad-Cam split the image into a \(7\times 7\) grid of squares. Dividing the image in this way greatly reduces the computational cost of searching for explanations. The rigidity of the grid, however, leads to a strict bound on the minimum explanation size: an explanation cannot be smaller than a single square, and a square may be significantly bigger than the smallest superpixel responsible for the classification. ReX overcomes this problem by generating a random grid. We also allow the minimum size of the superpixel as a parameter. We discuss this further in Section 5. To overcome the limitations of just one grid, we allow for multiple iterations of the algorithm, each with a different grouping of superpixels. The results of the different grids are automatically combined to produce a detailed saliency landscape, as in Figure 5. As one can see, the more iterations are added, the smoother the saliency landscape becomes.
### The ReX Algorithm
The high-level structure of the algorithm is presented in Figure 3, and the pseudo-code is in Algorithm 1. We discuss each component in more detail below.
The _RANK_ procedure in Line 3 of Algorithm 1 constructs a \(pixel\_ranking\), which is a ranking of the pixels of the input image \(x\). While any pixel ranking mechanism can be used (e.g., an SFL-based ranking in Sun et al. (2020) or Lime or Shap heatmaps), the quality and the granularity of the final results depend on the quality and the granularity of the ranking. We implemented and tested ReX with causal responsibility-based ranking described in CKS. The number of required explanations is given as an input parameter to the procedure, as the total number of explanations can be exponential (see Lemma 3).
The \(Floodlight\) procedure called in Line 5 is described in Algorithm 2. It replaces the greedy explanation generation in DeepCover with a spatially delimited stochastic
hill climb. In contrast to most hill-climb-based algorithms that look for the global maximum, we search for _local maxima_, as these are likely to correspond to explanations. The global maximum usually matches the explanation computed by DeepCover, though it is not guaranteed. The function _initialize_ in Line 1 creates a floodlight of radius \(r\) at a random position over the image \(x\). We call the model on this masked image. If the initial size of the floodlight, \(\mathcal{F}\), is too small to encompass an explanation, before taking a random step, \(\mathcal{F}\) expands in place a fixed number of times. If this enlarged floodlight still does not yield an explanation, the floodlight takes a random step, returning to its original radius. This random step is mediated by an objective function. By default, ReX uses the mean of the responsibility of the pixels under the floodlight.
```
Require: an image \(x\), a network \(\mathcal{N}\), a label \(l\), a saliency landscape \(\mathcal{S}\), a floodlight radius \(r\), number of steps \(n\), number of expansions \(p\), radius increase \(q\)
Ensure: an explanation \(E\)
1: \(\mathcal{F}\leftarrow\) initialize\((r)\)
2: \(E\leftarrow\emptyset\)
3: for \(i\) in \(0\ldots n-1\) do
4:   for \(j\) in \(0\ldots p-1\) do
5:     \(l^{\prime}\leftarrow\mathcal{N}(\mathcal{F}(x))\)
6:     if \(l=l^{\prime}\) then
7:       \(E\leftarrow\mathcal{F}(x)\)
8:       return \(E\)
9:     else
10:      \(\mathcal{F}\leftarrow\) expand_radius\((r*q)\)
11:    end if
12:  end for
13:  \(\mathcal{F}\leftarrow\) neighbor
14: end for
15: return \(E\)
```
**Algorithm 2**\(\textsc{Floodlight}(x,\mathcal{N},l,\mathcal{S},r,n,p,q)\)
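For readability, a compact Python rendering of the floodlight search above is given below. The helper names and the circular-mask representation are assumptions made for this sketch, and the neighbor step is simplified to picking, among a few random candidate positions, the one with the largest mean responsibility under the floodlight, as described in the text.

```python
import numpy as np

def floodlight_search(model, image, label, landscape, r, n_steps, n_expansions, q, seed=0):
    """Spatially delimited stochastic hill climb: occlude everything outside a circular
    'floodlight' and look for a position/radius at which the model still outputs `label`."""
    rng = np.random.default_rng(seed)
    h, w = landscape.shape
    yy, xx = np.mgrid[0:h, 0:w]

    def spot(center, radius):                 # boolean mask of the pixels under the floodlight
        return (yy - center[0]) ** 2 + (xx - center[1]) ** 2 <= radius ** 2

    def masked(center, radius):               # keep the floodlit pixels, occlude the rest
        out = np.zeros_like(image)
        out[spot(center, radius)] = image[spot(center, radius)]
        return out

    center = (rng.integers(h), rng.integers(w))            # initialize(r): random position
    for _ in range(n_steps):
        radius = r
        for _ in range(n_expansions):
            if model(masked(center, radius)) == label:
                return masked(center, radius)              # an explanation, before the local `drain`
            radius *= q                                    # expand_radius
        # neighbor step: move to the candidate with the best mean responsibility, reset the radius
        candidates = [(rng.integers(h), rng.integers(w)) for _ in range(8)]
        center = max(candidates, key=lambda c: landscape[spot(c, r)].mean())
    return None
```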
Once an explanation is found, ReX performs a local ablation _drain_ (see the appendix for details).
Finally, the _extract_ procedure (Algorithm 3) extracts a subset of at most \(n\) explanations that pairwise overlap up to the input bound \(\delta\). As discussed in Section 3.3, the exact solution is NP-hard. The procedure uses a greedy heuristic based on the Sorensen-Dice coefficient (SDC) [14, 15], typically used as a measure of similarity between samples. First, we calculate the matrix \(SDC\) for all pairs of explanations: \(SDC(i,j)=0\) iff \(SDC(E_{i},E_{j})\leq\delta\), and is \(SDC(E_{i},E_{j})\) otherwise. 1 Columns that sum to 0 correspond to explanations that do not overlap others by more than \(\delta\), and are hence added to \(\mathcal{E}\). We then greedily remove the most overlapping explanations and recalculate the overlap matrix, adding the columns summing to 0 to \(\mathcal{E}\). The procedure iterates until \(SDC\) is empty.
Footnote 1: For disjoint explanations, i.e., \(\delta=0\), we can simply take \(E_{i}\cap E_{j}\) instead of \(SDC(i,j)\).
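A sketch of the extraction step, with explanations represented as sets of pixel coordinates. The greedy removal of the most-overlapping explanation mirrors the description above; the exact tie-breaking and stopping rule are assumptions of this sketch.

```python
def sdc(a, b):
    """Sorensen-Dice coefficient of two explanations given as sets of pixels."""
    return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 0.0

def extract(explanations, delta, n_max):
    """Greedily keep at most n_max explanations whose pairwise SDC does not exceed delta."""
    pool = list(explanations)
    while True:
        overlap = [[sdc(a, b) if i != j and sdc(a, b) > delta else 0.0
                    for j, b in enumerate(pool)] for i, a in enumerate(pool)]
        sums = [sum(row) for row in overlap]               # 0 means "different enough from all others"
        if all(s == 0.0 for s in sums):
            return pool[:n_max]
        pool.pop(max(range(len(pool)), key=sums.__getitem__))   # drop the most overlapping explanation
```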
## 5 Experimental Results
**Implementation.** We implemented Algorithm 1 in the tool ReX for generating multiple explanations. Given a saliency landscape, by default, ReX attempts to find \(10\) explanations.
Figure 4: To the human eye, these two explanations for a dog are equivalent, but they do not have any non-background pixels in common. Naively calculating the pixel overlap is insufficient in this case; we must take into account spatial location.
Figure 3: A schematic depiction of our algorithm, returning a set of explanations \(\mathcal{E}\) for a given input image. Its components: 1 _ranking_ generates a saliency landscape of pixels; 2 _search_ launches \(x\)_floodlight_ searches over the landscape; 3 _drain_ minimizes the explanations found in 2; 4 _extract_ produces a maximal subset \(\mathcal{E}\) from the output of 3, with the given overlap bound.
While it is computationally relatively inexpensive to search for more explanations than this, on our dataset we observe that images with more than \(6\) sufficiently different explanations are extremely rare (\(\approx 1\%\) of images). The algorithm computes multiple maximally different approximations of causal explanations according to Definition 3.
**Datasets and Models.** For our experiments, we used two standard image datasets. The first dataset is ImageNet-mini2, consisting of \(3923\) images representing \(1000\) different labels. The second dataset consists of all images, \(81\) in total, labeled _starfish_ from the Caltech256 dataset [1]. The second dataset is used as an additional independent source of images to mitigate the risk of overfitting to the ImageNet-mini dataset. While ReX is agnostic to the model, SAG uses VGG19 by default. To enable comparison, we tested ReX with the same model.
Footnote 2: [https://www.kaggle.com/datasets/ifigotin/imagenetmini-1000](https://www.kaggle.com/datasets/ifigotin/imagenetmini-1000)
### Tool setup and parameters
We use both ReX and Sag with default settings. In particular, ReX offers a large number of tunable parameters. We set the total number of iterations at \(20\), and the total number of floodlights at \(10\) for all images. Sag requires the user to set the maximal allowed overlap in squares, with suggested values of \(0\), \(1\), or \(2\). ReX has no required bound parameter, but has the option of returning all explanations. We present the results for disjoint explanations (that is, Sag with \(0\) overlaps). See the supplementary material for the results for explanations with a small overlap.
Sag divides the image into a rigid grid, by default \(7\times 7\), whereas ReX iteratively refines the random image partitioning. We used a beta-binomial distribution, with both \(\alpha\) and \(\beta\) set to \(1.1\) for the random partitioning, to reduce the probability of extremely unbalanced partitions. Sag takes a probability threshold when considering whether a combination of squares is an explanation. ReX, by default, takes the top prediction from the model, without reference to probability. Setting a probability threshold is arbitrary, whereas taking the top prediction is more consistent with the model's "best guess". More importantly, by setting a probability threshold, we deliberately ignore inconsistent classification or misclassification (Figure 2).
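The random partitioning mentioned above can be sketched as follows, assuming SciPy's beta-binomial distribution for the split positions; the recursive four-way split and the stopping size are illustrative choices rather than the tool's exact implementation.

```python
from scipy.stats import betabinom

def random_partition(x0, y0, x1, y1, min_size=8, a=1.1, b=1.1):
    """Recursively split the box [x0, x1) x [y0, y1) at beta-binomially drawn positions,
    returning the list of leaf boxes (one random 'grid' for a single ReX iteration)."""
    w, h = x1 - x0, y1 - y0
    if w <= min_size or h <= min_size:
        return [(x0, y0, x1, y1)]
    sx = x0 + 1 + betabinom.rvs(w - 2, a, b)              # split column, strictly inside the box
    sy = y0 + 1 + betabinom.rvs(h - 2, a, b)              # split row, strictly inside the box
    boxes = []
    for box in [(x0, y0, sx, sy), (sx, y0, x1, sy), (x0, sy, sx, y1), (sx, sy, x1, y1)]:
        boxes += random_partition(*box, min_size, a, b)
    return boxes

grid = random_partition(0, 0, 224, 224)                   # one random partition of a 224x224 image
```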
### Experimental Results and Comparison with Sag
A natural performance measure for multiple explanations is the number of significantly different explanations
Figure 5: Different saliency landscapes. The image on the left shows an image with a single explanation, clearly indicated by the single central peak. The other images show the saliency landscapes of a starfish after \(1\) and \(20\) iterations. The landscape here is flatter, with multiple separate peaks. These peaks are likely to correspond with different explanations.
Figure 6: Two explanations from Sag overlapping on the square no.17. Sag has a rigid overlap size (one square on the grid). ReX’s overlap is a parameter and depends on the size of both explanations.
produced for each image. We tested SAG and ReX with the option of producing completely disjoint explanations and with the option of having a small overlap. Note that as a square in Sag has a fixed size, \(1/49\)-th of the input image, it can result in very similar explanations if the explanations are small (see an illustration in Figure 6).
The main experiment was run on AWS, using a cluster of Intel Xeon Platinum 8375C CPUs @ 2.90 GHz, without GPU support (which equally disadvantages both tools). The timeout (TO) is set to \(10\) minutes for each tool on each image. Within the given TO, Sag did not terminate on just under half of the images in Imagenet-mini, and on one image in the Caltech256 starfish dataset. ReX terminated on all images in both datasets.
The results show that ReX computes multiple explanations on **7X** more images in the benchmark set than Sag (**3835** for ReX vs **553** for Sag). Figure 7 shows the breakdown of the results by the number of images having a particular number of disjoint explanations found by ReX and by Sag, respectively. The results are also presented in tabular form in Table 2. Table 1 presents the analysis of the percentage of termination and the average number of explanations for each tool. The results on the starfish dataset are similar to those on Imagenet-mini, demonstrating the robustness of our approach on unseen images. \(1\)-square overlap in explanations produces similar results (see the supplementary material).
## 6 Conclusions and Future Work
Motivated by studies in human cognition and the need for thorough debugging of image classifiers, this paper proposes an algorithm and a tool ReX for constructing multiple explanations for the outputs of image classifiers. The algorithm is based on a solid mathematical theory of causal reasoning and is agnostic to the classifier, viewing it as a black box. The tool ReX is modular and borrows its ranking procedure from the existing tool DeepCover. We introduce a novel explanation-discovery algorithm based on the saliency landscape and a "floodlight" search, ensuring different spatial locations for explanations.
ReX is built as a command-line tool and a Python library with pluggable components. Owing to its systematic and compositional approach, ReX finds multiple different explanations of image labels on standard benchmark sets and is fully configurable. Moreover, by default ReX does not depend on the probabilities assigned to the labels by the classifier. We compare our results with Sag, the only other tool for multiple explanations, and demonstrate that ReX finds significantly more explanations than Sag. Moreover, ReX terminates on the whole benchmark set, in contrast to Sag that timed out on \(50\%\) of it. The algorithm is completely parallelized, which, together with the efficient floodlight search, leads to \(17\)X speedup compared to DeepCover (see the appendix).
There are a number of promising directions for future work. Due to the modularity of ReX, it is possible to plug in any other ranking procedure and to experiment with different algorithms for explanation discovery based on the saliency landscape. Furthermore, we hypothesise that the precision of ranking drops for lower-ranked elements, affecting the quality of explanations that are lower on the saliency landscape. While the intractability of computing an explanation implies a tradeoff between the quality of the approximation and the precision of the result, we will search for new heuristics to improve the saliency landscape at low levels.
|
2309.08720 | Latvian Quantum Finite State Automata for Unary Languages | We design Latvian quantum finite state automata (LQFAs for short) recognizing
unary regular languages with isolated cut point 1/2. From an architectural
point of view, we combine two LQFAs recognizing with isolated cut point,
respectively, the finite part and the ultimately periodic part of any given
unary regular language L. In both modules, we use a component addressed in the
literature and here suitably adapted to the unary case, to discriminate strings
on the basis of their length. The number of basis states and the isolation
around the cut point of the resulting LQFA for L exponentially depends on the
size of the minimal deterministic finite state automaton for L. | Carlo Mereghetti, Beatrice Palano, Priscilla Raucci | 2023-09-15T19:14:08Z | http://arxiv.org/abs/2309.08720v1 | # Latvian Quantum Finite State Automata
###### Abstract
We design _Latvian quantum finite state automata_ (lqfas for short) recognizing unary regular languages with isolated cut point \(\frac{1}{2}\). From an architectural point of view, we combine two lqfas recognizing with isolated cut point, respectively, the finite part and the ultimately periodic part of any given unary regular language \(L\). In both modules, we use a component addressed in the literature, here suitably adapted to the unary case, to discriminate strings on the basis of their length. The number of basis states and the isolation around the cut point of the resulting lqfa for \(L\) depend exponentially on the size of the minimal deterministic finite state automaton for \(L\).
## 1 Introduction
Quantum finite automata (qfas for short) represent a theoretical model for a quantum computer with finite memory [4, 5]. While we can hardly expect to see a full-featured quantum computer in the near future, small quantum components, modeled by qfas, seem to be promising from a physical implementation viewpoint (see, e.g., [8, 16]).
Very roughly speaking, a qfa is obtained by imposing the quantum paradigm -- superposition, unitary evolution, observation -- on a classical finite state automaton. The state of the qfa can be seen as a linear combination of classical states, called superposition. The qfa steps from a superposition to the next one by a unitary (reversible) evolution. Superpositions can transfer the complexity of a computation from a large number of sequential steps to a large number of coherently superposed classical states (this phenomenon is sometimes referred to as quantum parallelism). Along its computation, the qfa can be "observed", i.e., some features, called observables, can be measured. From measuring an observable, an outcome is obtained with a certain probability and the current superposition irreversibly "collapses", with the same probability, to a particular superposition (coherent with the observed outcome).
qfas exhibit both advantages and disadvantages with respect to their classical (deterministic or probabilistic) counterpart. Basically, quantum superposition offers some computational advantages over probabilistic superposition. On the other hand, quantum dynamics are reversible: because of memory limitations, it is sometimes impossible to simulate deterministic finite state automata (dfas for short) by quantum automata. Limitations due to reversibility can be partially attenuated by systematically introducing measurements of suitable observables as computational steps.
In the literature, several models of qfas are proposed, which mainly differ in their measurement policy. The first and most simple model is the _measure-once_ qfa (mo-qfa for short) [7, 17], where the probability of accepting strings is evaluated by "observing" just once, at the end of input processing. In _measure-many_ qfas (mm-qfas for short) [12], instead, the acceptance probability is evaluated by observing after each move, thus allowing the possibility of halting the computation in the middle of input processing. An additional model is the _Latvian_ qfa (lqfa for short) [2], which can be regarded as "intermediate" between mo-qfas and mm-qfas. In fact, as in the mm-qfa model, lqfas are observed after each move; on the other hand, as in the mo-qfa model, acceptance probability is evaluated at the end of the computation only. From a language recognition point of view, it is well known that mo-qfas are strictly less powerful than lqfas, which are strictly less powerful than mm-qfas, which are strictly less powerful than dfas. This hierarchy is established, e.g., in [2, 7, 12].
In this paper, we investigate the architecture and size of lqfas processing _unary languages_, i.e. languages built over a single-letter alphabet. A similar investigation is presented in [6], where mm-qfas recognizing unary regular languages with isolated cut point are exhibited, whose size (number of basis states) is linear in the size of equivalent minimal dfas. Here, we show that unary regular languages can be recognized with isolated cut point by the less powerful model of lqfas as well, at the cost of an exponential size increase. A relevant module in our construction is a lqfa recognizing with isolated cut point the strings of length exceeding a fixed threshold. For its design, we adapt a construction in [2, 15] to the unary case. Such a module is then suitably combined with two lqfas taking care, respectively, of the finite part and the ultimately periodic part that any unary regular language consists of. The architecture of the resulting lqfa turns out to be significantly different from the equivalent mm-qfas in [6]. Moreover, while in the mm-qfa case the isolation around the cut point is constant, for lqfas it exponentially decreases with respect to the size of the dfa for the finite part of the target unary regular language. However, it should be stressed that the less powerful model of mo-qfas cannot recognize with isolated cut point all unary regular languages. Our results constructively prove that lqfas and mm-qfas have the same recognition power, whenever restricted to recognize unary languages with isolated cut point.
The paper is organized as follows. In Section 2, we overview basics on formal language theory, linear algebra, and quantum finite state automaton models. In Section 3, we design isolated cut point lqfas recognizing the strings whose length is greater than or equal to a fixed value. Then, in Section 4, we provide the full architecture of isolated cut point lqfas for unary regular languages, analyzing their size, cut point, and isolation. Finally, in Section 5, we draw some concluding remarks and offer possible research hints.
## 2 Preliminaries
### Formal Languages
We assume familiarity with basic notions of formal language theory (see, e.g., [10]). Given a set \(S\), we let \(|S|\) denote its cardinality. The set of all words or strings (including the empty string \(\epsilon\)) over a finite alphabet \(\Sigma\) is denoted by \(\Sigma^{*}\), and we let \(\Sigma^{+}=\Sigma^{*}\setminus\epsilon\). For a string \(w\in\Sigma^{*}\), we let \(|w|\) denote its length and \(w_{i}\) its \(i\)th symbol. For any given \(i\geq 0\), we let \(\Sigma^{i}\) be the set of strings over \(\Sigma\) of length \(i\), with \(\Sigma^{0}=\{\epsilon\}\). We let \(\Sigma^{\leq i}=\bigcup_{j=0}^{i}\Sigma^{j}\); sets \(\Sigma^{>i}\) and \(\Sigma^{\geq i}\) are defined accordingly. A language over \(\Sigma\) is any subset \(L\subseteq\Sigma^{*}\); its complement is the language \(L^{c}=\Sigma^{*}\setminus L\). A _deterministic finite state automaton_ (dfa) is formally defined as a 5-tuple \(D=(Q,\Sigma,q_{0},\delta,F)\), where \(Q\) is the finite set of states, \(\Sigma\) the finite input alphabet, \(q_{0}\in Q\) the initial state, \(F\subseteq Q\) the set of accepting states, and \(\delta:Q\times\Sigma\to Q\) the transition function. Denoting by \(\delta^{*}\) the canonical extension of \(\delta\) to \(\Sigma^{*}\), the language recognized by \(D\) is the set \(L_{D}=\{w\in\Sigma^{*}|\delta^{*}(q_{0},w)\in F\}\). It is well known that dfas characterize the class of regular languages.
A _unary language_ is any language built over a single letter alphabet, e.g., \(\Sigma=\{\sigma\}\), and thus has the general form \(L\subseteq\sigma^{*}\). Unary _regular_ languages form _ultimately periodic sets_, as stated by the following
**Theorem 1**.: ([10, 18]) _Let \(L\subseteq\sigma^{*}\) be a unary regular language. Then, there exist two integers \(T\geq 0\) and \(P>0\) such that, for any \(k\geq T\), we have \(\sigma^{k}\in L\) if and only if \(\sigma^{k+P}\in L\)._
By Theorem 1, it is easy to see that any unary regular language \(L\) can be recognized by a (minimal) dfa consisting of an initial path of \(T\) states joined to a cycle of \(P\) states; accepting states are suitably settled on both the path and the cycle. Unary regular languages satisfying Theorem 1 with \(T=0\) are called _periodic languages_ of period \(P\).
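As a quick illustration, membership in an ultimately periodic unary language reduces to two table lookups once \(T\), \(P\), and the accepted lengths on the path and on the cycle are fixed; the sketch below, with a hypothetical example language, makes this explicit.

```python
def in_unary_language(k, T, P, accept_path, accept_cycle):
    """Is sigma^k in L? accept_path is a subset of {0,...,T-1}, accept_cycle a subset of {0,...,P-1}."""
    if k < T:
        return k in accept_path                           # finite (pre-periodic) part
    return (k - T) % P in accept_cycle                    # ultimately periodic part

# Example with T = 2, P = 3: L = {sigma} together with {sigma^k : k >= 2 and k is a multiple of 3}
assert in_unary_language(1, 2, 3, {1}, {1})
assert in_unary_language(6, 2, 3, {1}, {1}) and not in_unary_language(5, 2, 3, {1}, {1})
```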
### Linear Algebra
We quickly recall some notions of linear algebra, useful to describe quantum computational devices. For more details, we refer the reader to, e.g., [20]. The fields of real and complex numbers are denoted by \(\mathbb{R}\) and \(\mathbb{C}\), respectively. Given a complex number \(z=a+ib\), with \(a,b\in\mathbb{R}\), its _conjugate_ is denoted by \(z^{*}=a-ib\), and its _modulus_ by \(|z|=\sqrt{z\cdot z^{*}}\). We let \(\mathbb{C}^{n\times m}\) denote the set of \(n\times m\) matrices with entries in \(\mathbb{C}\). Given a matrix \(M\in\mathbb{C}^{n\times m}\), for \(1\leq i\leq n\) and \(1\leq j\leq m\), we denote by \(M_{ij}\) its \((i,j)\)th entry. The _transpose_ of \(M\) is the matrix \(M^{T}\in\mathbb{C}^{m\times n}\) satisfying \({M^{T}}_{ij}=M_{ji}\), while we let \(M^{*}\) be the matrix satisfying \({M^{*}}_{ij}=\left({M_{ij}}\right)^{*}\). The _adjoint_ of \(M\) is the matrix \({M^{\dagger}}=\left({M^{T}}\right)^{*}\). For matrices \(A,B\in\mathbb{C}^{n\times m}\), their _sum_ is the \(n\times m\) matrix \((A+B)_{ij}=A_{ij}+B_{ij}\). For matrices \(C\in\mathbb{C}^{n\times m}\) and \(D\in\mathbb{C}^{m\times r}\), their _product_ is the \(n\times r\) matrix \((C\cdot D)_{ij}=\sum_{k=1}^{m}C_{ik}\cdot D_{kj}\). For matrices \(A\in\mathbb{C}^{n\times m}\) and \(B\in\mathbb{C}^{p\times q}\), their _direct (or tensor or Kronecker) product_ is the \(n\cdot p\times m\cdot q\) matrix defined as
\[A\otimes B=\left(\begin{array}{ccc}A_{11}B&\cdots&A_{1m}B\\ \vdots&\ddots&\vdots\\ A_{n1}B&\cdots&A_{nm}B\end{array}\right).\]
When operations are allowed by matrix dimensions, we have that \((A\otimes B)\cdot(C\otimes D)=A\cdot C\ \otimes\ B\cdot D\).
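The mixed-product identity is easy to verify numerically; the sketch below does so with NumPy on small random matrices (the sizes are arbitrary, chosen only so that both products are defined).

```python
import numpy as np

rng = np.random.default_rng(0)
A, C = rng.standard_normal((2, 3)), rng.standard_normal((3, 2))   # A.C is defined
B, D = rng.standard_normal((4, 5)), rng.standard_normal((5, 4))   # B.D is defined

lhs = np.kron(A, B) @ np.kron(C, D)        # (A x B).(C x D)
rhs = np.kron(A @ C, B @ D)                #  A.C x B.D
assert np.allclose(lhs, rhs)
```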
A _Hilbert space_ of dimension \(n\) is the linear space \(\mathbb{C}^{n}\) of \(n\)-dimensional complex row vectors equipped with sum and product by elements in \(\mathbb{C}\), where the _inner product_\(\langle\varphi,\psi\rangle=\varphi\cdot\psi^{\dagger}\) is defined, for vectors \(\varphi,\psi\in\mathbb{C}^{n}\). The \(i\)th component of \(\varphi\) is denoted by \(\varphi_{i}\), its _norm_ is given by \(\|\varphi\|=\sqrt{\langle\varphi,\varphi\rangle}=\sqrt{\sum_{i=1}^{n}{| \varphi_{i}|}^{2}}\). If \(\langle\varphi,\psi\rangle=0\) (and \(\|\varphi\|=1=\|\psi\|\)), then \(\varphi\) and \(\psi\) are orthogonal (orthonormal). An orthonormal basis of \(\mathbb{C}^{n}\) is any set of \(n\) orthonormal vectors in \(\mathbb{C}^{n}\). In particular, the _canonical basis_ of \(\mathbb{C}^{n}\) is the set \(\{\boldsymbol{e}_{1},\boldsymbol{e}_{2},\ldots,\boldsymbol{e}_{n}\}\), where \(\boldsymbol{e}_{i}\in\mathbb{C}^{n}\) is the vector having \(1\) at the \(i\)th component and \(0\) elsewhere. Clearly, any vector \(\varphi\in\mathbb{C}^{n}\) can be univocally expressed as a linear combination of the vectors in the canonical basis as \(\varphi=\sum_{i=1}^{n}\varphi_{i}\cdot\boldsymbol{e}_{i}\). This latter fact is usually addressed by saying that \(\mathbb{C}^{n}\) is _spanned_ by \(\{\boldsymbol{e}_{1},\boldsymbol{e}_{2},\ldots,\boldsymbol{e}_{n}\}\). Two subspaces \(X,Y\subseteq\mathbb{C}^{n}\) are orthogonal if any vector in \(X\) is orthogonal to any vector in \(Y\). In this case, we denote by \(X+Y\) the linear space generated by \(X\cup Y\). For vectors \(\varphi\in\mathbb{C}^{n}\) and \(\psi\in\mathbb{C}^{m}\), their direct (or tensor or Kronecker) product is the vector \(\varphi\otimes\psi=(\varphi_{1}\cdot\psi,\ldots,\varphi_{n}\cdot\psi)\); we have \(\|\varphi\otimes\psi\|=\|\varphi\|\cdot\|\psi\|\).
A matrix \(M\in\mathbb{C}^{n\times n}\) is said to be _unitary_ if \(M\cdot M^{\dagger}=I^{(n)}=M^{\dagger}\cdot M\), where \(I^{(n)}\) is the \(n\times n\) identity matrix. Equivalently, \(M\) is unitary if it preserves the norm, i.e., \(\|\varphi\cdot M\|=\|\varphi\|\) for any vector \(\varphi\in\mathbb{C}^{n}\). Direct products of unitary matrices are unitary as well. The matrix \(M\) is said to be _Hermitian (or self-adjoint)_ if \(M=M^{\dagger}\). Let \(\mathcal{O}\in\mathbb{C}^{n\times n}\) be an Hermitian matrix, \(\nu_{1},\nu_{2},\ldots,\nu_{s}\) its eigenvalues, and \(E_{1},E_{2},\ldots,E_{s}\) the corresponding eigenspaces. It is well known that each eigenvalue \(\nu_{k}\) is real, that \(E_{i}\) is orthogonal to \(E_{j}\) for every \(1\leq i\neq j\leq s\), and that \(E_{1}+E_{2}+\cdots+E_{s}=\mathbb{C}^{n}\). Thus, every vector \(\varphi\in\mathbb{C}^{n}\) can be uniquely decomposed as \(\varphi=\varphi_{(1)}+\varphi_{(2)}+\cdots+\varphi_{(s)}\), for unique \(\varphi_{(j)}\in E_{j}\). The linear transformation \(\varphi\mapsto\varphi_{(j)}\) is the _projector_\(P_{j}\) onto the subspace \(E_{j}\). Actually, the Hermitian matrix \(\mathcal{O}\) is biunivocally determined by its eigenvalues and projectors as \(\mathcal{O}=\nu_{1}\cdot P_{1}+\nu_{2}\cdot P_{2}+\cdots+\nu_{s}\cdot P_{s}\). We recall that a matrix \(P\in\mathbb{C}^{n\times n}\) is a projector if and only if \(P\) is Hermitian and idempotent, i.e. \(P^{2}=P\).
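The spectral decomposition of a Hermitian observable into eigenvalues and projectors can be reproduced with NumPy; in the sketch below each eigenvector is treated as its own one-dimensional eigenspace, a simplification that is exact when the eigenvalues are distinct (the example matrix is an arbitrary choice).

```python
import numpy as np

O = np.array([[1.0, 1j], [-1j, 2.0]])                     # a Hermitian observable: O equals its adjoint
vals, vecs = np.linalg.eigh(O)                            # real eigenvalues, orthonormal eigenvectors

projectors = [np.outer(v, v.conj()) for v in vecs.T]      # rank-one projectors P_k
assert np.allclose(sum(val * P for val, P in zip(vals, projectors)), O)   # O = sum_k nu_k * P_k
assert all(np.allclose(P @ P, P) and np.allclose(P, P.conj().T) for P in projectors)
```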
Let \(\omega=\mathrm{e}^{i\frac{2\pi}{n}}\) be the \(n\)th root of unity (\(\omega^{n}=1\)) and define the Vandermonde matrix \(W\in\mathbb{C}^{n\times n}\) whose \((r,c)\)th component is \(\omega^{rc}\), for \(0\leq r,c<n\). Let the \(n\times n\) complex matrix \(F_{n}=\frac{1}{\sqrt{n}}\cdot W\). It is easy
to see that \(F_{n}\) is the unitary matrix implementing the _quantum Fourier transform_. Throughout the paper, it will be useful to recall that operating \(F_{n}\) on the \(j\)th vector of the canonical basis yields the vector \(\boldsymbol{e}_{j}\cdot F_{n}=\frac{1}{\sqrt{n}}\cdot(\omega^{0},\omega^{(j-1) \cdot 1},\ldots,\omega^{(j-1)\cdot(n-1)})\). We remark that \(|(\boldsymbol{e}_{j}\cdot F_{n})_{k}|^{2}=\frac{1}{n}\), for every \(1\leq k\leq n\).
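These two facts about \(F_{n}\), unitarity and the uniform squared amplitudes of \(\boldsymbol{e}_{j}\cdot F_{n}\), can be checked numerically; \(n\) and \(j\) below are arbitrary.

```python
import numpy as np

n = 5
omega = np.exp(2j * np.pi / n)
F = np.array([[omega ** (r * c) for c in range(n)] for r in range(n)]) / np.sqrt(n)

assert np.allclose(F @ F.conj().T, np.eye(n))             # F_n is unitary
e_j = np.eye(n)[2]                                        # canonical basis vector e_3 (row vector)
assert np.allclose(np.abs(e_j @ F) ** 2, 1 / n)           # every squared amplitude equals 1/n
```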
As we will see in the next section, in accordance with quantum mechanics principles (see, e.g., [11]), the state of a quantum finite state automaton at any given time during its computation is represented by a norm \(1\) vector from a suitable Hilbert space, the state evolution of the automaton is modeled by unitary matrices, and information on certain characteristics of the automaton are probabilistically extracted by measuring some "observables" represented by Hermitian matrices.
### Quantum Finite State Automata
Here, we recall the model of a _Latvian quantum finite state automaton_[2] we are mostly interested in. We then quickly introduce _measure-once_ quantum finite state automata [7, 17] as a particular case of Latvian automata. Finally, we overview _measure-many_ quantum finite state automata [12].
**Definition 1**.: _Let \(\Sigma\) be an input alphabet, \(\sharp\notin\Sigma\) an endmarker symbol, and set \(\Gamma=\Sigma\cup\{\sharp\}\). A Latvian quantum finite automaton (lqfa for short) is a system \(\mathcal{A}=(Q,\Sigma,\pi_{0},\{U(\sigma)\}_{\sigma\in\Gamma},\{\mathcal{O}_{ \sigma}\}_{\sigma\in\Gamma},Q_{acc})\), where_
* \(Q=\{q_{1},q_{2},\ldots,q_{n}\}\) _is the finite set of basis states; the elements of_ \(Q\) _span_1 _the Hilbert space_ \(\mathbb{C}^{n}\)_,_ Footnote 1: We can associate with the set \(Q=\{q_{1},q_{2},\ldots,q_{n}\}\) of basis states the canonical basis \(\{\boldsymbol{e}_{1},\ldots,\boldsymbol{e}_{n}\}\) of the Hilbert space \(\mathbb{C}^{n}\) (see Section 2.2) where, for each \(1\leq i\leq n\), we let \(\boldsymbol{e}_{i}\) represent the basis state \(q_{i}\). As the canonical basis spans \(\mathbb{C}^{n}\), with a slight abuse of notation, we say that the elements of \(Q\) span \(\mathbb{C}^{n}\).
* \(\pi_{0}\in\mathbb{C}^{n}\) _is the initial amplitude vector (superposition) satisfying_ \(\|\pi_{0}\|=1\)_,_
* \(U(\sigma)\in\mathbb{C}^{n\times n}\) _is the unitary evolution matrix, for any_ \(\sigma\in\Gamma\)_,_
* _for any_ \(\sigma\in\Sigma\)_, we let_ \(\mathcal{O}_{\sigma}=\sum_{i=0}^{k_{\sigma}-1}c_{i}(\sigma)\cdot P_{i}(\sigma)\) _be an observable (Hermitian matrix) on_ \(\mathbb{C}^{n}\)_, where_ \(\{c_{0}(\sigma),\ldots,c_{k_{\sigma}-1}(\sigma)\}\) _is the set of all possible outcomes (eigenvalues) of measuring_ \(\mathcal{O}_{\sigma}\)_, and_ \(\{P_{0}(\sigma),\ldots,P_{k_{\sigma}-1}(\sigma)\}\) _are the projectors onto the corresponding eigenspaces,_
* _we let_ \(\mathcal{O}_{\sharp}=a\cdot P_{acc}(\sharp)+r\cdot(I^{(n)}-P_{acc}(\sharp))\) _be the final observable, where_ \(P_{acc}(\sharp)\) _is the projector onto the subspace of_ \(\mathbb{C}^{n}\) _spanned by the states in_ \(Q_{acc}\)_._
Let us briefly describe the behavior of \(\mathcal{A}\) on an input word \(w\sharp\in\Sigma^{*}\sharp\). At any given time, the _state_ of \(\mathcal{A}\) is a _superposition of basis states_ in \(Q\) which is represented by a norm \(1\) vector \(\xi\in\mathbb{C}^{n}\). We have that \(\xi_{i}\in\mathbb{C}\) is the _amplitude_ of the basis state \(q_{i}\), while \(|\xi_{i}|^{2}\in[0,1]\) is the _probability_ of observing \(\mathcal{A}\) in the basis state \(q_{i}\). The computation of \(\mathcal{A}\) on \(w\sharp\) starts in the initial superposition \(\pi_{0}\) by reading the first input symbol. Then, the transformations associated with each input symbol are applied in succession. The transformation corresponding to a symbol \(\sigma\in\Gamma\) consists of two steps:
1. _Evolution:_ the matrix \(U(\sigma)\) acts on the current state \(\xi\) of \(\mathcal{A}\), yielding the next state \(\xi^{\prime}=\xi\cdot U(\sigma)\).
2. _Observation:_ the observable \(\mathcal{O}_{\sigma}\) is measured and the outcome \(c_{i}(\sigma)\) is seen with probability \(\|\xi^{\prime}\cdot P_{i}(\sigma)\|^{2}\); upon seeing \(c_{i}(\sigma)\), according to the Copenhagen interpretation of quantum mechanics [11], the state of \(\mathcal{A}\) "collapses" to the (norm 1) state \(\xi^{\prime}\cdot P_{i}(\sigma)/\left\|\xi^{\prime}\cdot P_{i}(\sigma)\right\|\) and the computation continues, unless we are processing the endmarker \(\sharp\).
Upon processing the endmarker \(\sharp\), the final observable \(\mathcal{O}_{\sharp}\) is measured yielding the probability of seeing \(\mathcal{A}\) in an accepting basis state. Therefore, the probability of accepting \(w\in\Sigma^{*}\) is given by
\[p_{\mathcal{A}}(w)=\sum_{i_{1}=0}^{k_{w_{1}}-1}\cdots\sum_{i_{|w|}=0}^{k_{w_{|w|}}-1}\left\|\pi_{0}\cdot U(w_{1})\cdot P_{i_{1}}(w_{1})\cdot\cdots\cdot U(w_{|w|})\cdot P_{i_{|w|}}(w_{|w|})\cdot U(\sharp)\cdot P_{acc}(\sharp)\right\|^{2}.\]
The function \(p_{\mathcal{A}}:\Sigma^{*}\to[0,1]\) is the _stochastic event induced by \(\mathcal{A}\)_. The _language recognized by \(\mathcal{A}\) with cut point \(\lambda\in[0,1]\)_ is the set of words \(L_{\mathcal{A},\lambda}=\{w\in\Sigma^{*}\;\mid\;p_{\mathcal{A}}(w)>\lambda\}\). The cut point \(\lambda\) is said to be _isolated_ whenever there exists \(\rho\in\left(0,\frac{1}{2}\right]\) such that \(|p_{\mathcal{A}}(w)-\lambda|\geq\rho\), for any \(w\in\Sigma^{*}\). The parameter \(\rho\) is usually referred to as _radius of isolation_.
In general, a language \(L\subseteq\Sigma^{*}\) is recognized with isolated cut point by a lqfa whenever there exists a lqfa \(\mathcal{A}\) such that \((\inf\left\{p_{\mathcal{A}}(w)\;\mid\;w\in L\right\}-\sup\left\{p_{\mathcal{A}}(w)\;\mid\;w\not\in L\right\})>0\). In this case, we can compute the cut point as being \(\lambda=\frac{1}{2}\cdot(\inf\left\{p_{\mathcal{A}}(w)\;\mid\;w\in L\right\}+\sup\left\{p_{\mathcal{A}}(w)\;\mid\;w\not\in L\right\})\), with radius of isolation \(\rho=\frac{1}{2}\cdot(\inf\left\{p_{\mathcal{A}}(w)\;\mid\;w\in L\right\}-\sup\left\{p_{\mathcal{A}}(w)\;\mid\;w\not\in L\right\})\). Throughout the rest of the paper, for the sake of conciseness, we will sometimes write "isolated cut point quantum finite automaton for a language" instead of "quantum finite automaton recognizing a language with isolated cut point". Isolated cut point turns out to be one of the main language recognition policies within the literature of probabilistic devices. Its relevance in the realm of finite state automata is due to the fact that we can arbitrarily reduce the classification error probability of an input word \(w\) by repeating a constant number of times (not depending on the length of \(w\)) its parsing and taking the majority of the answers. We refer the reader to, e.g., [19, Sec. 5], where the notion of isolated cut point recognition is introduced and carefully analyzed.
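For concreteness, the branch sum defining \(p_{\mathcal{A}}\) can be evaluated mechanically. The following sketch (ours, not part of the original text; the data layout is an assumption) represents states as NumPy row vectors and follows the \(\xi\cdot U(\sigma)\) convention used above; it enumerates every sequence of measurement outcomes, so it is exponential in \(|w|\) and only meant to mirror the definition.

```python
import numpy as np
from itertools import product

def lqfa_acceptance(pi0, U, proj, U_end, P_acc, word):
    """Acceptance probability p_A(w): sum over all measurement branches of
    || pi0 . U(w_1) . P_{i_1}(w_1) . ... . U(w_m) . P_{i_m}(w_m) . U(#) . P_acc ||^2.

    pi0   : initial superposition as a row vector of shape (n,)
    U     : dict mapping each symbol to its n x n unitary
    proj  : dict mapping each symbol to the list of projectors of its observable
    U_end : unitary associated with the endmarker
    P_acc : projector onto the accepting subspace
    word  : sequence of input symbols
    """
    p = 0.0
    for outcomes in product(*[range(len(proj[s])) for s in word]):
        v = pi0
        for s, i in zip(word, outcomes):
            v = v @ U[s] @ proj[s][i]   # evolution followed by one measurement branch
        v = v @ U_end @ P_acc           # endmarker and final observable
        p += float(np.vdot(v, v).real)  # squared norm contributed by this branch
    return p
```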
One of the two original and most studied models of a quantum finite state automaton is the _measure-once_ model (mo-qfa for short). An mo-qfa can be seen as a particular lqfa where, for any \(\sigma\in\Sigma\), we have that \(\mathcal{O}_{\sigma}=I^{(n)}\). Basically, this amounts to leave the computation of \(\mathcal{A}\) undisturbed up to the final observation for acceptance. Thus, an mo-qfa can be formally and more succinctly written as \(\mathcal{A}=\left(Q,\Sigma,\pi_{0},\left\{U(\sigma)\right\}_{\sigma\in\Gamma},Q_{acc}\right)\). The probability of \(\mathcal{A}\) accepting the word \(w\in\Sigma^{*}\) now simplifies as
\[p_{\mathcal{A}}(w)=\left\|\pi_{0}\cdot U(w_{1})\cdot\cdots\cdot U(w_{|w|}) \cdot U(\sharp)\cdot P_{acc}(\sharp)\right\|^{2}.\]
Let us now switch to the other original model, namely a _measure-many quantum finite state automaton_ (mm-qfa for short). Roughly speaking, an mm-qfa\(\mathcal{A}\) is defined as lqfa but with the possibility of accepting/rejecting the input string _before_ reaching the endmarker. More precisely, the set \(Q\) of the basis states of \(\mathcal{A}\) can be partitioned into _halting states_, which can be either _accepting_ or _rejecting_, and _non halting states_, also called _go_ states, i.e., \(Q=Q_{acc}\cup Q_{rej}\cup Q_{go}\). Following such a state partition, the sole observable \(\mathcal{O}=a\cdot P_{acc}+r\cdot P_{rej}+g\cdot P_{go}\), whose projectors map onto the subspaces spanned by the corresponding basis states, is associated with _each_ symbol in \(\Gamma\). At each step, the observable \(\mathcal{O}\) is measured and the computation of \(\mathcal{A}\) continues (unless we are processing \(\sharp\)) only if the outcome \(g\) is seen. Instead, if the outcome \(a\) (\(r\)) is seen, then \(\mathcal{A}\) halts and accepts (rejects). Formally, the mm-qfa\(\mathcal{A}\) can be written as \(\mathcal{A}=(Q,\Sigma,\pi_{0},\{U(\sigma)\}_{\sigma\in\Gamma},\mathcal{O},Q_{ acc})\), and the probability of accepting the word \(w\sharp=w_{1}\cdots w_{n}w_{n+1}\) is
\[p_{\mathcal{A}}(w)=\sum_{k=1}^{n+1}\|\pi_{0}\cdot(\prod_{i=1}^{k-1}U(w_{i}) \cdot P_{go})\cdot U(w_{k})\cdot P_{acc}\|^{2}.\]
It is well known (see, e.g., [7, 17]) that the class of languages recognized by isolated cut point mo-qfas coincides with the class of group languages. Notice that finite languages are not group languages, and hence they cannot be accepted by isolated cut point mo-qfas. Isolated cut point lqfas are proved in [2, 15] to be strictly more powerful than isolated cut point mo-qfas, since their recognition power coincides with the class of block group languages. An equivalent characterization states that a language is recognized by an isolated cut point lqfa if and only if it belongs to the boolean closure of languages of the form \(L_{1}a_{1}L_{2}a_{2}\cdots a_{k}L_{k+1}\), for \(a_{i}\in\Sigma\), group language \(L_{i}\subseteq\Sigma^{*}\), and \(|\Sigma|>1\). Finally, the recognition power of isolated cut point mm-qfas still remains an open question. However, it is known that mm-qfas are strictly more powerful than lqfas but strictly less powerful than dfas. In fact, isolated cut point mm-qfas can recognize the language \(a\Sigma^{*}\) which cannot be accepted by isolated cut point lqfas [2]. On the other hand, isolated cut point mm-qfas cannot recognize the language \(\Sigma^{*}a\), for \(|\Sigma|>1\) and \(a\in\Sigma\) [12].
## 3 Isolated Cut Point LQFAs for Words Longer than \(T\)
Here, we design an isolated cut point lqfa recognizing the unary language \(\sigma^{\geq T}\), for any given \(T>0\) (i.e., the set of unary strings whose length is greater than or equal to \(T\)). As will be clear in the next section, this lqfa will be a relevant component in the modular construction of isolated cut point lqfas for unary regular languages.
Our design pattern is inspired by [2, 15], where the authors provide an isolated cut point lqfa for the language \(\Sigma^{*}a_{1}\Sigma^{*}a_{2}\cdots a_{t}\Sigma^{*}\), with \(a_{i}\in\Sigma\) and \(|\Sigma|>1\). So, we focus on recognizing the unary version of \(\Sigma^{*}a_{1}\Sigma^{*}a_{2}\cdots a_{T}\Sigma^{*}\) yielded by fixing \(a_{1}=\cdots=a_{T}=\sigma\) and \(\Sigma=\{\,\sigma\,\}\), namely, the desired language \(a_{1}\cdots a_{T}\Sigma^{*}=\sigma^{\geq T}\). We adapt the construction in [2, 15], and inductively exhibit a family \(\{\,M^{(\ell)}\,\}_{\ell\geq 1}\) of lqfas such that: _(i)_\(M^{(\ell)}\) recognizes the language \(\sigma^{\geq\ell}\) with isolated cut point, and _(ii)_\(M^{(\ell)}\) is constructed by "expanding" \(M^{(\ell-1)}\). So, the desired isolated cut point lqfa for \(\sigma^{\geq T}\) will result after \(T\) "expansions", starting from the lqfa\(M^{(1)}\). We provide a detailed analysis of the stochastic behavior of \(M^{(\ell)}\) machines, emphasizing cut points, isolations and their size (i.e., number of their basis states). In this section, to have a convenient notation, we will be using \(A_{\sigma}\) for the evolution operator of our lqfas.
**Base of the construction:** For the induction base, we define the lqfa\(M^{(1)}\) for the language \(\sigma^{\geq 1}\) as
\[M^{(1)}=(Q^{(1)},\{\,\sigma\,\}\,,\pi_{0},\{A^{(1)}_{\sigma},A^{(1)}_{\sharp} \},\{\mathcal{O}^{(1)}_{\sigma},\mathcal{O}^{(1)}_{\sharp}\},Q^{(1)}_{acc}),\]
where \(Q^{(1)}=\{q_{0},\ldots,q_{n-1}\}\) is the set of \(n\) basis states, \(\pi_{0}=\boldsymbol{e}_{1}\) meaning that \(M^{(1)}\) starts in the state \(q_{0}\), \(Q^{(1)}_{acc}=Q^{(1)}\backslash\{\,q_{0}\,\}\) is the set of \(n-1\) accepting states. For the evolution matrices, we let \(A^{(1)}_{\sigma}=F_{n}\) (the quantum Fourier transform) and \(A^{(1)}_{\sharp}=I\) (the identity matrix). The observable \(\mathcal{O}^{(1)}_{\sigma}\) is the _canonical observable_ defined by the projectors \(\{\boldsymbol{e}_{1}{}^{\dagger}\cdot\boldsymbol{e}_{1},\,\boldsymbol{e}_{2}{ }^{\dagger}\cdot\boldsymbol{e}_{2},\,\ldots,\boldsymbol{e}_{n}{}^{\dagger} \cdot\boldsymbol{e}_{n}\}\). By measuring \(\mathcal{O}^{(1)}_{\sigma}\) on \(M^{(1)}\) being in the superposition \(\xi\in\mathbb{C}^{n}\), we will see \(M^{(1)}\) in the basis state \(q_{i-1}\) with probability \(\|\xi\cdot(\boldsymbol{e}_{i}{}^{\dagger}\cdot\boldsymbol{e}_{i})\|^{2}=| \xi_{i}|^{2}\). Upon such an outcome, the state of \(M^{(1)}\) clearly collapses to \(\boldsymbol{e}_{i}\). The final observation \(\mathcal{O}^{(1)}_{\sharp}\) projects onto the subspace spanned by the accepting basis states \(\{\,q_{1},\ldots,q_{n-1}\,\}\).
The automaton \(M^{(1)}\) behaves as follows: when the first input symbol is read, the state of \(M^{(1)}\) becomes \(\pi_{0}\cdot A^{(1)}_{\sigma}=\boldsymbol{e}_{1}\cdot F_{n}\), upon which the canonical observation is measured. As noticed at the end of Section 2.2, such a measurement will cause \(M^{(1)}\) to move from \(q_{0}\) to some basis state \(q_{i}\), with \(0\leq i\leq n-1\), uniformly at random (i.e., with probability \(|(\boldsymbol{e}_{1}\cdot F_{n})_{i+1}|^{2}=\frac{1}{n}\)). After processing (again, by quantum Fourier transform followed by measuring the canonical observable) the next input symbol from being in the state \(\boldsymbol{e}_{i}\), we again find \(M^{(1)}\) in a basis state uniformly at random. Such a dynamics continues unaltered, until the endmarker is reached and processed by the identity matrix. At this point, the final observation \(\mathcal{O}_{\sharp}\) is measured, and an accepting state is easily seen to be reached with probability \(|Q^{(1)}_{acc}|\cdot\frac{1}{n}=(\frac{n-1}{n})\). Clearly, processing the empty string leaves \(M^{(1)}\) in the non accepting state \(q_{0}\) with certainty. Therefore, \(p_{M^{(1)}}(\boldsymbol{\varepsilon})=0\), while for \(k>0\) we have \(p_{M^{(1)}}(\boldsymbol{\sigma}^{k})=(\frac{n-1}{n})\). So, \(M^{(1)}\) recognizes the language \(\sigma^{\geq 1}\) with isolated cut point.
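As a quick numerical illustration (our own sketch, not from the original text): since the canonical observable is measured after every application of \(F_{n}\), the behaviour of \(M^{(1)}\) is fully captured by the induced classical distribution over basis states, and the event \(p_{M^{(1)}}\) can be checked as follows.

```python
import numpy as np

def qft(n):
    """n x n quantum Fourier transform F_n."""
    omega = np.exp(2j * np.pi / n)
    j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return omega ** (j * k) / np.sqrt(n)

def p_M1(k, n):
    """Acceptance probability of M^(1) on sigma^k: one evolution-plus-measurement
    step maps the distribution d over basis states to d' with
    d'[j] = sum_i d[i] * |F_n[i, j]|^2, i.e. the uniform distribution."""
    T = np.abs(qft(n)) ** 2        # stochastic matrix, all entries equal to 1/n
    d = np.zeros(n); d[0] = 1.0    # start in q_0
    for _ in range(k):
        d = d @ T
    return d[1:].sum()             # accepting states are q_1, ..., q_{n-1}

# p_M1(0, 8) == 0.0, while p_M1(k, 8) == 7/8 for every k >= 1 (up to rounding)
```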
**Inductive step of the construction:** For the inductive step, we show how to build the isolated cut point lqfa\(M^{(\ell)}\) for the language \(\sigma^{\geq\ell}\) from the lqfa\(M^{(\ell-1)}\) for the language \(\sigma^{\geq\ell-1}\), this latter lqfa being given by inductive hypothesis. We define
\[M^{(\ell)}=(Q^{(\ell)},\{\,\sigma\,\}\,,\pi_{0},\{A^{(\ell)}_{\sigma},A^{(\ell )}_{\sharp}\},\{\mathcal{O}^{(\ell)}_{\sigma},\mathcal{O}^{(\ell)}_{\sharp}\},Q^{(\ell)}_{acc}),\]
where the set \(Q^{(\ell)}\) of basis states consists of the previous set \(Q^{(\ell-1)}\) of basis states, plus \((n-1)\) new basis states per each state in \(Q^{(\ell-1)}_{acc}\). We let \(Q^{(\ell)}_{acc}\) be the set containing these \((n-1)\cdot|Q^{(\ell-1)}_{acc}|\) new states, with \(|Q^{(\ell)}_{acc}|=(n-1)^{\ell}\). Therefore, \(Q^{(\ell)}=Q^{(\ell-1)}\cup Q^{(\ell)}_{acc}=\{q_{0}\}\cup Q^{(1)}_{acc}\cup Q ^{(2)}_{acc}\cup\cdots\cup Q^{(\ell)}_{acc}\) with \(|Q^{(i)}_{acc}|=(n-1)^{i}\)
so that \(|Q^{(\ell)}|=\sum_{i=0}^{\ell}(n-1)^{i}=\frac{(n-1)^{(\ell+1)}-1}{n-2}\). The initial superposition is \(\pi_{0}=\boldsymbol{e}_{1}\). We let \(A_{\sharp}^{(\ell)}=I\) and \(A_{\sigma}^{(\ell)}=B^{(\ell)}\cdot\tilde{A}^{(\ell-1)}\), where \(\tilde{A}^{(\ell-1)}\) is the transformation acting as \(A_{\sigma}^{(\ell-1)}\) on \(Q^{(\ell-1)}\subset Q^{(\ell)}\), and as the identity elsewhere. Instead, \(B^{(\ell)}\) is an additional operator working as follows. For any \(\tilde{q}\in Q_{acc}^{(\ell-1)}\), let \(Q_{\tilde{q}}=\{\tilde{q}_{1},\ldots,\tilde{q}_{n-1}\}\subset Q_{acc}^{(\ell)}\) be the set of the \(n-1\) new added accepting states associated with \(\tilde{q}\). Thus, the operator \(B^{(\ell)}\) first acts as \(F_{n}\) on \(\{\,\tilde{q}\,\}\cup Q_{\tilde{q}}\) for every \(\tilde{q}\in Q_{acc}^{(\ell-1)}\), then it measures \(\mathcal{O}_{\sigma}^{(\ell)}\) being the canonical observable on \(Q_{acc}^{(\ell-1)}\cup Q_{acc}^{(\ell)}\) plus the identity projector on the remaining basis states. The final observable \(\mathcal{O}_{\sharp}^{(\ell)}\) as usual projects onto the subspace spanned by \(Q_{acc}^{(\ell)}\). Actually, the automaton so far constructed does not perfectly comply with the definition of a lqfa given in Section 2.3 since \(A_{\sigma}^{(\ell)}\) is not a unitary matrix. However, [1, Claim 1] ensures that the action of the operator \(B^{(\ell)}\cdot\tilde{A}^{(\ell-1)}\) followed by measuring \(\tilde{\mathcal{O}}_{\sigma}^{(\ell-1)}\) (the observable of \(M^{(\ell-1)}\) extended to \(Q^{(\ell)}\) by the identity projector onto \(Q_{acc}^{(\ell)}\)) can be expressed as a unitary matrix followed by measuring a suitable observable. This last detail possibly enlarges the dimension of the Hilbert space for \(M^{(\ell)}\) by a factor bounded by \(n^{\ell}\). The stochastic event induced by \(M^{(\ell)}\) will be discussed later.
To clarify the architecture and behavior of this family of automata, we now describe the lqfa\(M^{(3)}\) recognizing the language \(\sigma^{\geq 3}\) with isolated cut point. We have
\[M^{(3)}=(Q^{(3)},\{\,\sigma\,\},\pi_{0},\{A_{\sigma}^{(3)},A_{\sharp}^{(3)}\},\{\,\mathcal{O}_{\sigma}^{(3)},\mathcal{O}_{\sharp}^{(3)}\,\},Q_{acc}^{(3)}),\]
where we let the set of basis states be \(Q^{(3)}=\{q_{0}\}\cup Q_{acc}^{(1)}\cup Q_{acc}^{(2)}\cup Q_{acc}^{(3)}\) with \(Q_{acc}^{(1)}=\{q_{i}\,\,\mid\,1\leq i\leq n-1\}\), \(Q_{acc}^{(2)}=\big{\{}q_{i,j}\,\,\mid\,\,1\leq i,j\leq n-1\big{\}}\), and \(Q_{acc}^{(3)}=\big{\{}q_{i,j,k}\,\,\mid\,\,1\leq i,j,k\leq n-1\big{\}}\). We remark that \(Q_{acc}^{(3)}\) is the set of \((n-1)^{3}\) accepting basis states of \(M^{(3)}\). We can regard basis states as partitioned into three groups reflected by the number of subscripts attributed to each basis state; each group of states is added in a subsequent step of the inductive construction. The general structure of the state (superposition) of \(M^{(3)}\) is a norm \(1\) vector in \(\mathbb{C}^{|Q^{(3)}|}\) of the following form, with \(\alpha(q)\) denoting the amplitude of the basis state \(q\):
\[[\alpha(q_{0}),\alpha(q_{1}),\alpha(q_{1,1}),[\ldots\alpha(q_{1, 1,k})\ldots],\alpha(q_{1,2}),[\ldots\alpha(q_{1,2,k})\ldots],\,\ldots\,,\alpha (q_{1,n-1}),[\ldots\alpha(q_{1,n-1,k})\ldots],\] \[\alpha(q_{2}),\alpha(q_{2,1}),[\ldots\alpha(q_{2,1,k})\ldots], \alpha(q_{2,2}),[\ldots\alpha(q_{2,2,k})\ldots],\,\ldots\,,\alpha(q_{2,n-1}),[ \ldots\alpha(q_{2,n-1,k})\ldots],\] \[\vdots\] \[\alpha(q_{n-1}),\alpha(q_{n-1,1}),[\ldots\alpha(q_{n-1,1,k})\ldots ],\alpha(q_{n-1,2}),[\ldots\alpha(q_{n-1,2,k})\ldots],\,\ldots\,,\alpha(q_{n-1,n-1}),[\ldots\alpha(q_{n-1,n-1,k})\ldots]].\] (*) Form of states (superpositions) of \(M^{(3)}\).
As usual, we let \(\pi_{0}=\boldsymbol{e}_{1}\). The evolution matrices of \(M^{(3)}\) are \(A_{\sharp}^{(3)}=I\), while we have \(A_{\sigma}^{(3)}=B^{(3)}\cdot\tilde{B}^{(2)}\cdot\tilde{A}^{(1)}\), where each matrix in the product acts on levels of the basis states as follows: \(\tilde{A}^{(1)}\) affects the states in \(\{\,q_{0}\,\}\cup Q_{acc}^{(1)}\), \(\,\tilde{B}^{(2)}\) the states in \(Q_{acc}^{(1)}\cup Q_{acc}^{(2)}\), and \(B^{(3)}\) the states in \(Q_{acc}^{(2)}\cup Q_{acc}^{(3)}\). From now on, it will be useful to describe the dynamic of \(M^{(3)}\) by displaying the sequence of the stochastic vectors obtained by squaring the amplitudes in the superpositions of the form in (*). In such vectors, the value \(|\alpha(q)|^{2}\) of the component associated with \(q\) represents the probability for \(M^{(3)}\) of being in the basis state \(q\). This stochastic dynamic description turns out to be appropriate as \(M^{(3)}\) uses the canonical observable after each quantum Fourier transform operation. Upon reading a symbol \(\sigma\), the lqfa\(M^{(3)}\) executes \(A_{\sigma}^{(3)}\) followed by measuring \(\tilde{\mathcal{O}}_{\sigma}^{(1)}\): formally, we write \(A_{\sigma}^{(3)}\downarrow\tilde{\mathcal{O}}_{\sigma}^{(1)}\). This operation distributes the probability differently in the three group of basis states \(Q^{(1)}\), \(Q_{acc}^{(2)}\) and \(Q_{acc}^{(3)}\). In particular, the probability values turn out to be identical within each group of basis states, for each step of computation (except for the initial
superposition \(\pi_{0}\)). Therefore, the form of the stochastic vector at each step of computation is
\[[x,x,y,\,[\cdots z\cdots],\,y,\,[\cdots z\cdots],\,\ldots\,,\,y,\,[ \cdots z\cdots],\] \[x,y,\,[\cdots z\cdots],\,y,\,[\cdots z\cdots],\,\ldots\,,\,y,\,[ \cdots z\cdots],\] \[\vdots\] \[x,y,\,[\cdots z\cdots],\,y,\,[\cdots z\cdots],\,\ldots\,,\,y,\,[ \cdots z\cdots]],\]
where \(x\) is the probability value for the states in \(Q^{(1)},\,\,\,y\) for the states in \(Q^{(2)}_{acc}\), and \(z\) for the (accepting) states in \(Q^{(3)}_{acc}\). Thus, the current accepting probability is \((n-1)^{3}\cdot z\).
Now, let \(x(k)\), \(y(k)\), and \(z(k)\) be the above basis state probabilities after processing the \(k\)th input symbol. We are going to establish the dependence of such values on \(x(k-1)\), \(y(k-1)\), and \(z(k-1)\) in order to single out a closed formula for the stochastic event \(p_{M^{(3)}}\). To this aim, for the reader's convenience, Figure 1 gives a graphical representation of how one step of the evolution-plus-observation \(A^{(3)}_{\sigma}\downarrow\tilde{\mathcal{O}}^{(1)}_{\sigma}\) affects the probability values in each different group of basis states.
Figure 1: Stochastic representation of a computation step of \(M^{(3)}\) on the symbol \(\sigma\) for basis states of different groups. The notation \(\tilde{A}^{1}\downarrow\tilde{\mathcal{O}}^{(1)}_{\sigma}\) means that \(\tilde{A}^{1}\) is applied and then the observable \(\tilde{\mathcal{O}}^{(1)}_{\sigma}\) is measured. Wave (straight) edges indicate basis state transitions occurring with probability \(\frac{1}{n}\) (with certainty). For instance, the tree in (b) says that, starting from \(q_{i\neq 0}\) for a fixed \(i\) and after one step of computation, we will observe \(M^{(3)}\) in \(q_{0}\) with probability \(\frac{1}{n^{2}}\). Note that there exist \(n-1\) trees of the form (b) leading to \(q_{0}\).
Let us focus, e.g., on \(x(k)\). The probability \(x(k)\) depends on \(x(k-1),\ y(k-1)\), and \(z(k-1)\) as follows:
* Figure 1(a) shows that the basis state \(q_{0}\) contributes with \(\frac{1}{n}\cdot x(k-1)\).
* Figure 1(b) shows the contribution of each basis state in \(Q_{acc}^{(1)}\), which is \(\frac{1}{n^{2}}\cdot x(k-1)\); given that \(|Q_{acc}^{(1)}|=(n-1)\), the total contribution is \(\frac{(n-1)}{n^{2}}\cdot x(k-1)\).
* Figure 1(c) shows that the total contribution given by \(y(k-1)\) elements (i.e., by the \((n-1)^{2}\) basis states in \(Q_{acc}^{(2)}\)) is \(\frac{(n-1)^{2}}{n^{3}}\cdot y(k-1)\).
* Figure 1(d) shows that the total contribution given by \(z(k-1)\) elements (i.e., by the \((n-1)^{3}\) basis states in \(Q_{acc}^{(3)}\)) is \(\frac{(n-1)^{3}}{n^{3}}\cdot z(k-1)\).
By analogous reasonings, we can obtain recurrences for \(y(k)\) and \(z(k)\), globally yielding the system
\[\begin{cases}x(k)=\frac{1}{n}\cdot x(k-1)+\frac{(n-1)}{n^{2}}\cdot x(k-1)+ \frac{(n-1)^{2}}{n^{3}}\cdot y(k-1)+\frac{(n-1)^{3}}{n^{3}}\cdot z(k-1)\\ y(k)=\frac{1}{n}\cdot x(k-1)+\frac{(n-1)}{n^{2}}\cdot y(k-1)+\frac{(n-1)^{2}}{ n^{2}}\cdot z(k-1)\\ z(k)=\frac{1}{n}\cdot y(k-1)+\frac{(n-1)}{n}\cdot z(k-1).\end{cases} \tag{1}\]
The base for this system of recurrences is the probability distribution after reading the first symbol \(\sigma\), i.e.:
\[\begin{cases}x(1)=\frac{1}{n}\\ y(1)=z(1)=0.\end{cases} \tag{2}\]
From the system (1), the reader may verify that at each computation step the probability "shifts" towards the next deeper level of the basis states until reaching the basis states in \(Q_{acc}^{(3)}\). In fact, after the first step (yielding probabilities in (2)), only the \(x\)-components have non null values. After the second step, only the \(x\)- and \(y\)-components have values different from \(0\), while the value of the \(z\)-components is still \(0\). This shows that \(M^{(3)}\) rejects with certainty the strings in \(\sigma^{\leq 2}\). After the third step, all the components have non null values; in particular, \(z(3)=\frac{1}{n^{3}}\), so that the accepting probability of the string \(\sigma^{3}\) attains \(|Q_{acc}^{(3)}|\cdot z(3)=(\frac{n-1}{n})^{3}\). By solving the system (1), we get a closed formula for \(z(k)\), with \(k\geq 2\), as
\[z(k)=\frac{1}{n(n-1)^{2}}\cdot\left(1-\frac{(\frac{2n-2}{n^{2}})^{k-2}\cdot(n- 1)^{2}+1}{(n-1)^{2}+1}\right).\]
This allows us to evaluate the accepting probability of \(M^{(3)}\) for any string in \(\sigma^{*}\) as
\[p_{M^{(3)}}(\sigma^{k})=|Q_{acc}^{(3)}|\cdot z(k)=\begin{cases}0&\text{if $k\leq 2$}\\ \frac{n-1}{n}\cdot\left(1-\frac{(\frac{2n-2}{n^{2}})^{k-2}\cdot(n-1)^{2}+1}{ (n-1)^{2}+1}\right)&\text{if $k\geq 3$}.\end{cases} \tag{3}\]
Equation (3) shows that \(M^{(3)}\) recognizes \(\sigma^{\geq 3}\) with isolated cut point. Clearly, the stochastic event induced by \(M^{(3)}\) depends on the number \(n\) of the basis states of \(M^{(1)}\), the initial automaton of the inductive construction. Figure 2 displays \(p_{M^{(3)}}\) for some values of \(n\). As expected, the higher \(n\) grows, the better the isolation around the cut point becomes.
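Equation (3) can also be double-checked numerically against the recurrences (1) with base (2); the following sketch (ours, not part of the original text) performs exactly this comparison.

```python
def p_M3_recurrence(k, n):
    """p_{M^(3)}(sigma^k) obtained by iterating the system (1) from the base (2)."""
    if k == 0:
        return 0.0
    x, y, z = 1.0 / n, 0.0, 0.0    # values after reading the first symbol
    for _ in range(k - 1):
        x, y, z = (x / n + (n - 1) * x / n**2 + (n - 1)**2 * y / n**3 + (n - 1)**3 * z / n**3,
                   x / n + (n - 1) * y / n**2 + (n - 1)**2 * z / n**2,
                   y / n + (n - 1) * z / n)
    return (n - 1)**3 * z          # (n-1)^3 accepting basis states, each carrying probability z

def p_M3_closed(k, n):
    """p_{M^(3)}(sigma^k) as given by Equation (3)."""
    if k <= 2:
        return 0.0
    r = ((2 * n - 2) / n**2) ** (k - 2)
    return (n - 1) / n * (1 - (r * (n - 1)**2 + 1) / ((n - 1)**2 + 1))

# the two computations agree for several values of n and k
for n in (2, 4, 8):
    for k in range(12):
        assert abs(p_M3_recurrence(k, n) - p_M3_closed(k, n)) < 1e-9
```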
Now, we consider the general lqfa\(M^{(\ell)}\), and derive the system of recurrences for its stochastic dynamic. The set of basis states of \(M^{(\ell)}\) is now partitioned into \(\ell\) groups. For \(1\leq h\leq\ell\), we denote by \(x_{h}(k)\) the probability for \(M^{(\ell)}\) of being in a basis state of the \(h\)th group, after processing \(k\) input symbols. The system of recurrences for \(M^{(\ell)}\) generalizes the system (1) as follows:
\[\begin{cases}x_{1}(k)=\frac{1}{n}\cdot x_{1}(k-1)+\sum_{j=1}^{\ell-1}\frac{(n-1)^{j}}{n^{j+1}}\cdot x_{j}(k-1)+\frac{(n-1)^{\ell}}{n^{\ell}}\cdot x_{\ell}(k-1)\\ x_{2}(k)=\left(\sum_{j=0}^{\ell-2}\frac{(n-1)^{j}}{n^{j+1}}\cdot x_{j+1}(k-1)\right)+\frac{(n-1)^{\ell-1}}{n^{\ell-1}}\cdot x_{\ell}(k-1)\\ \qquad\qquad\vdots\\ x_{h}(k)=\left(\sum_{j=0}^{\ell-h}\frac{(n-1)^{j}}{n^{j+1}}\cdot x_{j+h-1}(k-1)\right)+\frac{(n-1)^{\ell-h+1}}{n^{\ell-h+1}}\cdot x_{\ell}(k-1)\\ \qquad\qquad\vdots\\ x_{\ell}(k)=\frac{1}{n}\cdot x_{\ell-1}(k-1)+\frac{(n-1)}{n}\cdot x_{\ell}(k-1),\end{cases} \tag{4}\]
with initial values \(x_{1}(1)=\frac{1}{n}\), and \(x_{h}(1)=0\) for every \(2\leq h\leq\ell\). We show the validity of this system of recurrences by induction, having, e.g., the system (1) for the automaton \(M^{(3)}\) as base case. By inductive hypothesis, we assume the system of recurrences for \(M^{(\ell-1)}\), and we build the system (4) for \(M^{(\ell)}\). We consider the set of trees representing one step of the computation of our automata, starting from basis states of different groups. E.g., Figure 1 displays the four different types of trees for \(M^{(3)}\), one per each group of basis states, plus one for the evolution from the state \(q_{0}\). So, for \(M^{(\ell)}\) we are going to provide \(\ell\) of such trees, plus the one for \(q_{0}\). Let us explain how to obtain them from the trees of \(M^{(\ell-1)}\). Let \(T_{j}^{(\ell-1)}\) be a tree representing one step of the evolution of \(M^{(\ell-1)}\) on a basis state of group \(1\leq j\leq\ell-1\), namely, a basis state from \(Q_{acc}^{(j)}\). Moreover, let \(T_{0}^{(\ell-1)}\) be the tree for \(q_{0}\). The evolution for \(M^{(\ell)}\) is \(A_{\sigma}^{(\ell)}=B^{(\ell)}\cdot\vec{A}^{(\ell-1)}\). Thus, the behavior of \(M^{(\ell)}\) is described by \(\ell+1\) trees with the following structure:
* The trees \(T_{j}^{(\ell)}\) for \(0\leq j<\ell-1\) are basically the trees \(T_{j}^{(\ell-1)}\) with a preliminary step due to the action of \(B^{(\ell)}\). Since in these trees the root is labeled by a basis state of level \(j<\ell-1\), such a preliminary step coincides with the identity evolution.
* Even the trees \(T_{\ell-1}^{(\ell)}\) and \(T_{\ell}^{(\ell)}\) have the action of \(B^{(\ell)}\) as a preliminary step. However, in these cases, \(B^{(\ell)}\) acts as \(F_{n}\) on the basis states of groups \(\ell-1\) in the tree \(T_{\ell-1}^{(\ell)}\), and \(\ell\) in the tree \(T_{\ell}^{(\ell)}\). The structure of these two trees, both containing the tree \(T_{\ell-1}^{\ell-1}\) as a sub-tree, is presented in Figure 3.
Figure 2: The (“continuous version” of the) stochastic events induced by \(M^{(3)}\) according to Equation 3, for different values of the number \(n\) of basis states of \(M^{(1)}\), the base module inductively leading to \(M^{(3)}\).
It is now possible to properly justify the system (4) by using the induction step. Starting from the system of recurrences for \(M^{(\ell-1)}\), we show how it modifies towards the system for \(M^{(\ell)}\). Clearly, a new recurrence for \(x_{\ell}(k)\) (i.e., the probabilities for basis states of group \(\ell\), the accepting states for \(M^{(\ell)}\)) is added at the end of the system. This component receives contributions only from the trees \(T^{(\ell)}_{\ell-1}\) and \(T^{(\ell)}_{\ell}\) weighted, respectively, by \(x_{\ell-1}(k-1)\) and \(x_{\ell}(k-1)\). Precisely, from the former tree we get the contribution \(\frac{1}{n}\cdot x_{\ell-1}(k-1)\), from the latter (\(n-1\) different trees) the contribution is \(\frac{n-1}{n}\cdot x_{\ell}(k-1)\). For \(x_{h}(k)\), with \(1\leq h\leq\ell-1\), we note that the only modified contribution is the one carried by \(x_{\ell-1}(k-1)\); moreover a new contribution from \(x_{\ell}(k-1)\) is added. Even in this case, the trees \(T^{(\ell)}_{\ell-1}\) and \(T^{(\ell)}_{\ell}\) account for these modifications: the new coefficient of \(x_{\ell-1}(k-1)\) is the old one for \(x_{\ell-1}(k-1)\) (i.e., the one associated with \(x_{\ell-1}(k-1)\) in the system for \(M^{(\ell-1)}\)) multiplied by \(\frac{1}{n}\), while the coefficient of the new contribution \(x_{\ell}(k-1)\) is the old one for \(x_{\ell-1}(k-1)\) multiplied by \(\frac{(n-1)}{n}\).
By simply applying repeated substitutions in the system (4), one may verify that, for \(1\leq k\leq\ell\), the value \(x_{k}(k)\) always equals \(\frac{1}{n^{k}}\), while we have \(x_{k+1}(k)=\dots=x_{\ell}(k)=0\). In particular, this implies that the acceptance probability of \(M^{(\ell)}\) for the string \(\sigma^{k}\) is zero for \(k<\ell\), while it is \(|Q^{(\ell)}_{acc}|\cdot x_{\ell}(\ell)=(\frac{n-1}{n})^{\ell}\) for \(k=\ell\). We are now going to prove that for the strings in the language \(\sigma^{\geq\ell}\) the acceptance probability never goes below \((\frac{n-1}{n})^{\ell}\). To this aim, it suffices to show
**Theorem 2**.: _On the input string \(\sigma^{\ell+s}\), with \(s\geq 0\), the probability for \(M^{(\ell)}\) of being in one of the accepting basis states in \(Q^{(\ell)}_{acc}\) while processing the suffix \(\sigma^{s}\) is greater than or equal to \(\frac{1}{n^{\ell}}\)._
Proof.: We split the proof into two parts, both proved by induction. In the first part, we focus on the input prefix \(\sigma^{\ell}\). We show by induction on \(1\leq k\leq\ell\) that \(x_{h}(k)\geq\frac{1}{n^{k}}\) in the system (4) holds true for every \(1\leq h\leq k\). This will enable us to obtain that \(x_{1}(\ell),\dots,x_{\ell}(\ell)\geq\frac{1}{n^{\ell}}\). For the base case \(k=1\), we recall that \(x_{1}(1)=\frac{1}{n}\). So, let us assume by inductive hypothesis that \(x_{h}(k)\geq\frac{1}{n^{k}}\) for a given \(k<\ell\) and every \(1\leq h\leq k\), and prove the property for \(k+1\). From the system (4), we have
\[x_{h}(k+1)=\frac{1}{n}\cdot x_{h-1}(k)+\frac{n-1}{n^{2}}\cdot x_{h}(k)+\frac{( n-1)^{2}}{n^{3}}\cdot x_{h+1}(k)+\dots+\frac{(n-1)^{k-h+1}}{n^{k-h+2}}\cdot x _{k}(k)+\dots+\frac{(n-1)^{\ell-h+1}}{n^{\ell-h+1}}\cdot x_{\ell}(k).\]
Since \(x_{h}(k)\geq\frac{1}{n^{k}}\) for \(1\leq h\leq k\), and \(0\) otherwise, we can bound \(x_{h}(k+1)\) from below as
\[x_{h}(k+1)\geq\frac{1}{n^{k}}\cdot\frac{1}{n}\cdot\left(1+\frac{n-1}{n}+\frac {(n-1)^{2}}{n^{2}}+\dots+\frac{(n-1)^{k-h+1}}{n^{k-h+1}}\right)\geq\frac{1}{n ^{k+1}}.\]
Now, the second part of the proof comes, where we show, again by induction on \(k\), that \(x_{h}(k)\geq\frac{1}{n^{\ell}}\) for \(k\geq\ell\) and \(1\leq h\leq\ell\). By the first part of the proof, we have \(x_{1}(\ell),x_{2}(\ell),\dots x_{\ell}(\ell)\geq\frac{1}{n^{\ell}}\), and so the base
case holds true. We prove \(x_{h}(k+1)\geq\frac{1}{n^{\ell}}\) assuming such a property for \(k\) by inductive hypothesis. From the system (4), we get
\[x_{h}(k+1)=\frac{1}{n}\cdot x_{h-1}(k)+\frac{n-1}{n^{2}}\cdot x_{h}(k)+\frac{(n- 1)^{2}}{n^{3}}\cdot x_{h+1}(k)+\ldots+\frac{(n-1)^{\ell-h}}{n^{\ell-h+1}}\cdot x _{\ell-1}(k)+\frac{(n-1)^{\ell-h+1}}{n^{\ell-h+1}}\cdot x_{\ell}(k).\]
Since we are assuming all \(x_{j}(k)\)'s to be greater than or equal to \(\frac{1}{n^{\ell}}\), we can bound \(x_{h}(k+1)\) from below as
\[x_{h}(k+1)\geq\frac{1}{n^{\ell}}\cdot\left(\frac{1}{n}+\frac{n-1}{n^{2}}+ \frac{(n-1)^{2}}{n^{3}}+\cdots+\frac{(n-1)^{\ell-h}}{n^{\ell-h+1}}+\left(\frac {n-1}{n}\right)^{\ell-h+1}\right)=\frac{1}{n^{\ell}},\]
whence, the claimed result follows.
We can conclude that \(M^{(\ell)}\) induces the following stochastic event:
\[p_{M^{(\ell)}}(\sigma^{k})=|Q^{(\ell)}_{acc}|\cdot x_{\ell}(k)\begin{cases}=0& \text{if }k<\ell\\ \geq\left(\frac{n-1}{n}\right)^{\ell}&\text{if }k\geq\ell.\end{cases} \tag{5}\]
This shows that the automaton \(M^{(\ell)}\) recognizes \(\sigma^{\geq\ell}\) with isolated cut point and \(n^{O(\ell)}\) basis states. As expected, for \(n\to\infty\), the event in (5) approximates a deterministic behavior. In fact, for growing values of \(n\), we have \(p_{M^{(\ell)}}(\sigma^{k})\to 1\) for \(k\geq\ell\), and \(p_{M^{(\ell)}}(\sigma^{k})=0\) for \(k<\ell\).
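The event in Equation (5) can likewise be verified by iterating the system (4) directly (using the exponent \(n^{j+1}\) in the inner sums, matching system (1)). The sketch below is ours, assumes \(\ell\geq 2\), and checks both that strings shorter than \(\ell\) are rejected with certainty and that the bound of Theorem 2 holds.

```python
import numpy as np

def p_Ml(k, n, ell):
    """p_{M^(ell)}(sigma^k) by iterating the system (4); the last group has
    (n-1)^ell accepting basis states, each carrying probability x[ell]."""
    if k == 0:
        return 0.0
    x = np.zeros(ell + 1)          # x[1..ell]; index 0 is unused
    x[1] = 1.0 / n                 # base: distribution after the first symbol
    for _ in range(k - 1):
        new = np.zeros(ell + 1)
        new[1] = x[1] / n \
            + sum((n - 1)**j / n**(j + 1) * x[j] for j in range(1, ell)) \
            + (n - 1)**ell / n**ell * x[ell]
        for h in range(2, ell):
            new[h] = sum((n - 1)**j / n**(j + 1) * x[j + h - 1] for j in range(ell - h + 1)) \
                + (n - 1)**(ell - h + 1) / n**(ell - h + 1) * x[ell]
        new[ell] = x[ell - 1] / n + (n - 1) * x[ell] / n
        x = new
    return (n - 1)**ell * x[ell]

# Equation (5): zero below length ell, at least ((n-1)/n)^ell from length ell on
n, ell = 6, 4
for k in range(15):
    p = p_Ml(k, n, ell)
    assert p == 0.0 if k < ell else p >= ((n - 1) / n)**ell - 1e-12
```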
To sum up, let us get back to our initial purpose, i.e., building an isolated cut point lqfa for the language \(\sigma^{\geq T}\). Such a lqfa is obtained by pushing the inductive construction \(T\) steps ahead from \(M^{(1)}\), finally yielding the lqfa \(M^{(T)}\). As noted, \(M^{(T)}\) features \(n^{O(T)}\) basis states, \(n\) being the number of basis states of \(M^{(1)}\). From Equation (5), we can fix a cut point \(\frac{1}{2}\cdot\left(\frac{n-1}{n}\right)^{T}\) isolated by \(\frac{1}{2}\cdot\left(\frac{n-1}{n}\right)^{T}\). By increasing \(n\), we widen such an isolation, tending to a deterministic recognition of the language \(\sigma^{\geq T}\).
Focusing on the size of \(M^{(T)}\), we observe that its number of basis states exponentially depends on \(T\). As a matter of fact, we can avoid such an exponential blow up by noticing that even the lqfa\(M^{(3)}\) can actually accept with isolated cut point the language \(\sigma^{\geq T}\), for \(T\geq 4\). This is due to the fact that the stochastic event induced by \(M^{(3)}\) is an increasing function, as one may readily infer from Equation (3) and Figure 2. By this property, we can fix the isolated cut point between \(p_{M^{(3)}}(\sigma^{T-1})\) and \(p_{M^{(3)}}(\sigma^{T})\), thus recognizing \(\sigma^{\geq T}\) with \(n^{O(1)}\) basis states, not depending on \(T\) any more. Nevertheless, such a dramatic size reduction comes at a price. In fact, the isolation around the cut point shrinks from \(\frac{1}{2}\cdot\left(\frac{n-1}{n}\right)^{T}\) to \(\frac{p_{M^{(3)}}(\sigma^{T})-p_{M^{(3)}}(\sigma^{T-1})}{2}=\frac{1}{2}\cdot (\frac{2}{n})^{T-3}\cdot(\frac{n-1}{n})^{T-1}\cdot(\frac{n+1}{n})\). This isolation vanishes as \(n\) grows, thus suggesting to consider small values of \(n\). E.g., for \(n=2\) we obtain an isolation of \(\frac{3}{2}\cdot(\frac{1}{2})^{T}\); for \(n=3\) we get \(\frac{27}{8}\cdot(\frac{4}{9})^{T}\).
## 4 Isolated Cut Point lqfas for Unary Regular Languages
Here, we are going to use the lqfas designed in the previous section as modules in a more general construction yielding isolated cut point lqfas for unary regular languages. This investigation is inspired by [6] where the same problem is tackled for mm-qfas. Our result constructively shows that isolated cut point mm-qfas and lqfas are equivalent on unary inputs, in sharp contrast to the case for general alphabets where mm-qfas outperform lqfas (see Section 2.3).
We start by observing that, according to Theorem 1, any unary regular language \(L\subseteq\sigma^{*}\) can be viewed as the disjoint union of two unary languages, namely, the finite language \(L_{T}=L\cap\sigma^{\leq T}\) plus the ultimately periodic language \(L_{P}=L\cap\sigma^{\geq T+1}\). So, we are going to design two lqfa modules recognizing these two languages with isolated cut point, and then suitably assemble such modules into a final isolated cut point lqfa \(A_{L}\) for the unary regular language \(L\).
**The finite language \(L_{T}\):** We define the "\((T+1)\)-periodic continuation" \(L_{T^{\circ}}\) of \(L_{T}\), namely, the language obtained from \(L_{T}\) by adding all the strings of the form \(\sigma^{i+h\cdot(T+1)}\), with \(h\geq 0\), for \(\sigma^{i}\in L_{T}\). Formally, \(L_{T^{\circ}}=\left\{\sigma^{i+h\cdot(T+1)}\ \mid\ h\geq 0\text{ and }\sigma^{i}\in L_{T}\right\}\). Clearly, \(L_{T^{\circ}}\) is a periodic language of period \((T+1)\), and we have that \(L_{T}=L_{T^{\circ}}\cap\sigma^{\leq T}\). Therefore, in order to recognize \(L_{T}\), we start by defining the isolated cut point lqfa\(A_{T^{\circ}}\) for \(L_{T^{\circ}}\). We let \(A_{T^{\circ}}=(Q,\left\{\sigma\right\},\pi_{0},\left\{U(\sigma),U(\sharp) \right\},\left\{\mathcal{O}_{\sigma},\mathcal{O}_{\sharp}\right\},Q_{acc})\), where: \(Q=\left\{q_{0},\ldots,q_{T}\right\}\) is the set of basis states, \(Q_{acc}=\left\{q_{i}\ \mid\ 0\leq i\leq T\text{ and }\sigma^{i}\in L_{T}\right\}\) is the set of accepting basis states, \(\pi_{0}=\boldsymbol{e}_{1}\) is the initial superposition, \(U(\sigma)=S\), where \(S\in\left\{0,1\right\}^{(T+1)\times(T+1)}\) is the matrix representing the cyclic permutation: \(S\) has \(1\) at the \((i,i+1)\)th entries for \(1\leq i\leq T\) and at the \((T+1,1)\)th entry, all the other entries are \(0\), \(U(\sharp)=I^{(T+1)}\), \(\mathcal{O}_{\sigma}\) is the observable having the identity as sole projector, \(\mathcal{O}_{\sharp}\) is the usual final observable projecting onto the subspace spanned by \(Q_{acc}\). Given the observable \(\mathcal{O}_{\sigma}\), we have that \(A_{T^{\circ}}\) is basically a mo-qfa whose induced event writes as \(p_{A_{T^{\circ}}}(\sigma^{k})=\|\pi_{0}\cdot U(\sigma)^{k}\cdot U(\sharp)\cdot P _{acc}(\sharp)\|^{2}\). After processing the input \(\sigma^{k}\sharp\), the state \(\xi(k)\) of \(A_{T^{\circ}}\) is
\[\xi(k)=\pi_{0}\cdot U(\sigma)^{k}\cdot U(\sharp)=\boldsymbol{e}_{1}\cdot U( \sigma)^{k}\cdot U(\sharp)=\boldsymbol{e}_{(k\bmod(T+1))+1}. \tag{6}\]
Let us now discuss measuring by the final observable, i.e., the action of the projector \(P_{acc}(\sharp)\) on the final superposition \(\xi(k)\). By (6), \(\xi(k)\) is \(\boldsymbol{e}_{(k\bmod(T+1))+1}\), representing the basis state \(q_{k\bmod(T+1)}\). By definition of \(Q_{acc}\) we have that \(q_{k\bmod(T+1)}\) is an accepting state if and only if \(\sigma^{k\bmod(T+1)}\in L_{T}\) if and only if \(\sigma^{k}\in L_{T^{\circ}}\). Therefore, we can rewrite the stochastic event induced by \(A^{T^{\circ}}\) as \(p_{A_{T^{\circ}}}(\sigma^{k})=\|\xi(k)\cdot P_{acc}(\sharp)\|^{2}=1\) if \(\sigma^{k}\in L_{T^{\circ}}\), and \(0\) otherwise. Whence, the lqfa\(A_{T^{\circ}}\) recognizes \(L_{T^{\circ}}\) by a deterministic event. Now, we need \(A_{T^{\circ}}\) to work simultaneously with a module which checks whether or not the input string has length not exceeding \(T\), so that the resulting accepted language is \(L_{T^{\circ}}\cap\sigma^{\leq T}=L_{T}\). Such a module can be obtained by complementing the lqfa\(M^{(T+1)}\) for \(\sigma^{\geq T+1}\) presented in Section 3 (basically, by taking \(Q\setminus Q_{acc}\) as the set of accepting basis states). The resulting lqfa\(\overline{M}^{(T+1)}\) induces the complement of the event in Equation (5) with \(\ell=T+1\):
\[p_{\overline{M}^{(T+1)}}(\boldsymbol{\sigma}^{k})=1-p_{M^{(T+1)}}(\boldsymbol {\sigma}^{k})\begin{cases}=1&\text{if }k\leq T\\ \leq 1-\left(\frac{n-1}{n}\right)^{(T+1)}&\text{if }k\geq T+1,\end{cases}\]
thus recognizing the language \(\boldsymbol{\sigma}^{\leq T}\) with isolated cut point and \(n^{O(T)}\) basis states. Finally, we build the lqfa\(A_{T^{\circ}}\otimes\overline{M}^{(T+1)}\) (basically by taking the direct product component wise of the two lqfa\(A_{T^{\circ}}\) and \(\overline{M}^{(T+1)}\)) inducing the product event
\[p_{A_{T^{\circ}}\otimes\overline{M}^{(T+1)}}(\boldsymbol{\sigma}^{k})=p_{A_{T^ {\circ}}}\cdot p_{\overline{M}^{(T+1)}}(\boldsymbol{\sigma}^{k})\begin{cases}=1& \text{if }\sigma^{k}\in L_{T}\\ \leq 1-\left(\frac{n-1}{n}\right)^{(T+1)}&\text{otherwise},\end{cases}\]
defining \(L_{T}\) with \((T+1)\cdot n^{O(T)}\) basis states, and cut point \(1-\frac{1}{2}\cdot\left(\frac{n-1}{n}\right)^{(T+1)}\) isolated by \(\frac{1}{2}\cdot\left(\frac{n-1}{n}\right)^{(T+1)}\). Notice that, for large values of \(n\), the lqfa\(A_{T^{\circ}}\otimes\overline{M}^{(T+1)}\) approximates a deterministic recognition of \(L_{T}\).
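The deterministic sub-module \(A_{T^{\circ}}\) is straightforward to simulate, since its evolution is just a cyclic shift of a one-hot vector; the product with \(\overline{M}^{(T+1)}\) then simply multiplies the resulting 0/1 event with the complement event above. The following sketch is ours (in particular, encoding \(L_{T}\) as the set of accepted lengths is an assumption of the sketch).

```python
import numpy as np

def cyclic_shift(m):
    """m x m permutation matrix S of the cyclic shift, row convention xi' = xi . S."""
    S = np.zeros((m, m))
    for i in range(m - 1):
        S[i, i + 1] = 1.0
    S[m - 1, 0] = 1.0
    return S

def p_AT_circle(k, T, lengths_in_LT):
    """Deterministic event of A_{T o}: accept iff (k mod (T+1)) is an accepted length."""
    S = cyclic_shift(T + 1)
    xi = np.zeros(T + 1); xi[0] = 1.0               # pi_0 = e_1
    xi = xi @ np.linalg.matrix_power(S, k)          # U(#) is the identity
    acc = np.zeros(T + 1); acc[list(lengths_in_LT)] = 1.0
    return float(np.sum((xi * acc) ** 2))           # squared norm after projecting onto Q_acc

# e.g. with T = 3 and L_T containing sigma^0 and sigma^2:
# p_AT_circle(2, 3, {0, 2}) == 1.0 and p_AT_circle(5, 3, {0, 2}) == 0.0
```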
**The ultimately periodic language \(L_{P}\):** It suites our goal to rewrite \(L_{P}\) as \(L_{P}=L_{P^{\circ}}\cap\sigma^{\geq T+1}\), where we let \(L_{P^{\circ}}=\left\{\sigma^{(T+1+i)\bmod{P}+h\cdot P}\ \mid\ 0\leq i<P,\ h\geq 0, \text{ and }\sigma^{T+1+i}\in L_{P}\right\}\). Clearly, \(L_{P^{\circ}}\) is a periodic language of period \(P\). So, for recognizing \(L_{P}\), we first focus on building the isolated cut point lqfa\(A_{P^{\circ}}\) for \(L_{P^{\circ}}\). We let \(A_{P^{\circ}}=(Q,\left\{\sigma\right\},\pi_{0},\left\{U(\sigma),U(\sharp) \right\},\left\{\mathcal{O}_{\sigma},\mathcal{O}_{\sharp}\right\},Q_{acc})\), where: \(Q=\left\{q_{0},\ldots,q_{P-1}\right\}\) is the set of basis states, \(Q_{acc}=\left\{q_{i}\ \mid\ 0\leq i<P\text{ and }\sigma^{i}\in L_{P^{\circ}}\right\}\) is the set of accepting basis states, \(\pi_{0}=\boldsymbol{e}_{1}\) is the initial superposition, \(U(\sigma)=S\), where \(S\in\left\{0,1\right\}^{P\times P}\) is the cyclic permutation matrix, \(U(\sharp)=I^{(P)}\),
\(\mathcal{O}_{\sigma}\) is the observable having the identity as sole projector, \(\mathcal{O}_{\sharp}\) is the usual final observable projecting onto the subspace spanned by \(Q_{acc}\). Given the observable \(\mathcal{O}_{\sigma}\), we have that \(A_{P^{\circ}}\) is basically a mo-qfa whose induced event writes as \(p_{A_{P^{\circ}}}(\sigma^{k})=\|\pi_{0}\cdot U(\sigma)^{k}\cdot U(\sharp)\cdot P_{acc}(\sharp)\|^{2}\). After processing the input \(\sigma^{k}\sharp\), the state \(\xi(k)\) of \(A_{P^{\circ}}\) is
\[\xi(k)=\pi_{0}\cdot U(\sigma)^{k}\cdot U(\sharp)=\boldsymbol{e}_{1}\cdot U( \sigma)^{k}\cdot U(\sharp)=\boldsymbol{e}_{(k\bmod P)+1}. \tag{7}\]
Let us now measure the final observable on the final superposition \(\xi(k)\). By (7), \(\xi(k)\) is \(\boldsymbol{e}_{(k\bmod P)+1}\), representing the basis state \(q_{k\bmod P}\). By definition of \(Q_{acc}\), we have that \(q_{k\bmod P}\) is an accepting state if and only if \(\sigma^{k}\in L_{P^{\circ}}\). Therefore, the stochastic event induced by \(A_{P^{\circ}}\) is \(p_{A_{P^{\circ}}}(\sigma^{k})=\left\|\xi(k)\cdot P_{acc}(\sharp)\right\|^{2}=1\) if \(\sigma^{k}\in L_{P^{\circ}}\), and 0 otherwise. Whence, the lqfa \(A_{P^{\circ}}\) recognizes \(L_{P^{\circ}}\) by a deterministic event. Now, we need \(A_{P^{\circ}}\) to work simultaneously with a module which checks whether or not the input string has length exceeding \(T\), so that the resulting accepted language is \(L_{P^{\circ}}\cap\sigma^{\geq T+1}=L_{P}\). Such a module is the lqfa \(M^{(T+1)}\) for \(\sigma^{\geq T+1}\) presented in Section 3, inducing the event
\[p_{M^{(T+1)}}(\sigma^{k})\begin{cases}\geq(\frac{n-1}{n})^{T+1}&\text{if $k\geq T+1$}\\ =0&\text{if $k\leq T$},\end{cases}\]
thus recognizing the language \(\sigma^{\geq T+1}\) with isolated cut point and \(n^{O(T)}\) basis states. Finally, we build the lqfa \(A_{P^{\circ}}\otimes M^{(T+1)}\), inducing the product event
\[p_{A_{P^{\circ}}\otimes M^{(T+1)}}(\sigma^{k})=p_{A_{P^{\circ}}}\cdot p_{M^{(T +1)}}(\sigma^{k})\begin{cases}\geq(\frac{n-1}{n})^{T+1}&\text{if $\sigma^{k}\in L_{P}$}\\ =0&\text{otherwise},\end{cases}\]
defining \(L_{P}\) with \(P\cdot n^{O(T)}\) basis states, and cut point \(\frac{1}{2}\cdot\left(\frac{n-1}{n}\right)^{(T+1)}\) isolated by \(\frac{1}{2}\cdot\left(\frac{n-1}{n}\right)^{(T+1)}\). Notice that, for large values of \(n\), the lqfa\(A_{P^{\circ}}\otimes M^{(T+1)}\) approximates a deterministic recognition of \(L_{P}\).
**Putting things together:** We are now ready to suitably assemble the two lqfas \(A_{T}=A_{T^{\circ}}\otimes\overline{M}^{(T+1)}\) and \(A_{P}=A_{P^{\circ}}\otimes M^{(T+1)}\) so far described to obtain an isolated cut point lqfa \(A_{L}\) for the unary regular language \(L\). We notice that \(L=L_{T}\cup L_{P}=(L_{T}^{c}\cap L_{P}^{c})^{c}\). This suggests first constructing lqfas for \(L_{T}^{c}\) and \(L_{P}^{c}\) by building \(\overline{A}_{T}\) and \(\overline{A}_{P}\) inducing the complement events \(p_{\overline{A}_{T}}=1-p_{A_{T}}\) and \(p_{\overline{A}_{P}}=1-p_{A_{P}}\), respectively. Next, to account for the intersection, we construct the lqfa \(\overline{A}_{L}=\overline{A}_{T}\otimes\overline{A}_{P}\) inducing the product event \(p_{\overline{A}_{L}}=(1-p_{A_{T}})\cdot(1-p_{A_{P}})\). Finally, the desired lqfa \(A_{L}\) will be obtained by complementing \(\overline{A}_{L}\), so that \(p_{A_{L}}=(1-p_{\overline{A}_{L}})=1-(1-p_{A_{T}})\cdot(1-p_{A_{P}})=p_{A_{T}}+p_{A_{P}}-p_{A_{T}}\cdot p_{A_{P}}\).
Let us now explain how \(p_{A_{L}}\) behaves on input string \(\sigma^{k}\):
* \(\sigma^{k}\in L=L_{T}\cup L_{P}\): Clearly, we have either \(\sigma^{k}\in L_{T}\) or \(\sigma^{k}\in L_{P}\). Suppose \(\sigma^{k}\in L_{T}\). Then, we have that \(p_{A_{T}}(\sigma^{k})=1\) since \(A_{T}=A_{T^{\circ}}\otimes\overline{M}^{(T+1)}\) and both its sub-modules will accept with certainty; correspondingly, \(p_{A_{P}}(\sigma^{k})=0\) since \(A_{P}=A_{P^{\circ}}\otimes M^{(T+1)}\) and the sub-module \(M^{(T+1)}\) accepts with 0 probability the input strings of length less than or equal to \(T\). Globally, we have \(p_{A_{L}}(\sigma^{k})=1\). Suppose \(\sigma^{k}\in L_{P}\). Then, we have that \(p_{A_{P}}(\sigma^{k})\geq\left(\frac{n-1}{n}\right)^{T+1}\) since \(A_{P^{\circ}}\) accepts with certainty, while the sub-module \(M^{(T+1)}\) accepts with probability not less than \(\left(\frac{n-1}{n}\right)^{T+1}\). Let us now focus on \(A_{T}\). The sub-module \(A_{T^{\circ}}\) could accept with probability either 0 or 1. In the former case, globally we have \(p_{A_{L}}(\sigma^{k})\geq\left(\frac{n-1}{n}\right)^{T+1}\); in the latter, the sub-module \(\overline{M}^{(T+1)}\) accepts with a probability bounded above by \(1-\left(\frac{n-1}{n}\right)^{T+1}\). By letting \((1-y)\) be the acceptance probability of \(\overline{M}^{(T+1)}\), with \(0\leq y\leq\left(\frac{n-1}{n}\right)^{T+1}\), we get \(p_{A_{L}}(\sigma^{k})\geq\left(\frac{n-1}{n}\right)^{T+1}+(1-y)-\left(\frac{n-1}{n}\right)^{T+1}\cdot(1-y)\geq\left(\frac{n-1}{n}\right)^{T+1}\). In conclusion, for any \(\sigma^{k}\in L\), we have \(p_{A_{L}}(\sigma^{k})\geq\left(\frac{n-1}{n}\right)^{T+1}\).
* \(\sigma^{k}\not\in L=L_{T}\cup L_{P}\): Clearly, both \(\sigma^{k}\not\in L_{T}\) and \(\sigma^{k}\not\in L_{P}\). By assuming \(k\leq T\), we must have \(\sigma^{k}\not\in L_{T^{\circ}}\). Therefore, the sole acceptance probability contribution could come from the module \(A_{P}=A_{P^{\circ}}\otimes M^{(T+1)}\). However, since \(k\leq T\), the sub-module \(M^{(T+1)}\) accepts with 0 probability. So, \(p_{A_{L}}(\sigma^{k})=0\). Instead, by assuming \(k\geq T+1\), we must have that \(\sigma^{k}\not\in L_{P^{\circ}}\). Thus, the sole acceptance probability could come from the module \(A_{T}\). However, the acceptance probability yielded by the sub-module \(\overline{M}^{(T+1)}\) turns out to be at most \(1-\left(\frac{n-1}{n}\right)^{T+1}\). In conclusion, for any \(\sigma^{k}\not\in L\), we have \(p_{A_{L}}(\sigma^{k})\leq 1-\left(\frac{n-1}{n}\right)^{T+1}\).
Summing up, the stochastic event induced by the lqfa\(A_{L}\) is
\[p_{A_{L}}(\sigma^{k})\begin{cases}\geq\left(\frac{n-1}{n}\right)^{T+1}&\text {if }\sigma^{k}\in L\\ \leq 1-\left(\frac{n-1}{n}\right)^{T+1}&\text{otherwise}.\end{cases} \tag{8}\]
By the event in Equation (8), we get that \(A_{L}\) recognizes \(L\) with the following cut point and isolation radius:
\[\lambda=\frac{1}{2}\cdot\left(\left(\frac{n-1}{n}\right)^{T+1}+1-\left(\frac{ n-1}{n}\right)^{T+1}\right)=\frac{1}{2},\ \ \ \rho=\frac{1}{2}\cdot\left(\left(\frac{n-1}{n}\right)^{T+1}-1+\left(\frac{n-1}{ n}\right)^{T+1}\right)=\left(\frac{n-1}{n}\right)^{T+1}-\frac{1}{2}.\]
Clearly, to have an isolation around \(\lambda\), we must require that \(\rho>0\). This can always be achieved for any \(T>0\) by imposing \(\left(\frac{n-1}{n}\right)^{T+1}>\frac{1}{2}\), which is attained whenever \(n>\frac{1}{1-\sqrt[T+1]{1/2}}\). This latter condition is satisfied, e.g., by letting \(n=4\,T\) for any \(T>0\). Moreover, the isolation radius \(\rho\) tends to \(\frac{1}{2}\) as \(n\) grows.
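A quick numerical check (ours, and relying on the bound \(n>\frac{1}{1-\sqrt[T+1]{1/2}}\) as reconstructed above) confirms that the choice \(n=4\,T\) always yields a strictly positive isolation radius:

```python
# sanity check of the choice n = 4T for T = 1, ..., 29 (assumes the reconstructed bound)
for T in range(1, 30):
    n_min = 1.0 / (1.0 - 0.5 ** (1.0 / (T + 1)))   # threshold on n for rho > 0
    n = 4 * T
    rho = ((n - 1) / n) ** (T + 1) - 0.5            # isolation radius of A_L
    assert n > n_min and rho > 0
```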
Let us inspect the size of the lqfa\(A_{L}=\overline{\overline{A_{T}}\otimes\overline{A_{P}}}\). As above pointed out, \(A_{T}\) and \(A_{P}\) have, respectively, \((T+1)\cdot n^{O(T)}\) and \(P\cdot n^{O(T)}\) basis states. The complements \(\overline{A}_{T}\) and \(\overline{A}_{P}\) maintain the same number of basis states, while the product \(\overline{A}_{T}\otimes\overline{A}_{P}\) requires \(((T+1)\cdot n^{O(T)})\cdot(P\cdot n^{O(T)})\leq T\cdot P\cdot n^{O(T)}\) basis states. The final complement \(\overline{\overline{A_{T}}\otimes\overline{A_{P}}}\) maintains the same number of basis states. By replacing \(n\) with \(4\,T\), as above suggested, the number of basis states of the isolated cut point lqfa\(A_{L}\) for \(L\) becomes \(P\cdot T^{O(T)}\).
## 5 Conclusions
In this work, we have exhibited a modular framework for building isolated cut point lqfas for unary regular languages. By suitably adapting to the unary case an inductive construction in [2, 15], we have first designed lqfas discriminating unary inputs on the basis of their length. These devices have then been plugged into two sub-modules recognizing the finite part and the ultimately periodic part any unary regular language consists of. The resulting lqfa recognizes a unary regular language \(L\) with isolated cut point \(\frac{1}{2}\), and a number of basis states which is exponential in the number of states of the minimal dfa for \(L\). In spite of this exponential size blow up, it should be stressed that more restricted models of quantum finite automata in the literature, such as mo-qfas, cannot recognize all unary regular languages. On the other hand, a linear amount of basis states is sufficient for the more powerful model of isolated cut point mm-qfas[6]. Thus, it would be worth investigating whether a more size efficient construction for unary lqfas could be provided. Another interesting line of research might explore the descriptional power (see, e.g., [3, 9, 13, 14] for topics in descriptional complexity) of isolated cut point lqfas with respect to other relevant classes of subregular languages such as, e.g., commutative regular languages [21].
**Acknowledgements.** The authors wish to thank the anonymous referees for their valuable comments. |
2308.16637 | Learning Channel Importance for High Content Imaging with Interpretable
Deep Input Channel Mixing | Uncovering novel drug candidates for treating complex diseases remains one of
the most challenging tasks in early discovery research. To tackle this
challenge, biopharma research established a standardized high content imaging
protocol that tags different cellular compartments per image channel. In order
to judge the experimental outcome, the scientist requires knowledge about the
channel importance with respect to a certain phenotype for decoding the
underlying biology. In contrast to traditional image analysis approaches, such
experiments are nowadays preferably analyzed by deep learning based approaches
which, however, lack crucial information about the channel importance. To
overcome this limitation, we present a novel approach which utilizes
multi-spectral information of high content images to interpret a certain aspect
of cellular biology. To this end, we base our method on image blending concepts
with alpha compositing for an arbitrary number of channels. More specifically,
we introduce DCMIX, a lightweight, scalable and end-to-end trainable mixing
layer which enables interpretable predictions in high content imaging while
retaining the benefits of deep learning based methods. We employ an extensive
set of experiments on both MNIST and RXRX1 datasets, demonstrating that DCMIX
learns the biologically relevant channel importance without sacrificing
prediction performance. | Daniel Siegismund, Mario Wieser, Stephan Heyse, Stephan Steigele | 2023-08-31T11:11:38Z | http://arxiv.org/abs/2308.16637v1 | # Learning Channel Importance for High Content Imaging with Interpretable Deep Input Channel Mixing
###### Abstract
Uncovering novel drug candidates for treating complex diseases remains one of the most challenging tasks in early discovery research. To tackle this challenge, biopharma research established a standardized high content imaging protocol that tags different cellular compartments per image channel. In order to judge the experimental outcome, the scientist requires knowledge about the channel importance with respect to a certain phenotype for decoding the underlying biology. In contrast to traditional image analysis approaches, such experiments are nowadays preferably analyzed by deep learning based approaches which, however, lack crucial information about the channel importance. To overcome this limitation, we present a novel approach which utilizes multi-spectral information of high content images to interpret a certain aspect of cellular biology. To this end, we base our method on image blending concepts with alpha compositing for an arbitrary number of channels. More specifically, we introduce DCMIX, a lightweight, scalable and end-to-end trainable mixing layer which enables interpretable predictions in high content imaging while retaining the benefits of deep learning based methods. We employ an extensive set of experiments on both MNIST and RXRX1 datasets, demonstrating that DCMIX learns the biologically relevant channel importance without sacrificing prediction performance.
Keywords: Biomedical Imaging · Interpretable Machine Learning · Explainable AI · Image Channel Importance.
## 1 Introduction
High-Content Imaging (HCI) has developed into one of the main driving factors in biopharma early discovery research to reveal novel drug candidates for sophisticated treatment strategies such as cancer immunotherapies [21]. HCI is based on a standardized experimental protocol that allows for the systematic acquisition of multi-spectral images, e.g., in the form of a cell painting assay protocol that requires a high number of channels with the benefit of a highly generalizable assay [3]. Here, high-content images are recorded by automated instruments on microtiter plates which allow for large-scale drug candidate testing and an automatic analysis procedure to assess the mechanics of a drug candidate for a certain disease. When running such HCI experiments, scientists typically prepare a set of 4 to 15
channels [23; 31] with a specific fluorophore that tags a certain cellular protein or compartment. Subsequently, the scientist aims to analyze the experimental outcome with respect to the importance of the fluorescence channels to validate the findings or refine the experiment and, therefore, requires a fast and easy-to-use analysis workflow. This is particularly important as the specific functional or mechanistic knowledge is encoded via the specific staining per image channel [3] and hence required for decoding the underlying biology.
However, to analyze such complex multi-channel cell-painting assays, the scientist requires sophisticated image analysis to distill the relevant information from the multi-spectral channels. In biopharma research, the traditional analysis [5] is gradually being replaced by deep learning based approaches [40; 14; 48; 39]. Despite the superior performance of such models in comparison to conventional segmentation based analysis [5], the scientist lacks insight into which fluorescence channel influenced the decision [7].
In the past, various approaches have been proposed to extract the most relevant information from high-dimensional datasets. The most basic approach to determine the most relevant channels is a preprocessing step that applies an unsupervised dimensionality reduction method such as Principal Component Analysis (PCA) [17]. However, employing such a preprocessing step does not guarantee phenotype-specific channels as the method only optimizes for the directions with the highest variance and not necessarily for the highest phenotypic information. More recently, attention-based approaches have been introduced for image channel selection [4; 15; 24; 32] which suffer from high computational costs and poor scalability. In addition, there are model-agnostic approaches such as Shapley values [36; 19] which, however, can suffer from sampling variability [27] and be time consuming for highly complex models [6].
To overcome the aforementioned limitations, we present a simple yet effective method to estimate channel importance for HCI images. More specifically, we introduce a lightweight, easy-to-use mixing layer that is composed of a generalized image blending mechanism with alpha compositing [50; 1] which converts a \(d\)-dimensional channel image into a 2D image retaining all phenotype-relevant information. This not only allows for incorporating an arbitrary number of channels in a highly scalable fashion but also leads to a reduced network size with faster inference times while facilitating the use of transfer learning with pretrained networks. To summarize, we make the following contributions:
* We extend the image blending concepts of [50; 1] and apply these to images with an arbitrary number of channels.
* We encapsulate the generalized image blending into a lightweight, scalable and end-to-end trainable mixing layer, called DCMIX, to estimate channel importance for multi-spectral HCI data.
* Experiments on MNIST as well as on the challenging multi-channel real-world imaging data set RXRX1 [42] with 31 different cell phenotype classes demonstrate that the proposed method learns the correct channel importance without sacrificing its model performance.
## 2 Related Work
In this section, we review related work on interpretable and explainable machine learning [30]. Broadly speaking, we can distinguish between interpretable models that are interpretable by design and explainable models that try to explain existing models post-hoc [30].
#### 2.0.1 Interpretable Machine Learning Methods
can be separated into the following model classes: score-based [44], rule-based [9], sparse [43] and neural networks [12], among others [30]. In this review, we focus more closely on sparsity-inducing and attention-based interpretable methods. Sparsity-based approaches introduce a sparsity constraint on the model coefficients to determine the feature importance. One of the most basic approaches is the least absolute shrinkage and selection operator (LASSO) introduced by [43] which employs the \(L_{1}\)-norm to ensure feature sparsity. This approach has subsequently been extended to various lines of research, including dealing with grouped features [49, 33], estimating network graphs [13, 34] or learning sparse representations in neural networks [26, 47]. Most closely related to our work is LassoNet [22] which employs a group lasso constraint based on the feature channels that are obtained from a pretrained feature extraction network. In contrast, our approach is end-to-end trainable and hence does not require a two-step approach of feature extraction and importance estimation. More recently, attention-based approaches [45] have emerged in the context of interpretable machine learning. [8] introduced an attention-based model for the analysis of electronic health records and [37] learns important features with an attentive mixture of experts approach. Moreover, attention is used in the context of hyperspectral band/channel selection [4, 15, 24, 32]. In contrast, our approach works on image blending and alpha compositing and hence reduces the high computational costs.
#### 2.0.2 Explainable Machine Learning Methods
denote approaches that aim to explain decisions of an already trained machine learning model post-hoc by learning a surrogate model [30]. In summary, we distinguish between attribution methods that try to quantify the attribution of a feature to the prediction [41], concept-based explanations trying to explain predictions with high-level concepts [20], symbolic metamodels employing symbolic regression as a surrogate [2] and counterfactual explanations [46]. In the context of our work, we focus on attribution models. [35] learns a surrogate classifier to explain an arbitrary black-box model based on submodular optimization. [38] introduced DeepLIFT to decompose the input contributions to the model prediction. In addition, Shapley values gained a wide adoption in the machine learning domain mainly for feature selection and model explainability [36, 19]. As a result, Lundberg & Lee [28] introduced Shapley additive explanations (SHAP) to explain model predictions based on Shapley regression values. Finally, Shapley values have been used in the context of HCI channel importance estimation [42]. More specifically, the authors adopt Shapley values to explain the channel importance of HCI images from a
pretrained black-box model. Opposed to our approach, this method requires the training of two separate models and hence does not allow for end-to-end training.
## 3 Model
As illustrated in Figure 1, we utilize a two-step approach for estimating channel importance in multi-spectral bioimage classification settings by introducing a lightweight, easy-to-use and end-to-end trainable mixing layer. To do so, we propose a blending layer which combines the most important parts of the distinct channels into a new 2D image. Afterwards, we perform classification based on the blended image.
### Conception of the Image Blending Layer
Figure 1: Blue arrows denote steps and gray boxes actions in our workflow, respectively. In the first step (1.), we take a multi-channel cellular image and split it into single channels. Subsequently, we mix the channels within our DCMIX layer to obtain the most important part of each channel. In the second step (2.), we feed the blended image into our classification network.

We start with an input image \(I\in\mathbb{R}^{h\times w\times c}\) where \(h\) denotes the height, \(w\) the width and \(c\) the number of channels in the multi-spectral image. Subsequently, the image \(I\) is split into its distinct channels and processed in the DCMIX layer. The DCMIX layer is inspired by simple image blending and alpha compositing [50, 1]. More specifically, the idea behind alpha blending is to combine two images as follows:
\[C=\alpha_{1}\cdot A_{1}+(1-\alpha_{1})\cdot A_{2}, \tag{1}\]
where \(A_{1}\in\mathbb{R}^{h\times w\times c}\) and \(A_{2}\in\mathbb{R}^{h\times w\times c}\) are the corresponding image matrices to blend and \(C\in\mathbb{R}^{h\times w\times c}\) is the blended image matrix. The trainable parameter \(\alpha_{1}\) determines the transparency of each channel.
In this work, we take advantage of the ideas proposed in [50, 1] and generalize them by employing trainable alpha values as weights for each channel to be blended:
\[C=\sum_{i}^{n}\alpha_{i}\cdot A_{i}\text{ where: }\alpha_{i}\geq 0, \tag{2}\]
where \(\alpha_{i}\) is multiplied with each channel \(A_{i}\). The parameter \(n\) defines the number of channels and \(C\) is the blended image, which is subsequently used for further analysis.
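For concreteness, the following is a minimal PyTorch-style sketch of a mixing layer implementing Eq. 2. The class name `DCMixLayer` and the softplus reparameterization used to enforce \(\alpha_{i}\geq 0\) are our own illustrative assumptions; the paper only specifies the non-negativity constraint itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DCMixLayer(nn.Module):
    """Blend a multi-channel image into one 2D image via C = sum_i alpha_i * A_i (Eq. 2)."""
    def __init__(self, num_channels: int):
        super().__init__()
        # one trainable mixing weight per input channel
        self.raw_alpha = nn.Parameter(torch.ones(num_channels))

    def forward(self, x: torch.Tensor):
        # x: (batch, channels, height, width)
        alpha = F.softplus(self.raw_alpha)                       # keep alpha_i >= 0
        blended = (x * alpha.view(1, -1, 1, 1)).sum(dim=1, keepdim=True)
        return blended, alpha

# toy usage: blend a 6-channel image into a single channel
layer = DCMixLayer(num_channels=6)
blended, alpha = layer(torch.rand(2, 6, 192, 192))
print(blended.shape, alpha.shape)   # torch.Size([2, 1, 192, 192]) torch.Size([6])
```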
### Classifying Genetic Perturbations based on DCMIX-blended Images
Our goal is to learn a classification model \(F_{\theta}(y\mid C)\) of the blended image \(C\) for distinct classes of genetic perturbations \(y^{c}\) where \(c\) is the number of genetic perturbations to be predicted. In this work, our model \(F\) is a Deep Convolutional Neural Network which extracts a cascade of feature maps \(M^{l}\) where \(l\) denotes the current layer. The last feature map is used as an input to the multi-class classification head that predicts the genetic perturbation vector \(y^{c}\) using a softmax output.
### End-to-End Training Algorithm
The model training is described in Algorithm 1. As an input, we use the multi-spectral images \(X\) and the genetic perturbation labels \(y\). Subsequently, we draw minibatches from the training data \(X,y\) (line 1). For each of the minibatches, we obtain the blended images \(c_{i}\) as well as the corresponding mixing factors \(\alpha_{i}\) (line 2). The blended images \(c_{i}\) are fed into the neural network \(F_{\theta}\) (line 3) and the corresponding predictions \(\hat{y_{i}}\) are used to calculate the loss in line 4. Finally, we update the parameters \(\theta\) and \(\alpha\) based on the loss by using gradient descent (line 5).
```
Input: multi-spectral images \(X\), labels \(y\)
Output: predictions \(\hat{y}\), mixing factors \(\alpha\)
1: for minibatch \(x_{i},y_{i}\) from \(X,y\) do
2:   \(c_{i},\alpha_{i}\leftarrow\text{DCMIX}(x_{i})\)
3:   \(\hat{y_{i}}\gets F_{\theta}(c_{i})\)
4:   \(\text{loss}\leftarrow\text{crossentropy}(\hat{y_{i}},y_{i})\)
5:   update \(\theta,\alpha\) using gradient descent
6: end for
```
**Algorithm 1** DCMIX training algorithm
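The training loop below is a compact PyTorch-style rendering of Algorithm 1, assuming the `DCMixLayer` sketched in Section 3.1 and an arbitrary image classifier \(F_{\theta}\); the Adam optimizer and learning rate are illustrative choices, not prescribed by the paper.

```python
import torch
import torch.nn as nn

def train_dcmix(mix_layer, classifier, loader, epochs=1, lr=1e-3):
    """Jointly optimize the mixing factors alpha and the classifier parameters theta."""
    params = list(mix_layer.parameters()) + list(classifier.parameters())
    optimizer = torch.optim.Adam(params, lr=lr)        # optimizer choice is an assumption
    criterion = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x_i, y_i in loader:                        # minibatches of images and labels
            c_i, alpha_i = mix_layer(x_i)              # blend the channels (DCMIX step)
            y_hat = classifier(c_i)                    # predict the genetic perturbation class
            loss = criterion(y_hat, y_i)               # cross-entropy loss
            optimizer.zero_grad()
            loss.backward()                            # gradients w.r.t. both theta and alpha
            optimizer.step()
    return mix_layer, classifier
```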
## 4 Experiments

### 4.1 MNIST
#### 4.1.1 Dataset.
To demonstrate the efficacy of DCMIX for estimating channel importance, we generate an artificial dataset based on MNIST [10]. MNIST consists of 70000 samples with images \(x\in\mathbb{R}^{28\times 28\times 1}\) and labels \(y\) that represent numbers from 0 to 9. For our dataset, we randomly select a subset of 10000 samples from MNIST. In order to assess the channel importance, we extend the MNIST images with two additional noise channels. To this end, we draw two noise matrices of shape \(28\times 28\) from a uniform distribution defined on \([0,255]\). Subsequently, we add the previously generated noise channels to the input image such that we obtain a three-channel input image \(x\in\mathbb{R}^{28\times 28\times 3}\), where the first channel is the most important one. For training, we split the data into a 70 percent training and a 30 percent hold-out set. The training set is further split into an 80 percent training and a 20 percent validation set, respectively.
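A small NumPy sketch of how such a three-channel sample could be assembled is given below; the channel-last layout and the helper name are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise_channels(digit: np.ndarray) -> np.ndarray:
    """Stack a 28x28 MNIST digit (values in [0, 255]) with two uniform-noise channels."""
    noise1 = rng.uniform(0, 255, size=(28, 28))
    noise2 = rng.uniform(0, 255, size=(28, 28))
    # channel 1 carries the signal; channels 2 and 3 are pure noise
    return np.stack([digit, noise1, noise2], axis=-1)

digit = rng.integers(0, 256, size=(28, 28)).astype(float)   # stand-in for a real MNIST image
sample = add_noise_channels(digit)
print(sample.shape)   # (28, 28, 3)
```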
#### 4.1.2 Models.
In order to demonstrate the effectiveness of our approach, we benchmark DCMIX against a plain LCNet050, LassoNet [22] as well as on an attention-based [29, 25] LCNet050.
#### 4.1.3 Quantitative Evaluation.
Channel Importance
In this experiment, we evaluate the channel importance on the validation set, and the results are reported in Table 1. As we can observe in the channel importance ranking, DCMIX effectively identifies channel one as the most important channel and is in line with the more complex LassoNet and attention-based LCNet050. At the same time, DCMIX requires only a fraction of the GFLOPS and model parameters. More specifically, DCMIX requires solely 5.9271 GFLOPS compared to 17.809 GFLOPS for the Attention-LCNet050. In addition, DCMIX needs more than three times fewer parameters (0.2789 million) than Attention-LCNet050 (0.9281 million) and requires essentially the same number of GFLOPS and parameters as the plain LCNet050.
#### 4.1.4 Quantitative Evaluation.
Model Performance
Despite the fact that the aim of this method is not to improve the model performance but rather to learn the most important channel to gain biological insights for a drug discovery experiment, we want to ensure that DCMIX achieves competitive performance compared to state-of-the-art approaches. To do so, we compared DCMIX to a plain LCNet050, LassoNet and Attention-LCNet050 in Table 2. Here, we observe that DCMIX obtains competitive results compared to both LCNet050 and Attention-LCNet050 and outperforms LassoNet on accuracy, precision, recall and f1-score measures.
### 4.2 RXRX1
**Dataset.** For our real-world experiment, we employ the RXRX1 dataset [42], which consists of 125510 512x512 px fluorescence microscopy images (6 channels) of four different human cell lines that are perturbed with 1138 genetic perturbations (including 30 different positive control perturbations). In this study, we used the 30 positive control siRNAs plus the non-active control as training data, which leads to 31 classes in total. All images were normalized using the 1st and 99th percentiles, after which we extracted image patches with a size of 192x192 px and an offset of 96 px. This step leads to 32776 image patches. For training,
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Method & Channel importance ranking & Channel weights & GFLOPS & \# Parameters (million) \\ \hline LCNet050 & - & - & 5.9269 & 0.2789 \\ LassoNet [22] & 1,3,2 & 120259, 51003, 52318 & - & - \\ Attention[29, 25]- & 1,3,2 & 1,\(3.24\times 10^{-11}\), \(2.33\times 10^{-6}\) & 17.809 & 0.9281 \\ LCNet050 & & & & \\ \hline \hline
**DCMIX-LCNet050** & 1,3,2 & 0.82,0.21,0.22 & 5.9271 & 0.2789 \\ \hline \end{tabular}
\end{table}
Table 1: Results of the MNIST channel importance and model size. Channel importance ranking denotes the rank of the weights depicted in the second column. The model size is evaluated on GFLOPS and the number of model parameters where lower is better.
\begin{table}
\begin{tabular}{|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|p{56.9pt}|} \hline Method & Accuracy & Precision & Recall & F1-Score \\ \hline LCNet050 & 0.992 (0.0008) & 0.991 (0.002) & 0.991 (0.002) & 0.991 (0.002) \\ LassoNet [22] & 0.963 (0.012) & 0.888 (0.002) & 0.888 (0.002) & 0.887 (0.002) \\ Attention[29, 25]- & 0.992 (0.002) & 0.991 (0.001) & 0.991 (0.001) & 0.991 (0.001) \\ LCNet050 & & & & \\ \hline \hline
**DCMIX-LCNet050** & 0.991 (0.002) & 0.990 (0.002) & 0.990 (0.002) & 0.990 (0.002) \\ \hline \end{tabular}
\end{table}
Table 2: Results of model performance for the MNIST dataset on the hold-out dataset. We assess the model performance on four different metrics: accuracy, precision, recall and f1-score where higher is better. Values in brackets denote the standard deviation.
we split the data into a 70 percent training and a 30 percent hold-out set. The training set is further split into an 80 percent training and a 20 percent validation set, respectively.
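The preprocessing described above can be sketched as follows; the exact clipping and rescaling behaviour after the percentile cut is our assumption.

```python
import numpy as np

def normalize_percentile(img: np.ndarray, lo=1, hi=99) -> np.ndarray:
    """Clip each channel to its [1, 99] percentile range and rescale to [0, 1]."""
    out = np.empty_like(img, dtype=float)
    for c in range(img.shape[-1]):
        p_lo, p_hi = np.percentile(img[..., c], [lo, hi])
        out[..., c] = np.clip((img[..., c] - p_lo) / (p_hi - p_lo + 1e-8), 0, 1)
    return out

def extract_patches(img: np.ndarray, size=192, offset=96):
    """Slide a size x size window with the given offset over an (H, W, C) image."""
    h, w, _ = img.shape
    return [img[y:y + size, x:x + size]
            for y in range(0, h - size + 1, offset)
            for x in range(0, w - size + 1, offset)]

patches = extract_patches(normalize_percentile(np.random.rand(512, 512, 6)))
print(len(patches))   # 16 patches per 512x512 image
```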
#### 4.2.3 Models
For the real-world RXRX1 experiment, we compare DCMIX to LassoNet [22] and the attention-based [29, 25] LCNet050.
#### 4.2.4 Quantitative Evaluation. Channel Importance
Here, we describe the evaluation results on channel importance for the RXRX1 dataset, which are illustrated in Table 3. To do so, we compare the results to the ground truth introduced in [42]. The experiment was manually designed by a scientist in the laboratory such that channels four and two hold the most important biological information and channel 6 contains no important information for the phenotype. Keeping this information in mind, we assess the channel importance of DCMIX, LassoNet and Attention-LCNet050. Here, we can confirm that DCMIX learns the two most important channels four and two and the least important channel 6. These findings are also supported by Attention-LCNet050, which learned equivalent importance values. In contrast, LassoNet fails to uncover the correct channel importance by selecting the least important channel as the most important one. Despite finding the same important channels, DCMIX is 6-8 times faster and requires 6 times fewer parameters compared to the attention-based network, and can be used in an end-to-end fashion, which is not feasible for LassoNet.
#### 4.2.5 Quantitative Evaluation. Model Performance
In this experiment, we evaluate the model performance of DCMIX to LassoNet and Attention-LCNet050 and illustrate the results in Table 4. Here, we observe that DCMIX outperforms both LassoNet and Attention-LCNet050 in terms of accuracy by five and seven
\begin{table}
\begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Method & Importance ranking & Channel weights (in Chanel order) & GFLOPS & \# parameters (millions) \\ \hline ViT-B16-Imagenet21k & 6,4,1,5,2,3 & 73084, 52526, 31138, 87881, & - & - \\
[11] + LassoNet [22] & & 55612, 107733 & & - \\ Attention-LCNet050 & 4,2,5,1,3,6 & 0.15, 0.17, 0.008, 0.48, 0.16, & 35.61 & 1.75 \\ & & 0.007 & & \\ \hline \hline
**DCMIX-LCNet050** & 4,2,3,5,1,6 & 0.30, 0.69, 0.38, 1.06, 0.36, 0.21 & 5.95 & 0.27 \\ \hline \end{tabular}
\end{table}
Table 3: RXRX1 channel importance evaluation for the HepG2 cell line. The importance ranking illustrates the most important channels form left to right based on the weights depicted in the second column. In addition, model statistics are measured in GFLOPS and the number of model parameters (lower is better).
percent, respectively. Furthermore, these findings are confirmed by precision, recall and f1-scores, where DCMIX outperforms both competitors by approximately five and seven percent.
## 5 Discussion
#### 5.0.1 DCMIX demonstrates state-of-the-art channel importance scores in fluorescence cellular imaging
DCMIX employs image blending to estimate the importance of each image channel. In Figure 2, we provide an overview of the Spearman rank correlation of the channel importance estimates for all tested methods. The results are comparable for all methods (except for LassoNet), with a Spearman \(\rho\) always larger than 0.83. Especially the correlations of DCMIX and Attention-LCNet050 to the ground-truth Shapley values from [42] are evident, with a Spearman \(\rho\) of 0.89. Both methods estimate channels 2 and 4 as most important and channel 6 as least important, which was the intentional experimental design and was furthermore shown via Shapley values [42]. The authors explained their finding with a very large spectral overlap of the fluorescence signal from channels 2 and 4 to any other channel, rendering them more important [42].
In contrast, LassoNet does not show any overlap with the rankings selected by all other methods (Figure 2) with a maximal Spearman \(\rho\) value of -0.08.
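The ranking comparison behind Figure 2 boils down to a Spearman rank correlation between per-channel weight vectors; a minimal example with the DCMIX and Attention-LCNet050 weights from Table 3 is shown below.

```python
from scipy.stats import spearmanr

# per-channel weights in channel order 1..6, as reported in Table 3
dcmix     = [0.30, 0.69, 0.38, 1.06, 0.36, 0.21]
attention = [0.15, 0.17, 0.008, 0.48, 0.16, 0.007]

rho, _ = spearmanr(dcmix, attention)
print(round(rho, 2))   # ~0.83: both methods rank channels 4 and 2 as the most important
```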
#### 5.0.2 DCMIX achieves state-of-the-art classification performance with lower model complexity
Across all classification metrics, DCMIX achieves competitive results on MNIST and state-of-the-art performance on the real-world RXRX1 compared to its competitors. Intuitively, we attribute the competitive results on MNIST to the simplicity of the problem, which is further supported by the high classification scores of 99% (Table 2). Concurrently, DCMIX requires merely a fraction of the model parameters in all experiments (Tables 1 and 3) compared to the baselines.
#### 5.0.3 Practical runtimes for DCMIX are 6-8 times faster than Attention-based approaches
While DCMIX requires only 5.9271 GFLOPS on MNIST and 5.95
\begin{table}
\begin{tabular}{|l|l|l|l|l|} \hline Method & Accuracy & Precision & Recall & F1-Score \\ \hline ViT-B16-Imagenet21k & 0.695 (0.004) & 0.705 (0.005) & 0.705 (0.004) & 0.704 (0.005) \\
[11] + LassoNet [22] & & & & \\ Attention[29, 25]- & 0.744 (0.019) & 0.753 (0.014) & 0.747 (0.014) & 0.747 (0.013) \\ LCNet050 & & & & \\ \hline \hline
**DCMIX-LCNet050** & 0.765 (0.004) & 0.77 (0.037) & 0.77 (0.042) & 0.764 (0.043) \\ \hline \end{tabular}
\end{table}
Table 4: Results of model performance for the RXRX1 dataset on the hold-out dataset. We asses the model performance on four different metrics: accuracy, precision, recall and f1-score where higher is better. Values in brackets denote the standard deviation.
GFLOPS on RXRX1, achieving the same computational performance as plain LCNet050, Attention-LCNet050 needs 17.809 GFLOPS on MNIST and 35.614 GFLOPS on RXRX1, respectively (Table 1 and Table 3). Moreover, even post-hoc approaches such as Shapley values that are computed on top of a trained black-box model often require significantly more computation time. For example, the training time required for the Shapley value explanations is in the range of several minutes for the smaller CIFAR-10 dataset [16]. This demonstrates that DCMIX outperforms not only interpretable competitors but also explainable post-hoc approaches by a large margin in terms of speed.
#### 5.0.4 DCMIX is applicable in real-world settings beyond biomedical imaging
Figure 2: Visualization of Spearman’s rank correlation coefficient of the channel importance estimates for all different methods from Table 3. A value of -1 indicates maximal ranking difference between the channel importance estimates, 1 indicates no difference. The matrix has been sorted using average-linkage hierarchical clustering with Euclidean distance.

From an application standpoint, we see an advantage of DCMIX over the other tested methods, as the high scalability of DCMIX allows a model training workflow where channel importance is applied by default, such that a scientist gets immediate feedback about where the classification-relevant information is coming from, and whether it correlates with the known understanding of the underlying biology.
DCMIX scales very well with the number of channels since it adds only one additional parameter per additional channel. This is particularly interesting for hyperspectral applications where hundreds of channels exist (e.g. in remote sensing) - a highly interesting application area for subsequent studies.
In addition, DCMIX allows for any arbitrary downstream network which can be fine-tuned / designed for other applications than fluorescence imaging.
DCMIX currently applies a simple additive channel-mixing strategy to estimate channel importance without losing any classification performance (see the model performance in Tables 2 and 4). In principle, several other channel blending methods exist, e.g. difference, multiplication or luminosity. Due to the flexibility of DCMIX, these other mixing strategies can be easily integrated. Several studies already show the applicability of complex multi-spectral channel blending for visualization and classification in remote sensing [1, 18].
## 6 Conclusion
In this work, we present a novel lightweight framework, DCMIX, which estimates channel importance of fluorescence images based on image blending. This empowers us to derive phenotype-focused interpretations in a simple yet effective manner. Our experimental results demonstrate that the channel importance scores uncovered by DCMIX are both biologically supported and in line with competitive state-of-the-art approaches on the MNIST and RXRX1 datasets. Concurrently, DCMIX is more efficient in terms of runtime and scalable to an arbitrary number of channels without sacrificing model performance.
**Limitations.** We discuss the limitations of our approach in the following two aspects. (1) The weights of DCMIX which determine the channel importance are solely a proxy and do not quantify the absolute importance differences between channels. (2) DCMIX is based on image blending and hence only supports image-based datasets. For future work, we plan to investigate how DCMIX can be extended to other data modalities.
|
2309.08046 | Studying the Equilibrium Points of the Modified Circular Restricted
Three-Body Problem: the Case of Sun-Haumea System | We intend to study a modified version of the planar Circular Restricted
Three-Body Problem (CRTBP) by incorporating several perturbing parameters. We
consider the bigger primary as an oblate spheroid and emitting radiation while
the small primary has an elongated body. We also consider the perturbation from
a disk-like structure encompassing this three-body system. First, we develop a
mathematical model of this modified CRTBP. We have found there exist five
equilibrium points in this modified CRTBP model, where three of them are
collinear and the other two are non-collinear. Second, we apply our modified
CRTBP model to the Sun-Haumea system by considering several values of each
perturbing parameter. Through our numerical investigation, we have discovered
that the incorporation of perturbing parameters has resulted in a shift in the
equilibrium point positions of the Sun-Haumea system compared to their
positions in the classical CRTBP. The stability of equilibrium points is
investigated. We have shown that the collinear equilibrium points are unstable
and the stability of non-collinear equilibrium points depends on the mass
parameter $\mu$ of the system. Unlike the classical case, non-collinear
equilibrium points have both a maximum and minimum limit of $\mu$ for achieving
stability. We remark that the stability range of $\mu$ in non-collinear
equilibrium points depends on the perturbing parameters. In context of the
Sun-Haumea system, we have found that the non-collinear equilibrium points are
stable. | Ibnu Nurul Huda, Budi Dermawan, Muhammad Bayu Saputra, Rifki Sadikin, Taufiq Hidayat | 2023-09-14T22:19:11Z | http://arxiv.org/abs/2309.08046v1 | Studying the Equilibrium Points of the Modified Circular Restricted Three-Body Problem: the Case of Sun-Haumea System
###### Abstract
We intend to study a modified version of the planar Circular Restricted Three-Body Problem (CRTBP) by incorporating several perturbing parameters. We consider the bigger primary as an oblate spheroid and emitting radiation while the small primary has an elongated body. We also consider the perturbation from a disk-like structure encompassing this three-body system. First, we develop a mathematical model of this modified CRTBP. We have found there exist five equilibrium points in this modified CRTBP model, where three of them are collinear and the other two are non-collinear. Second, we apply our modified CRTBP model to the Sun-Haumea system by considering several values of each perturbing parameter. Through our numerical investigation, we have discovered that the incorporation of perturbing parameters has resulted in a shift in the equilibrium point positions of the Sun-Haumea system compared to their positions in the classical CRTBP. The stability of equilibrium points is investigated. We have shown that the collinear equilibrium points are unstable and the stability of non-collinear equilibrium points depends on the mass parameter \(\mu\) of the system. Unlike the classical case, non-collinear equilibrium points have both a maximum and minimum limit of \(\mu\) for achieving stability. We remark that the stability range of \(\mu\) in non-collinear equilibrium points depends on the perturbing parameters. In context of the Sun-Haumea system, we have found that the non-collinear equilibrium points are stable.
celestial mechanics, Kuiper belt: general, planets and satellites: dynamical evolution and stability
## 1 Introduction
Celestial mechanics plays an important role in understanding the dynamics of Solar System bodies (see, e.g., Murray & Dermott, 1999; Souchay & Dvorak, 2010; Lei, 2021; Pan & Hou, 2022). One of the problems in celestial mechanics is the Circular Restricted Three-Body Problem (CRTBP). The study of the CRTBP aims to investigate the movement of an infinitesimal object under the gravitational influence of two primaries that have a circular orbit around their center of mass. The CRTBP has several applications, such as deep space exploration and satellite navigation. The classical version of the CRTBP assumes the primaries to be point masses and only considers the gravitational interaction between them. There are five equilibrium points in the planar case. Three of them are collinear (\(L_{1}\), \(L_{2}\), and \(L_{3}\)) and the other two are non-collinear (\(L_{4}\) and \(L_{5}\)) (Murray & Dermott, 1999). In order to make the CRTBP model more realistic, the classical version has been modified by considering several additional parameters.
Stellar objects, including the Sun, emit radiation. This radiation exerts pressure on objects in its path. There have been numerous studies that have considered the radiation pressure force as an additional force in the restricted three-body problem (see, e.g., Haque & Ishwar, 1995; Ishwar & Elipe, 2001; Kushvah et al., 2007; Kushvah, 2008; Das et al., 2009; Yousuf & Kishor, 2019; Patel et al., 2023). For instance, the first study on this topic was done by Radzievskii (1950). Chernikov (1970) extended the study by considering the relativistic Poynting-Robertson effect. Simmons et al. (1985) studied the effect of the radiation pressure force over its whole range of values. More recently, Idrisi (2017) and Idrisi & Ullah (2018) considered the effect of planetary albedo on the CRTBP as a consequence of the solar radiation pressure force.
Since the stars and planets are not perfectly spherical, another aspect that has been considered in the CRTBP is the oblateness of the primaries. Early studies about the impact of an oblate primary on the dynamics of restricted three-body problem have been given by Danby (1965), Sharma & Subba Rao (1978), Sharma & Subba Rao (1986). More recently, the effect of oblateness on the dynamics of CRTBP has been studied in detail by several authors (see, e.g., Markellos et al., 1996; Douskos & Markellos, 2006; Safiya Beevi & Sharma, 2012; Abouelmagd et al., 2013; Zotos, 2015; Yousuf et al., 2022). Moreover, some authors have considered the effect of both oblateness and radiation force in their calculation. For instance, Singh & Ishwar (1999) studied the linear stability of triangular equilibrium points when both primaries are oblate and emitting radiation. This study has been extended by Singh (2009) for the non-linear stability of \(L_{4}\). AbdulRaheem & Singh (2006) investigated the dynamics of CRTBP when both of primaries are oblate and emit radiation, together with the perturbation in the Coriolis and centrifugal force. Other authors such as Nurul Huda et al. (2015), Dermawan et al. (2015) and Mia et al. (2023), have considered the effect of oblateness and radiation force in the Elliptic Restricted Three-Body Problem.
Our solar system contains several types of celestial bodies. Among them are elongated objects like a few asteroids, comets, and dwarf planets. These celestial bodies can be approximately described as finite straight segments. Previous studies of CRTBP have been enriched by assuming one or both primaries have an elongated body. At first, Riaguas et al. (1999) and Riaguas et al. (2001) analyzed the dynamics of a two-body problem by considering one of the primaries as a finite straight segment. These works are extended by, e.g., Jain & Sinha (2014), Kaur et al. (2020), and Kumar et al. (2019), into the restricted three body-problem assuming both or one of the primaries have elongated shapes. In more recent studies, Verma et al. (2023a) examined the perturbed restricted three-body problem, where the smaller primary has an elongated shape and the larger primary is oblate and emits radiation. Verma et al. (2023b) considered the effect of finite straight segment and oblateness to study the dynamics of the restricted 2+2 body problem.
Meanwhile, the effect of a disk-like structure as a perturbing force near a three-body system has been well studied by several authors (see, e.g., Chernmykh, 1987; Jiang & Yeh, 2004; Kushvah, 2008b; Kushvah et al., 2012; Kishor & Kushvah, 2013; Mahato et al., 2022a). Jiang & Yeh (2004) studied CRTBP by analyzing the influence of a disk-like structure near the three-body system. Yousuf & Kishor (2019) analyzed the effect of disk-like structure, oblateness, and albedo on the CRTBP. Mahato et al. (2022a) extended the study of classical CRTBP by considering a disk-like structure and an elongated body. Mahato et al. (2022b) investigated the stability of equilibrium points within a framework of the perturbed restricted 2 + 2 bodies problem, taking into account the influence of a disk-like structure.
This study aims to obtain the collinear and non-collinear equilibrium points and investigate their stability within the framework of a modified CRTBP incorporating the effects of radiation pressure, oblateness, a finite straight segment, and a disk-like structure. We intend to extend the work of Yousuf & Kishor (2019) by assuming the small primary to be a finite straight segment rather than oblate. It is also an extension of Mahato et al. (2022a) since we consider the effects of oblateness and radiation from the bigger primary.
Here we apply our modified CRTBP model to the Sun-Haumea system by assuming the Sun to be the bigger primary, which is oblate and emits radiation, and Haumea to be the smaller primary, which has an elongated body. We also consider the Kuiper belt as a disk-like structure surrounding the Sun-Haumea system.
Haumea was chosen as our case study because of its unique characteristics, which have captured the attention of scientists since its discovery in 2003. The surface of Haumea is dominantly covered by water ice (Barkume et al. 2006; Pinilla-Alonso et al. 2009; Noviello et al. 2022). There is also evidence that organic material exists on Haumea's surface (Lacerda et al. 2008; Gourgeot et al. 2016). Recently, it has been discovered that Haumea has a ring and two satellites named Namaka and Hi'iaka (Ortiz et al. 2017). Moreover, previous studies have proposed Haumea as a destination for space missions in the coming decades (see, e.g., Grundy et al. 2009; Sanchez et al. 2014).
Besides the Sun-Haumea system, this modified CRTBP model can be applied to other cases. For instance, many planetary systems outside of our solar system have been discovered, and some systems have been found to have dust particle disks or asteroid belts, which are believed to be similar to the Kuiper belt or main belt in our solar system (see, e.g., Greaves et al. 1998; Matra et al. 2019). Meanwhile, previous studies have explained the presence of extrasolar asteroids or dwarf planets near the host star (see, e.g., Jura 2003; Dufour et al. 2010). Moreover, some space explorations have been devoted to exploring small solar system bodies near the main belt or Kuiper belt region. It is known that several solar system bodies have an irregular shape. Therefore, it is reasonable to study the combined effects of perturbations from a disk, an elongated body, and an oblate radiating body on the motion of an infinitesimal mass in the CRTBP.
The structure of this paper is as follows. In the next section, we present a mathematical formulation of the dynamical model. The position and the stability of equilibrium points are given in Section 3. Section 4 gives the implementation of the dynamical model in the Sun-Haumea system. Finally, the conclusion is given in Section 5. Here, MATLAB's Symbolic Toolbox is used to conduct certain algebraic calculations and find numerical solutions.
## 2 Mathematical formulation of the dynamical system
In this work, we consider a system where an infinitesimal mass moves under the influence of a bigger primary with mass \(m_{1}\) and a small primary with mass \(m_{2}\). The primaries of this system have a circular orbit around their center of mass. We treat the bigger primary as a source of radiation with an oblate spheroid shape, while the small primary has an elongated shape. The unit of time is normalized to make the Gaussian constant of gravitation equal to one. The mass parameter is represented by \(\mu=m_{2}/(m_{1}+m_{2})\) where \(m_{1}=1-\mu\) and \(m_{2}=\mu\). In the case of a restricted three-body problem, it is more convenient to describe the system in the rotating coordinate frame \(Oxy\). The primaries are located on the \(x\)-axis with the distance between the primaries chosen as the unit of length. The coordinates of the bigger primary, the small primary, and the third body are \((\mu,0)\), \((\mu-1,0)\), and (\(x,y\)), respectively. The oblateness factor of the bigger primary can be represented by \(A=(AE^{2}-AP^{2})/5R^{2}\) where \(A\ll 1\), \(AE\) and \(AP\) represent the equatorial and polar radii, respectively, and \(R\) is the effective radius when assuming the primary to be a spherical object. Meanwhile, the radiation force \(F_{p}\) acts opposite to the gravitational force and diminishes with distance. The total force exerted by the bigger primary can be written as \(F_{g}-F_{p}=qF_{g}\), hence \(q=1-(F_{p}/F_{g})\). Here \(q\) is called the mass reduction factor, where \(0<1-q\ll 1\). The small primary is assumed to be a finite straight segment with a length \(2l\). The effect of a disk-like structure surrounding the system is also considered in this study. Following Miyamoto & Nagai (1975), the planar version of the dimensionless potential of the disk-like structure is given by \(V(x,y)=M_{b}/\sqrt{r^{2}+T^{2}}\), where \(M_{b}\) is the total mass of the disk-like structure, \(r^{2}=x^{2}+y^{2}\) is the radial distance of the infinitesimal mass, and \(T=a+b\) is the sum of the flatness and core parameters. Let the distances of the primaries to the center of mass be \(s_{1}\) and \(s_{2}\). Considering previous works such as Kushvah (2008b), Yousuf & Kishor (2019), and Mahato et al. (2022a), the motion of the primaries is given by
\[\begin{split} m_{1}s_{1}n^{2}&=\frac{Gm_{1}m_{2}}{R^{ 2}-l^{2}}\left(1+\frac{3A}{2R^{2}}\right)+\frac{GM_{b}m_{1}r_{c}}{(r_{c}^{2}+T ^{2})^{3/2}},\\ m_{2}s_{2}n^{2}&=\frac{Gm_{1}m_{2}}{R^{2}-l^{2}} \left(1+\frac{3A}{2R^{2}}\right)+\frac{GM_{b}m_{2}r_{c}}{(r_{c}^{2}+T^{2})^{3/2 }},\end{split} \tag{1}\]
where \(R=s_{1}+s_{2}\) is the distance between the primaries and \(r_{c}^{2}=1-\mu+\mu^{2}\) is the dimensionless reference radius of the disk-like structure (Singh & Taura, 2014). Assuming \(R=1\), \(G=1\), and \(m_{1}+m_{2}=1\), the mean motion \(n\) of the system can be calculated by adding both equations in Eq. 1, approximating the expression \(1/(1-l^{2})\) in series as \(1+l^{2}\), and neglecting the \(Al^{2}\) term. Hence we have:
\[n^{2}=1+l^{2}+\frac{3}{2}A+\frac{2M_{b}r_{c}}{(r_{c}^{2}+T^{2})^{3/2}} \tag{2}\]
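As a quick numerical check of Eq. 2, the following snippet evaluates \(n^{2}\) for the Sun-Haumea parameter values adopted later in Section 4; the function name is ours.

```python
import numpy as np

def mean_motion_sq(mu, l, A, M_b, T):
    """Perturbed mean motion n^2 from Eq. (2)."""
    r_c = np.sqrt(1 - mu + mu**2)        # dimensionless reference radius of the disk
    return 1 + l**2 + 1.5 * A + 2 * M_b * r_c / (r_c**2 + T**2) ** 1.5

n_sq = mean_motion_sq(mu=2e-9, l=3.5e-7, A=2.6e-11, M_b=3e-7, T=0.11)
print(n_sq)   # ~1.0000006, a tiny departure from the unperturbed value n^2 = 1
```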
Equations of motion of the third object in CRTBP are given as follows
\[\begin{split}\ddot{x}-2n\dot{y}&=\frac{\partial \Omega}{\partial x},\\ \ddot{y}+2n\dot{x}&=\frac{\partial\Omega}{\partial y },\end{split} \tag{3}\]
where \(\Omega\) is the pseudo-potential function:
\[\begin{split}\Omega&=\frac{n^{2}}{2}\left(x^{2}+y^{2}\right)+\frac{q(1-\mu)}{r_{1}}+\frac{(1-\mu)Aq}{2r_{1}^{3}}\\ &\quad+\frac{\mu}{2l}\log\left(\frac{r_{21}+r_{22}+2l}{r_{21}+r_{22}-2l}\right)+\frac{M_{b}}{\sqrt{r^{2}+T^{2}}}.\end{split} \tag{4}\]
Here \(r_{21}^{2}=(x-\mu+1-l)^{2}+y^{2}\) and \(r_{22}^{2}=(x-\mu+1+l)^{2}+y^{2}\) define the distances from the third body to the two ends of the small primary's segment, and \(r_{1}^{2}=(x-\mu)^{2}+y^{2}\) defines the distance between the third body and the bigger primary. It should be noted that the equation of motion differs from that in Yousuf & Kishor (2019) since, in our case, we model the small primary as a finite straight segment.
## 3 Equilibrium points
### Position of equilibrium points
The conditions of equilibrium points are \(\dot{x}=\dot{y}=\ddot{x}=\ddot{y}=0\). Hence we can deduce that \(\Omega_{x}=\Omega_{y}=0\), i.e.,
\[\begin{split} n^{2}x&-\frac{q(1-\mu)(x-\mu)}{r_{1}^ {3}}-\frac{3(1-\mu)(x-\mu)Aq}{2r_{1}^{5}}\\ &\quad-\frac{2\mu}{(r_{21}+r_{22})^{2}-4l^{2}}\left(\frac{x-\mu+1 -l}{r_{21}}+\frac{x-\mu+1+l}{r_{22}}\right)\\ &\quad-\frac{M_{b}x}{(r^{2}+T^{2})^{3/2}}=0,\end{split} \tag{5}\]
\[\begin{split} n^{2}y&-\frac{q(1-\mu)y}{r_{1}^{3}}- \frac{3(1-\mu)yAq}{2r_{1}^{5}}\\ &\quad-\frac{2\mu}{(r_{21}+r_{22})^{2}-4l^{2}}\left(\frac{y}{r_{2 1}}+\frac{y}{r_{22}}\right)-\frac{M_{b}y}{(r^{2}+T^{2})^{3/2}}=0.\end{split} \tag{6}\]
In the following, we solve Eq. 5 and Eq. 6 to find the position of equilibrium points.
The collinear points are located in a line with the primaries, thus we have \(y=0\). Eq. 5 becomes
\[\begin{split}\Omega_{x}(x,0)&=n^{2}x-\frac{q(1-\mu)(x- \mu)}{|x-\mu|^{3}}-\frac{3(1-\mu)(x-\mu)Aq}{2|x-\mu|^{5}}\\ &\quad-\frac{2\mu}{(|x-\mu+1+l|+|x-\mu+1-l|)^{2}-4l^{2}}\\ &\quad\times\left(\frac{x-\mu+1-l}{|x-\mu+1-l|}+\frac{x-\mu+1+l }{|x-\mu+1+l|}\right)\\ &\quad-\frac{M_{b}}{x^{2}}\left(1-\frac{3T^{2}}{2x^{2}}\right)=0.\end{split} \tag{7}\]
In order to find the solutions, we divide the region into three parts, i.e. (\(-\infty\), \(\mu-1-l\)), (\(\mu-1-l\), \(\mu\)), and (\(\mu\), \(\infty\)). Here \(L_{1}\), \(L_{2}\), and \(L_{3}\) are the solutions located in (\(-\infty\), \(\mu-1-l\)), (\(\mu-1-l\), \(\mu\)), and (\(\mu\), \(\infty\)), respectively. Hence we have
\[\Omega_{x}(x,0)=\begin{cases}\left(n^{2}x-\frac{M_{b}}{x^{2}}\left(1-\frac{3T^ {2}}{2x^{2}}\right)\right)(x-\mu)^{4}((2x-2\mu+2)^{2}-4l^{2})\\ \qquad+q(1-\mu)(x-\mu)^{2}((2x-2\mu+2)^{2}-4l^{2})\\ \qquad+\frac{3}{2}qA(1-\mu)((2x-2\mu+2)^{2}-4l^{2})+4\mu(x-\mu)^{4},\\ \qquad\mbox{if}\,\,-\infty<x<\mu-1-l\\ \\ \left(n^{2}x-\frac{M_{b}}{x^{2}}\left(1-\frac{3T^{2}}{2x^{2}}\right)\right)(x -\mu)^{4}((2x-2\mu+2)^{2}-4l^{2})\\ \qquad+q(1-\mu)(x-\mu)^{2}((2x-2\mu+2)^{2}-4l^{2})\\ \qquad+\frac{3}{2}qA(1-\mu)((2x-2\mu+2)^{2}-4l^{2})-4\mu(x-\mu)^{4},\\ \qquad\mbox{if}\,\,\mu+x<\infty\end{cases} \tag{8}\]
These three equations have been solved numerically to find each collinear equilibrium point. Only the real solutions are considered as positions of the equilibrium points.
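As an illustration of this numerical procedure, the sketch below brackets the root of \(\Omega_{x}(x,0)=0\) (Eq. 5 with \(y=0\)) in each of the three intervals and refines it with Brent's method, using the Sun-Haumea parameter values adopted in Section 4; variable names are ours.

```python
import numpy as np
from scipy.optimize import brentq

mu, l, A, q, M_b, T = 2e-9, 3.5e-7, 2.6e-11, 1 - 1.6e-6, 3e-7, 0.11
r_c = np.sqrt(1 - mu + mu**2)
n2 = 1 + l**2 + 1.5 * A + 2 * M_b * r_c / (r_c**2 + T**2) ** 1.5        # Eq. (2)

def omega_x(x):
    """Omega_x of Eq. (5) evaluated on the x-axis (y = 0)."""
    r1 = abs(x - mu)
    r21, r22 = abs(x - mu + 1 - l), abs(x - mu + 1 + l)
    seg = 2 * mu / ((r21 + r22) ** 2 - 4 * l**2) * ((x - mu + 1 - l) / r21
                                                    + (x - mu + 1 + l) / r22)
    return (n2 * x - q * (1 - mu) * (x - mu) / r1**3
            - 1.5 * (1 - mu) * (x - mu) * A * q / r1**5
            - seg - M_b * x / (x**2 + T**2) ** 1.5)

# bracket each collinear point inside its interval and refine with Brent's method
L1 = brentq(omega_x, -1.5, mu - 1 - l - 1e-6)
L2 = brentq(omega_x, mu - 1 + l + 1e-6, -0.5)
L3 = brentq(omega_x, 0.5, 1.5)
print(L1, L2, L3)   # ~ -1.000874, -0.999126, 0.999999 (compare with Table 1)
```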
Meanwhile, there are two non-collinear equilibrium points, i.e., \(L_{4}\) and \(L_{5}\). The additional condition of these equilibrium points is \(y\neq 0\). Eq. 5 and 6 can be rewritten in the form
\[\begin{split} x\left(n^{2}-\frac{q(1-\mu)}{r_{1}^{3}}-\frac{3(1- \mu)Aq}{2r_{1}^{5}}-\frac{2\mu}{(r_{21}+r_{22})^{2}-4l^{2}}\left(\frac{1}{r_{2 1}}+\frac{1}{r_{22}}\right)\right.\\ \qquad\qquad\qquad\qquad\qquad\qquad\left.-\frac{M_{b}x}{(r^{2}+T^{2})^{3/2 }}\right)\frac{q\mu(1-\mu)}{r_{1}^{3}}+\frac{3\mu(1-\mu)Aq}{2r_{1}^{5}}\\ \qquad\qquad\qquad\qquad\left.-\frac{2\mu}{(r_{21}+r_{22})^{2}-4l^{2}} \left(\frac{-\mu+1-l}{r_{21}}+\frac{-\mu+1+l}{r_{22}}\right)\,=0.\end{split} \tag{9}\]
\[\begin{split} y\left(n^{2}-\frac{q(1-\mu)}{r_{1}^{3}}-\frac{3(1- \mu)Aq}{2r_{1}^{5}}-\frac{2\mu}{(r_{21}+r_{22})^{2}-4l^{2}}\left(\frac{1}{r_{2 1}}+\frac{1}{r_{22}}\right)\right.\\ \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\qquad\qquad\qquad\left.-\frac{M_{b}y}{(r^{2}+T^{2})^{3/2}}\right)= 0.\end{split} \tag{10}\]
Hence from Eq. 10 we have
\[n^{2}-\frac{q(1-\mu)}{r_{1}^{3}}-\frac{3(1-\mu)Aq}{2r_{1}^{5}}-\frac{2\mu}{(r _{21}+r_{22})^{2}-4l^{2}}\left(\frac{1}{r_{21}}+\frac{1}{r_{22}}\right)-\frac{ M_{b}y}{(r^{2}+T^{2})^{3/2}}=0. \tag{11}\]
Substituting Eq. 11 into Eq. 9 gives
\[\frac{q(1-\mu)}{r_{1}^{3}}+\frac{3(1-\mu)Aq}{2r_{1}^{5}}-\frac{2}{(r_{21}+r_{22}) ^{2}-4l^{2}}\left(\frac{-\mu+1-l}{r_{21}}+\frac{-\mu+1+l}{r_{22}}\right)=0. \tag{12}\]
In the classical case, these equilibrium points are located at \(r_{1}=1\) and \(r_{2}=1\). Since some perturbations exist, we assume that \(r_{1}\) and \(r_{2}\) are perturbed by \(\epsilon_{1}\) and \(\epsilon_{2}\). Hence, in our case, we have (Mahato et al. 2022a)
\[r_{1}=1+\epsilon_{1};\qquad r_{21}=1+\epsilon_{2}-l/2;\qquad r_{22}=1+\epsilon_ {2}+l/2. \tag{13}\]
The calculation of \(\epsilon_{1}\) and \(\epsilon_{2}\) is done by substituting Eq. 13 into Eq. 11 and Eq. 12 and solving these equations. By expanding in series and neglecting higher-order terms of \(\epsilon_{1}\), \(\epsilon_{2}\), \(l^{2}\), and \(A\), we have:
\[\epsilon_{1} =\frac{\frac{4\,\gamma}{3}-\frac{4\,q}{3}-2\,A\,q+\frac{\gamma\, \mu}{3}+\frac{4\,\mu\,q}{3}+2\,A\,\mu\,q}{4\,q\,(\mu-1)} \tag{14}\] \[+\frac{5\,\gamma\,\mu-l^{2}\left(\frac{2\,\mu}{3}+\frac{11\, \gamma\,\mu}{4}\right)}{q\,(13\,l^{2}-12)\,(\mu-1)},\] \[\epsilon_{2} =\frac{16\,\gamma-40\,l^{2}\,\gamma+28\,l^{2}-16}{52\,l^{2}-48}\]
where \(\gamma=1+l^{2}+3A/2+M_{b}(2r_{c}-1)/(r_{c}^{2}+T^{2})^{3/2}\). The position of non-collinear equilibrium points (\(x_{o},y_{o}\)) is given by
\[x_{o} =\mu-\frac{1}{2}+(\epsilon_{2}-\epsilon_{1})\,, \tag{15}\] \[y_{o} =\pm\sqrt{\frac{3}{4}+\epsilon_{1}+\epsilon_{2}}\]
Putting the values of \(\epsilon_{1,2}\) into Eq. 15, we get:
\[x_{o} =\mu-\frac{1}{2}+\frac{16\,\gamma-40\,L^{2}\,\gamma+28\,L^{2}-16}{ 52\,L^{2}-48} \tag{16}\] \[-\frac{\frac{4\,\gamma}{3}-\frac{4\,q_{1}}{3}-2\,A_{1}\,q_{1}+ \frac{\gamma\,\mu}{3}+\frac{4\,\mu\,q_{1}}{3}+2\,A_{1}\,\mu\,q_{1}}{4\,q_{1} \,(\mu-1)}-\frac{5\,\gamma\,\mu-L^{2}\left(\frac{2\,\mu}{3}+\frac{11\,\gamma \,\mu}{4}\right)}{q_{1}\,(13\,L^{2}-12)\,(\mu-1)},\] \[y_{o} =\pm\left(\frac{3}{4}+\frac{16\,\gamma-40\,L^{2}\,\gamma+28\,L^{ 2}-16}{52\,L^{2}-48}\right.\] \[\left.+\frac{\frac{4\,\gamma}{3}-\frac{4\,q_{1}}{3}-2\,A_{1}\,q_{1 }+\frac{\gamma\,\mu}{3}+\frac{4\,\mu\,q_{1}}{3}+2\,A_{1}\,\mu\,q_{1}}{4\,q_{1 }\,(\mu-1)}+\frac{5\,\gamma\,\mu-L^{2}\left(\frac{2\,\mu}{3}+\frac{11\,\gamma \,\mu}{4}\right)}{q_{1}\,(13\,L^{2}-12)\,(\mu-1)}\right)^{1/2}\]
If the perturbation parameters are not considered, Eq. 16 reduces to the classical version, where \(x_{o}=\mu-\frac{1}{2}\) and \(y_{o}=\pm\sqrt{\frac{3}{4}}\).
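The closed-form corrections of Eqs. 14-16 can be evaluated directly; the short script below does so for the Sun-Haumea parameters of Section 4 and reproduces the perturbed triangular point listed in Table 2.

```python
import numpy as np

mu, l, A, q, M_b, T = 2e-9, 3.5e-7, 2.6e-11, 1 - 1.6e-6, 3e-7, 0.11
r_c = np.sqrt(1 - mu + mu**2)
gamma = 1 + l**2 + 1.5 * A + M_b * (2 * r_c - 1) / (r_c**2 + T**2) ** 1.5

# Eq. (14): series corrections to the classical distances r1 = r2 = 1
eps1 = ((4 * gamma / 3 - 4 * q / 3 - 2 * A * q + gamma * mu / 3
         + 4 * mu * q / 3 + 2 * A * mu * q) / (4 * q * (mu - 1))
        + (5 * gamma * mu - l**2 * (2 * mu / 3 + 11 * gamma * mu / 4))
        / (q * (13 * l**2 - 12) * (mu - 1)))
eps2 = (16 * gamma - 40 * l**2 * gamma + 28 * l**2 - 16) / (52 * l**2 - 48)

# Eq. (15): perturbed triangular points
x_o = mu - 0.5 + (eps2 - eps1)
y_o = np.sqrt(0.75 + eps1 + eps2)
print(x_o, y_o)   # ~ -0.4999994647, 0.8660249825, matching Table 2
```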
### Linear Stability
Let us assume a small displacement in an equilibrium point by defining
\[u=x-x_{o};\qquad v=y-y_{o}, \tag{17}\]
where "\(o\)" corresponds to the equilibrium points. The equation of motion from this small displacement is given as follows:
\[\ddot{u}-2n\dot{v} =u\Omega_{xx}^{o}+v\Omega_{xy}^{o}, \tag{18}\] \[\ddot{v}+2n\dot{u} =u\Omega_{xy}^{o}+v\Omega_{yy}^{o},\]
where
\[\begin{split}\Omega_{xx}^{o}&=n^{2}+\frac{3q(1-\mu)(x-\mu)^{2}}{r_{1}^{5}}-\frac{q(1-\mu)}{r_{1}^{3}}+\frac{15Aq(1-\mu)(x-\mu)^{2}}{2r_{1}^{7}}-\frac{3(1-\mu)Aq}{2r_{1}^{5}}+\frac{3M_{b}x^{2}}{(r^{2}+T^{2})^{5/2}}\\ &\quad-\frac{M_{b}}{(r^{2}+T^{2})^{3/2}}+\frac{2\mu}{(r_{21}+r_{22})^{2}-4l^{2}}\Big{(}\frac{1}{r_{21}+r_{22}-2l}+\frac{1}{r_{21}+r_{22}+2l}\Big{)}\Big{(}\frac{x-\mu+1-l}{r_{21}}+\frac{x-\mu+1+l}{r_{22}}\Big{)}^{2}\\ &\quad-\frac{2\mu}{(r_{21}+r_{22})^{2}-4l^{2}}\Big{[}\frac{1}{r_{21}}+\frac{1}{r_{22}}-\Big{(}\frac{(x-\mu+1-l)^{2}}{r_{21}^{3}}+\frac{(x-\mu+1+l)^{2}}{r_{22}^{3}}\Big{)}\Big{]},\end{split} \tag{19}\]
\[\begin{split}\Omega_{yy}^{o}&=n^{2}+\frac{3q(1-\mu)y^{2}}{r_{1}^{5}}-\frac{q(1-\mu)}{r_{1}^{3}}+\frac{15Aq(1-\mu)y^{2}}{2r_{1}^{7}}-\frac{3(1-\mu)Aq}{2r_{1}^{5}}+\frac{3M_{b}y^{2}}{(r^{2}+T^{2})^{5/2}}\\ &\quad-\frac{M_{b}}{(r^{2}+T^{2})^{3/2}}+\frac{2\mu}{(r_{21}+r_{22})^{2}-4l^{2}}\Big{(}\frac{1}{r_{21}+r_{22}-2l}+\frac{1}{r_{21}+r_{22}+2l}\Big{)}\Big{(}\frac{y}{r_{21}}+\frac{y}{r_{22}}\Big{)}^{2}\\ &\quad-\frac{2\mu}{(r_{21}+r_{22})^{2}-4l^{2}}\Big{[}\frac{1}{r_{21}}+\frac{1}{r_{22}}-\Big{(}\frac{y^{2}}{r_{21}^{3}}+\frac{y^{2}}{r_{22}^{3}}\Big{)}\Big{]}.\end{split} \tag{20}\]
## 4 Implementation in the Sun-Haumea system

For the Sun-Haumea system, we adopt \(\mu=2\times 10^{-9}\) and \(l=3.5\times 10^{-7}\). Following Yousuf & Kishor (2019), here we assume that the Sun has \(A=2.6\times 10^{-11}\) while the Kuiper belt has \(T=0.11\) and \(M_{b}=3\times 10^{-7}\). According to Sharma (1987), the photogravitational parameter \(q\) can be expressed in the CGS unit system as \(q=1-(5.6\times 10^{-5}/a\rho)\) where \(a\) and \(\rho\) are the radius and density of a moving body, respectively. Assuming a spacecraft has \(a=700\) cm and \(\rho=0.05\) gr/cm\({}^{3}\), we obtain \(1-q=1.6\times 10^{-6}\).
We calculated the positions of the collinear equilibrium points of the Sun-Haumea system. By substituting the properties of the system into Eq. 8 and solving it numerically, we found \(L_{1}\), \(L_{2}\), and \(L_{3}\). Table 1 shows the positions of the collinear equilibrium points. Here we vary the value of each perturbation parameter to examine its impact on the equilibrium point positions. In the case of \(L_{1}\), the position gets closer to the primaries if \(A\) and \(1-q\) increase. Decreasing \(A\) and increasing \(1-q\) makes \(L_{2}\) closer to the bigger primary. The position of \(L_{3}\) is nearer to the primaries if the bigger primary emits stronger radiation pressure. According to Table 1, the positions of the collinear equilibrium points also depend on the values of \(M_{b}\) and \(l\). Increasing \(M_{b}\) and decreasing \(l\) makes the location of \(L_{1}\) nearer to the smaller primary. The increment of \(M_{b}\) and \(l\) shifts the position of \(L_{2}\) closer to the bigger primary. \(L_{3}\) gets closer to the primaries if we increase the value of \(M_{b}\).
The positions of the non-collinear equilibrium points are calculated from Eq. 16. Table 2 shows the positions of the non-collinear equilibrium points with respect to the chosen values of several parameters. When there are no perturbing factors, the triangular points have the same coordinates as in the classical case. The inclusion of perturbation parameters results in a shift in the location of the non-collinear equilibrium points. The increment of \(A\) moves the position of these equilibrium points closer to the small primary. In contrast, if we reduce \(q\) or increase \(M_{b}\), the position of the equilibrium points is shifted toward the bigger primary. The position also moves closer to the bigger primary as \(l\) increases.
We now analyze the linear stability of each equilibrium point in the Sun-Haumea system. The collinear equilibrium points lie on the abscissa; hence we have \(\Omega_{xy}^{o}=0\). In order to study the stability, we divide the abscissa into three regions, i.e., \(L_{1}\) (\(-\infty\), \(\mu-1-l\)), \(L_{2}\) (\(\mu-1-l\), \(\mu\)), and \(L_{3}\) (\(\mu\), \(\infty\)), and calculate the sign of \(b\) and \(b^{2}-4c\) numerically for each region. First, we estimate the stability by considering the perturbation parameters of the Sun-Haumea system. As shown in Figure 1, there exist purely real and purely imaginary characteristic roots for \(\mu\) between 0 and 0.5. Hence, all collinear equilibrium points of the Sun-Haumea system are unstable. Furthermore, we conducted the calculation by varying the values of the perturbation parameters. Table 3 shows the result of this calculation. All regions have \(b<0\) and \(b^{2}-4c>0\), which yields two real roots and two purely imaginary roots. This shows that even if we change the values of the perturbation parameters, the collinear equilibrium points remain unstable.
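To make the stability test concrete, the sketch below evaluates the characteristic roots of the linearized system (Eq. 18). Since the characteristic polynomial (Eq. 24) is not reproduced in this excerpt, we assume the standard planar form \(\lambda^{4}+b\lambda^{2}+c=0\) with \(b=4n^{2}-\Omega_{xx}^{o}-\Omega_{yy}^{o}\) and \(c=\Omega_{xx}^{o}\Omega_{yy}^{o}-(\Omega_{xy}^{o})^{2}\); the numerical values in the example are purely illustrative, not those of the Sun-Haumea system.

```python
import numpy as np

def characteristic_roots(o_xx, o_yy, o_xy, n2):
    """Roots of lambda^4 + b*lambda^2 + c = 0 (assumed standard planar CRTBP form)."""
    b = 4 * n2 - o_xx - o_yy
    c = o_xx * o_yy - o_xy**2
    lam_sq = np.roots([1, b, c]).astype(complex)        # quadratic in Lambda = lambda^2
    roots = np.concatenate([np.sqrt(lam_sq), -np.sqrt(lam_sq)])
    return roots, b, b**2 - 4 * c

# illustrative values with Omega_xx > 0 and Omega_yy < 0, as typical for a collinear point
roots, b, disc = characteristic_roots(o_xx=9.0, o_yy=-3.0, o_xy=0.0, n2=1.0)
print(np.round(roots, 3))   # two real and two purely imaginary roots -> unstable
```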
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|c|} \hline \(1-q\) & \(A\) & \(l\) & \(M_{b}\) & \(L_{1}\) & \(L_{2}\) & \(L_{3}\) \\ \hline
1 & 0 & 0 & 0 & \(-1.000873832771965\) & \(-0.999126671989864\) & \(1.00000000833333\) \\ \hline \(1.6\times 10^{-6}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-7}\) & \(-1.00087355691303\) & \(-0.999126395886954\) & \(0.999999369260791\) \\ \(1.6\times 10^{-4}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-7}\) & \(-1.00085632697634\) & \(-0.999108413962744\) & \(0.999946566436973\) \\ \(1.6\times 10^{-9}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-7}\) & \(-1.00087373433354\) & \(-0.999126573666542\) & \(0.9999999902060868\) \\ \hline \(1.6\times 10^{-6}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-7}\) & \(-1.00087355691303\) & \(-0.999126395886954\) & \(0.999999369260791\) \\ \(1.6\times 10^{-6}\) & \(2.6\times 10^{-9}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-7}\) & \(-1.00087355691116\) & \(-0.99912639588830\) & \(0.999999369260793\) \\ \(1.6\times 10^{-6}\) & \(2.6\times 10^{-13}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-7}\) & \(-1.00087355691305\) & \(-0.999126395886936\) & \(0.999999369260791\) \\ \hline \(1.6\times 10^{-6}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-7}\) & \(-1.00087355691303\) & \(-0.999126395886954\) & \(0.999999369260791\) \\ \(1.6\times 10^{-6}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-5}\) & \(3\times 10^{-7}\) & \(-1.00087402445000\) & \(-0.999125928668234\) & \(0.99999936885249\) \\ \(1.6\times 10^{-6}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-9}\) & \(3\times 10^{-7}\) & \(-1.00087355686628\) & \(-0.999126395933676\) & \(0.999999369260832\) \\ \hline \(1.6\times 10^{-6}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-7}\) & \(-1.00087355691303\) & \(-0.999126395886954\) & \(0.99999369260791\) \\ \(1.6\times 10^{-6}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-5}\) & \(-1.00086393713086\) & \(-0.999116570293592\) & \(0.999989644085257\) \\ \(1.6\times 10^{-6}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-9}\) & \(-1.00087365419779\) & \(-0.999126493044322\) & \(0.999999466517286\) \\ \hline \end{tabular}
\end{table}
Table 1: The abscissa Position of collinear equilibrium points (\(L_{1}\), \(L_{2}\), and \(L_{3}\)) in Sun-Haumea system with \(\mu=2\times 10^{-9}\) and \(T=0.11\).
\begin{table}
\begin{tabular}{|c|c|c|c|c|c|} \hline \(1-q\) & \(A\) & \(l\) & \(M_{b}\) & \multicolumn{2}{c|}{\(L_{4,5}\)} \\ \hline \(1\) & \(0\) & \(0\) & \(0\) & \(-0.499999998000000\) & \(\pm 0.866025403784439\) \\ \hline \(1.6\times 10^{-6}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-7}\) & \(-0.499999464678626\) & \(\pm 0.866024982450545\) \\ \(1.6\times 10^{-4}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-7}\) & \(-0.499946656129219\) & \(\pm 0.865994492868782\) \\ \(1.6\times 10^{-9}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-7}\) & \(-0.499999997479636\) & \(\pm 0.866025290063446\) \\ \hline \(1.6\times 10^{-6}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-7}\) & \(-0.499999464678626\) & \(\pm 0.866024982450545\) \\ \(1.6\times 10^{-6}\) & \(2.6\times 10^{-9}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-7}\) & \(-0.499999465965624\) & \(\pm 0.866024981707493\) \\ \(1.6\times 10^{-6}\) & \(2.6\times 10^{-13}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-7}\) & \(-0.499999464665756\) & \(\pm 0.866024982457975\) \\ \hline \(1.6\times 10^{-6}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-7}\) & \(-0.499999464678626\) & \(\pm 0.866024982450545\) \\ \(1.6\times 10^{-6}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-5}\) & \(3\times 10^{-7}\) & \(-0.499999464678626\) & \(\pm 0.866024982155884\) \\ \(1.6\times 10^{-6}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-9}\) & \(3\times 10^{-7}\) & \(-0.499999464678656\) & \(\pm 0.866024982450574\) \\ \hline \(1.6\times 10^{-6}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-7}\) & \(-0.499999464678626\) & \(\pm 0.866024982450545\) \\ \(1.6\times 10^{-6}\) & \(2.6\times 10^{-11}\) & \(3.5\times 10^{-7}\) & \(3\times 10^{-9}\) & \(-0.499999464678781\) & \(\pm 0.866025094722156\) \\ \hline \end{tabular}
\end{table}
Table 2: Position of non-collinear equilibrium points (\(L_{4}\) and \(L_{5}\)) in Sun-Haumea system with \(\mu=2\times 10^{-9}\) and \(T=0.11\).
Figure 1: Plot of \(\mu\) versus characteristic roots (\(\lambda_{1,2,3,4}\)) for L1, L2, and L3, with \(l=3.5\times 10^{-7}\), \(M_{b}=3\times 10^{-7}\), \(A=2.6\times 10^{-11}\), and \(1-q=1.6\times 10^{-6}\). The real and imaginary parts of characteristic roots are marked by solid and dashed lines, respectively. Here we used \(T=0.11\).
Next, we investigate the stability of the non-collinear equilibrium points in the Sun-Haumea system. We discuss only \(L_{4}\) since the dynamics of \(L_{5}\) are nearly identical. In the classical case, the non-collinear equilibrium points are stable under the condition \(27\mu(1-\mu)<1\). Hence we can deduce \(\mu<\mu_{c}\), where the critical mass \(\mu_{c}=0.038520896504551\). This critical mass can be calculated by finding the solution of \(b^{2}-4c=0\). In this modified version of the CRTBP, we numerically calculate the roots by solving Eq. 24. When the perturbing parameters are considered, the stability of the non-collinear equilibrium points has both a maximum limit (\(\mu_{c}\)) and a minimum limit (\(\mu_{o}\)) of the mass parameter, which is different from the classical case. For the Sun-Haumea system, we found \(\mu_{c}=0.0385208896007\) and \(\mu_{o}=1.386\times 10^{-12}\). Since the Sun-Haumea system has \(\mu=2\times 10^{-9}\), we conclude that the Sun-Haumea system has stable non-collinear equilibrium points. Figure 2 shows the comparison of stability for several cases obtained by changing the perturbing parameters of the Sun-Haumea system. It shows that the range of stability depends on the parameters \(A\), \(q\), \(l\), and \(M_{b}\). The characteristic roots are purely imaginary if \(\mu_{o}<\mu<\mu_{c}\). The considered perturbation parameters alter the range of stability in \(\mu\). The increment of \(A\) or the reduction of \(q\) reduces the size of the stability area. The stability region is shifted toward larger \(\mu\) if \(M_{b}\) and \(l\) increase.
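The classical critical mass quoted above follows from solving \(27\mu(1-\mu)=1\) for the smaller root, which can be verified in a few lines:

```python
import numpy as np

mu_c = (1 - np.sqrt(1 - 4 / 27)) / 2      # smaller root of 27*mu*(1 - mu) = 1
print(mu_c)                               # 0.03852089650455137, as quoted in the text
print(2e-9 < mu_c)                        # True: the Sun-Haumea mass ratio lies far below this limit
```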
## 5 Conclusion
We have investigated the dynamics of an infinitesimal mass under the gravitational influence of two primaries. Our study assumes that the smaller primary is an elongated body, while the larger primary is oblate and also emits radiation. In addition, we have taken into account the presence of a disk that surrounds the three-body system. We have found that there are five equilibrium points in this modified CRTBP, where three of them are collinear and the other two are non-collinear. Our numerical exploration of the Sun-Haumea system has revealed that the inclusion of the perturbing parameters causes a displacement of the equilibrium points with respect to their positions in the classical CRTBP. We noticed that the magnitudes of the perturbing parameters (\(q\), \(A\), \(l\), and \(M_{b}\)) can affect the positions of all five equilibrium points. Our results show that the non-collinear equilibrium points of the Sun-Haumea system are stable, while all collinear equilibrium points are unstable. Moreover, we have shown that the collinear equilibrium points remain unstable for several possible ranges of the perturbing parameters. In contrast, the non-collinear equilibrium points are conditionally stable with respect to \(\mu\). When taking into account the perturbing parameters, we have found that there are upper and lower limits of \(\mu\) for achieving the stability of the non-collinear equilibrium points. The stability region in \(\mu\) depends on the perturbing parameters.
###### Acknowledgements.
This work is funded partially by BRIN's research grant Ruzah Program AIBDTK 2023. We thank the anonymous reviewer for the insightful comments and suggestions on the manuscript.
|
2309.17166 | Advances in Kidney Biopsy Lesion Assessment through Dense Instance
Segmentation | Renal biopsies are the gold standard for diagnosis of kidney diseases. Lesion
scores made by renal pathologists are semi-quantitative and exhibit high
inter-observer variability. Automating lesion classification within segmented
anatomical structures can provide decision support in quantification analysis
and reduce the inter-observer variability. Nevertheless, classifying lesions in
regions-of-interest (ROIs) is clinically challenging due to (a) a large amount
of densely packed anatomical objects (up to 1000), (b) class imbalance across
different compartments (at least 3), (c) significant variation in object scales
(i.e. sizes and shapes), and (d) the presence of multi-label lesions per
anatomical structure. Existing models lack the capacity to address these
complexities efficiently and generically. This paper presents \textbf{a
generalized technical solution} for large-scale, multi-source datasets with
diverse lesions. Our approach utilizes two sub-networks: dense instance
segmentation and lesion classification. We introduce \textbf{DiffRegFormer}, an
end-to-end dense instance segmentation model designed for multi-class,
multi-scale objects within ROIs. Combining diffusion models, transformers, and
RCNNs, DiffRegFormer efficiently recognizes over 500 objects across three
anatomical classes (glomeruli, tubuli, arteries) within ROIs on a single NVIDIA
GeForce RTX 3090 GPU. On a dataset of 303 ROIs (from 148 Jones' silver-stained
renal WSIs), it outperforms state of art models, achieving AP of 52.1\%
(detection) and 46.8\% (segmentation). Our lesion classification sub-network
achieves 89.2\% precision and 64.6\% recall on 21889 object patches (from the
303 ROIs). Importantly, the model demonstrates direct domain transfer to
PAS-stained WSIs without fine-tuning. | Zhan Xiong, Junling He, Pieter Valkema, Tri Q. Nguyen, Maarten Naesens, Jesper Kers, Fons J. Verbeek | 2023-09-29T11:59:57Z | http://arxiv.org/abs/2309.17166v2 | # Advances in Kidney Biopsy Structural Assessment through Dense Instance Segmentation
###### Abstract
The kidney biopsy is the gold standard for the diagnosis of kidney diseases. Lesion scores made by expert renal pathologists are semi-quantitative and suffer from high inter-observer variability. Automatically obtaining statistics per segmented anatomical object, therefore, can bring significant benefits in reducing labor and this inter-observer variability. Instance segmentation for a biopsy, however, has been a challenging problem due to (a) the on average large number (around 300 to 1000) of densely touching anatomical structures, (b) with multiple classes (at least 3) and (c) in different sizes and shapes. The currently used instance segmentation models cannot simultaneously deal with these challenges in an efficient yet generic manner. In this paper, we propose the first anchor-free instance segmentation model that combines diffusion models, transformer modules, and RCNNs (regional convolution neural networks). Our model is trained on just one NVIDIA GeForce RTX 3090 GPU, but can efficiently recognize more than 500 objects with 3 common anatomical object classes in renal biopsies, i.e., glomeruli, tubuli, and arteries. Our data set consisted of 303 patches extracted from 148 Jones' silver-stained renal whole slide images (WSIs), where 249 patches were used for training and 54 patches for evaluation. In addition, without adjustment or retraining, the model can directly transfer its domain to generate decent instance segmentation results from PAS-stained WSIs. Importantly, it outperforms other baseline models and reaches an AP 51.7% in detection as the new state-of-the-art.
anchor-free model, dense instance segmentation, diffusion model, kidney biopsy, transformers
## I Introduction
Kidney biopsy evaluation performed by expert pathologists remains the gold standard for the diagnosis and staging of severity of many nephrological diseases [2]. To facilitate viewing, biopsies are currently digitized by scanners and stored as Whole-Slide-Images (WSIs)1. Visual morphological assessment of different anatomical structures in WSIs provides diagnostic information to pathologists for disease categorization. High-quality diagnostic assessment hinges on the correct quantification of lesion scores over the different structures within a biopsy, which are manually annotated by a renal pathologist. Fig. 1 shows a manually labeled biopsy that contains hundreds of densely packed tissue objects. The annotation normally costs a skilled expert around 2-4 hours. This laborious task has naturally aroused interest in automating the annotation process, which can form the basis for further lesion classification and quantification tasks, offload annotation time, and reduce intra-/inter-observer variability.
Footnote 1: [https://www.mbftbioscience.com/whole-slide-imaging-analysis/](https://www.mbftbioscience.com/whole-slide-imaging-analysis/)
Deep learning-based instance segmentation algorithms can be promising tools for pathologists to obtain individual structures for later lesion analysis. Although a large number of models have been successfully trained on images with cells, yeasts, etc. [33, 12, 29], training an end-to-end instance segmentation model for renal biopsies that contain multiple classes remains a challenge due to 3 difficulties: 1) dense objects (hundreds of structures); 2) various sizes and shapes; 3) unbalanced classes, i.e. on average, the tubulointerstitial area occupies \(\geq 70\%\) of the renal parenchyma in health and disease [1]. Therefore, simultaneously processing dense objects of various sizes and shapes has imposed restrictions
on deep learning models concerning efficiency and versatility.
Previous research has aimed at addressing the aforementioned difficulties; however, only sub-optimal solutions have been achieved. For efficiency, some researchers adopt a simple two-step framework consisting of a semantic segmentation network and a non-trainable post-processing step [20]. The post-processing step aims to split closely packed objects into intact individual structures using mathematical morphological operations [16], which fails in some cases. In addition, some researchers use detection-based instance segmentation methods. The detection module in a regional convolution neural network (RCNN) depends on a set of pre-defined bounding box candidates called anchors, and cannot automatically be scaled to data with objects in a broader range of sizes and shapes. For versatility, researchers use transformers to automatically learn global queries for all objects across whole datasets, and directly predict one instance mask per structure. However, these are inefficient as feature representations over large-scale datasets. On the one hand, because the global queries must be updated over the whole dataset, the number of global queries increases with the range of possible sizes and shapes. On the other hand, the separate instance map per object has a low occupation rate (see Fig. 2(c)).
In this paper, we propose a novel detection-based instance
segmentation model with a good trade-off between efficiency and versatility. Our model adopts the diffusion method [21] to generate bounding boxes out of Gaussian noise and does not need anchors. Subsequently, we extract regions of interest (ROIs) from the boxes and then adaptively generate dynamic queries from those regions. Because there is no need to maintain those queries, our model is invariant to the scale of the dataset. Moreover, the crops from the predicted bounding boxes represent the instance segmentation; these crops have a very high overlap w.r.t. the bounding boxes (see Fig. 2(a)). To the best of our knowledge, our model is the first framework that combines diffusion methodology with an RCNN transformer to efficiently process renal biopsies containing dense objects from multiple anatomical classes. In Fig. 3, a flow chart of our model is depicted. The **bbox-decoder** is a diffusion model that iteratively predicts bounding boxes out of noise. Because the classes are heavily unbalanced, the **sampling** module discards negative candidates without potential objects while keeping class-wise balanced positive samples. The **mask-decoder** is an RCNN that uses attention mechanisms to predict ROI instance masks.
The main contributions of our work are the following.
We present the first end-to-end instance segmentation model to process nephrology biopsies containing dense objects with multiple classes. To facilitate the RCNN in predicting ROI instance masks, we disentangled the mask-decoder from the bbox-decoder, and designed a sampling method that keeps class-wise balanced positive samples. In addition, we integrated attention mechanisms into the RCNN, enabling it to learn long-range dependencies between ROI features and dynamic queries. We evaluated our model on Jones' silver-stained images (commonly used in nephrology biopsies) and found that our model significantly outperforms previously published results. Finally, we show that our model is capable of learning a robust classifier for PAS-stained images, which is an indication of domain-agnostic detection.
The remainder of the paper is organized as follows. In section II, relevant work is introduced and we indicate our improvements w.r.t. previous research. In section III, we describe each component of our model in detail. In section IV, we demonstrate the performance of the model, including ablation experiments. In section V, we discuss the advantages and limitations of our pipeline and outline possible follow-up research.
## II Related Work
**Diffusion Model**: diffusion models are a category of deep generative approaches [21, 27]. They aim to generate samples from a random noise distribution and learn to recover the original data by an iterative denoising process. Although
Fig. 1: An illustration of manually annotated kidney biopsy for **glomeruli**, **tubuli** and **arteries**. There are 3 difficulties in clinical biopsies: (1) a large number of objects closely touch each other; (2) instances have a large variety in sizes and shapes; (3) the distribution over class is heavily biased.
Fig. 2: ROI segments vs. instance segments. (a) cropped segmentation maps from boxes tightly surrounding each object; (b) ground-truth bounding boxes and instance segments within one image; (c) separated instance mask per object.
diffusion models have recently shown impressive results in image generation [22], natural language processing [24], and audio processing [28], there is still room for improvement for visual perception tasks. Some pioneering studies tried to adopt the diffusion model for segmentation tasks [6, 17]. However, despite significant interest in this field, there have not been previous solutions that successfully adapted generative diffusion models for dense instance segmentation. We argue that this may be because processing dense objects imposes constraints on the feature representations, making it inefficient to generate object features at the pixel level. We use regional features instead and apply attention mechanisms among them to produce instance masks. To the best of our knowledge, this is the first work to adopt a diffusion model for dense instance segmentation.
**Transformer**: Transformers have been proposed as an end-to-end framework for instance segmentation. They aim to extract long-range dependencies within one feature map and aggregate that information into contextual features. In general, the dependency extraction process is called attention operation with the context feature as a query. For instance segmentation tasks, each query corresponds to the required feature representation of one object. With queries, the instance mask prediction can be formulated as computing similarities between queries and potential objects. Conventional transformers [9] defined a set of global queries, computed per-pixel similarities, and updated them over the whole dataset. To recognize objects in all possible combinations of appearances, shapes, and positions, the size of the global queries inevitably increases with the scale of the dataset. It leads to infeasible solutions for large-scale datasets with dense objects. Most recently, research [11] proposed transformers that compute dynamic queries, and discard them after use. This enables those models to be applied to large-scale datasets without changing parameters.
**RCNN**: The Region-based CNN (RCNN) approach [30] uses bounding boxes to generate a manageable number of candidate regions of interest (ROIs). Over each ROI, one regional feature is extracted by a pooling operator, e.g. ROIPool [15], ROIAlign [18], etc. For instance segmentation tasks, RCNN is extended with mask prediction modules that evaluate binary instance scores independently on each ROI [18]. This leads to an efficient representation for dense objects. Depending on how regional proposals are generated, RCNN approaches exhibit many variants. Earlier RCNN models adopted different convolution sub-networks to generate class-agnostic regional proposals, and used them to predict the final class-wise instance masks [23]. Most recently, a few studies have formulated each proposal as one query, and used regional attention to predict instance masks [14]. We adopt this approach in our model.
## 3 Methods
### Preliminaries
**Diffusion model**. Diffusion approaches [21] are a family of deep generative models. Inspired by non-equilibrium thermodynamics [22], every model is a chained graph of length \(T\) that iteratively transforms from one initial state \(Z_{0}\) to one final state \(Z_{T}\). Each state transformation is referred to as a diffusion forward step that adds Gaussian noise once between successive states. Due to the special property of the Gaussian distribution [21], it is possible to directly reach state \(t\) within one step:
\[q(Z_{t}|Z_{0})=\mathcal{N}(Z_{t}|\sqrt{\bar{\alpha}_{t}}Z_{0},(1-\bar{\alpha}_{t})\mathbf{I}) \tag{1}\]
where \(Z_{0}\) is the original input data while \(Z_{t}\) (\(t\leq T\)) is a latent state corresponding to adding noise \(t\) times. In addition, \(\bar{\alpha}_{t}=\prod_{k=0}^{t}(1-\beta_{k})\) with \(\beta_{k}\) the noise variance schedule [6]. Interestingly, the forward diffusion chain is non-trainable
Figure 3: Our **DiffusionMask2former** for dense instance segmentation is a one-stage method. Instead of using anticipated anchors, we impose Gaussian noise on ground-truth boxes and generate a fixed-sized set of random bounding boxes. With feature maps extracted from the encoder, the bbox-decoder iteratively learns to denoise and predicts class-wise candidate boxes. Due to many unevenly distributed candidates, we propose a sampling module to discard negative samples and keep a proportion of balanced positive samples for fast convergence. Finally, we make final instance masks according to the given selected positive samples in the mask-decoder. For illustration simplicity, we only choose one object per class.
and can provide noised data at any given state \(t\). In contrast, the diffusion model aims to build a neural network \(f_{\theta}(Z_{t},t)\) that can reverse \(Z_{t}\) back to \(Z_{0}\). During training, \(f_{\theta}(Z_{t},t)\) is trained to denoise at an arbitrary state \(t\) by minimizing an objective loss function:
\[\mathcal{L}=\frac{1}{2}\|f_{\theta}(Z_{t},t)-Z_{0}\|^{2} \tag{2}\]
where \(Z_{0}\) is the ground-truth and \(Z_{t}\) is the noise-added data. At the inference stage, the model \(f_{\theta}\) reconstructs from pure noise \(Z_{T}\) back to predicted data \(Z_{0}\) with an updating rule [31] along the chain with step \(s\): \(Z_{T}\to Z_{T-s}\rightarrow\cdots\to Z_{0}\). A detailed formulation can be found in [31].
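As an informal illustration of equations (1) and (2), the sketch below implements the closed-form forward step and the corresponding regression target in plain NumPy. The cosine-schedule constants, tensor shapes, and the placeholder where the network \(f_{\theta}\) would act are illustrative assumptions rather than the exact settings of the model.

```
import numpy as np

def cosine_alpha_bar(T, s=0.008):
    """Cumulative signal coefficients alpha_bar_t for an assumed cosine schedule."""
    t = np.arange(T + 1)
    f = np.cos((t / T + s) / (1 + s) * np.pi / 2) ** 2
    return f / f[0]                      # alpha_bar_0 = 1, decreasing towards 0

def diffusion_forward(z0, t, alpha_bar, rng):
    """Sample z_t ~ N(sqrt(alpha_bar_t) * z0, (1 - alpha_bar_t) I) in one step, as in Eq. (1)."""
    noise = rng.standard_normal(z0.shape)
    z_t = np.sqrt(alpha_bar[t]) * z0 + np.sqrt(1.0 - alpha_bar[t]) * noise
    return z_t, noise

rng = np.random.default_rng(0)
T = 1000
alpha_bar = cosine_alpha_bar(T)
z0 = rng.uniform(0, 1, size=(8, 4))      # e.g. 8 normalised ground-truth boxes (toy data)
z_t, _ = diffusion_forward(z0, t=600, alpha_bar=alpha_bar, rng=rng)
# A denoising network f_theta(z_t, t) would be trained to regress z0 from z_t, as in Eq. (2):
loss = 0.5 * np.mean((z0 - z_t) ** 2)    # placeholder where f_theta(z_t, t) would appear
```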
### _Architecture_
Since DiffusionMask2former iteratively generates bounding boxes, the model needs to run multiple times at the inference stage. However, it would be computationally intractable to directly apply the whole model to the raw image at every iterative step during training. Therefore, it is beneficial to disentangle the image encoder from the other modules. We designed our encoder to extract multi-scale feature maps from the raw input image only once, while the other modules take these deep features as conditions instead. In this manner, the box and mask predictions are progressively refined from initial Gaussian noise. In addition, to facilitate the training of the mask decoder, we designed a sampling module that randomly selects positive samples from each individual class. The resulting class-wise balanced regional features enable the mask decoder to also learn about objects with low occurrence rates. In summary, the whole model is divided into four parts: image encoder, box decoder, sampling module, and mask decoder, where the image encoder only runs once while the other modules iteratively refine predictions with shared multi-scale feature maps. This is depicted in Fig. 3.
**Encoder**. The image encoder takes as input the raw image and extracts its high-level features for the subsequent decoders. It is implemented with a ResNet which is pre-trained on ImageNet [19] as a backbone, and is followed by a Feature Pyramid Network [25] to generate multi-scale feature maps.
**Bounding box decoder**. The box decoder is based on DiffusionDet [6]: it takes a set of proposal boxes as input to crop ROI features [33, 66] from the multi-scale feature maps generated by the image encoder. Next, these regional features are used to obtain box regression and classification results (cf. Fig. 4(b)). More precisely, our box decoder first generates initial dynamic queries from the ROI features (cf. Fig. 4(a)). Then, each box head sequentially makes stage predictions and refined dynamic queries for the next step (cf. Fig. 4(c)). In total, our module is composed of 6 cascading stages [11] that are passed sequentially. The difference between our decoder and that of DiffusionDet is that we adopt a stack of convolution kernels to compute the initial dynamic queries while DiffusionDet only uses a mean operation. This has the advantage that our module can extract more flexible regional descriptors to represent objects of different appearances and shapes.
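To make the contrast with DiffusionDet's mean operation concrete, the sketch below (PyTorch, with assumed kernel sizes and channel widths, since these details are not spelled out in the text) shows one plausible way to turn each pooled ROI feature into an initial dynamic query via a small convolution stack.

```
import torch
import torch.nn as nn

class QueryInit(nn.Module):
    """Turn one ROI feature map (C x 7 x 7) into one dynamic query vector (assumed sizes)."""
    def __init__(self, channels=256):
        super().__init__()
        self.convs = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, roi_feats):              # roi_feats: [N, C, 7, 7]
        x = self.convs(roi_feats)
        return self.pool(x).flatten(1)         # [N, C] dynamic queries

roi_feats = torch.randn(500, 256, 7, 7)        # ROIAlign crops for 500 proposal boxes (toy data)
queries_conv = QueryInit()(roi_feats)          # conv-stack initialisation (this work)
queries_mean = roi_feats.mean(dim=(2, 3))      # plain mean initialisation (DiffusionDet)
```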
**Sampling**. The conventional schemes used in other RCNN models are based on intersection over union (IoU) and consider a box as positive if its IoU with a ground-truth box is at least 0.5 and as negative otherwise [18]. These are, however, not appropriate solutions for our model because, at the early training stage, the boxes generated by diffusion methods are very noisy. Initially, it is therefore unlikely that adequate positive samples can be obtained for training the mask decoder. This might result in slow convergence or even an occasional failure during training. Moreover, these schemes tend to ignore imbalances among different classes. This bias favors the recognition of objects with high occurrence rates. Therefore, we propose a novel sampling method for the mask decoder. The scheme consists of two steps. In Step 1, we divide ground-truth boxes into groups according to their classes; in Step 2, within each group, we directly select \(n\) random boxes as positive samples. Our sampling method has two advantages. First, the mask decoder can always learn from rare classes whenever they appear in an image, which also prevents it from being overwhelmed by the majority class. Since the goal of the box decoder is to predict boxes closely approximating the ground truth, there is no harm in utilizing the ground-truth boxes at the beginning of the training. The second advantage is that the direct use of the selected ground-truth boxes as positive samples stabilizes and accelerates mask training. In our specific case, we have \(3\) classes and set the maximum number of objects per image to \(N=500\). Then for each class, there are up to \(n=\frac{500}{3}\simeq 133\) instances.
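A minimal NumPy sketch of this two-step scheme, assuming per-image arrays of ground-truth boxes and class labels; the function and variable names are illustrative, not the actual implementation.

```
import numpy as np

def sample_balanced_positives(gt_boxes, gt_labels, n_per_class, num_classes=3, rng=None):
    """Step 1: group ground-truth boxes by class; Step 2: keep up to n random boxes per class."""
    rng = rng or np.random.default_rng()
    keep = []
    for c in range(num_classes):
        idx = np.flatnonzero(gt_labels == c)
        if idx.size > n_per_class:
            idx = rng.choice(idx, size=n_per_class, replace=False)
        keep.append(idx)
    keep = np.concatenate(keep)
    return gt_boxes[keep], gt_labels[keep]

boxes = np.random.rand(800, 4)                 # densely packed ground-truth boxes (toy data)
labels = np.random.randint(0, 3, size=800)     # 0: glomeruli, 1: tubuli, 2: arteries
pos_boxes, pos_labels = sample_balanced_positives(boxes, labels, n_per_class=133)
```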
**Mask decoder**. Similar to the idea proposed in [9, 10], we also associate dynamic queries with positive ROIs. Similarities between ROI features and queries highlight effective queries and filter out the others. This enhances the ROI features by focusing on pixels that reside in objects of interest, and a per-pixel instance score is predicted over each positive ROI. This is depicted in
Fig. 4: The bounding box decoder takes multi-scale feature maps and a set of proposal boxes to predict classes and boxes iteratively. (a) It generates the initial dynamic queries via noised proposal boxes and multi-scale feature maps. (b) The module consists of a dynamic queries initialization head plus multiple boxes refinement heads. (c) One head generates stage predictions and refined dynamic queries.
Fig.5.
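The sketch below gives a toy NumPy version of this query-ROI interaction: the dynamic query is compared with every pixel of its ROI feature map, the resulting similarities gate the features, and a per-pixel instance score is read out. The shapes and the simple dot-product read-out are assumptions for illustration only.

```
import numpy as np

def roi_mask_from_query(roi_feat, query):
    """roi_feat: [C, H, W] pooled ROI feature; query: [C] dynamic query for this ROI."""
    C, H, W = roi_feat.shape
    flat = roi_feat.reshape(C, H * W)                      # pixels as columns
    attn = (query @ flat) / np.sqrt(C)                     # query-pixel similarities
    attn = np.exp(attn - attn.max()); attn /= attn.sum()   # softmax over ROI pixels
    enhanced = flat * attn                                 # highlight pixels of the object
    logits = query @ enhanced                              # toy per-pixel instance scores
    return logits.reshape(H, W)

rng = np.random.default_rng(0)
roi_feat = rng.standard_normal((256, 14, 14))
query = rng.standard_normal(256)
mask_logits = roi_mask_from_query(roi_feat, query)         # threshold to obtain a binary ROI mask
```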
### Implementation Details
For training and inference, DiffusionMask2former uses different schemes; this is illustrated in Fig. 6. At the training stage, our model learns to recover the ground truth from an arbitrary state, while at the inference stage, the model can ensemble multiple predictions to generate the final refined results.
#### 3.3.1 Training
During training, we first construct noisy boxes from the ground truth. Next, our model is trained to recover instance boxes and iteratively generate masks from an arbitrary state \(t\). Algorithm 1 provides the pseudo-code for the training procedure.
Diffusion forward. We first concatenate the ground-truth boxes with extra random boxes and pad them to a fixed length because the number of instances varies across images. Then Gaussian noise is added to the padded boxes with a noise scale that is controlled by \(\bar{\alpha}\) in Eq. (1). The scale changes with state \(t\) and adopts a monotonically decreasing cosine schedule, as proposed in [27]. Notably, the signal-to-noise ratio has a significant effect on the performance and favors a relatively high signal scaling value compared to image generation tasks [6, 7, 8, 13, 21].
Training loss. The box decoder takes noisy boxes as input and subsequently predicts the category classification results and box coordinates. Since there is a one-to-many mapping between the ground-truth boxes and the predicted boxes, we apply a set-loss function that assigns a predicted box to a ground-truth box based on the lowest cost [4, 32, 34]. Additionally, the positive boxes selected by the sampling module are input to the mask decoder, from which the instance masks are predicted. Because this is a one-to-one mapping, we apply a Binary Cross Entropy loss function.
```
def train(images, gt_boxes, gt_masks):
    """
    images:   [B, H, W, 3]
    gt_boxes: [B, *, 4]
    gt_masks: [B, H, W, *]
    B: batch size
    N: number of proposal boxes
    """
    # generate multi-scale features via encoder
    feats = encoder(images)
    # generate noised boxes at state t, where t is a random integer,
    # in the diffusion forward; pad boxes to N with t_boxes: [B, N, 4]
    t_boxes, t = diffusion(gt_boxes, mean=0, std=1)
    # generate initial dynamic queries
    d_query = initialize_query(t_boxes, feats)
    # learn to reverse noised boxes at state t back to ground-truth
    # boxes at state 0 and return refined dynamic queries
    [pred_boxes, d_query] = box_decoder(t_boxes, feats, d_query, t)
    # obtain bbox loss via set objective function
    loss_bbox = set_loss(pred_boxes, gt_boxes)
    # randomly select balanced positive boxes from ground-truth boxes
    pos_boxes = sampling(gt_boxes)
    # generate predicted instance masks
    pred_masks = mask_decoder(pos_boxes, feats, d_query)
    # obtain mask loss via set objective function
    loss_mask = set_prediction_loss(pred_masks, gt_masks)
    return loss_bbox, loss_mask
```
**Algorithm 1** DiffusionMask2former Training
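The set_loss call in Algorithm 1 relies on a lowest-cost assignment between predicted and ground-truth boxes. One common way to realise such a set loss is sketched below with a plain L1 cost and the Hungarian algorithm from SciPy; this is an illustrative assumption, and the actual cost terms of the model (classification, generalised IoU, etc.) are not reproduced here.

```
import numpy as np
from scipy.optimize import linear_sum_assignment

def set_box_loss(pred_boxes, gt_boxes):
    """pred_boxes: [N, 4], gt_boxes: [M, 4] with N >= M; returns the mean matched L1 cost."""
    cost = np.abs(pred_boxes[:, None, :] - gt_boxes[None, :, :]).sum(-1)   # [N, M]
    rows, cols = linear_sum_assignment(cost)        # lowest-cost one-to-one assignment
    return cost[rows, cols].mean()

rng = np.random.default_rng(0)
pred = rng.uniform(0, 1, size=(500, 4))             # predictions for 500 proposal boxes (toy data)
gt = rng.uniform(0, 1, size=(320, 4))               # ground-truth boxes in one image (toy data)
loss_bbox = set_box_loss(pred, gt)
```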
#### 3.3.2 Inference
In the inference phase, our model starts from the standard Gaussian distribution, which corresponds to the final state \(T\). Then a progressive denoising operation reverses the predictions back to the initial state \(0\). The process is shown in Algorithm 2.
Inference steps. In every step, the noisy boxes generated from the previous state \(\mathrm{t_{now}}\) are taken as input to the box decoder for predictions of category and box coordinates. After obtaining the de-noised boxes, DDIM [26, 31] is adopted to add noise and generate new noisy boxes at state \(\mathrm{t_{next}}\). Therefore, each iteration performs inference for just one state. At the final state, we can assemble all intermediate predictions to refine the final result. In our model, we only run the inference once and directly predict the results from the input Gaussian noise at state \(T\).
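For reference, a deterministic DDIM-style transition between two noise levels can be sketched as follows: the noise component is recovered from the current boxes and the predicted clean boxes, then recombined at the next noise level. The schedule values, shapes, and the eta = 0 (deterministic) choice are illustrative assumptions.

```
import numpy as np

def ddim_step(x_t, x0_pred, alpha_bar_now, alpha_bar_next):
    """Deterministic DDIM update of noisy boxes from state t_now to t_next (eta = 0)."""
    eps = (x_t - np.sqrt(alpha_bar_now) * x0_pred) / np.sqrt(1.0 - alpha_bar_now)
    return np.sqrt(alpha_bar_next) * x0_pred + np.sqrt(1.0 - alpha_bar_next) * eps

rng = np.random.default_rng(0)
x_t = rng.standard_normal((500, 4))        # noisy boxes at state t_now (toy data)
x0_pred = rng.uniform(0, 1, (500, 4))      # boxes predicted by the box decoder (toy data)
x_next = ddim_step(x_t, x0_pred, alpha_bar_now=0.3, alpha_bar_next=0.7)
```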
Figure 5: The mask decoder takes multi-scale feature maps and a set of positive proposal boxes to predict instance masks. The dynamic queries interact with regional features and only highlight pixels residing in objects of interest over each ROI. Final instance masks are generated from the enhanced ROI feature maps.
Figure 6: An illustration for a diffusion model at different stages. For simplicity, there are only 5 states with skip step as 1. State 4 is a Gaussian noise while state 0 is the ground truth. (a) In one training, the diffusion model only randomly selects several denoise paths to train the box decoder; (b) In one inference, the diffusion model can alternately pick between denoise and corrupt path to generate multiple predictions, and ensemble them together for the final refined results.
```
def infer(images, step, T):
    """
    images: [B, H, W, 3]
    step: the skip length for state transitions; T: the length of the chain
    B: batch size; N: number of proposal boxes
    """
    # generate multi-scale features via encoder
    feats = encoder(images)
    # return Gaussian noise as noisy boxes at state T; pad boxes to N: [B, N, 4]
    [t_boxes, _] = diffusion(mean=0, std=1)
    # state transition pairs: [(T, T-step), (T-step, T-2*step), ..., (step, 0)]
    time_pairs = uniform(0, T, step)
    # generate initial dynamic queries
    d_query = initialize_query(t_boxes, feats)
    for (t_now, t_next) in time_pairs:
        # predict boxes and dynamic query at state t_now
        [pred_boxes, d_query] = box_decoder(t_boxes, feats, d_query, t_now)
        # generate new noisy boxes from state t_now to t_next
        t_boxes = ddim(t_boxes, pred_boxes, t_now, t_next)
        # replace undesired boxes with random Gaussian noise
        t_boxes = box_replace(t_boxes)
    # generate predicted instance masks
    pred_masks = mask_decoder(pred_boxes, feats, d_query)
    return pred_boxes, pred_masks
```
**Algorithm 2** DiffusionMask2former Inference
**Box replacement**. In the inference phase, the predicted boxes can, according to their prediction scores, be roughly classified into two groups, i.e., _positive_ or _negative_. The positive boxes have scores above a particular threshold and contain proper objects. The negative ones have scores below the threshold and are arbitrarily located. The newly generated noisy boxes would significantly deteriorate if the negative boxes were directly sent into DDIM, because, unlike the corrupted boxes constructed in the training phase, they lie outside the Gaussian distribution. In order to align the inference phase well with the training phase, we replace the negative boxes with random ones generated from the Gaussian distribution.
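A minimal sketch of this renewal step, with an assumed score threshold and illustrative names: boxes scored as negative are simply re-drawn from the standard Gaussian before the next DDIM iteration.

```
import numpy as np

def box_replace(boxes, scores, threshold=0.5, rng=None):
    """Keep high-score (positive) boxes; re-sample low-score (negative) boxes from N(0, I)."""
    rng = rng or np.random.default_rng()
    negative = scores < threshold
    boxes = boxes.copy()
    boxes[negative] = rng.standard_normal((negative.sum(), boxes.shape[1]))
    return boxes

rng = np.random.default_rng(0)
noisy = rng.standard_normal((500, 4))      # noisy boxes after one DDIM step (toy data)
scores = rng.uniform(0, 1, 500)            # prediction scores from the box decoder (toy data)
renewed = box_replace(noisy, scores, rng=rng)
```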
## IV Results
In this section, we first introduce our dataset resource. Then, our model is compared with previously well-established instance segmentation models on kidney biopsies. We reproduce all those methods based on the package **mmdetection**[5]. Finally, we provide an ablation study on the sampling module.
### _Data sets_
The biopsies are prepared according to the Pathology Laboratory Protocol for kidney biopsies. The tissues were collected by core needle biopsy, Serra-fixed, rapidly processed using a routine tissue processor, then paraffin-embedded, serially sectioned at 5 \(\mu m\), mounted onto adhesive slides, and stained with JONES silver staining. All biopsy samples, both transplant kidneys and native kidneys, of 148 patients were collected at the multicenter archives of the Departments of Pathology, LUMC, AUMC, UMCU, the Netherlands, or at the archive of the Department of Nephrology, Leuven, Belgium. All biopsies were anonymized by a pathology staff member before any further analysis was done. The 148 Jones-stained WSIs were obtained from scanners with \(\mathrm{PPM}=0.25\) in BIG-TIFF format2, and 303 patches were extracted by the software ASAP version 1.9 for Windows 3 at level \(0\) of the BIG-TIFF images. The composition of the data set for our experiment is shown in table III, and the short edge sizes of the patches are depicted in Fig. 7. Moreover, 115 PAS-stained WSIs from LUMC were only used for testing to show the domain transfer ability. Notably, patches from the same WSIs were only used for either training or validation.
Footnote 2: [https://www.awaresystems.be/imaging/tif/bigtiff.html](https://www.awaresystems.be/imaging/tif/bigtiff.html)
Footnote 3: [https://computationalpathologygroup.github.io/ASAP/](https://computationalpathologygroup.github.io/ASAP/)
### _Experiments_
We compare our model with previously well-established instance segmentation models [18, 3, 14], all trained on the aforementioned kidney biopsy data set. We use **256** feature channels for the queries. In both the training and inference phases, our model adopts **500** dynamic queries, while QueryInst [14] uses 300 and 500 global queries respectively. In table I we compare the performances on object detection w.r.t. boxes and instance segmentation w.r.t. masks, respectively. For all experiments, our model does not use refinement in the inference phase, i.e. the number of iterations is set to 1. Notably, Mask-RCNN [18] and cascade Mask-RCNN [3] are two-stage methods that use RPN networks instead of dynamic queries, while QueryInst is a one-stage method that uses global queries.
Fig. 7: The histogram of the short edges of patches extracted from WSIs. In our dataset, the smallest edge is 387 and the largest edge is 4565.
In comparison, our model can be considered as a one-stage method and uses dynamic queries.
**Detection**. DiffusionMask2former achieves a detection AP of 51.7% with a ResNet-50 backbone, thereby outperforming QueryInst, Mask-RCNN, and cascade Mask-RCNN by a non-trivial margin. Interestingly, iterative refinement deteriorates the performance of detecting small objects while it favors the detection of medium to large objects. This indicates that extra operations are required to avoid information loss for small objects during refinement. Moreover, to deal with dense objects, the use of global queries is not a good choice: especially when the number of queries exceeds the number of feature channels (300 or 500 queries vs. 256 channels), more global queries cause a significant drop in the detection performance.
**Instance segmentation**. On instance segmentation our model achieves 44.8 % AP, which is comparable to other methods. Likewise, we observe that iterative refinement on boxes also affects the performance of instance segmentation.
In table II we compare the performance of each class. It shows that our model can, on average, detect glomeruli and arteries better than other methods. However, there is a significant drop in the detection of tubuli. Besides, our model has a competitive performance on instance segmentation of glomeruli, a significant decline in tubuli, and a non-trivial margin on arteries. Therefore, our model can process objects in various shapes and large sizes like arteries and glomeruli, while there is a bottleneck to processing small objects like tubuli.
Fig. 8 shows that our model can detect objects well at various levels of resolution. It correctly generates instance masks within all boxes, and in particular precisely locates larger arteries (cf. Fig. 8, second row). Cascade Mask-RCNN generates predictions similar to ours but fails to locate larger objects within one box, because it does not use an attention mechanism to capture long-range dependencies. Moreover, Mask-RCNN suffers from false multiple responses over one object (cf. Fig. 8, third row) due to a lack of cascade refinement. In summary, iterative refinement of bounding boxes during training can greatly suppress multiple predictions over one object, while the attention mechanism with queries helps to process large-scale objects. In addition, QueryInst cannot detect all instances well with global queries if the data set has more combinations of sizes and shapes: QueryInst-300 skips a lot of structures while QueryInst-500 even degenerates to falsely performing multiple predictions over a single object. Notably, there is a common drawback for RCNN-based models. One pixel might be assigned to multiple instance masks since pixel-wise prediction runs within each independent ROI. In that case, it is not possible to enforce the spatial constraint that one pixel is exclusive to one instance. In addition, Fig. 9 shows that our model is sufficiently generic to directly generate decent results on images with different stainings. While the model is only trained with Jones' silver-stained images, images of PAS-stained slides give good results. This indicates the potential for domain transfer.
### Comparison with Other Sampling Schemes
The distribution in the data set is unbalanced; therefore it is important to select positive samples with an equal likelihood among the different classes. Otherwise, the training stage is dominated by the class with the most objects (i.e. tubuli in our data set) and fails to correctly recognize objects from other classes. Fig. 10 shows that only a few tubuli can be recognized when the model is overwhelmed by tubuli during training, indicating that our sampling module, as a class-wise balanced scheme, is crucial to stabilize the training process.
## V Conclusion and Future Work
In this paper, we propose a novel dense instance segmentation paradigm by combining a diffusion model with a regional transformer. Based on refined object boxes generated from a denoising diffusion model, the regional transformer can generate dynamic queries and convert these into final ROI instance masks. Our pipeline has several appealing properties for application on data sets with dense objects. The iterative noise-to-box paradigm enables the quick and arbitrary location of many objects and dynamically extracts one query per object without any prior knowledge of size and shape. It becomes possible to use the same network parameters to obtain the desired speed-accuracy trade-off without re-training the model when applying it to large-scale data sets. Experiments on Jones' stained renal biopsies show that our model achieves excellent performance compared to other well-established models.
For future work, we envision further improving the performance of the diffusion model in instance-level recognition tasks. Furthermore, we would investigate the design of novel regional feature extractors that can generate more representative dynamic queries. Introducing spatial constraints into the inference of ROI masks would help prevent a single pixel from being assigned to multiple instances by forcing it to belong to exactly one.
|
2309.05562 | A crystallographic approach to symmetry-breaking in fluid layers | Symmetry-breaking bifurcations, where a flow state with a certain symmetry
undergoes a transition to a state with a different symmetry, are ubiquitous in
fluid mechanics. Much can be understood about the nature of these transitions
from symmetry alone, using the theory of groups and their representations. Here
we show how the extensive databases on groups in crystallography can be
exploited to yield insights into fluid-dynamical problems. In particular, we
demonstrate the application of the crystallographic layer groups to problems in
fluid layers, using thermal convection as an example. Crystallographic notation
provides a concise and unambiguous description of the symmetries involved, and
we advocate its broader use by the fluid dynamics community. | John F. Rudge, Dan McKenzie | 2023-09-11T15:48:20Z | http://arxiv.org/abs/2309.05562v2 | # A crystallographic approach to symmetry-breaking in fluid layers
###### Abstract
Symmetry-breaking bifurcations, where a flow state with a certain symmetry undergoes a transition to a state with a different symmetry, are ubiquitous in fluid mechanics. Much can be understood about the nature of these transitions from symmetry alone, using the theory of groups and their representations. Here we show how the extensive databases on groups in crystallography can be exploited to yield insights into fluid-dynamical problems. In particular, we demonstrate the application of the crystallographic layer groups to problems in fluid layers, using thermal convection as an example. Crystallographic notation provides a concise and unambiguous description of the symmetries involved, and we advocate its broader use by the fluid dynamics community.
## 1 Introduction
One of the best-known examples of pattern formation in fluid dynamics concerns Rayleigh-Benard convection in a fluid layer. As the temperature difference across the layer is increased, the geometry of the flow changes from an initial stationary state in which no flow occurs, to a series of more complex flows. At the onset of convection, patterns of rolls, hexagons, or squares can be seen, depending on the nature of the fluid properties (e.g. whether the viscosity is temperature-dependent) and the nature of the boundary conditions. As the temperature difference is increased, further changes in the geometry of the flow occur, with the flow ultimately becoming chaotic and time-dependent.
The transition from one flow geometry to another (e.g. from a stationary state to hexagons) involves a loss of symmetry; the system is said to have undergone a _spontaneous symmetry-breaking bifurcation_. What is remarkable is that much can be understood about the nature of the bifurcation purely from the consideration of the symmetries of the system. This understanding comes from the subset of dynamical systems theory termed _equivariant bifurcation theory_, and is well-covered in textbooks such as Hoyle (2006); Golubitsky & Stewart (2002). The language of symmetry is group theory. Each of the symmetry-breaking transitions from one state to another can be described by a state with a certain symmetry group transitioning to a state whose symmetry is a subgroup of the original group.
Crystallographers have long been concerned with transitions between states with different symmetry. Indeed, there is a celebrated theory of phase transitions in crystals due to Landau (Landau, 1965), which has much in common with equivariant bifurcation theory. Crystallographers have catalogued detailed symmetry information for periodic structures
in the famous International Tables for Crystallography (Hahn 2006), which have been supplemented in recent decades by extensive computer databases such as the Bilbao Crystallographic Server (Aroyo _et al._ 2006\(a\),_b_).
The aim of the present manuscript is to demonstrate how the extensive databases on group theory in crystallography can be exploited to understand transitions in fluid layers. While there has already been extensive use of group theory to understand transitions in fluid layers, authors tend to use a bespoke notation for their particular problem. The advantage of crystallographic notation is that it is standardised. Moreover, there is a wealth of group-theoretic information that can simply be looked up, without the need to re-derive it for each new problem. The use of crystallographic notation to describe convective transitions was first advocated by McKenzie (1988). The present manuscript is in a sense an extension of that work, and goes further by exploiting the theory of crystallographic layer groups (Wood 1964; Litvin & Wike 1991), which were only added to the International Tables in 2002 (Kopsky & Litvin 2010).
There is no new theory discussed in this manuscript: the theoretical ideas are well-established and can be found in textbooks. We aim to provide here an informal introduction to the main ideas, and the interested reader can refer to the literature for the detailed theory. One of the main difficulties with this topic is the large amount of technical jargon needed to properly describe the ideas: the topic encompasses fluid dynamics, representation theory, bifurcation theory, and crystallography. Additional difficulties arise because different communities use different words for the same concept (e.g. factor group/ quotient group; invariant subgroup/ normal subgroup; isotropy group/ little group/ stabilizer). Where possible we have tried to use the notation of the International Tables for the crystallographic concepts, and the notation of the textbook by Hoyle (2006) for equivariant bifurcation theory.
The manuscript is organised as follows. In section 2 we establish the fundamental symmetries of fluid layers. This is followed by an introduction to the crystallographic layer groups in section 3 and an introduction to symmetry-breaking transitions in section 4. Section 5 introduces the relevant representation theory, and section 6 the relevant bifurcation theory. The theory is then applied to some simple convection problems in section 7. Three appendices provide additional technical details, and three supplements give tables of group theory information.
## 2 The symmetry of fluid layers
We will consider a fluid dynamical problem which takes place in a layer. In terms of symmetry it is important to distinguish between three different symmetries: i) the symmetry of the domain; ii) the symmetry of the fluid-dynamical problem (i.e. the domain plus the governing equations and boundary conditions); and iii) the symmetry of solutions to the problem. Each of these symmetries may be different.
### Domain symmetries
Let us consider first the symmetries of the domain. We have an infinite fluid layer, and will take \(x\) and \(y\) as horizontal co-ordinates, and \(z\) as a vertical co-ordinate. Let \(z=0\) denote the mid-plane of the layer, and \(a\) denote the layer thickness. The domain is thus the region bounded by \(-a/2\leqslant z\leqslant a/2\).
A symmetry of the domain is an invertible map which maps points in the domain to other points in the domain. Here we will consider only distance-preserving symmetries of the domain (isometries) as these will be the ones of relevance to the physical problem. We can
translate all points by a horizontal displacement vector \(\mathbf{d}=(d_{1},d_{2},0)\)
\[t_{\mathbf{d}}:(x,y,z)\to(x+d_{1},y+d_{2},z) \tag{2.1}\]
and retain the same domain \(-a/2\leqslant z\leqslant a/2\). We also retain the same domain if we rotate about a vertical axis by an angle \(\theta\),
\[R_{z}^{\theta}:(x,y,z)\to(x\cos\theta-y\sin\theta,x\sin\theta+y\cos\theta,z) \tag{2.2}\]
or reflect in a vertical mirror plane, e.g. with normal \(x\),
\[m_{x}:(x,y,z)\to(-x,y,z). \tag{2.3}\]
The set of all such operations of the form (2.1), (2.2), (2.3) i.e. all horizontal translations, rotations about a vertical axes, and vertical mirrors, and their combinations, form a group known as \(E(2)\), the Euclidean group of distance-preserving transformation in a plane. The fluid layer domain is also invariant under reflections in horizontal plane, i.e. with normal \(z\),
\[m_{z}:(x,y,z)\to(x,y,-z). \tag{2.4}\]
from which it also follows that the domain is also invariant under the inversion operation
\[\overline{1}:(x,y,z)\to(-x,-y,-z). \tag{2.5}\]
The group of all distance-preserving operations (isometries) of the layer is \(E(2)\times C_{2}\), a direct product of \(E(2)\) and \(C_{2}\), where \(C_{2}\) denotes the cyclic group of order 2 containing two elements (taken here as the identity and the inversion operation). The combination of elements in the group leads to operations that are more complex than individual rotations and reflections e.g. one can have glide reflections which combine a reflection and a translation; and screw displacements which combine translations and rotations.
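As a concrete check of these operations, the short sketch below represents each isometry as a \(3\times 3\) matrix plus a horizontal translation and verifies two of the statements above numerically: that \(m_{z}\) combined with a rotation by \(\pi\) about the vertical axis gives the inversion \(\overline{1}\), and that a general element of \(E(2)\times C_{2}\) keeps points inside the layer \(|z|\leqslant a/2\). The numerical values are arbitrary illustrations.

```
import numpy as np

def R_z(theta):
    """Rotation about the vertical axis, Eq. (2.2)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

m_x = np.diag([-1.0, 1.0, 1.0])      # vertical mirror, Eq. (2.3)
m_z = np.diag([1.0, 1.0, -1.0])      # horizontal mirror, Eq. (2.4)
inv = np.diag([-1.0, -1.0, -1.0])    # inversion, Eq. (2.5)

# m_z composed with a rotation by pi about z gives the inversion:
assert np.allclose(m_z @ R_z(np.pi), inv)

# A general element of E(2) x C_2 acts as x -> A x + d with d horizontal:
def act(A, d, x):
    return A @ x + d

x = np.array([0.3, -1.2, 0.4])       # a point in a layer of thickness a = 1 (arbitrary choice)
y = act(R_z(0.7) @ m_z, np.array([2.0, -5.0, 0.0]), x)
assert abs(y[2]) <= 0.5              # the layer |z| <= a/2 is preserved
```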
### Problem symmetries
A fluid dynamical problem in the layer consists of the domain, a set of governing equations and boundary conditions. At each point in the domain there is a set of field variables that describe the state of the fluid (e.g. its temperature, velocity, pressure). A symmetry operation of the fluid dynamical problem is described by the combination of one of the isometries with a description of how the field variables transform. If the governing equations and boundary conditions are invariant under this transformation then it is a symmetry of the fluid dynamical problem.
Choices of material properties and boundary conditions mean that not all operations that are isometries of the domain are necessarily symmetries of the fluid dynamical problem. For example, if a different boundary condition is used on the top and bottom of the layer (e.g. fixed temperature on one, fixed-flux on another), then the system cannot be invariant under a horizontal mirror like (2.4). Or, if one considers an inclined convection problem where the gravity vector is at an angle to the vertical axis of the layer then the problem is not invariant under arbitrary rotations about a vertical axis (Reetz _et al._, 2020).
A fluid dynamical problem which is invariant under the full group \(E(2)\times C_{2}\) of isometries of the layer, and which will be used in many of the examples which follow, is Rayleigh-Benard convection in a fluid layer of constant viscosity with appropriately chosen symmetric boundary conditions (e.g. both boundaries being fixed-flux and free-slip) and the Boussinesq approximation. The gravity vector is assumed to be aligned with the vertical. The natural field which describes the state of the system is the temperature. The governing equations are invariant under the operations given in (2.1) to (2.5), provided that the temperature
perturbation \(\theta\) (the difference in temperature from a conductive steady-state) transforms as
\[t_{\mathbf{d}},R_{z}^{\theta},m_{x}:\theta\rightarrow\theta, \tag{2.6}\] \[m_{z},\overline{1}:\theta\rightarrow-\theta, \tag{2.7}\]
The sign change under horizontal mirror reflection is a manifestation of the symmetry between hot, rising fluid and cold, sinking fluid. A more detailed discussion of the symmetry of this problem can be found in Appendix A.
### Solution symmetries
In general, the symmetries of solutions to the equations are not the same as symmetries of the problem, although the solutions' symmetries are generically subgroups of the set of symmetries of the problem. Rayleigh-Benard convection provides a natural example of this: a planform of hexagons or squares is not invariant under any arbitrary translation but only a subgroup of allowed translations. However, one can apply a general element of the symmetry group of the problem to a given solution to yield another solution of the equations.
## 3 Crystallographic layer groups
This work focuses on particular subgroups of \(E(2)\times C_{2}\) known as the crystallographic layer groups. They are the set of isometries of the layer that are doubly-periodic in space: that is, instead of having continuous translation symmetry in the horizontal as \(E(2)\times C_{2}\) does, the translation symmetry is discrete. The layer groups are invariant under \(t_{\mathbf{d}}\) in (2.1) only for discrete lattice vectors satisfying
\[\mathbf{d}=x\mathbf{a}_{1}+y\mathbf{a}_{2} \tag{3.1}\]
where \(x,y\in\mathbb{Z}\), and \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\) are basis vectors for the lattice. Many pattern-forming problems lead to steady fluid flows that can be described as having a layer group symmetry. For example, planforms described as squares, hexagons, bimodal, or triangles are all doubly periodic in space and are examples of layer group symmetry. The principal example of a convective flow that does not have a layer group symmetry is that of convective rolls: this has a discrete translation symmetry in one horizontal direction, but a continuous translation symmetry in the other horizontal direction. Layer groups are an example of a subperiodic group: a group where the dimension of the space is greater than the dimension of the periodic lattice. For layer groups, the space in which the group elements act is 3-dimensional, but there is only a 2-dimensional lattice of translations. The layer groups are in a sense intermediate between the full 3-d space groups (3-dimensional groups with a 3-dimensional translation lattice) and the 2-d plane or wallpaper groups (2-dimensional groups with a 2-dimensional translation lattice).
There are 80 layer groups, and their properties are detailed in the International Tables for Crystallography, volume E (Kopsky & Litvin, 2010) (hereafter referred to as ITE) and in computer databases such as the Bilbao Crystallographic Server (de la Flor _et al._, 2021) (hereafter referred to as BCS). Each layer group is identified by a unique number and Hermann-Mauguin symbol. One example that we will focus on is the layer group \(p4/nmm\) (layer group 64, illustrated in Figure 1a). This group has a square lattice, a 4-fold vertical rotation axis, two conjugate sets of vertical mirror planes, and a glide reflection \(n\) that combines reflection in a horizontal plane with a translation by \((\frac{1}{2},\frac{1}{2},0)\).
From (2.7) we have that a symmetry operation that sends \(z\rightarrow-z\) involves a change in sign of the temperature perturbation (i.e. hot to cold or vice-versa). There is a broader class of crystallographic groups, termed "black and white", "magnetic" or "Shubnikov" groups, that have as a possible group element \(1^{\prime}\), which changes the sign of a field without changing position. With such groups a combination of a horizontal mirror and a sign change would be denoted
Figure 1: Symmetry diagrams from ITE for a) \(p4/nmm\) (origin choice 1), and two of its subgroups, b) \(pmmn\) and c) \(p4mm\). Shown is the unit cell in a projection onto the horizontal mid-plane. Squares indicate fourfold vertical rotation axes (\(4_{z}\)), filled ovals are twofold vertical rotation axes (\(2_{z}\)), circles are inversion centres (\(\overline{1}\)), unfilled squares with filled ovals indicate a \(\overline{4}_{z}\) vertical inversion axis. Solid lines are vertical mirror planes, dashed lines are vertical glide planes. The symbol in the top right refers to the horizontal glide plane \(n\). Full arrows around the edge refer to a horizontal twofold rotation axis, half-arrows refer to a twofold screw axis. Red colouring indicates symmetry operations that send \(z\rightarrow-z\) and will be associated with sign changes in the temperature field (hot to cold and vice-versa). Examples of convective flows with these symmetries are shown in Figure 3b; Figure 4b; Figure 6, Figure 7 and Figure 8.
as \(m^{\prime}_{z}\), and the black-and-white layer groups depicted in Figure 1a,b would be referred to as \(p4/n^{\prime}mm\) and \(pmmn^{\prime}\)(Litvin, 2013). However, here we will not denote the symmetry operations with primes for two reasons: First, the fluid problems we consider are invariant only on combining the sign change in \(\theta\) with the horizontal mirror: they are not invariant under a sign change in \(\theta\) alone, so the \(1^{\prime}\) operator is not present. Second, a general fluid dynamical problem can consist of more field variables that just one, and each variable may transform in a different way under the isometries e.g. the horizontal velocities and the toroidal potential do not change sign under \(m_{z}\) (see Appendix A). We will simply write \(m_{z}\) as the group element corresponding to horizontal mirror reflection and it should be understood that it acts on different fields in different ways (some change sign, some do not).
Many of the plots in this manuscript show the temperature field in the horizontal mid-plane. Position in the mid-plane is invariant under the horizontal mirror \(m_{z}\): the only action of \(m_{z}\) in the mid-plane is to change the sign of the temperature perturbation. The mid-plane temperature fields can therefore be considered as directly belonging to one of the two-dimensional black-and-white plane groups. There are 80 black-and-white plane groups, which are isomorphic to the 80 layer groups. A mapping between the symbols used for black-and-white plane groups and those for layer groups can be found in ITE.
## 4 Symmetry-breaking transitions
Suppose that as a control parameter (such as the Rayleigh number) is varied, a symmetry-breaking transition occurs from a state with symmetry group \(G\) to a state with a lower-symmetry group \(H\). For simplicity, let us just consider steady (time-independent) states. From symmetry arguments alone there is often much that can be said about the nature of the transition: e.g. one can often classify the bifurcation as being either pitchfork or transcritical, and also write down the generic form of the equations describing the amplitudes of the critical modes (section 6). More generally, given an initial state with a group \(G\), one can determine the possible groups \(H\) that can arise in a symmetry-breaking transition.
The first requirement for \(H\) is that it is a subgroup of \(G\). Subgroups of layer groups have a particular structure and classification (Muller, 2013). A subgroup is termed a _translationengleiche_ subgroup or t-subgroup if it has the same translation symmetries as its parent. A t-subgroup has a different point group to its parent and so is necessarily a non-isomorphic subgroup with a different layer group number and symbol. A subgroup is termed a _klassengleiche_ subgroup or k-subgroup if the translations are reduced but the order of the point group remains the same. Klassengleiche subgroups can be further categorised into those that are isotypic (have the same layer group number and symbol) and those that are non-isotypic (have a different layer group symbol). Finally a subgroup may have both the order of the point group reduced and the translations reduced. However in this case there always exists an intermediate subgroup \(M\) such that \(M\) is a t-subgroup of \(G\) and \(H\) is a k-subgroup of \(M\) (Hermann's theorem). As such any subgroup \(H\) of \(G\) can be described in terms of a chain of t and k relationships.
A subgroup \(H\) of a group \(G\) is termed _maximal_ if there is no intermediate subgroup \(M\) of \(G\) such that \(H\) is a proper subgroup of \(M\). Both ITE and BCS provide comprehensive lists of the maximal subgroups of the layer groups, from which parent group-subgroup relationships can be described. An example of such subgroup information is given in Table 1 for the layer group \(p4/nmm\), along with additional information useful for describing symmetry-breaking bifurcations. Similar tables for all 80 layer groups can be found in supplement 1. In Table 1 each maximal subgroup is listed with its layer group number and Hermann-Mauguin symbol. The index of the subgroup in the parent is given (the index is the number of left cosets of the
subgroup in the parent). The type of subgroup is given as either t or k for translationengleiche or klassengleiche. The nature of the additional information in Table 1 on factor group, core, and bifurcation is described in sections 5 and 6.
Group theory places more constraints on the group \(H\) than it simply being a subgroup of \(G\). In fact for a generic steady-state symmetry-breaking bifurcation the subgroup \(H\) must be an _isotropy subgroup_ of a particular _absolutely irreducible representation_ of the group \(G\)(Hoyle, 2006; Golubitsky & Stewart, 2002). Thus to understand symmetry-breaking bifurcations of layer groups we must understand their group representations, which we turn to now.
## 5 Representations of layer groups
A _representation_ of a group is simply a mapping of the group elements to a set of matrices in a way that preserves the group operation (i.e. the mapping is a homomorphism onto \(GL(V)\)). A representation acts on a certain vector space \(V\) of dimension \(n\). An _invariant subspace_ of a representation is a vector subspace \(W\) that has the property that \(\boldsymbol{Dw}\in W\) for all \(\boldsymbol{w}\in W\) and for all \(\boldsymbol{D}\) in the set of representation matrices. The spaces \(W=\{\boldsymbol{0}\}\) and \(W=V\) are always invariant subspaces, known as the trivial subspaces. If the representation contains a non-trivial invariant subspace then it is said to be _reducible_, otherwise it is _irreducible_. A representation is _absolutely irreducible_ if the only linear maps which commute with the representation are multiples of the identity. For representations over \(\mathbb{C}\) there is no distinction between being absolutely irreducible and just irreducible, but there is a difference over \(\mathbb{R}\) where representations can be irreducible but not absolutely irreducible. Irreducible representations (or irreps for short) are the building blocks of representation theory. Any representation of the group can be written as a direct sum of its irreducible representations (Maschke's theorem).
Given a point \(\boldsymbol{v}\in V\), we can define its _isotropy subgroup_\(\Sigma\) as
\[\Sigma=\{g\in G:g\boldsymbol{v}=\boldsymbol{v}\} \tag{5.1}\]
and its corresponding _fixed-point subspace_ by
\[\operatorname{Fix}(\Sigma)=\{\boldsymbol{w}\in V:g\boldsymbol{w}=\boldsymbol {w},\forall g\in\Sigma\} \tag{5.2}\]
An isotropy subgroup is said to be _axial_ if the dimension of its fixed-point subspace is 1. Axial isotropy subgroups are of particular interest because the existence of solution branches with the given isotropy subgroup is guaranteed under certain conditions by the equivariant branching lemma (Hoyle, 2006; Golubitsky & Stewart, 2002).
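For a finite isotropy subgroup \(\Sigma\) the dimension of \(\operatorname{Fix}(\Sigma)\) can be computed directly from the representation matrices, since the group average \(\frac{1}{|\Sigma|}\sum_{g\in\Sigma}\boldsymbol{D}(g)\) is a projector onto \(\operatorname{Fix}(\Sigma)\) whose trace equals its dimension. The toy sketch below applies this to the standard two-dimensional representation of the point group \(D_{4}\) (an illustrative example, not one of the layer-group irreps tabulated in the supplements), confirming that a single mirror is an axial isotropy subgroup.

```
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Standard 2-d representation of D4: four rotations and four mirrors.
rotations = [rot(k * np.pi / 2) for k in range(4)]
mirrors = [np.diag([1.0, -1.0]) @ r for r in rotations]
D4 = rotations + mirrors

def fix_dimension(subgroup):
    """dim Fix(Sigma) = trace of the group-average projector (1/|Sigma|) sum_g D(g)."""
    P = sum(subgroup) / len(subgroup)
    return int(round(np.trace(P)))

print(fix_dimension(D4))                       # 0: only the origin is fixed by all of D4
print(fix_dimension([np.eye(2), mirrors[0]]))  # 1: a single mirror is an axial isotropy subgroup
```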
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
46 & \(pmmn\) & 2 & t & \(C_{2}\) & 46 & \(pmmn\) & \(C_{1}\) & pitchfork \\
48 & \(cmme\) & 2 & t & \(C_{2}\) & 48 & \(cmme\) & \(C_{1}\) & pitchfork \\
52 & \(p4/n\) & 2 & t & \(C_{2}\) & 52 & \(p4/n\) & \(C_{1}\) & pitchfork \\
54 & \(p42_{1}2\) & 2 & t & \(C_{2}\) & 54 & \(p42_{1}2\) & \(C_{1}\) & pitchfork \\
55 & \(p4mm\) & 2 & t & \(C_{2}\) & 55 & \(p4mm\) & \(C_{1}\) & pitchfork \\
58 & \(p\overline{4}2_{1}m\) & 2 & t & \(C_{2}\) & 58 & \(p\overline{4}2_{1}m\) & \(C_{1}\) & pitchfork \\
59 & \(p\overline{4}m2\) & 2 & t & \(C_{2}\) & 59 & \(p\overline{4}m2\) & \(C_{1}\) & pitchfork \\
64 & \(p4/nmm\) & 9 & k & \(C_{3}^{2}\rtimes D_{4}\) & 5 & \(p11a\) & \(D_{4}\) & transcritical \\
64 & \(p4/nmm\) & 25 & k & \(C_{5}^{2}\rtimes D_{4}\) & 5 & \(p11a\) & \(D_{4}\) & pitchfork \\
64 & \(p4/nmm\) & 49 & k & \(C_{7}^{2}\rtimes D_{4}\) & 5 & \(p11a\) & \(D_{4}\) & pitchfork \\ \hline \hline \end{tabular}
\end{table}
Table 1: Maximal subgroups of \(p4/nmm\) (No. 64)
Classifying the steady-state symmetry-breaking bifurcations of layer groups consists of identifying their irreducible representations, and subsequently finding their isotropy subgroups. The general theory of irreducible representations of layer groups is in detail somewhat involved, but it is known, and results can simply be looked up in textbooks or extracted from computer databases. In many cases it is not necessary to invoke the full general theory, as the appropriate irreps can be found quickly by lifting from an appropriate factor group.
### Lifting representations
Given a group \(G\) and a normal subgroup \(N\), one can form the factor group (or quotient group) \(G/N\). The elements of \(G/N\) are the left cosets of \(N\) in \(G\), which have a well-defined multiplication operator when the subgroup \(N\) is normal.
Suppose we are interested in understanding a transition between a group \(G\) and a subgroup \(H\). We want to know the irrep of \(G\) associated with the transition. In the crystallography literature this is termed "the inverse Landau problem" (Ascher & Kobayashi 1977; Litvin _et al._ 1986). One solution to this is as follows. We first find the normal core \(N\) of the subgroup \(H\) in \(G\): that is, the largest normal subgroup of \(G\) that is contained in \(H\). In some cases this may be the whole subgroup \(H\), but not in general. We then form the factor group \(G/N\), and we refer to this as the factor group associated with the transition. Table 1 gives the factor groups and normal cores associated with each of the maximal subgroups of \(p4/nmm\). The table also gives the image of the subgroup \(H\) under the natural homomorphism onto cosets of \(N\). The advantage of finding the factor group \(G/N\) is that it is typically a small finite group and so finding its irreps is much more straightforward than finding the irreps of the group \(G\) (which in the case of layer groups is an infinite group). Moreover, the irreps of the factor group \(G/N\) can be _lifted_ to an irrep of the group \(G\) using the natural homomorphism onto cosets. Suppose we have an irrep \(\rho\) of the factor group
\[\rho:G/N\to GL(V) \tag{5.3}\]
and suppose \(q\) is the natural homomorphism
\[q:G\to G/N, \tag{5.4}\]
where \(q(g)=gN\). Then the composition \(\rho\circ q\) is the irrep of \(G\) lifted from \(G/N\). Indeed it is an irrep of \(G\) with \(N\) in its kernel. Representations lifted from a factor group are sometimes termed _engendered representations_ in the crystallography literature.
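The lifting construction is easy to make concrete. The following minimal sketch (in Python, using the cyclic group \(\mathbb{Z}_{4}\) and its index-2 subgroup \(\{0,2\}\) as a toy stand-in rather than a layer group) builds \(\rho\circ q\) and checks that it is a homomorphism of \(G\) with \(N\) in its kernel.

```python
# Toy example of lifting a representation from a factor group G/N to G.
# Here G = Z_4 = {0,1,2,3} under addition mod 4 and N = {0,2}, so G/N is C_2.

G = [0, 1, 2, 3]            # elements of G
N = [0, 2]                  # normal subgroup N

def q(g):
    """Natural homomorphism G -> G/N; cosets labelled 0 (= N) and 1 (= 1 + N)."""
    return g % 2

def rho(coset):
    """The non-trivial 1-dimensional irrep of the factor group C_2."""
    return 1 if coset == 0 else -1

def lifted(g):
    """The lifted (engendered) representation rho o q of G."""
    return rho(q(g))

# The lift is a homomorphism of G ...
assert all(lifted((g + h) % 4) == lifted(g) * lifted(h) for g in G for h in G)
# ... and N lies in its kernel.
assert all(lifted(n) == 1 for n in N)
print([lifted(g) for g in G])   # [1, -1, 1, -1]
```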
All the t-subgroups of \(p4/nmm\) have the same factor group: \(C_{2}\). In this case the subgroups are all normal subgroups, so the normal core \(N\) is the same as the subgroup \(H\). The irreps associated with these transitions are very simple. They are 1-dimensional and just send each element of the subgroup \(H\) to 1 and all others to \(-1\). As will be discussed later, this is associated with a pitchfork bifurcation. Any index-2 subgroup necessarily has a factor group of \(C_{2}\) and is associated with a pitchfork bifurcation.
### t-subgroups
The irreps associated with translationengleiche transitions, which preserve the translations of the lattice, can be found by lifting from an appropriate factor group. The set of all translations \(T\) of the lattice forms a normal subgroup of any layer group. Therefore the irreps of \(G\) with the pure translations in the kernel can be found by lifting from the factor group \(G/T\). The factor group \(G/T\) is isomorphic to the isogonal point group associated with the layer group, so the irreps are simply those of the corresponding point group.
The character table for the factor group \(G/T\) is shown for \(p4/nmm\) in Table 2, and is the
same as that for the isogonal point group \(4/mmm\) (\(D_{4h}\)). Similar tables for all 80 layer groups can be found in supplement 2. The _character_ of a representation matrix is simply its trace. Characters are independent of the basis used in the representation, and are the same for group elements that are conjugate. A character table simply consists of a table of all the characters for all the irreps of a group. For many applications of representation theory it is sufficient to know the characters of the representation and it is not necessary to know the representation matrices themselves.
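One routine check on such a table is the orthonormality of its rows under the class-weighted scalar product \(\langle\chi_{i},\chi_{j}\rangle=\frac{1}{|G|}\sum_{C}|C|\,\chi_{i}(C)\,\chi_{j}(C)^{*}\). The short NumPy sketch below does this for the ten irreps of Table 2, with the class sizes and characters transcribed directly from the table (\(|G|=16\)); the same weighted product reappears in the trace formula of Appendix C.

```python
import numpy as np

# Characters of the 10 irreps of 4/mmm (D_4h), in the class order of Table 2:
# 1, 2_z, 4_z, 2_y, 2_xy, -1, m_z, -4_z, m_y, m_xy
chars = np.array([
    [1,  1,  1,  1,  1,  1,  1,  1,  1,  1],   # A_1g
    [1,  1,  1, -1, -1,  1,  1,  1, -1, -1],   # A_2g
    [1,  1, -1,  1, -1,  1,  1, -1,  1, -1],   # B_1g
    [1,  1, -1, -1,  1,  1,  1, -1, -1,  1],   # B_2g
    [2, -2,  0,  0,  0,  2, -2,  0,  0,  0],   # E_g
    [1,  1,  1,  1,  1, -1, -1, -1, -1, -1],   # A_1u
    [1,  1,  1, -1, -1, -1, -1, -1,  1,  1],   # A_2u
    [1,  1, -1,  1, -1, -1, -1,  1, -1,  1],   # B_1u
    [1,  1, -1, -1,  1, -1, -1,  1,  1, -1],   # B_2u
    [2, -2,  0,  0,  0, -2,  2,  0,  0,  0],   # E_u
])
class_size = np.array([1, 1, 2, 2, 2, 1, 1, 2, 2, 2])
order = class_size.sum()                         # |G| = 16

gram = (chars * class_size) @ chars.T / order    # class-weighted inner products
assert np.allclose(gram, np.eye(10))             # the rows are orthonormal
```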
Each of the 7 t-subgroups listed in Table 1 is associated with one of the 1-dimensional irreps in Table 2. There are also additional axial subgroups identified in Table 2 associated with the 2-dimensional representations labelled \(E_{g}\) and \(E_{u}\). These subgroups are not in Table 1 as they are not maximal subgroups. It should be stressed that isotropy subgroups need not be maximal subgroups.
### General theory of representations of layer groups
Irreps associated with k-transitions, where translation symmetries are lost, can also be obtained by lifting from appropriate factor groups. An example is given in Table 1, which lists an index-9 k-transition from \(p4/nmm\) to \(p4/nmm\) where the associated irreps can be found by considering the irreps of the corresponding factor group \(C_{3}^{2}\rtimes D_{4}\). A discussion of the irreps of this particular factor group can be found in Matthews (2004) (see his Figure 1). Such a transition is an example of a spatial-period-multiplying bifurcation where the periodicity of the pattern is broken but maintained on a larger scale: in this case after the symmetry break the lattice basis vectors are scaled by a factor of 3 in each direction.
An alternative approach is to exploit the general theory which describes the complete set of irreps of layer groups. This theory is somewhat involved, but is understood, and one can simply look up appropriate representations using published tables (Bradley & Cracknell 1972; Litvin & Wike 1991; Milosevic _et al._ 1998) or computer software (Aroyo _et al._ 2006\(a\); de la Flor _et al._ 2021; Stokes _et al._ 2016).
\begin{table}
\begin{tabular}{l|c c c c c c c c c c|l} \hline \hline & 1 & 2\({}_{z}\) & 4\({}_{z}\) & 2\({}_{y}\) & 2\({}_{xy}\) & \(\overline{1}\) & \(m_{z}\) & \(\overline{4}_{z}\) & \(m_{y}\) & \(m_{xy}\) & axial subgroups \\ \hline size & 1 & 1 & 2 & 2 & 2 & 1 & 1 & 2 & 2 & 2 & \\ \hline \(A_{1g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \(p4/nmm\) (64) \\ \(A_{2g}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(p4/n\) (52) \\ \(B_{1g}\) & 1 & 1 & \(-1\) & 1 & \(-1\) & 1 & 1 & \(-1\) & 1 & \(-1\) & \(pmmn\) (46) \\ \(B_{2g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & 1 & \(-1\) & \(-1\) & 1 & \(cmme\) (48) \\ \(E_{g}\) & 2 & \(-2\) & 0 & 0 & 0 & 2 & \(-2\) & 0 & 0 & 0 & \(p2_{1}/m11\) (15), \(c2/m11\) (18) \\ \(A_{1u}\) & 1 & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p42_{1}2\) (54) \\ \(A_{2u}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & 1 & 1 & \(p4mm\) (55) \\ \(B_{1u}\) & 1 & 1 & \(-1\) & 1 & \(-1\) & \(-1\) & \(-1\) & 1 & \(-1\) & 1 & \(p\overline{4}2_{1}m\) (58) \\ \(B_{2u}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(p\overline{4}m2\) (59) \\ \(E_{u}\) & 2 & \(-2\) & 0 & 0 & 0 & \(-2\) & 2 & 0 & 0 & 0 & \(pm2_{1}n\) (32), \(cm2e\) (36) \\ \hline \hline \end{tabular}
\end{table}
Table 2: Translationengleiche character table of \(p4/nmm\) (No. 64). The top row gives the Seitz symbol labels for a member of each conjugacy class. The number of elements in each conjugacy class is listed on the row beneath. Each irrep is given a label on the left using Mulliken notation. The right column gives the corresponding axial isotropy subgroups associated with each irrep. Note that the Seitz symbol labels only refer to the point group part of the symmetry operations: the coset representatives of \(2_{y}\), \(2_{xy}\), \(\overline{1}\), \(m_{z}\), \(\overline{4}_{z}\) also involve a translation by \(\left(\frac{1}{2},\frac{1}{2},0\right)\) (see the ITE description of \(p4/nmm\), origin choice 1).
The starting point for the general theory concerns the representations of the subgroup \(T\) of all translations of the lattice. The subgroup \(T\) is a normal subgroup of any layer group. It is also an Abelian group, so its irreps are 1-dimensional. The irreps of \(T\) are simply \(\mathrm{e}^{-\mathbf{i}\boldsymbol{k}\cdot\boldsymbol{d}}\) for a translation by a vector \(\boldsymbol{d}\), where \(\boldsymbol{k}\) is a wavevector which labels the particular irrep. Wavevectors that differ by a reciprocal lattice vector lead to identical irreps. As such, the wavevectors for defining irreps are restricted to a region of reciprocal space known as the Brillouin zone (a unit cell in reciprocal space) such that each irrep has a unique \(\boldsymbol{k}\) label.
The irreps of the layer groups can be built up from the irreps of \(T\) using the theory of induced representations (see appendix B for a brief example, and Bradley & Cracknell (1972); Aroyo _et al._ (2006_a_); de la Flor _et al._ (2021) for the detailed theory). Each irrep is labelled by a wavevector \(\boldsymbol{k}\), a symbol which represents the type of wavevector (the labels \(\Gamma\), \(\Sigma\), \(\Delta\) etc in Figure 2), and an index (1, 2, 3,...) referencing a particular representation of the little group of the wavevector. For example, the index-9 k-transition from \(p4/nmm\) to \(p4/nmm\) is associated with two possible irreps of the parent group: \({}^{*}\Sigma_{1}\) with \(\boldsymbol{k}=(1/3,1/3)\) and \({}^{*}\Delta_{3}\) with \(\boldsymbol{k}=(0,1/3)\). The full matrices of these representations are given in appendix B.2. The irreps associated with t-transitions have a zero wavevector, \(\boldsymbol{k}=(0,0)\). As such they are sometimes labelled by \({}^{*}\Gamma\) and an index, rather than the Mulliken symbols used in Table 2, as they correspond to the \(\Gamma\) point in the Brillouin zone (Figure 2).
Much information can be obtained about the irreps and isotropy subgroups associated with transitions by querying computer databases (de la Flor _et al._, 2021; Aroyo _et al._, 2006\(a\); Stokes _et al._, 2016; Perez-Mato _et al._, 2012; Iraola _et al._, 2022). Given a parent group and a subgroup one can ask the software tools to provide the associated irreps and the corresponding fixed-point subspaces of the isotropy subgroups. For a given parent group one can also obtain from the tools a complete listing of all possible isotropy subgroups and the corresponding irreps. Most of these software tools are designed for use on full 3D space groups, rather than layer groups. However, each layer group can be associated with a corresponding space group (Litvin & Kopsky, 2000). Given a 3D space group \(S\), and \(T_{z}\) as the one-dimensional
Figure 2: The Brillouin zone for \(p4/nmm\) from the Bilbao Crystallographic Server (de la Flor _et al._, 2021). The irreps are specified by a wavevector lying in the labelled triangular region, known as the representation domain. The origin is at the \(\Gamma\) point. The special point \(M\) is at \(\left(\frac{1}{2},\frac{1}{2}\right)\). Software which uses text labels will refer to \(\Gamma\) as GM, \(\Delta\) as DT, and \(\Sigma\) as SM.
subgroup of \(S\) of the vertical translations, then the factor group \(S/T_{z}\) is isomorphic to a layer group. The irreps of layer groups can be obtained from the irreps of space groups where the wavevector is constrained to lie in a particular plane.
## 6 Equivariant bifurcation theory
Once the irrep associated with a particular transition is known, then the nature of the bifurcation can be understood using equivariant bifurcation theory (Hoyle 2006; Crawford & Knobloch 1991; Golubitsky & Stewart 2002). The full dynamics, which are described by a set of PDEs, can be reduced in the neighbourhood of the bifurcation point to simple ODEs of the form
\[\frac{\mathrm{d}\boldsymbol{y}}{\mathrm{d}t}=\boldsymbol{f}\left(\boldsymbol{y};\mu\right) \tag{6.1}\]
using methods such as centre-manifold reduction or Lyapunov-Schmidt reduction. Such equations are termed _amplitude equations_. \(\boldsymbol{y}\) is the vector of mode amplitudes (which would be referred to as an _order parameter_ in crystallography). The vector \(\boldsymbol{y}\) is of the same dimension as the irrep. \(\boldsymbol{y}=\boldsymbol{0}\) before the symmetry-break. Equilibrium solutions satisfy \(\boldsymbol{f}(\boldsymbol{y};\mu)=\boldsymbol{0}\). The function \(\boldsymbol{f}\) satisfies \(\boldsymbol{f}\left(\boldsymbol{0};\mu\right)=\boldsymbol{0}\) such that \(\boldsymbol{y}=\boldsymbol{0}\) is always an equilibrium solution (although not necessarily a stable one). \(\mu\) is the bifurcation parameter, which for convection problems can be related to the Rayleigh number. Bifurcation occurs when \(\mu\) passes through zero.
The function \(\boldsymbol{f}\left(\boldsymbol{y};\mu\right)\) is _equivariant_ under the action of the matrices of the given irrep, that is
\[\boldsymbol{f}\left(\boldsymbol{g}\boldsymbol{y};\mu\right)=\boldsymbol{g}\boldsymbol{f}\left(\boldsymbol{y};\mu\right) \tag{6.2}\]
for all matrices \(\boldsymbol{g}\) in the given irrep. Equivariance places strong constraints on the form of the amplitude equations, and in turn on the nature of the bifurcation.
The simplest example of the consequences of equivariance are in a 1-dimensional system, invariant under \(C_{2}=\{1,-1\}\). Equivariance under \(C_{2}\) implies the function \(f\) is odd (\(f(-y;\mu)=-f(y;\mu)\)) which in turn implies that in a Taylor expansion of \(f(y;\mu)\) about \(y=0\) no even-order terms in \(y\) will appear. It follows from the symmetry alone that the associated bifurcation must be a pitchfork.
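For illustration, scaling the leading cubic coefficient to \(-1\) (a supercritical case, assumed here purely for definiteness) gives the truncated amplitude equation and its equilibria
\[
\frac{\mathrm{d}y}{\mathrm{d}t}=\mu y-y^{3},\qquad y=0\quad\text{or}\quad y=\pm\sqrt{\mu}\ \ (\mu>0),
\]
with the two non-trivial branches exchanged by the \(C_{2}\) action, as expected for a pitchfork.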
A common method of analysing amplitude equations is to consider their Taylor expansion in powers of \(\boldsymbol{y}\), and to truncate at some particular order. Much generic behaviour about the bifurcation can be described by these truncated forms. Moreover, symmetry places constraints on the number of independent parameters needed to describe the truncated form: for the \(C_{2}\) example there are no quadratic or other even order terms present. The dimension of the space of equivariants of given degree can be obtained purely using the characters of the representation (see Appendix C and Antoneli _et al._ (2008)). This can be used to show for example that the faithful irrep of \(D_{3}\) has a quadratic equivariant, unlike \(C_{2}\). The faithful irrep of \(D_{3}\) is generically associated with a transcritical bifurcation, although for a particular problem there is always the possibility that the coefficient associated with the quadratic equivariant is zero due to some particular feature of the governing equations (e.g. self-adjointness, Golubitsky _et al._ (1984)) that would then cause the bifurcation to be a pitchfork.
The final column of Table 1 classifies the type of generic steady-state bifurcation associated with each of the maximal subgroups of \(p4/nmm\). Only the index-9 k-transition is associated with a transcritical bifurcation; all others are pitchforks. The bifurcations can generically be classified depending on whether the dynamics when restricted to the fixed-point subspace has a quadratic term: pitchfork if not, transcritical if so. The classification of bifurcations
is discussed further in supplement 3, which provides character tables of several small finite groups and the dimensions of their spaces of equivariants.
## 7 Convection
We will now apply the theory discussed in the previous sections to transitions in fluid layers, and in particular to thermal convection. Consider a layer of fluid, heated from below and cooled from above. As the Rayleigh number is increased past some critical value the system begins to convect. Depending on choices of boundary conditions and rheology, different planforms of the flow are possible: common planforms seen at onset are rolls, hexagons, and squares. Each of these convective planforms can be classified using crystallographic notation, e.g. squares have layer group symmetry \(p4/nmm\) (layer group 64), as illustrated in Figure 3(b).
The physical state of the fluid can be described by its temperature field. Figure 3 shows examples of possible temperature fields that can occur at the initial onset of convection. Each panel shows the mid-plane temperature field with red/blue colouring for hot/cold, along with the corresponding reciprocal space (Fourier domain) pattern, where each dot is coloured according to phase, and the size of the dot indicates its amplitude. At the onset of convection there is typically a single critical horizontal wavenumber, thus all the dots lie on a circle in reciprocal space. Each of the patterns illustrated in Figure 3 is a single parameter family: once the origin and orientation of the pattern are specified, the only remaining parameter that describes the flow is the amplitude. Figure 4 illustrates two-parameter examples that still have a single horizontal wavenumber (e.g. bimodal flow). The initial onset of convection and the selection of convective planform has been very well studied (see e.g. the extensive studies by Buzano & Golubitsky (1983); Golubitsky _et al._ (1984); Knobloch (1990)); we simply note here that each of the convective planforms which are typically described in the convective literature by a name (like hexagons, bimodal flow, patch-work quilt etc.) can be given a Hermann-Mauguin symbol that unambiguously specifies its symmetry.
### Numerical simulations
As the Rayleigh number is increased the initial convective planforms of hexagons, rolls, squares etc. undergo a series of further symmetry-breaking transitions. Typically such transitions are investigated using numerical simulations. Ideas from group theory can both illuminate the results of the numerical simulations and be used to make the computations more efficient.
As a concrete example, consider a 3-dimensional numerical simulation of fixed-flux convection in a fluid layer at infinite Prandtl number. At the onset of convection the expected planform is squares (Proctor 1981), so it is natural to consider a computational domain that is a box with periodic boundary conditions in the horizontal. The temperature field within the box is described in terms of coefficients with respect to some finite set of basis vectors. The particular calculations here use spectral basis elements of the form
\[\theta(x,y,z)=\sum_{k=-K}^{K}\sum_{l=-L}^{L}\sum_{m=0}^{M}c_{klm}\mathrm{e}^{ \mathrm{i}(kx+ly)}T_{m}(z) \tag{7.1}\]
i.e. a basis of Fourier modes in the horizontal, and Chebyshev polynomials in the vertical (Burns _et al._ 2020). However, the same group theory ideas can be exploited whatever choice of basis is made. Since \(\theta\) (the temperature perturbation) is a real variable, \(c_{klm}^{*}=c_{\overline{k}\,\overline{l}\,m}\).
Suppose there is an \(N\)-dimensional set of coefficients describing the given state. Each symmetry can be represented by an \(N\)-by-\(N\) matrix which describes the action of that symmetry
Figure 3: Examples of crystallographic classification for convective flows consisting of a single horizontal wavenumber. Shown are a) rolls, b) squares (checkerboard), c) rectangles (patchwork quilt), d) triangles, e) down-hexagons, f) anti-squares, g) anti-hexagons. Each pattern, with the exception of rolls, is labelled by its Hermann-Mauguin layer group symbol. The pattern of rolls does not correspond to a layer group, as it has one axis with a continuous translation symmetry (its symmetry may be referred to as \(\rho_{a}\nu_{b}ma2\), Kopsky (2006)). The left plot of each panel shows the mid-plane temperature field, the right plot shows the Fourier transform (reciprocal space plot). In reciprocal space the size of the dots show the amplitude, the colour of the dots show the phase (colourbar in top right). Grid lines indicate the reciprocal lattice, although note that some mode patterns are consistent with more than one type of lattice (e.g. both hexagonal and rectangular). The lattice shown is that used in ITE for the given layer group. With a single horizontal wavenumber all modes must lie on a circle in reciprocal space (dotted line). All of the above patterns represent a single parameter family: once the origin and orientation is specified the only remaining parameter is the amplitude.
on the basis coefficients (here the set of \(c_{klm}\)). In general this \(N\)-by-\(N\) representation is reducible, and it is possible to change basis such that in the new basis the components transform according to the irreducible representations of the given group.
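The action of individual symmetry elements on the coefficients is easy to write down explicitly. The sketch below is a minimal illustration only (assuming a \(2\pi\)-periodic horizontal box with integer mode numbers \(k,l\) and a vertical coordinate \(z\in[-1,1]\) so that the mid-plane is \(z=0\); the function names are ours and not part of any solver): a horizontal translation multiplies each mode by a phase, and the combined mid-plane symmetry of (A 12) flips the sign of the even Chebyshev modes while leaving the odd ones unchanged.

```python
import numpy as np

# Coefficients are stored as a dictionary {(k, l, m): c_klm} for the basis
# exp(i(kx + ly)) T_m(z) with z in [-1, 1].

def translate(coeffs, a, b):
    """Horizontal translation by (a, b): c_klm -> exp(-i(k a + l b)) c_klm."""
    return {(k, l, m): np.exp(-1j * (k * a + l * b)) * c
            for (k, l, m), c in coeffs.items()}

def midplane(coeffs):
    """m_z combined with theta -> -theta: c_klm -> -(-1)^m c_klm,
    using T_m(-z) = (-1)^m T_m(z)."""
    return {(k, l, m): -((-1) ** m) * c for (k, l, m), c in coeffs.items()}

# A state containing only odd Chebyshev modes is invariant under the combined
# mid-plane symmetry:
state = {(2, 1, 1): 0.3 + 0.1j, (1, -2, 3): 0.05}
assert midplane(state) == state
```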
The change of basis is achieved using projection operators. To project onto the components which transform according to the \(J\)-th irrep of a group \(G\) we apply the operator \(\mathbf{P}^{J}\) defined by
\[\mathbf{P}^{J}=\frac{\dim J}{|G|}\sum_{g\in G}(\chi^{J}(g))^{*}\mathbf{g} \tag{7.2}\]
where \(\dim J\) is the dimension of the irrep, \(|G|\) is the order of the group, \(\chi^{J}(g)\) is the character of the element \(g\), and \(\mathbf{g}\) is the matrix representing the action of the element \(g\) in the given representation. Moreover, it should be noted that through a change of basis the representations can be made unitary (orthogonal in the case of real representations) using Weyl's unitary trick. In turn, an orthogonal projection matrix can be used to give an orthogonal set of basis vectors corresponding to a particular irrep using a QR decomposition.
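A minimal NumPy sketch of the projection formula (7.2) is given below, using the 3-dimensional permutation representation of \(S_{3}\cong D_{3}\) as a small stand-in example (not one of the layer-group representations discussed above). That representation splits as trivial \(\oplus\) standard; the projectors recover the two components, and a QR step yields an orthonormal symmetry-adapted basis.

```python
import numpy as np
from itertools import permutations

def projector(dim_J, chi_J, matrices):
    """Projection operator (7.2): P^J = (dim J / |G|) * sum_g chi^J(g)* g."""
    G = len(matrices)
    return (dim_J / G) * sum(np.conj(c) * M for c, M in zip(chi_J, matrices))

# The six 3x3 permutation matrices of S_3 acting on R^3.
perms = list(permutations(range(3)))
mats = [np.eye(3)[list(p)] for p in perms]

# Element-by-element characters: chi(g) = trace = number of fixed points.
chi_triv = [1] * 6
chi_perm = [np.trace(M) for M in mats]
chi_std = [c - 1 for c in chi_perm]          # standard 2-d irrep = perm - trivial

P_triv = projector(1, chi_triv, mats)        # projects onto span{(1,1,1)}
P_std = projector(2, chi_std, mats)          # projects onto its orthogonal complement

assert np.allclose(P_triv + P_std, np.eye(3))
assert np.allclose(P_triv @ P_triv, P_triv) and np.allclose(P_std @ P_std, P_std)

# An orthonormal symmetry-adapted basis for the standard component via QR:
Q, _ = np.linalg.qr(P_std)
basis_std = Q[:, :2]                          # P_std has rank 2
```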
An illustration of an isotypic decomposition into basis vectors which transform according to the irreps is given in Figure 5. This example considers the \(c_{210}\) coefficient, and the coefficients to which it can be related using the layer symmetry \(p4/nmm\). The periodicity of the computational domain is assumed to align with the principal lattice of translations of \(p4/nmm\). As such the basis vectors \(c_{klm}\) are invariant under the group \(T\) of lattice translations, so one only needs to consider the factor group \(G/T\) whose character table is given in Table 2. There are 8 coefficients that are related by symmetry to \(c_{210}\), and the 8 dimensional space can be decomposed into the irreps given in Table 2 as the direct sum \(A_{1g}\oplus A_{2g}\oplus B_{1g}\oplus B_{2g}\oplus 2E_{u}\) (see Dionne _et al._ (1997) for an application of this particular decomposition).
The isotypic decomposition can be used to simplify the numerical study of symmetry-breaking bifurcations. An example of this is shown in Figures 6, 7 and 8 which illustrates
Figure 4: Further examples of crystallographic classification for convective flows consisting of a single horizontal wavenumber. These examples form two parameter families, and each pattern may be considered as a superposition of two of the single parameter patterns shown in Figure 3. Shown are a) trapezoids (a combination of squares (64) and triangles (72)) b) bimodal (a combination of squares (64) and rolls or two orthogonal sets of rolls) c) up-rectangles (a combination of rectangles (48) and hexagons(77)), d) down-triangles (a combination of triangles (72) and hexagons (77))
breaking of a \(p4/nmm\) pattern of squares into two different planforms with less symmetry, one of \(pmmn\) and one of \(p4mm\). The computational domain is a box, with aspect ratio such that the distance between rising and sinking regions is 8 times the layer depth. The heat flux is fixed on the top and bottom boundaries and both boundaries are free-slip. Fixed-flux convection with free-slip boundaries in an infinite layer formally has \(k=0\) as the most unstable wavenumber, with critical Rayleigh number \(Ra_{c}=120\) (Chapman & Proctor, 1980; Rieutord, 2015). For the finite horizontal scale of the numerical problem the critical Rayleigh number is slightly higher, \(Ra_{c}=126\). Figure 6a shows the planform near onset, at \(Ra=200\), which is dominated by the four modes on the critical circle \(|\mathbf{k}|=k_{c}\) although there are also small contributions from higher modes.
As the Rayleigh number is increased there is more power in higher modes (Figure 6b, c) and sharper features are seen. However, the solution shown in Figure 6c (and also Figure 7a and Figure 8a) at \(Ra=1500\) is actually unstable to perturbations that break the symmetry.
Figure 5: An example of an isotypic decomposition for the wavevector star generated by \(\mathbf{k}=(2,1)\) with no \(z\)-dependence. Shown is the decomposition into the irreps of \(p4/nmm\) given in Table 2. The star decomposes as \(A_{1g}\oplus A_{2g}\oplus B_{1g}\oplus B_{2g}\oplus 2E_{u}\).
The unstable solution can be computed by imposing the \(p4/nmm\) symmetry on the numerical scheme, either by restricting the set of basis vectors used to those associated with the trivial irrep \(A_{1g}\), or by projecting the solutions onto that irrep at each iteration using the projection operator.
The stability of the \(p4/nmm\) solution as a function of Rayleigh number can be assessed using standard linear stability analysis, but the calculations can be made more efficient by exploiting the symmetry. A good general introduction to the numerical methods for performing linear stability and bifurcation analysis can be found in Tuckerman & Barkley (2000). The stability analysis relies on the calculation of the eigenvalues of an appropriate Jacobian matrix. When an eigenvalue has a real part which goes from being negative to being positive there is instability and an associated bifurcation to a new flow pattern. The isotypic decomposition aids the linear stability analysis by allowing one to block-diagonalise the Jacobian according to the irreps. This has several advantages: i) there are smaller linear systems to deal with in the individual blocks; ii) the eigenvalues in the individual blocks
Figure 6: Examples of symmetry-breaking bifurcations in fixed-flux convection in a fluid layer. Shown are images of the mid-plane temperature field, both in real space (contour plots) and in reciprocal space (dot patterns). At the onset of convection a square planform is seen, with \(p4/nmm\) symmetry (layer group 64). The left panels show the evolution of the \(p4/nmm\) solution as the Rayleigh number is increased from a) \(Ra=200\), b) \(Ra=700\), and c) \(Ra=1500\). The \(p4/nmm\) solution at \(Ra=1500\) is unstable to perturbations that break the symmetry. Panels d) and e) show new solutions at \(Ra=1500\) that emerge from pitchfork bifurcations from the \(p4/nmm\) solution. d) has symmetry \(pmmn\) (layer group 46) and e) has symmetry \(p4mm\) (layer group 55).
Figure 7: 3-dimensional rendering of the convective flows shown in horizontal cross-section in Figure 6c, d, e. Shown are equally-spaced contours of the temperature field. All flows have \(Ra=1500\). a) has symmetry \(p4/nmm\), b) has symmetry \(pmmn\), c) has symmetry \(p4mm\). The loss of the 4-fold vertical rotation symmetry \(4_{z}\) about the centre of the box in going from a to b can be clearly seen. The loss of symmetry between a and c is more subtle: the upwellings are now not related by symmetry to the downwellings. c has lost the horizontal glide reflection \(n\), the 2-fold rotations about horizontal axes, and the 2-fold screw rotations about horizontal axes (see Figure 1).
Figure 8: Plots identical to Figure 6c, d, e but with the origin of the coordinate system shifted by \((\frac{1}{4},\frac{1}{4},0)\) (the coordinate system given as origin choice 2 for \(p4/nmm\) in ITE). The corresponding symmetry diagrams (with origin shifted from Figure 1) are shown on the left. All flows have \(Ra=1500\). a) has symmetry \(p4/nmm\), b) has symmetry \(pmmn\), c) has symmetry \(p4mm\). Some of the symmetry losses are clearer to see with this choice of origin as the rotation axes are moved away from the edges of the box. The loss of the 4-fold inversion axes (\(\overline{4}_{z}\)) in going from a to b or c can be clearly seen. The temperature perturbation is necessarily zero on the mid-plane at a 4-fold inversion axis.
may be more widely separated than those of the full problem, speeding up convergence of numerical eigenvalue techniques; iii) one can directly identify the symmetries that are broken and the corresponding active irrep.
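The block-diagonalisation itself is straightforward to demonstrate on a toy example. The sketch below (again using the \(S_{3}\) permutation action on \(\mathbb{R}^{3}\) rather than an actual convection Jacobian) builds an equivariant matrix by group-averaging and shows that, in a symmetry-adapted basis, the trivial and standard components decouple; by Schur's lemma the \(2\times 2\) block of this absolutely irreducible component is a multiple of the identity.

```python
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
mats = [np.eye(3)[list(p)] for p in permutations(range(3))]

# Group-averaging a random matrix produces one that commutes with every group element.
A = rng.standard_normal((3, 3))
J = sum(M @ A @ M.T for M in mats) / len(mats)

# Symmetry-adapted orthonormal basis: first column along (1,1,1) (trivial component),
# remaining columns spanning its orthogonal complement (standard component).
v0 = np.ones(3) / np.sqrt(3)
Q, _ = np.linalg.qr(np.column_stack([v0, np.eye(3)[:, :2]]))

J_adapted = Q.T @ J @ Q
# The off-diagonal (trivial | standard) blocks vanish ...
assert np.allclose(J_adapted[0, 1:], 0) and np.allclose(J_adapted[1:, 0], 0)
# ... and the 2x2 block is a scalar multiple of the identity (Schur's lemma).
assert np.allclose(J_adapted[1:, 1:], J_adapted[1, 1] * np.eye(2))
print(np.round(J_adapted, 3))
```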
Figure 9 illustrates the linear stability analysis of the \(p4/nmm\) solution in Figure 6b at \(Ra=700\). Shown are the eigenmodes whose eigenvalues have the largest real part, corresponding to the irreps \(B_{1g}\) and \(A_{2u}\). At \(Ra=700\), the eigenvalues of both modes are real and negative. However, for slightly larger \(Ra\), at \(Ra=756\) for \(B_{1g}\) and \(Ra=815\) for \(A_{2u}\), the eigenvalues become positive, leading to bifurcations and the solutions with broken symmetry seen in Figure 6d, e (and also Figure 7b, c and Figure 8b, c). Given the irreps involved, these bifurcations are necessarily pitchfork bifurcations.
Symmetry can also be exploited to calculate the new equilibrium states after a bifurcation. The solution without the symmetry-break can be used as an initial condition, with a small perturbation added in the form of the symmetry-breaking eigenmode of the linear stability calculation. One can use the projection operators to constrain the solution to have the appropriate symmetry (e.g. for Figure 6d, imposing the group \(pmmn\) or restricting to only those basis vectors corresponding to the irreps \(A_{1g}\) and \(B_{1g}\) of \(p4/nmm\) in Table 2). Once the eigenmodes associated with the bifurcations have been calculated it is possible to systematically perform a centre manifold reduction to determine the amplitude equations (Carini _et al._, 2015), although we have not done this here.
Figure 9: Examples of eigenmodes in a linear stability analysis of the \(p4/nmm\) solution depicted in Figure 6b with \(Ra=700\) (using origin choice 1). a) shows the mid-plane temperature field for the eigenmode with eigenvalue with largest real part which transforms according to the irrep \(B_{1g}\) in Table 2, associated with the bifurcation to the \(pmmn\) solution in Figure 6d. b) shows the corresponding eigenmode which transforms according to irrep \(A_{2u}\) in Table 2, associated with the bifurcation to \(p4mm\) in Figure 6e.
### Further examples
There are many examples of flow transitions in fluid layers in the literature, but almost none use the crystallographic notation that has been adopted here. One study that does is McKenzie (1988), which describes a wide variety of transitions in convection, particularly those in the experimental studies of a temperature-dependent viscosity fluid by White (1988). A convective system with a temperature-dependent viscosity is not invariant under reflection in a horizontal mirror plane. As such the layer groups involved are those without such mirror planes, which are equivalent to those of the 17 plane or wallpaper groups. Many of the examples discussed by McKenzie (1988) are pitchfork bifurcations, although it should be noted that those bifurcations he discusses with factor group \(D_{3}\) are generically transcritical, and not pitchforks.
## 8 Conclusions
Our aim in this work has been to demonstrate the utility of the extensive databases in crystallography for understanding transitions in fluid layers. For simplicity, we have focussed on steady-states, doubly-periodic in space that are described by the crystallographic layer groups. We have not discussed transitions which break the time-translation symmetry (Hopf bifurcations) which involve spatio-temporal group elements that combine space group elements with time translations. The bifurcation theory for such cases is well understood, but it would be helpful to have a standardised crystallographic notation for describing such transitions. Translations in time can be associated with a fourth dimension. Hopf bifurcations lead to solutions that are time-periodic, and thus the natural groups to consider will be the subperiodic groups in four-dimensions with a three-dimensional lattice of translations (the two horizontal space dimensions and one time dimension). We have also made no attempt here to describe the representation theory for the initial onset of convection: formally this would involve a study of the representations of \(E(2)\times C_{2}\) which is a non-compact Lie group. The difficulty of dealing with such a group is usually side-stepped in the literature by considering instead a problem on a compact domain (the two-torus).
There is a wealth of useful information that lies within the crystallographic databases, and we encourage fluid dynamicists to exploit it.
## References
* Antoneli _et al._ (2008)Antoneli, Fernando, Dias, Ana Paula S. & Matthews, Paul C. 2008 Invariants, equivariants and characters in symmetric bifurcation theory. _Proceedings of the Royal Society of Edinburgh: Section A Mathematics_ **138**, 477-512.
* Aroyo _et al._ (2006\(a\))Aroyo, M. I., Kirov, A., Capillas, C., Perez-Mato, J. M. & Wondratschek, H. 2006\(a\) Bilbao Crystallographic Server. II. Representations of crystallographic point groups and space groups. _Acta Crystallographica Section A Foundations of Crystallography_ **62**, 115-128.
* Bradley & Cracknell (1972)Bradley, C. J. & Cracknell, A. P. 1972 _The Mathematical Theory of Symmetry in Solids: Representation Theory for Point Groups and Space Groups_. Oxford: Clarendon Press.
* Burns _et al._ (2020)Burns, K. J., Vasil, G. M., Oishi, J. S., Lecoanet, D. & Brown, B. P. 2020 Dedalus: A flexible framework for numerical simulations with spectral methods. _Physical Review Research_ **2**, 023068.
* Buzano & Golubitsky (1983)Buzano, E. & Golubitsky, M. 1983 Bifurcation on the hexagonal lattice and the planform problem. _Philosophical Transactions of the Royal Society of London. Series A, Mathematical and Physical Sciences_ **308** (1505), 617-667.
* Carini _et al._ (2015)Carini, M., Auteri, F. & Giannetti, F. 2015 Centre-manifold reduction of bifurcating flows. _Journal of Fluid Mechanics_**767**, 109-145.
* Chapman & Proctor (1980)Chapman, C. J. & Proctor, M. R. E. 1980 Nonlinear Rayleigh-Benard convection between poorly conducting boundaries. _Journal of Fluid Mechanics_**101** (4), 759-782.
* Crawford & Knobloch (1991)Crawford, J D & Knobloch, E 1991 Symmetry and Symmetry-Breaking Bifurcations in Fluid Dynamics. _Annual Review of Fluid Mechanics_**23** (1), 341-387.
* Dionne _et al._ (1997)Dionne, Benoit, Silber, Mary & Skeldon, Anne C 1997 Stability results for steady, spatially periodic planforms. _Nonlinearity_**10** (2), 321-353.
* de la Flor _et al._ (2021)de la Flor, Gemma, Souvignier, Bernd, Madariaga, Gotzon & Aroyo, Mois I. 2021 Layer groups: Brillouin-zone and crystallographic databases on the Bilbao Crystallographic Server. _Acta Crystallographica Section A Foundations and Advances_**77** (6), 559-571.
* Golubitsky & Stewart (2002)Golubitsky, Martin & Stewart, Ian 2002 _The Symmetry Perspective_. Basel: Birkhauser Basel.
* Golubitsky _et al._ (1984)Golubitsky, M., Swift, J.W. & Knobloch, E. 1984 Symmetries and pattern selection in Rayleigh-Benard convection. _Physica D: Nonlinear Phenomena_**10** (3), 249-276.
* Grenier & Ballou (2012)Grenier, B. & Ballou, R. 2012 Crystallography: Symmetry groups and group representations. _EPJ Web of Conferences_**22**, 00006.
* Hahn (2006)Hahn, Th., ed. 2006 _International Tables for Crystallography_, _International Tables for Crystallography_, vol. A. Chester, England: International Union of Crystallography.
* Hoyle (2006)Hoyle, Rebecca 2006 _Pattern Formation_. Cambridge University Press.
* Iraola _et al._ (2022)Iraola, Mikel, Mases, Juan L., Bradlyn, Barry, Horton, Matthew K., Neupert, Titus, Vergniory, Maia G. & Tsirkin, Stepan S. 2022 IrRep: Symmetry eigenvalues and irreducible representations of ab initio band structures. _Computer Physics Communications_**272**, 108226.
* Knobloch (1990)Knobloch, E. 1990 Pattern selection in long-wavelength convection. _Physica D: Nonlinear Phenomena_**41** (3), 450-479.
* Kopsky (2006)Kopsky, V. 2006 Unified system of Hermann-Mauguin symbols for groups of material physics. 1. Groups with decomposable lattices. _Acta Crystallographica Section A Foundations of Crystallography_**62** (2), 77-92.
* Kopsky & Litvin (2010)Kopsky, V. & Litvin, D. B., ed. 2010 _International Tables for Crystallography_, vol. E. Chester, England: International Union of Crystallography.
* Landau (1965)Landau, L.D. 1965 On the theory of phase transitions. In _Collected Papers of L.D. Landau_, pp. 193-216. Elsevier.
* Litvin (2013)Litvin, D. B., ed. 2013 _Magnetic Group Tables_. Chester, England: International Union of Crystallography.
* Litvin _et al._ (1986)Litvin, D. B., Fuksa, J. & Kopsky, V. 1986 On exomorphic types of phase transitions. _Journal of Mathematical Physics_**27** (3), 661-667.
* Litvin & Kopsky (2000)Litvin, Daniel B. & Kopsky, Vojtech 2000 Subperiodic groups isomorphic to factor groups of reducible space groups. _Acta Crystallographica Section A Foundations of Crystallography_**56** (4), 370-374.
* Litvin & Wike (1991)Litvin, Daniel B. & Wike, Thomas R. 1991 _Character Tables and Compatibility Relations of The Eighty Layer Groups and Seventeen Plane Groups_. Boston, MA: Springer US.
* Matthews (2004)Matthews, P. C. 2004 Automating Symmetry-Breaking Calculations. _LMS Journal of Computation and Mathematics_**7**, 101-119.
* McKenzie (1988)McKenzie, Dan 1988 The symmetry of convective transitions in space and time. _Journal of Fluid Mechanics_**191**, 287.
* Milosevic _et al._ (1998)Milosevic, I, Nikolic, B, Damnjanovic, M & Krcmar, M 1998 Irreducible representations of diperiodic groups. _Journal of Physics A: Mathematical and General_**31** (15), 3625-3648.
* Muller (2013)Muller, Ulrich 2013 Subgroups and supergroups of point and space groups. In _Symmetry Relationships between Crystal Structures_, pp. 86-99. Oxford University Press.
* Perez-Mato _et al._ (2012)Perez-Mato, J.M., Aroyo, M.I. & Orobengoa, D. 2012 Symmetry considerations in structural phase transitions. _EPJ Web of Conferences_**22**, 00008.
* Proctor (1981)Proctor, M. R. E. 1981 Planform selection by finite-amplitude thermal convection between poorly conducting slabs. _Journal of Fluid Mechanics_**113**, 469.
* Reetz _et al._ (2020)Reetz, Florian, Subramanian, Priya & Schneider, Tobias M. 2020 Invariant states in inclined layer convection. Part 2. Bifurcations and connections between branches of invariant states. _Journal of Fluid Mechanics_**898**, A23.
* Rieutord (2015)Rieutord, Michel 2015 _Fluid Dynamics: An Introduction_. _Graduate Texts in Physics_. Cham: Springer International Publishing.
* Stokes _et al._ (2016)Stokes, Harold T., van Orden, Seth & Campbell, Branton J. 2016 ISOSUBGROUP: an internet tool for generating isotropy subgroups of crystallographic space groups. _Journal of Applied Crystallography_**49** (5), 1849-1853.
* The GAP Group (2021)The GAP Group 2021 _GAP -- Groups, Algorithms, and Programming_, Version 4.11.1.
* Tuckerman & Barkley (2000)Tuckerman, Laurette S. & Barkley, Dwight 2000 Bifurcation Analysis for Timesteppers. In _Numerical Methods for Bifurcation Problems and Large-Scale Dynamical Systems. The IMA Volumes in Mathematics and its Applications, vol 119_, pp. 453-466. Springer, New York.
* White (1988)White, David B. 1988 The planforms and onset of convection with a temperature-dependent viscosity. _Journal of Fluid Mechanics_**191**, 247.
* Wood (1964)Wood, Elizabeth A. 1964 The 80 Diperiodic Groups in Three Dimensions. _Bell System Technical Journal_**43** (1), 541-559.
* Supplementary data (2017)Supplementary data. Three supplements with additional group theory tables are available at [https://doi.org/10.1017/jfm.xx](https://doi.org/10.1017/jfm.xx)
* Funding (2017)Funding. This research received no specific grant from any funding agency, commercial or not-for-profit sectors.
## Appendix A The symmetries of Rayleigh-Benard convection
Consider Rayleigh-Benard convection in a fluid layer, with \(x\) and \(y\) as horizontal coordinates and \(z\) as a vertical coordinate. The system has a natural Euclidean symmetry in the horizontal plane, represented by the group \(E(2)\). However, depending on boundary conditions and rheological choices there may be additional symmetries in the problem.
### Governing equations
For Boussinesq, infinite Prandtl number, thermal convection the governing equations are
\[\mathbf{\nabla\cdot v}=0,\] (A 1) \[\mathbf{\nabla\cdot\sigma}=-\rho_{0}g\alpha T\hat{z},\] (A 2) \[\frac{\partial T}{\partial t}+\mathbf{v\cdot\nabla}T=\kappa\nabla^{2 }T,\] (A 3)
where \(\mathbf{v}\) is the fluid velocity, \(\sigma\) is the stress tensor, \(\rho_{0}\) is the reference density, \(g\) is the acceleration due to gravity, \(\alpha\) is the thermal expansivity, \(T\) is the temperature, and \(\kappa\) is the thermal diffusivity. The Newtonian constitutive law relating stress to strain rate is
\[\sigma=-pI+\eta\left(\mathbf{\nabla v}+\mathbf{\nabla v}^{\mathrm{T}}\right),\] (A 4)
where \(p\) is the pressure. Let \(\theta\) represent the temperature perturbation from a conductive steady state, where the steady-state temperature gradient is \(\Delta T/a\), and \(a\) is the layer thickness. The governing equations (A 1), (A 2) and (A 3) can be rewritten as
\[\mathbf{\nabla\cdot v}=0,\] (A 5) \[\mathbf{\nabla\cdot\tilde{\sigma}}=-\rho_{0}g\alpha\theta\hat{z},\] (A 6) \[\frac{\partial\theta}{\partial t}+\mathbf{v\cdot\nabla}\theta-\frac{ \Delta T}{a}\mathbf{v\cdot\hat{z}}=\kappa\nabla^{2}\theta,\] (A 7)
where \(\tilde{\sigma}\) is a modified stress tensor which represents the difference from the conductive state. The equations can be made dimensionless by scaling all lengths by the layer thickness \(a\), and all times by the diffusion time \(a^{2}/\kappa\). The behaviour is controlled by the dimensionless Rayleigh number \(Ra=\rho_{0}g\alpha\Delta Ta^{3}/(\eta_{0}\kappa)\). The temperature can be scaled by \(\Delta T/Ra\), the
velocity by \(\kappa/a\), and the pressure by \(\rho_{0}g\alpha\Delta T\,a/Ra\) to yield
\[\mathbf{\nabla}\mathbf{\cdot}\mathbf{v}=0,\] (A 8) \[-\mathbf{\nabla}\mathbf{\cdot}\tilde{\sigma}=\theta\hat{\mathbf{z}},\] (A 9) \[\frac{1}{Ra}\left(\frac{\partial\theta}{\partial t}+\mathbf{v}\mathbf{ \cdot}\mathbf{\nabla}\theta-\nabla^{2}\theta\right)=\mathbf{v}\mathbf{\cdot}\hat{\mathbf{z}}.\] (A 10)
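As a check of (A 10) (a routine substitution, recorded here for completeness): writing the dimensional variables as \(\theta=(\Delta T/Ra)\,\theta^{\prime}\), \(\mathbf{v}=(\kappa/a)\,\mathbf{v}^{\prime}\), \(t=(a^{2}/\kappa)\,t^{\prime}\), with lengths scaled by \(a\), equation (A 7) becomes
\[
\frac{\kappa\Delta T}{a^{2}Ra}\left(\frac{\partial\theta^{\prime}}{\partial t^{\prime}}+\mathbf{v}^{\prime}\mathbf{\cdot}\mathbf{\nabla}^{\prime}\theta^{\prime}-\nabla^{\prime 2}\theta^{\prime}\right)=\frac{\kappa\Delta T}{a^{2}}\,\mathbf{v}^{\prime}\mathbf{\cdot}\hat{\mathbf{z}},
\]
and dividing through by \(\kappa\Delta T/a^{2}\) and dropping the primes gives (A 10).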
### Mid-plane reflection
If the boundary conditions at the top and bottom are identical, then providing the viscosity is constant (or depth-dependent with mid-plane symmetry), the equations are invariant under the mid-plane symmetry (\(z=0\) is the mid-plane),
\[m_{z}:(x,y,z)\to(x,y,-z),\] (A 11)
provided the variables in the equations transform as
\[m_{z}: u\to u,\quad v\to v,\quad p\to p,\] \[\theta\to-\theta,\quad w\to-w\] (A 12)
where the velocity vector is \(\mathbf{v}=(u,v,w)\) and \(p\) is the pressure perturbation. (A 12) represents the symmetry between hot upwellings and cold downwellings.
### Poloidal-toroidal decomposition
For 3D flows represented by poloidal and toroidal potentials \(\mathcal{S}\) and \(\mathcal{T}\)
\[\mathbf{v}=\mathbf{\nabla}\mathbf{\times}(\hat{\mathbf{z}}\times\mathbf{\nabla} \mathcal{S})+\hat{\mathbf{z}}\times\mathbf{\nabla}\mathcal{T},\] \[u=-\frac{\partial^{2}\mathcal{S}}{\partial x\partial z}-\frac{ \partial\mathcal{T}}{\partial y},\quad v=-\frac{\partial^{2}\mathcal{S}}{ \partial y\partial z}+\frac{\partial\mathcal{T}}{\partial x},\quad w=-\nabla_ {h}^{2}\mathcal{S}\] (A 13)
so the potentials must transform under mid-plane reflection as
\[m_{z}:\quad\mathcal{S}\to-\mathcal{S},\quad\mathcal{T}\to\mathcal{T}.\] (A 14)
This is different from their transformation under vertical mirrors, which is
\[m_{x}:\quad\mathcal{S}\to\mathcal{S},\quad\mathcal{T}\to-\mathcal{T}.\] (A 15)
as \(m_{x}:u\to-u\). For the constant-viscosity example in Figure 6 the flow is purely poloidal (\(\mathcal{T}=0\)). The poloidal potential \(\mathcal{S}\) transforms in the same way as the temperature perturbation \(\theta\) under the symmetry operations.
### Time dimension
This work focuses on steady-states and thus there is little discussion of the time dimension. However, it is worth noting that while the governing equations are invariant under any time translation, they are not invariant under time reflection \(m_{t}\) owing to the diffusion term.
### Self-adjointness
If the equations (A 1), (A 2) and (A 3) are linearised about a conductive steady-state (i.e. neglecting the \(\mathbf{v}\mathbf{\cdot}\mathbf{\nabla}\theta\) term) then the equations themselves have an important symmetry: namely, they are self-adjoint provided the viscosity is constant or purely-depth dependent, and appropriate boundary conditions are applied.
## Appendix B Representations of layer groups with non-zero wavevector
The general theory of representations of layer groups with a non-zero wavevector is somewhat involved, and a full account can be found in e.g. Bradley and Cracknell (1972); Aroyo et al. (2006); de la Flor et al. (2021); Grenier and Ballou (2012). In this appendix we give some simple examples of representations with a non-zero wavenumber that are associated with spatial-period-multiplying bifurcations.
Consider the layer group \(p4mm\) (No. 55). This group can be generated by unit translations \(t_{x}\) and \(t_{y}\) in the \(x-\) and \(y-\) directions along with point group element \(4_{z}\) representing a 90 degree rotation about the \(z\) axis and point group element \(m_{\overline{xy}}\) representing a reflection in a vertical mirror plane parallel to the line \(y=x\). A representation of the group can be described by a mapping of the generators to a set of matrices.
In the general theory of representations of layer groups, each irreducible representation is described by a wavevector \(\mathbf{k}\) and a label for a particular small representation of the wavevector. For this example we consider a wavevector of the form
\[\mathbf{k}=u\mathbf{a}_{1}^{*}+u\mathbf{a}_{2}^{*}\] (B1)
where \(u<1/2\), and \(\mathbf{a}_{1}^{*}\), \(\mathbf{a}_{2}^{*}\) are basis vectors for the reciprocal lattice, defined such that \(\mathbf{a}_{i}\mathbf{\cdot}\mathbf{a}_{j}^{*}=2\pi\delta_{ij}\), where \(\mathbf{a}_{1}\) and \(\mathbf{a}_{2}\) are the space domain basis lattice vectors. The wavevector lies in a subset of the Brillouin zone known as the representation domain.
The chosen wavevector in (B1) lies within the part of the Brillouin zone labelled \(\Sigma\) (written in software which uses text labels as SM). The tool LKVEC (de la Flor et al., 2021) on the Bilbao Crystallographic Server can be used to identify the position of a wavenumber vector in the representation domain and the corresponding little co-group \(\overline{G}^{\mathbf{k}}\). The little co-group is the set of point group elements that leaves the wavevector unchanged. The little co-group associated with wavevectors along \(\Sigma\) is \(..m\), which has just two elements: the identity and the mirror \(m_{\overline{xy}}\).
From the little co-group \(\overline{G}^{\mathbf{k}}\) one can form the little group \(G^{\mathbf{k}}\) of \(\mathbf{k}\), which is a subgroup of \(G\) containing those elements which have the point-group elements of the little co-group in their rotational part. We first need to obtain representations of the little group. Such representations must be small (or allowed) representations, which are the representations of the little group that map a pure translation by a vector \(\mathbf{d}\) to \(\exp(-i\mathbf{k}\mathbf{\cdot}\mathbf{d})\) times an identity matrix. In this simple example, the small representations are 1-dimensional and there are just two of them. The little group is generated by the unit translations and the mirror element \(m_{\overline{xy}}\). Since the representations of the translations have been prescribed, all that remains is to describe the mapping of the mirror element. There is a trivial representation \(\Sigma_{1}\) which maps the mirror element \(m_{\overline{xy}}\) to 1 and another representation \(\Sigma_{2}\) which maps the mirror element to -1.
The _star_ of the wavevector is the set of possible wavevectors that can be obtained by applying all the point group operations to the given wavevector. In this example the star has four arms:
\[\mathbf{k}_{1} =(u,u),\] (B2) \[\mathbf{k}_{2} =(-u,-u),\] (B3) \[\mathbf{k}_{3} =(u,-u),\] (B4) \[\mathbf{k}_{4} =(-u,u).\] (B5)
The left cosets of \(G^{\mathbf{k}}\) in \(G\) are in one-to-one correspondence with the star of the wavevector. A representation of the full group \(G\) can be obtained as an induced representation from the little group \(G^{\mathbf{k}}\). The induced representation is of dimension \(md\) where \(d\) is the dimension of
the little group representation (here \(d=1\)) and \(m\) is the number of left cosets of \(G^{\mathbf{k}}\) in \(G\) (which is identical to the number of arms in the star of the wavevector, here \(m=4\)).
In the induced representation a general translation by a vector \(\mathbf{d}=(d_{1},d_{2})\) is represented by a diagonal matrix of the following form
\[\begin{pmatrix}\mathrm{e}^{-\mathbf{i}\mathbf{k}_{1}\cdot\mathbf{d}}&0&0&0\\ 0&\mathrm{e}^{-\mathbf{i}\mathbf{k}_{2}\cdot\mathbf{d}}&0&0\\ 0&0&\mathrm{e}^{-\mathbf{i}\mathbf{k}_{3}\cdot\mathbf{d}}&0\\ 0&0&0&\mathrm{e}^{-\mathbf{i}\mathbf{k}_{4}\cdot\mathbf{d}}\end{pmatrix}\] (B 6)
where \(\mathbf{k}_{1}\), \(\mathbf{k}_{2}\), \(\mathbf{k}_{3}\), and \(\mathbf{k}_{4}\) are the arms of the star.
The induced representation \({}^{*}\Sigma_{1}\) of the full space group \(G\) is given in terms of the generators as
\[t_{x}= \begin{pmatrix}\omega&0&0&0\\ 0&\omega^{*}&0&0\\ 0&0&\omega&0\\ 0&0&0&\omega^{*}\end{pmatrix},\quad t_{y}=\begin{pmatrix}\omega&0&0&0\\ 0&\omega^{*}&0&0\\ 0&0&\omega^{*}&0\\ 0&0&0&\omega\end{pmatrix},\] \[4_{z}= \begin{pmatrix}0&0&1&0\\ 0&0&0&1\\ 0&1&0&0\\ 1&0&0&0\end{pmatrix},\quad m_{\overline{x}y}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\end{pmatrix},\] (B 7)
where \(\omega=\mathrm{e}^{-2\pi\mathbf{i}u}\). The representation \({}^{*}\Sigma_{2}\) has identical generators, except for \(m_{\overline{x}y}\) which has \(-1\) in place of \(1\) in each entry. The above representation has complex entries, but by a change of basis an equivalent real representation can be found.
If the wavevector of form (B 1) is chosen with \(u=1/2\), then it lies at a special point in the Brillouin zone labelled \(M\). The little co-group is then \(4mm\), and the induced representation \({}^{*}M_{1}\) from the trivial representation of the little group is simply the 1-dimensional representation that maps the generators as
\[t_{x}=-1,\quad t_{y}=-1,\quad 4_{z}=1,\quad m_{\overline{x}y}=1.\] (B 8)
There are four additional representations induced from the little group, including a 2-dimensional representation \({}^{*}M_{5}\), but these will not be considered further.
### Application to spatial-period-multiplying bifurcations
The representations \({}^{*}\Sigma_{1}\) and \({}^{*}M_{1}\) with matrices given in (B 7) and (B 8) can be used to describe spatial-period-multiplying bifurcations for \(p4mm\). The simplest case is when \(u=1/2\) and the 1-D representation \({}^{*}M_{1}\) in (B 8) provides a mapping onto the group \(C_{2}\). There is an isotropy subgroup which consists of all those elements which map to \(1\). This subgroup is also \(p4mm\) (as all the point group elements are retained) but it has a reduced set of translation elements (it is an index 2 klassengleiche subgroup). For the subgroup a new basis for the lattice can be obtained using the translations by \((1,1)\) and \((1,-1)\). This isotropy subgroup is the maximal isomorphic subgroup of lowest index for \(p4mm\).
For the representation \({}^{*}\Sigma_{1}\), suppose \(u=1/p\) where \(p\) is a prime number equal to \(3\) or greater. Then \(\omega^{p}=1\), and the matrix group described by (B 7) is the finite group \(D_{4}\ltimes C_{p}^{2}\) of order \(8p^{2}\). There is a klassengleiche axial isotropy subgroup of this representation of \(p4mm\) which is also \(p4mm\) but with the basis vectors of the lattice scaled by a factor of \(p\) in each direction (the index of the subgroup in the parent group is \(p^{2}\)). All the point group operations are retained in the subgroup. In the representation in (B 7) this can be explicitly recognised by the fixed point subspace \((a,a,a,a)\) which is invariant under \(4_{z}\) and
\(m_{\overline{xy}}\) (retaining the point group) and the translations for which \((d_{1},d_{2})\equiv(0,0)\) (mod \(p\)). There are also non-isomorphic axial isotropy subgroups of \({}^{*}\Sigma_{1}\) with fixed point subspaces \((b,b,0,0)\) and \((0,0,c,c)\). For these isotropy subgroups the point group is reduced to \(mm2\), consisting of the diagonal mirrors and \(2_{z}\). The translation group is reduced, but retains the diagonal translations by multiples of either \((1,-1)\) or \((1,1)\). The corresponding layer group is \(cmm2\) (an index-\(2p\) subgroup).
For the particular case of \(p=3\), the character table for the group \(D_{4}\ltimes C_{3}^{2}\) is given in supplement 3 and further discussion of its role in spatial-period-multiplying bifurcations can be found in Matthews (2004) (see their Figure 1).
### An example of \(p4/nmm\)
A slightly more complicated, but closely related, example is given by \(p4/nmm\) (No. 64). This is a non-symmorphic layer group. It can be generated by the same operations as \(p4mm\), i.e. \(t_{x}\), \(t_{y}\), \(4_{z}\), and \(m_{\overline{xy}}\), but in addition is generated by a glide reflection \(n\) which reflects in the horizontal mirror \(m_{z}\) and then translates by \(\left(\frac{1}{2},\frac{1}{2},0\right)\). An index-9 k-transition from \(p4/nmm\) to \(p4/nmm\) can be obtained as an isotropy subgroup of two different irreps. The irrep \({}^{*}\Sigma_{1}\) with wavevector \(\mathbf{k}=(1/3,1/3)\) given by
\[t_{x}=\begin{pmatrix}\omega&0&0&0\\ 0&\omega^{*}&0&0\\ 0&0&\omega&0\\ 0&0&0&\omega^{*}\end{pmatrix},\quad t_{y}=\begin{pmatrix}\omega&0&0&0\\ 0&\omega^{*}&0&0\\ 0&0&\omega^{*}&0\\ 0&0&0&\omega\end{pmatrix},\] \[4_{z}=\begin{pmatrix}0&0&1&0\\ 0&0&0&1\\ 0&1&0&0\\ 1&0&0&0\end{pmatrix},\quad m_{\overline{xy}}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&0&1\\ 0&0&1&0\end{pmatrix},\quad n=\begin{pmatrix}\omega&0&0&0\\ 0&\omega^{*}&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix},\] (B.9)
where \(\omega=\mathrm{e}^{-2\pi\mathrm{i}/3}\), and the irrep \({}^{*}\Delta_{3}\) with wavevector \(\mathbf{k}=(0,1/3)\) given by
\[t_{x}=\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&\omega&0\\ 0&0&0&\omega^{*}\end{pmatrix},\quad t_{y}=\begin{pmatrix}\omega&0&0&0\\ 0&\omega^{*}&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix},\] \[4_{z}=\begin{pmatrix}0&0&1&0\\ 0&0&0&1\\ 0&1&0&0\\ 1&0&0&0\end{pmatrix},\quad m_{\overline{xy}}=\begin{pmatrix}0&0&1&0\\ 0&0&0&1\\ 1&0&0&0\\ 0&1&0&0\end{pmatrix},\quad n=\begin{pmatrix}\omega^{*}&0&0&0\\ 0&\omega&0&0\\ 0&0&\omega^{*}&0\\ 0&0&0&\omega\end{pmatrix}.\] (B.10)
For both representations (B.9) and (B.10) \((a,a,a,a)\) is the fixed point subspace corresponding to the axial isotropy subgroup \(p4/nmm\). Note that for both cases \(t_{x}^{3}\), \(t_{y}^{3}\), and \(n^{3}\) map to the identity and so are elements of the isotropy subgroup. \(n^{3}\) represents a reflection in the horizontal mirror \(m_{z}\) followed by a translation by \((3/2,3/2,0)\), so is the same as the original glide reflection \(n\) but with the translation vector scaled by 3.
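These statements are straightforward to verify numerically; the following sketch builds the matrices of (B.9) and confirms that \(4_{z}\), \(m_{\overline{xy}}\), \(t_{x}^{3}\), \(t_{y}^{3}\) and \(n^{3}\) all fix \((1,1,1,1)\), while \(t_{x}\) and \(n\) do not.

```python
import numpy as np

w = np.exp(-2j * np.pi / 3)                       # omega in (B.9)
tx = np.diag([w, w.conj(), w, w.conj()])
ty = np.diag([w, w.conj(), w.conj(), w])
fourz = np.array([[0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [0, 1, 0, 0],
                  [1, 0, 0, 0]], dtype=complex)
mxy = np.array([[1, 0, 0, 0],
                [0, 1, 0, 0],
                [0, 0, 0, 1],
                [0, 0, 1, 0]], dtype=complex)
n = np.diag([w, w.conj(), 1, 1])

v = np.ones(4)                                    # direction of the fixed-point subspace

in_isotropy = [fourz, mxy,
               np.linalg.matrix_power(tx, 3),
               np.linalg.matrix_power(ty, 3),
               np.linalg.matrix_power(n, 3)]
assert all(np.allclose(M @ v, v) for M in in_isotropy)   # elements fixing (1,1,1,1)
assert not np.allclose(tx @ v, v) and not np.allclose(n @ v, v)
```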
## Appendix C Equivariants and character theory
Much of the key information about symmetry-breaking bifurcations can be obtained from a series of routine mechanical calculations using character tables. As described by Matthews (2004) and Antoneli _et al._ (2008), these calculations can be automated using the computational algebra package GAP (The GAP Group 2021).
Many key results follow from the _trace formula_ that states that for a group \(G\) acting linearly on a vector space \(V\) the dimension of the fixed-point subspace is
\[\dim\,\operatorname{Fix}(G,V)=\langle\chi_{V},1\rangle\] (C 1)
where \(\chi_{V}\) is the character of the representation of \(G\) on \(V\), \(1\) is the trivial character, and angle brackets represent the scalar product on characters. This formula can be used to determine whether a subgroup is an isotropy subgroup (Matthews 2004). Note that \(V\) in the trace formula can be any vector space, not just \(\mathbb{R}^{n}\). By applying this formula to appropriately symmetrised parts of tensor product spaces, Antoneli _et al._ (2008) show how this formula can be used to work out the dimension of the spaces of invariant and equivariant polynomials (their (3.9) and (3.10)). If \(I(k)\) is the dimension of the space of invariant polynomials of degree \(k\), and \(E(k)\) is the corresponding space of equivariants of degree \(k\), then the trace formula yields
\[I(k)=\langle\chi_{S^{k}V},1\rangle\,,\] (C 2) \[E(k)=\langle\chi_{S^{k}V}\chi_{V},1\rangle\,,\] (C 3)
where \(S^{k}V\) refers to the symmetric part of the tensor product of \(k\) copies of \(V\).
As an example, suppose we want to work out the number of quadratic equivariants for the faithful irrep of \(D_{3}\). The character of the irrep can be written as \(\chi_{V}=(2,0,-1)\) where the identity corresponds to \(2\), the mirrors correspond to \(0\), and the rotations correspond to \(-1\). The symmetric part of \(V\otimes V\) has character \(\chi_{S^{2}V}=(3,1,0)\). Thus \(\chi_{S^{2}V}\chi_{V}=(6,0,0)\). The inner product with the trivial character then yields \(E(2)=6/6=1\) (since the order of \(D_{3}\) is \(6\)). Supplement 3 provides tables of \(I(k)\) and \(E(k)\) for a series of small finite groups.
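This calculation is easily mechanised; the short sketch below (plain Python rather than GAP) evaluates the class-weighted products using the standard symmetric-square character formula \(\chi_{S^{2}V}(g)=\tfrac{1}{2}\left(\chi_{V}(g)^{2}+\chi_{V}(g^{2})\right)\) and reproduces \(E(2)=1\) for the faithful irrep of \(D_{3}\), along with \(I(2)=1\) (the single quadratic invariant \(x^{2}+y^{2}\)).

```python
import numpy as np

# Classes of D_3 in the order used in the text: identity (size 1), mirrors (3), rotations (2).
size = np.array([1, 3, 2])
order = size.sum()                        # |D_3| = 6
chi_V = np.array([2, 0, -1])              # faithful 2-d irrep
chi_1 = np.array([1, 1, 1])               # trivial character

# Class of g^2 for a representative g of each class:
# identity -> identity, mirror^2 = identity, rotation^2 = (the other) rotation.
sq = [0, 0, 2]
chi_S2V = 0.5 * (chi_V**2 + chi_V[sq])    # (3, 1, 0), as quoted above

def inner(a, b):
    """Class-weighted scalar product of characters, as in (C 1)."""
    return np.sum(size * a * b) / order

print(inner(chi_S2V, chi_1))              # I(2) = 1
print(inner(chi_S2V * chi_V, chi_1))      # E(2) = 1
```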
# Supplement 1: Maximal subgroups of the layer groups
This supplement lists the maximal subgroups of the 80 layer groups whose index is \(p\) or \(p^{2}\) for a prime \(p\leqslant 7\). Much of this information can also be found in the International Tables for Crystallography Volume E and on the Bilbao Crystallographic Server (the MAXSUB program). However, the tables here include additional information useful for classifying steady-state symmetry-breaking bifurcations, and in particular the relevant factor groups.
Each table is captioned with the name of the parent layer group \(G\). Each row of the table corresponds to a particular maximal subgroup \(H\) of that parent group. Columns from left to right give i) the number of the subgroup; ii) the Hermann-Mauguin symbol of the subgroup; iii) the index of the subgroup in the parent group; iv) the type of subgroup, either 't' for translationengleiche (translations retained) or 'k' for klassengleiche (translations reduced); v) the factor group \(G/N\); vi) the normal core \(N\) of the subgroup \(H\) in \(G\) (the largest normal subgroup of \(G\) contained in \(H\)); vii) the Hermann-Mauguin symbol of the normal core; viii) the image of the subgroup \(H\) under the natural homomorphism from \(G\) to the cosets \(gN\); ix) the corresponding classification of a steady-state bifurcation from the group \(G\) to its subgroup \(H\). Axial isotropy subgroups are generically associated with either pitchfork or transcritical bifurcations. Where the table lists pitchfork, a pitchfork bifurcation is necessitated by the symmetry. Where the table lists transcritical, a transcritical bifurcation may be associated with the transition, although in certain degenerate cases these transitions can also be pitchforks. Some subgroups are associated with complex irreps (marked as "(complex)"): these subgroups cannot be associated with steady-state bifurcations. For some of the larger factor groups, the given group-subgroup transition is associated with more than one irrep, and potentially both bifurcation types (marked as "pfork + trans").
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
1 & \(p1\) & 2 & k & \(C_{2}\) & 1 & \(p1\) & \(C_{1}\) & pitchfork \\
1 & \(p1\) & 3 & k & \(C_{3}\) & 1 & \(p1\) & \(C_{1}\) & (complex) \\
1 & \(p1\) & 5 & k & \(C_{5}\) & 1 & \(p1\) & \(C_{1}\) & (complex) \\
1 & \(p1\) & 7 & k & \(C_{7}\) & 1 & \(p1\) & \(C_{1}\) & (complex) \\ \hline \hline \end{tabular}
\end{table}
Table 1: Maximal subgroups of \(p1\) (No. 1)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
1 & \(p1\) & 2 & t & \(C_{2}\) & 1 & \(p1\) & \(C_{1}\) & pitchfork \\
2 & \(p\overline{1}\) & 2 & k & \(C_{2}\) & 2 & \(p\overline{1}\) & \(C_{1}\) & pitchfork \\
2 & \(p\overline{1}\) & 3 & k & \(D_{3}\) & 1 & \(p1\) & \(C_{2}\) & transcritical \\
2 & \(p\overline{1}\) & 5 & k & \(D_{5}\) & 1 & \(p1\) & \(C_{2}\) & pitchfork \\
2 & \(p\overline{1}\) & 7 & k & \(D_{7}\) & 1 & \(p1\) & \(C_{2}\) & pitchfork \\ \hline \hline \end{tabular}
\end{table}
Table 2: Maximal subgroups of \(p\overline{1}\) (No. 2)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
1 & \(p1\) & 2 & t & \(C_{2}\) & 1 & \(p1\) & \(C_{1}\) & pitchfork \\
3 & \(p112\) & 2 & k & \(C_{2}\) & 3 & \(p112\) & \(C_{1}\) & pitchfork \\
3 & \(p112\) & 3 & k & \(D_{3}\) & 1 & \(p1\) & \(C_{2}\) & transcritical \\
3 & \(p112\) & 5 & k & \(D_{5}\) & 1 & \(p1\) & \(C_{2}\) & pitchfork \\
3 & \(p112\) & 7 & k & \(D_{7}\) & 1 & \(p1\) & \(C_{2}\) & pitchfork \\ \hline \hline \end{tabular}
\end{table}
Table 3: Maximal subgroups of \(p112\) (No. 3)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
2 & \(p\overline{1}\) & 2 & t & \(C_{2}\) & 2 & \(p\overline{1}\) & \(C_{1}\) & pitchfork \\
3 & \(p112\) & 2 & t & \(C_{2}\) & 3 & \(p112\) & \(C_{1}\) & pitchfork \\
4 & \(p11m\) & 2 & t & \(C_{2}\) & 4 & \(p11m\) & \(C_{1}\) & pitchfork \\
6 & \(p112/m\) & 2 & k & \(C_{2}\) & 6 & \(p112/m\) & \(C_{1}\) & pitchfork \\
6 & \(p112/m\) & 3 & k & \(D_{3}\) & 4 & \(p11m\) & \(C_{2}\) & transcritical \\
6 & \(p112/m\) & 5 & k & \(D_{5}\) & 4 & \(p11m\) & \(C_{2}\) & pitchfork \\
6 & \(p112/m\) & 7 & k & \(D_{7}\) & 4 & \(p11m\) & \(C_{2}\) & pitchfork \\
7 & \(p112/a\) & 2 & k & \(C_{2}\) & 7 & \(p112/a\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 6: Maximal subgroups of \(p112/m\) (No. 6)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
1 & \(p1\) & 2 & t & \(C_{2}\) & 1 & \(p1\) & \(C_{1}\) & pitchfork \\
8 & \(p211\) & 2 & k & \(C_{2}\) & 8 & \(p211\) & \(C_{1}\) & pitchfork \\
8 & \(p211\) & 3 & k & \(C_{3}\) & 8 & \(p211\) & \(C_{1}\) & (complex) \\
8 & \(p211\) & 3 & k & \(D_{3}\) & 1 & \(p1\) & \(C_{2}\) & transcritical \\
8 & \(p211\) & 5 & k & \(C_{5}\) & 8 & \(p211\) & \(C_{1}\) & (complex) \\
8 & \(p211\) & 5 & k & \(D_{5}\) & 1 & \(p1\) & \(C_{2}\) & pitchfork \\
8 & \(p211\) & 7 & k & \(C_{7}\) & 8 & \(p211\) & \(C_{1}\) & (complex) \\
8 & \(p211\) & 7 & k & \(D_{7}\) & 1 & \(p1\) & \(C_{2}\) & pitchfork \\
9 & \(p2_{1}11\) & 2 & k & \(C_{2}\) & 9 & \(p2_{1}11\) & \(C_{1}\) & pitchfork \\
10 & \(c211\) & 2 & k & \(C_{2}\) & 10 & \(c211\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 8: Maximal subgroups of \(p211\) (No. 8)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
1 & \(p1\) & 2 & t & \(C_{2}\) & 1 & \(p1\) & \(C_{1}\) & pitchfork \\
5 & \(p11a\) & 2 & k & \(C_{2}\) & 5 & \(p11a\) & \(C_{1}\) & pitchfork \\
5 & \(p11a\) & 3 & k & \(C_{3}\) & 5 & \(p11a\) & \(C_{1}\) & (complex) \\
5 & \(p11a\) & 5 & k & \(C_{5}\) & 5 & \(p11a\) & \(C_{1}\) & (complex) \\
5 & \(p11a\) & 7 & k & \(C_{7}\) & 5 & \(p11a\) & \(C_{1}\) & (complex) \\ \hline \end{tabular}
\end{table}
Table 5: Maximal subgroups of \(p11a\) (No. 5)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
1 & \(p1\) & 2 & t & \(C_{2}\) & 1 & \(p1\) & \(C_{1}\) & pitchfork \\
11 & \(pm11\) & 2 & k & \(C_{2}\) & 11 & \(pm11\) & \(C_{1}\) & pitchfork \\
11 & \(pm11\) & 3 & k & \(C_{3}\) & 11 & \(pm11\) & \(C_{1}\) & (complex) \\
11 & \(pm11\) & 3 & k & \(D_{3}\) & 1 & \(p1\) & \(C_{2}\) & transcritical \\
11 & \(pm11\) & 5 & k & \(C_{5}\) & 11 & \(pm11\) & \(C_{1}\) & (complex) \\
11 & \(pm11\) & 5 & k & \(D_{5}\) & 1 & \(p1\) & \(C_{2}\) & pitchfork \\
11 & \(pm11\) & 7 & k & \(C_{7}\) & 11 & \(pm11\) & \(C_{1}\) & (complex) \\
11 & \(pm11\) & 7 & k & \(D_{7}\) & 1 & \(p1\) & \(C_{2}\) & pitchfork \\
12 & \(pb11\) & 2 & k & \(C_{2}\) & 12 & \(pb11\) & \(C_{1}\) & pitchfork \\
13 & \(cm11\) & 2 & k & \(C_{2}\) & 13 & \(cm11\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 11: Maximal subgroups of \(pm11\) (No. 11)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
1 & \(p1\) & 2 & t & \(C_{2}\) & 1 & \(p1\) & \(C_{1}\) & pitchfork \\
11 & \(pm11\) & 3 & k & \(C_{3}\) & 11 & \(pm11\) & \(C_{1}\) & (complex) \\
11 & \(pm11\) & 3 & k & \(D_{3}\) & 1 & \(p1\) & \(C_{2}\) & transcritical \\
11 & \(pm11\) & 5 & k & \(C_{5}\) & 11 & \(pm11\) & \(C_{1}\) & (complex) \\
11 & \(pm11\) & 5 & k & \(D_{5}\) & 1 & \(p1\) & \(C_{2}\) & pitchfork \\
11 & \(pm11\) & 7 & k & \(C_{7}\) & 11 & \(pm11\) & \(C_{1}\) & (complex) \\
11 & \(pm11\) & 7 & k & \(D_{7}\) & 1 & \(p1\) & \(C_{2}\) & pitchfork \\
12 & \(pb11\) & 2 & k & \(C_{2}\) & 12 & \(pb11\) & \(C_{1}\) & pitchfork \\
13 & \(cm11\) & 2 & k & \(C_{2}\) & 13 & \(cm11\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 9: Maximal subgroups of \(p2_{1}11\) (No. 9)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
1 & \(p1\) & 2 & t & \(C_{2}\) & 1 & \(p1\) & \(C_{1}\) & pitchfork \\
12 & \(pb11\) & 2 & k & \(C_{2}\) & 12 & \(pb11\) & \(C_{1}\) & pitchfork \\
12 & \(pb11\) & 3 & k & \(C_{3}\) & 12 & \(pb11\) & \(C_{1}\) & (complex) \\
12 & \(pb11\) & 3 & k & \(D_{3}\) & 1 & \(p1\) & \(C_{2}\) & transcritical \\
12 & \(pb11\) & 5 & k & \(C_{5}\) & 12 & \(pb11\) & \(C_{1}\) & (complex) \\
12 & \(pb11\) & 5 & k & \(D_{5}\) & 1 & \(p1\) & \(C_{2}\) & pitchfork \\
12 & \(pb11\) & 7 & k & \(C_{7}\) & 12 & \(pb11\) & \(C_{1}\) & (complex) \\
12 & \(pb11\) & 7 & k & \(D_{7}\) & 1 & \(p1\) & \(C_{2}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 12: Maximal subgroups of \(pb11\) (No. 12)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
1 & \(p1\) & 2 & t & \(C_{2}\) & 1 & \(p1\) & \(C_{1}\) & pitchfork \\
11 & \(pm11\) & 2 & k & \(C_{2}\) & 11 & \(pm11\) & \(C_{1}\) & pitchfork \\
12 & \(pb11\) & 2 & k & \(C_{2}\) & 11 & \(pm11\) & \(C_{1}\) & pitchfork \\
12 & \(pb11\) & 2 & k & \(C_{2}\) & 12 & \(pb11\) & \(C_{1}\) & pitchfork \\
13 & \(cm11\) & 3 & k & \(C_{3}\) & 13 & \(cm11\) & \(C_{1}\) & (complex) \\
13 & \(cm11\) & 3 & k & \(D_{3}\) & 1 & \(p1\) & \(C_{2}\) & transcritical \\
13 & \(cm11\) & 5 & k & \(C_{5}\) & 13 & \(cm11\) & \(C_{1}\) & (complex) \\
13 & \(cm11\) & 7 & k & \(D_{7}\) & 1 & \(p1\) & \(C_{2}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 13: Maximal subgroups of \(cm11\) (No. 13)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
2 & \(p\overline{1}\) & 2 & t & \(C_{2}\) & 2 & \(p\overline{1}\) & \(C_{1}\) & pitchfork \\
8 & \(p211\) & 2 & t & \(C_{2}\) & 8 & \(p211\) & \(C_{1}\) & pitchfork \\
11 & \(pm11\) & 2 & t & \(C_{2}\) & 11 & \(pm11\) & \(C_{1}\) & pitchfork \\
14 & \(p2/m11\) & 2 & k & \(C_{2}\) & 14 & \(p2/m11\) & \(C_{1}\) & pitchfork \\
14 & \(p2/m11\) & 3 & k & \(D_{3}\) & 8 & \(p211\) & \(C_{2}\) & transcritical \\
14 & \(p2/m11\) & 3 & k & \(D_{3}\) & 11 & \(pm11\) & \(C_{2}\) & transcritical \\
14 & \(p2/m11\) & 5 & k & \(D_{5}\) & 8 & \(p211\) & \(C_{2}\) & pitchfork \\
14 & \(p2/m11\) & 5 & k & \(D_{5}\) & 11 & \(pm11\) & \(C_{2}\) & pitchfork \\
14 & \(p2/m11\) & 7 & k & \(D_{7}\) & 11 & \(pm11\) & \(C_{2}\) & pitchfork \\
14 & \(p2/m11\) & 7 & k & \(D_{7}\) & 8 & \(p211\) & \(C_{2}\) & pitchfork \\
15 & \(p2_{1}/m11\) & 2 & k & \(C_{2}\) & 15 & \(p2_{1}/m11\) & \(C_{1}\) & pitchfork \\
16 & \(p2/b11\) & 2 & k & \(C_{2}\) & 16 & \(p2/b11\) & \(C_{1}\) & pitchfork \\
18 & \(c2/m11\) & 2 & k & \(C_{2}\) & 18 & \(c2/m11\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 14: Maximal subgroups of \(p2/m11\) (No. 14)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
2 & \(p\overline{1}\) & 2 & t & \(C_{2}\) & 2 & \(p\overline{1}\) & \(C_{1}\) & pitchfork \\
8 & \(p211\) & 2 & t & \(C_{2}\) & 8 & \(p211\) & \(C_{1}\) & pitchfork \\
12 & \(pb11\) & 2 & t & \(C_{2}\) & 12 & \(pb11\) & \(C_{1}\) & pitchfork \\
16 & \(p2/b11\) & 2 & k & \(C_{2}\) & 16 & \(p2/b11\) & \(C_{1}\) & pitchfork \\
16 & \(p2/b11\) & 3 & k & \(D_{3}\) & 8 & \(p211\) & \(C_{2}\) & transcritical \\
16 & \(p2/b11\) & 3 & k & \(D_{3}\) & 12 & \(pb11\) & \(C_{2}\) & transcritical \\
16 & \(p2/b11\) & 5 & k & \(D_{5}\) & 8 & \(p211\) & \(C_{2}\) & pitchfork \\
16 & \(p2/b11\) & 5 & k & \(D_{5}\) & 12 & \(pb11\) & \(C_{2}\) & pitchfork \\
16 & \(p2/b11\) & 7 & k & \(D_{7}\) & 8 & \(p211\) & \(C_{2}\) & pitchfork \\
16 & \(p2/b11\) & 7 & k & \(D_{7}\) & 12 & \(pb11\) & \(C_{2}\) & pitchfork \\
17 & \(p2_{1}/b11\) & 2 & k & \(C_{2}\) & 17 & \(p2_{1}/b11\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 16: Maximal subgroups of \(p2/b11\) (No. 16)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
2 & \(p\overline{1}\) & 2 & t & \(C_{2}\) & 2 & \(p\overline{1}\) & \(C_{1}\) & pitchfork \\
9 & \(p2_{1}11\) & 2 & t & \(C_{2}\) & 9 & \(p2_{1}11\) & \(C_{1}\) & pitchfork \\
12 & \(pb11\) & 2 & t & \(C_{2}\) & 12 & \(pb11\) & \(C_{1}\) & pitchfork \\
17 & \(p2_{1}/b11\) & 3 & k & \(D_{3}\) & 12 & \(pb11\) & \(C_{2}\) & transcritical \\
17 & \(p2_{1}/b11\) & 3 & k & \(D_{3}\) & 9 & \(p2_{1}11\) & \(C_{2}\) & transcritical \\
17 & \(p2_{1}/b11\) & 5 & k & \(D_{5}\) & 9 & \(p2_{1}11\) & \(C_{2}\) & pitchfork \\
17 & \(p2_{1}/b11\) & 5 & k & \(D_{5}\) & 12 & \(pb11\) & \(C_{2}\) & pitchfork \\
17 & \(p2_{1}/b11\) & 7 & k & \(D_{7}\) & 12 & \(pb11\) & \(C_{2}\) & pitchfork \\
17 & \(p2_{1}/b11\) & 7 & k & \(D_{7}\) & 9 & \(p2_{1}11\) & \(C_{2}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 17: Maximal subgroups of \(p2_{1}/b11\) (No. 17)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
2 & \(p\overline{1}\) & 2 & t & \(C_{2}\) & 2 & \(p\overline{1}\) & \(C_{1}\) & pitchfork \\
9 & \(p2_{1}11\) & 2 & t & \(C_{2}\) & 9 & \(p2_{1}11\) & \(C_{1}\) & pitchfork \\
11 & \(pm11\) & 2 & t & \(C_{2}\) & 11 & \(pm11\) & \(C_{1}\) & pitchfork \\
15 & \(p2_{1}/m11\) & 2 & k & \(C_{2}\) & 15 & \(p2_{1}/m11\) & \(C_{1}\) & pitchfork \\
15 & \(p2_{1}/m11\) & 3 & k & \(D_{3}\) & 11 & \(pm11\) & \(C_{2}\) & transcritical \\
15 & \(p2_{1}/m11\) & 3 & k & \(D_{3}\) & 9 & \(p2_{1}11\) & \(C_{2}\) & transcritical \\
15 & \(p2_{1}/m11\) & 5 & k & \(D_{5}\) & 9 & \(p2_{1}11\) & \(C_{2}\) & pitchfork \\
15 & \(p2_{1}/m11\) & 5 & k & \(D_{5}\) & 11 & \(pm11\) & \(C_{2}\) & pitchfork \\
15 & \(p2_{1}/m11\) & 7 & k & \(D_{7}\) & 9 & \(p2_{1}11\) & \(C_{2}\) & pitchfork \\
15 & \(p2_{1}/m11\) & 7 & k & \(D_{7}\) & 11 & \(pm11\) & \(C_{2}\) & pitchfork \\
17 & \(p2_{1}/b11\) & 2 & k & \(C_{2}\) & 17 & \(p2_{1}/b11\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 15: Maximal subgroups of \(p2_{1}/m11\) (No. 15)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
3 & \(p112\) & 2 & t & \(C_{2}\) & 3 & \(p112\) & \(C_{1}\) & pitchfork \\
8 & \(p211\) & 2 & t & \(C_{2}\) & 8 & \(p211\) & \(C_{1}\) & pitchfork \\
19 & \(p222\) & 3 & k & \(D_{3}\) & 8 & \(p211\) & \(C_{2}\) & transcritical \\
19 & \(p222\) & 5 & k & \(D_{5}\) & 8 & \(p211\) & \(C_{2}\) & pitchfork \\
19 & \(p222\) & 7 & k & \(D_{7}\) & 8 & \(p211\) & \(C_{2}\) & pitchfork \\
20 & \(p2_{1}22\) & 2 & k & \(C_{2}\) & 20 & \(p2_{1}22\) & \(C_{1}\) & pitchfork \\
22 & \(c222\) & 2 & k & \(C_{2}\) & 22 & \(c222\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 20: Maximal subgroups of \(p2_{1}22\) (No. 20)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
2 & \(p\overline{1}\) & 2 & t & \(C_{2}\) & 2 & \(p\overline{1}\) & \(C_{1}\) & pitchfork \\
10 & \(c211\) & 2 & t & \(C_{2}\) & 10 & \(c211\) & \(C_{1}\) & pitchfork \\
13 & \(cm11\) & 2 & t & \(C_{2}\) & 13 & \(cm11\) & \(C_{1}\) & pitchfork \\
14 & \(p2/m11\) & 2 & k & \(C_{2}\) & 14 & \(p2/m11\) & \(C_{1}\) & pitchfork \\
15 & \(p2_{1}/m11\) & 2 & k & \(C_{2}\) & 15 & \(p2_{1}/m11\) & \(C_{1}\) & pitchfork \\
16 & \(p2/b11\) & 2 & k & \(C_{2}\) & 16 & \(p2/b11\) & \(C_{1}\) & pitchfork \\
17 & \(p2_{1}/b11\) & 2 & k & \(C_{2}\) & 17 & \(p2_{1}/b11\) & \(C_{1}\) & pitchfork \\
18 & \(c2/m11\) & 3 & k & \(D_{3}\) & 13 & \(cm11\) & \(C_{2}\) & transcritical \\
18 & \(c2/m11\) & 3 & k & \(D_{3}\) & 10 & \(c211\) & \(C_{2}\) & transcritical \\
18 & \(c2/m11\) & 5 & k & \(D_{5}\) & 13 & \(cm11\) & \(C_{2}\) & pitchfork \\
18 & \(c2/m11\) & 5 & k & \(D_{5}\) & 10 & \(c211\) & \(C_{2}\) & pitchfork \\
18 & \(c2/m11\) & 7 & k & \(D_{7}\) & 13 & \(cm11\) & \(C_{2}\) & pitchfork \\
18 & \(c2/m11\) & 7 & k & \(D_{7}\) & 10 & \(c211\) & \(C_{2}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 18: Maximal subgroups of \(c2/m11\) (No. 18)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
3 & \(p112\) & 2 & t & \(C_{2}\) & 3 & \(p112\) & \(C_{1}\) & pitchfork \\
8 & \(p211\) & 2 & t & \(C_{2}\) & 8 & \(p211\) & \(C_{1}\) & pitchfork \\
19 & \(p222\) & 2 & k & \(C_{2}\) & 19 & \(p222\) & \(C_{1}\) & pitchfork \\
19 & \(p222\) & 3 & k & \(D_{3}\) & 8 & \(p211\) & \(C_{2}\) & transcritical \\
19 & \(p222\) & 5 & k & \(D_{5}\) & 8 & \(p211\) & \(C_{2}\) & pitchfork \\
19 & \(p222\) & 7 & k & \(D_{7}\) & 8 & \(p211\) & \(C_{2}\) & pitchfork \\
20 & \(p2_{1}22\) & 2 & k & \(C_{2}\) & 20 & \(p2_{1}22\) & \(C_{1}\) & pitchfork \\
22 & \(c222\) & 2 & k & \(C_{2}\) & 22 & \(c222\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 19: Maximal subgroups of \(p222\) (No. 19)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
3 & \(p112\) & 2 & t & \(C_{2}\) & 3 & \(p112\) & \(C_{1}\) & pitchfork \\
11 & \(pm11\) & 2 & t & \(C_{2}\) & 11 & \(pm11\) & \(C_{1}\) & pitchfork \\
23 & \(pmm2\) & 2 & k & \(C_{2}\) & 23 & \(pmm2\) & \(C_{1}\) & pitchfork \\
23 & \(pmm2\) & 3 & k & \(D_{3}\) & 11 & \(pm11\) & \(C_{2}\) & transcritical \\
23 & \(pmm2\) & 5 & k & \(D_{5}\) & 11 & \(pm11\) & \(C_{2}\) & pitchfork \\
23 & \(pmm2\) & 7 & k & \(D_{7}\) & 11 & \(pm11\) & \(C_{2}\) & pitchfork \\
24 & \(pma2\) & 2 & k & \(C_{2}\) & 24 & \(pma2\) & \(C_{1}\) & pitchfork \\
26 & \(cmm2\) & 2 & k & \(C_{2}\) & 26 & \(cmm2\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 23: Maximal subgroups of \(pmm2\) (No. 23)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
3 & \(p112\) & 2 & t & \(C_{2}\) & 3 & \(p112\) & \(C_{1}\) & pitchfork \\
10 & \(c211\) & 2 & t & \(C_{2}\) & 10 & \(c211\) & \(C_{1}\) & pitchfork \\
19 & \(p222\) & 2 & k & \(C_{2}\) & 19 & \(p222\) & \(C_{1}\) & pitchfork \\
20 & \(p2_{1}22\) & 2 & k & \(C_{2}\) & 20 & \(p2_{1}22\) & \(C_{1}\) & pitchfork \\
21 & \(p2_{1}2_{1}2\) & 2 & k & \(C_{2}\) & 21 & \(p2_{1}2_{1}2\) & \(C_{1}\) & pitchfork \\
22 & \(c222\) & 3 & k & \(D_{3}\) & 10 & \(c211\) & \(C_{2}\) & transcritical \\
22 & \(c222\) & 5 & k & \(D_{5}\) & 10 & \(c211\) & \(C_{2}\) & pitchfork \\
22 & \(c222\) & 7 & k & \(D_{7}\) & 10 & \(c211\) & \(C_{2}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 22: Maximal subgroups of \(c222\) (No. 22)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
3 & \(p112\) & 2 & t & \(C_{2}\) & 3 & \(p112\) & \(C_{1}\) & pitchfork \\
12 & \(pb11\) & 2 & t & \(C_{2}\) & 12 & \(pb11\) & \(C_{1}\) & pitchfork \\
24 & \(pma2\) & 2 & k & \(C_{2}\) & 23 & \(pmm2\) & \(C_{1}\) & pitchfork \\
24 & \(pma2\) & 2 & k & \(C_{2}\) & 24 & \(pma2\) & \(C_{1}\) & pitchfork \\
25 & \(pba2\) & 2 & k & \(C_{2}\) & 25 & \(pba2\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 26: Maximal subgroups of \(cmm2\) (No. 26)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
3 & \(p112\) & 2 & t & \(C_{2}\) & 3 & \(p112\) & \(C_{1}\) & pitchfork \\
12 & \(pb11\) & 2 & t & \(C_{2}\) & 12 & \(pb11\) & \(C_{1}\) & pitchfork \\
24 & \(pma2\) & 2 & k & \(C_{2}\) & 24 & \(pma2\) & \(C_{1}\) & pitchfork \\
24 & \(pma2\) & 3 & k & \(D_{3}\) & 12 & \(pb11\) & \(C_{2}\) & transcritical \\
24 & \(pma2\) & 3 & k & \(D_{3}\) & 11 & \(pm11\) & \(C_{2}\) & transcritical \\
24 & \(pma2\) & 5 & k & \(D_{5}\) & 12 & \(pb11\) & \(C_{2}\) & pitchfork \\
24 & \(pma2\) & 5 & k & \(D_{5}\) & 11 & \(pm11\) & \(C_{2}\) & pitchfork \\
24 & \(pma2\) & 7 & k & \(D_{7}\) & 12 & \(pb11\) & \(C_{2}\) & pitchfork \\
24 & \(pma2\) & 7 & k & \(D_{7}\) & 11 & \(pm11\) & \(C_{2}\) & pitchfork \\
25 & \(pba2\) & 2 & k & \(C_{2}\) & 25 & \(pba2\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 24: Maximal subgroups of \(pma2\) (No. 24)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
3 & \(p112\) & 2 & t & \(C_{2}\) & 3 & \(p112\) & \(C_{1}\) & pitchfork \\
12 & \(pb11\) & 2 & t & \(C_{2}\) & 12 & \(pb11\) & \(C_{1}\) & pitchfork \\
25 & \(pba2\) & 3 & k & \(D_{3}\) & 12 & \(pb11\) & \(C_{2}\) & transcritical \\
25 & \(pba2\) & 5 & k & \(D_{5}\) & 12 & \(pb11\) & \(C_{2}\) & pitchfork \\
25 & \(pba2\) & 7 & k & \(D_{7}\) & 12 & \(pb11\) & \(C_{2}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 25: Maximal subgroups of \(pba2\) (No. 25)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
4 & \(p11m\) & 2 & t & \(C_{2}\) & 4 & \(p11m\) & \(C_{1}\) & pitchfork \\
8 & \(p211\) & 2 & t & \(C_{2}\) & 8 & \(p211\) & \(C_{1}\) & pitchfork \\
11 & \(pm11\) & 2 & t & \(C_{2}\) & 11 & \(pm11\) & \(C_{1}\) & pitchfork \\
27 & \(pm2m\) & 2 & k & \(C_{2}\) & 27 & \(pm2m\) & \(C_{1}\) & pitchfork \\
27 & \(pm2m\) & 3 & k & \(D_{3}\) & 4 & \(p11m\) & \(C_{2}\) & transcritical \\
27 & \(pm2m\) & 3 & k & \(C_{3}\) & 27 & \(pm2m\) & \(C_{1}\) & (complex) \\
27 & \(pm2m\) & 5 & k & \(D_{5}\) & 4 & \(p11m\) & \(C_{2}\) & pitchfork \\
27 & \(pm2m\) & 5 & k & \(C_{5}\) & 27 & \(pm2m\) & \(C_{1}\) & (complex) \\
27 & \(pm2m\) & 7 & k & \(C_{7}\) & 27 & \(pm2m\) & \(C_{1}\) & (complex) \\
27 & \(pm2m\) & 7 & k & \(D_{7}\) & 4 & \(p11m\) & \(C_{2}\) & pitchfork \\
28 & \(pm2_{1}b\) & 2 & k & \(C_{2}\) & 28 & \(pm2_{1}b\) & \(C_{1}\) & pitchfork \\
29 & \(pm2_{1}m\) & 2 & k & \(C_{2}\) & 29 & \(pm2_{1}m\) & \(C_{1}\) & pitchfork \\
30 & \(pb2b\) & 2 & k & \(C_{2}\) & 30 & \(pb2b\) & \(C_{1}\) & pitchfork \\
31 & \(pm2a\) & 2 & k & \(C_{2}\) & 31 & \(pm2a\) & \(C_{1}\) & pitchfork \\
35 & \(cm2m\) & 2 & k & \(C_{2}\) & 35 & \(cm2m\) & \(C_{1}\) & pitchfork \\
36 & \(cm2e\) & 2 & k & \(C_{2}\) & 36 & \(cm2e\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 27: Maximal subgroups of \(pm2m\) (No. 27)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
5 & \(p11a\) & 2 & t & \(C_{2}\) & 5 & \(p11a\) & \(C_{1}\) & pitchfork \\
9 & \(p2_{1}11\) & 2 & t & \(C_{2}\) & 9 & \(p2_{1}11\) & \(C_{1}\) & pitchfork \\
11 & \(pm11\) & 2 & t & \(C_{2}\) & 11 & \(pm11\) & \(C_{1}\) & pitchfork \\
28 & \(pm2_{1}b\) & 2 & k & \(C_{2}\) & 28 & \(pm2_{1}b\) & \(C_{1}\) & pitchfork \\
28 & \(pm2_{1}b\) & 3 & k & \(C_{3}\) & 28 & \(pm2_{1}b\) & \(C_{1}\) & (complex) \\
28 & \(pm2_{1}b\) & 3 & k & \(D_{3}\) & 5 & \(p11a\) & \(C_{2}\) & transcritical \\
28 & \(pm2_{1}b\) & 5 & k & \(C_{5}\) & 28 & \(pm2_{1}b\) & \(C_{1}\) & (complex) \\
28 & \(pm2_{1}b\) & 5 & k & \(D_{5}\) & 5 & \(p11a\) & \(C_{2}\) & pitchfork \\
28 & \(pm2_{1}b\) & 7 & k & \(C_{7}\) & 28 & \(pm2_{1}b\) & \(C_{1}\) & (complex) \\
28 & \(pm2_{1}b\) & 7 & k & \(D_{7}\) & 5 & \(p11a\) & \(C_{2}\) & pitchfork \\
32 & \(pm2_{1}n\) & 2 & k & \(C_{2}\) & 32 & \(pm2_{1}n\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 28: Maximal subgroups of \(pm2_{1}b\) (No. 28)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
5 & \(p11a\) & 2 & t & \(C_{2}\) & 5 & \(p11a\) & \(C_{1}\) & pitchfork \\
8 & \(p211\) & 2 & t & \(C_{2}\) & 8 & \(p211\) & \(C_{1}\) & pitchfork \\
12 & \(pb11\) & 2 & t & \(C_{2}\) & 12 & \(pb11\) & \(C_{1}\) & pitchfork \\
30 & \(pb2b\) & 2 & k & \(C_{2}\) & 30 & \(pb2b\) & \(C_{1}\) & pitchfork \\
30 & \(pb2b\) & 3 & k & \(C_{3}\) & 30 & \(pb2b\) & \(C_{1}\) & (complex) \\
30 & \(pb2b\) & 3 & k & \(D_{3}\) & 5 & \(p11a\) & \(C_{2}\) & transcritical \\
30 & \(pb2b\) & 5 & k & \(C_{5}\) & 30 & \(pb2b\) & \(C_{1}\) & (complex) \\
30 & \(pb2b\) & 5 & k & \(D_{5}\) & 5 & \(p11a\) & \(C_{2}\) & pitchfork \\
30 & \(pb2b\) & 7 & k & \(C_{7}\) & 30 & \(pb2b\) & \(C_{1}\) & (complex) \\
30 & \(pb2b\) & 7 & k & \(D_{7}\) & 5 & \(p11a\) & \(C_{2}\) & pitchfork \\
34 & \(pb2n\) & 2 & k & \(C_{2}\) & 34 & \(pb2n\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 30: Maximal subgroups of \(pb2b\) (No. 30)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
4 & \(p11m\) & 2 & t & \(C_{2}\) & 4 & \(p11m\) & \(C_{1}\) & pitchfork \\
9 & \(p2_{1}11\) & 2 & t & \(C_{2}\) & 9 & \(p2_{1}11\) & \(C_{1}\) & pitchfork \\
12 & \(pb11\) & 2 & t & \(C_{2}\) & 12 & \(pb11\) & \(C_{1}\) & pitchfork \\
29 & \(pm2_{1}m\) & 2 & k & \(C_{2}\) & 29 & \(pm2_{1}m\) & \(C_{1}\) & pitchfork \\
29 & \(pm2_{1}m\) & 3 & k & \(C_{3}\) & 29 & \(pm2_{1}m\) & \(C_{1}\) & (complex) \\
29 & \(pm2_{1}m\) & 3 & k & \(D_{3}\) & 4 & \(p11m\) & \(C_{2}\) & transcritical \\
29 & \(pm2_{1}m\) & 5 & k & \(C_{5}\) & 29 & \(pm2_{1}m\) & \(C_{1}\) & (complex) \\
29 & \(pm2_{1}m\) & 5 & k & \(D_{5}\) & 4 & \(p11m\) & \(C_{2}\) & pitchfork \\
29 & \(pm2_{1}m\) & 7 & k & \(C_{7}\) & 29 & \(pm2_{1}m\) & \(C_{1}\) & (complex) \\
29 & \(pm2_{1}m\) & 7 & k & \(D_{7}\) & 4 & \(p11m\) & \(C_{2}\) & pitchfork \\
33 & \(pb2_{1}a\) & 2 & k & \(C_{2}\) & 33 & \(pb2_{1}a\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 29: Maximal subgroups of \(pm2_{1}m\) (No. 29)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
5 & \(p11a\) & 2 & t & \(C_{2}\) & 5 & \(p11a\) & \(C_{1}\) & pitchfork \\
9 & \(p2_{1}11\) & 2 & t & \(C_{2}\) & 9 & \(p2_{1}11\) & \(C_{1}\) & pitchfork \\
11 & \(pm11\) & 2 & t & \(C_{2}\) & 11 & \(pm11\) & \(C_{1}\) & pitchfork \\
32 & \(pm2_{1}n\) & 3 & k & \(C_{3}\) & 32 & \(pm2_{1}n\) & \(C_{1}\) & (complex) \\
32 & \(pm2_{1}n\) & 3 & k & \(D_{3}\) & 5 & \(p11a\) & \(C_{2}\) & transcritical \\
32 & \(pm2_{1}n\) & 5 & k & \(C_{5}\) & 32 & \(pm2_{1}n\) & \(C_{1}\) & (complex) \\
32 & \(pm2_{1}n\) & 5 & k & \(D_{5}\) & 5 & \(p11a\) & \(C_{2}\) & pitchfork \\
32 & \(pm2_{1}n\) & 7 & k & \(C_{7}\) & 32 & \(pm2_{1}n\) & \(C_{1}\) & (complex) \\
32 & \(pm2_{1}n\) & 7 & k & \(D_{7}\) & 5 & \(p11a\) & \(C_{2}\) & pitchfork \\
33 & \(pb2_{1}a\) & 7 & k & \(C_{7}\) & 33 & \(pb2_{1}a\) & \(C_{1}\) & (complex) \\
33 & \(pb2_{1}a\) & 7 & k & \(D_{7}\) & 5 & \(p11a\) & \(C_{2}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 32: Maximal subgroups of \(pm2_{1}n\) (No. 32)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
5 & \(p11a\) & 2 & t & \(C_{2}\) & 5 & \(p11a\) & \(C_{1}\) & pitchfork \\
8 & \(p211\) & 2 & t & \(C_{2}\) & 8 & \(p211\) & \(C_{1}\) & pitchfork \\
11 & \(pm11\) & 2 & t & \(C_{2}\) & 11 & \(pm11\) & \(C_{1}\) & pitchfork \\
31 & \(pm2a\) & 2 & k & \(C_{2}\) & 31 & \(pm2a\) & \(C_{1}\) & pitchfork \\
31 & \(pm2a\) & 3 & k & \(C_{3}\) & 31 & \(pm2a\) & \(C_{1}\) & (complex) \\
31 & \(pm2a\) & 3 & k & \(D_{3}\) & 5 & \(p11a\) & \(C_{2}\) & transcritical \\
31 & \(pm2a\) & 5 & k & \(D_{5}\) & 5 & \(p11a\) & \(C_{2}\) & pitchfork \\
31 & \(pm2a\) & 7 & k & \(C_{7}\) & 31 & \(pm2a\) & \(C_{1}\) & (complex) \\
31 & \(pm2a\) & 7 & k & \(D_{7}\) & 5 & \(p11a\) & \(C_{2}\) & pitchfork \\
32 & \(pm2_{1}n\) & 2 & k & \(C_{2}\) & 32 & \(pm2_{1}n\) & \(C_{1}\) & pitchfork \\
33 & \(pb2_{1}a\) & 2 & k & \(C_{2}\) & 33 & \(pb2_{1}a\) & \(C_{1}\) & pitchfork \\
34 & \(pb2n\) & 2 & k & \(C_{2}\) & 34 & \(pb2n\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 31: Maximal subgroups of \(pm2a\) (No. 31)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
5 & \(p11a\) & 2 & t & \(C_{2}\) & 5 & \(p11a\) & \(C_{1}\) & pitchfork \\
9 & \(p2_{1}11\) & 2 & t & \(C_{2}\) & 9 & \(p2_{1}11\) & \(C_{1}\) & pitchfork \\
11 & \(pm11\) & 2 & t & \(C_{2}\) & 11 & \(pm11\) & \(C_{1}\) & pitchfork \\
32 & \(pm2_{1}n\) & 3 & k & \(C_{3}\) & 32 & \(pm2_{1}n\) & \(C_{1}\) & (complex) \\
32 & \(pm2_{1}n\) & 3 & k & \(D_{3}\) & 5 & \(p11a\) & \(C_{2}\) & transcritical \\
32 & \(pm2_{1}n\) & 5 & k & \(C_{5}\) & 32 & \(pm2_{1}n\) & \(C_{1}\) & (complex) \\
32 & \(pm2_{1}n\) & 5 & k & \(D_{5}\) & 5 & \(p11a\) & \(C_{2}\) & pitchfork \\
32 & \(pm2_{1}n\) & 7 & k & \(D_{7}\) & 5 & \(p11a\) & \(C_{2}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 33: Maximal subgroups of \(pb2_{1}a\) (No. 33)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
4 & \(p11m\) & 2 & t & \(C_{2}\) & 4 & \(p11m\) & \(C_{1}\) & pitchfork \\
10 & \(c211\) & 2 & t & \(C_{2}\) & 10 & \(c211\) & \(C_{1}\) & pitchfork \\
13 & \(cm11\) & 2 & t & \(C_{2}\) & 13 & \(cm11\) & \(C_{1}\) & pitchfork \\
27 & \(pm2m\) & 2 & k & \(C_{2}\) & 27 & \(pm2m\) & \(C_{1}\) & pitchfork \\
29 & \(pm2_{1}m\) & 2 & k & \(C_{2}\) & 29 & \(pm2_{1}m\) & \(C_{1}\) & pitchfork \\
32 & \(pm2_{1}n\) & 2 & k & \(C_{2}\) & 32 & \(pm2_{1}n\) & \(C_{1}\) & pitchfork \\
34 & \(pb2n\) & 2 & k & \(C_{2}\) & 34 & \(pb2n\) & \(C_{1}\) & pitchfork \\
35 & \(cm2m\) & 3 & k & \(C_{3}\) & 35 & \(cm2m\) & \(C_{1}\) & (complex) \\
35 & \(cm2m\) & 3 & k & \(D_{3}\) & 4 & \(p11m\) & \(C_{2}\) & transcritical \\
35 & \(cm2m\) & 5 & k & \(C_{5}\) & 35 & \(cm2m\) & \(C_{1}\) & (complex) \\
35 & \(cm2m\) & 5 & k & \(D_{5}\) & 4 & \(p11m\) & \(C_{2}\) & pitchfork \\
35 & \(cm2m\) & 7 & k & \(C_{7}\) & 35 & \(cm2m\) & \(C_{1}\) & (complex) \\
35 & \(cm2m\) & 7 & k & \(D_{7}\) & 4 & \(p11m\) & \(C_{2}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 35: Maximal subgroups of \(cm2m\) (No. 35)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
5 & \(p11a\) & 2 & t & \(C_{2}\) & 5 & \(p11a\) & \(C_{1}\) & pitchfork \\
8 & \(p211\) & 2 & t & \(C_{2}\) & 8 & \(p211\) & \(C_{1}\) & pitchfork \\
12 & \(pb11\) & 2 & t & \(C_{2}\) & 12 & \(pb11\) & \(C_{1}\) & pitchfork \\
34 & \(pb2n\) & 3 & k & \(C_{3}\) & 34 & \(pb2n\) & \(C_{1}\) & (complex) \\
34 & \(pb2n\) & 3 & k & \(D_{3}\) & 5 & \(p11a\) & \(C_{2}\) & transcritical \\
34 & \(pb2n\) & 5 & k & \(C_{5}\) & 34 & \(pb2n\) & \(C_{1}\) & (complex) \\
34 & \(pb2n\) & 5 & k & \(D_{5}\) & 5 & \(p11a\) & \(C_{2}\) & pitchfork \\
34 & \(pb2n\) & 7 & k & \(C_{7}\) & 34 & \(pb2n\) & \(C_{1}\) & (complex) \\
34 & \(pb2n\) & 7 & k & \(D_{7}\) & 5 & \(p11a\) & \(C_{2}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 34: Maximal subgroups of \(pb2n\) (No. 34)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
5 & \(p11a\) & 2 & t & \(C_{2}\) & 5 & \(p11a\) & \(C_{1}\) & pitchfork \\
10 & \(c211\) & 2 & t & \(C_{2}\) & 10 & \(c211\) & \(C_{1}\) & pitchfork \\
13 & \(cm11\) & 2 & t & \(C_{2}\) & 13 & \(cm11\) & \(C_{1}\) & pitchfork \\
28 & \(pm2_{1}b\) & 2 & k & \(C_{2}\) & 28 & \(pm2_{1}b\) & \(C_{1}\) & pitchfork \\
30 & \(pb2b\) & 2 & k & \(C_{2}\) & 30 & \(pb2b\) & \(C_{1}\) & pitchfork \\
31 & \(pm2a\) & 2 & k & \(C_{2}\) & 31 & \(pm2a\) & \(C_{1}\) & pitchfork \\
33 & \(pb2_{1}a\) & 2 & k & \(C_{2}\) & 33 & \(pb2_{1}a\) & \(C_{1}\) & pitchfork \\
36 & \(cm2e\) & 3 & k & \(C_{3}\) & 36 & \(cm2e\) & \(C_{1}\) & (complex) \\
36 & \(cm2e\) & 3 & k & \(D_{3}\) & 5 & \(p11a\) & \(C_{2}\) & transcritical \\
36 & \(cm2e\) & 5 & k & \(C_{5}\) & 36 & \(cm2e\) & \(C_{1}\) & (complex) \\
36 & \(cm2e\) & 5 & k & \(D_{5}\) & 5 & \(p11a\) & \(C_{2}\) & pitchfork \\
36 & \(cm2e\) & 7 & k & \(C_{7}\) & 36 & \(cm2e\) & \(C_{1}\) & (complex) \\
36 & \(cm2e\) & 7 & k & \(D_{7}\) & 5 & \(p11a\) & \(C_{2}\) & pitchfork \\ \hline \hline \end{tabular}
\end{table}
Table 36: Maximal subgroups of \(cm2e\) (No. 36)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
6 & \(p112/m\) & 2 & t & \(C_{2}\) & 6 & \(p112/m\) & \(C_{1}\) & pitchfork \\
14 & \(p2/m11\) & 2 & t & \(C_{2}\) & 14 & \(p2/m11\) & \(C_{1}\) & pitchfork \\
19 & \(p222\) & 2 & t & \(C_{2}\) & 19 & \(p222\) & \(C_{1}\) & pitchfork \\
23 & \(pmm2\) & 2 & t & \(C_{2}\) & 23 & \(pmm2\) & \(C_{1}\) & pitchfork \\
27 & \(pm2m\) & 2 & t & \(C_{2}\) & 27 & \(pm2m\) & \(C_{1}\) & pitchfork \\
37 & \(pmmm\) & 2 & k & \(C_{2}\) & 37 & \(pmmm\) & \(C_{1}\) & pitchfork \\
37 & \(pmmm\) & 3 & k & \(D_{3}\) & 27 & \(pm2m\) & \(C_{2}\) & transcritical \\
37 & \(pmmm\) & 5 & k & \(D_{5}\) & 27 & \(pm2m\) & \(C_{2}\) & pitchfork \\
37 & \(pmmm\) & 7 & k & \(D_{7}\) & 27 & \(pm2m\) & \(C_{2}\) & pitchfork \\
38 & \(pmaa\) & 2 & k & \(C_{2}\) & 38 & \(pmaa\) & \(C_{1}\) & pitchfork \\
40 & \(pmam\) & 2 & k & \(C_{2}\) & 40 & \(pmam\) & \(C_{1}\) & pitchfork \\
41 & \(pmma\) & 2 & k & \(C_{2}\) & 41 & \(pmma\) & \(C_{1}\) & pitchfork \\
47 & \(cmmm\) & 2 & k & \(C_{2}\) & 47 & \(cmmm\) & \(C_{1}\) & pitchfork \\
48 & \(cmme\) & 2 & k & \(C_{2}\) & 48 & \(cmme\) & \(C_{1}\) & pitchfork \\ \hline \hline \end{tabular}
\end{table}
Table 37: Maximal subgroups of \(pmmm\) (No. 37)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
[MISSING_PAGE_POST]
43 & \(pbaa\) & 2 & k & \(C_{2}\) & 43 & \(pbaa\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 38: Maximal subgroups of \(pmaa\) (No. 38)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
7 & \(p112/a\) & 2 & t & \(C_{2}\) & 7 & \(p112/a\) & \(C_{1}\) & pitchfork \\
16 & \(p2/b11\) & 2 & t & \(C_{2}\) & 16 & \(p2/b11\) & \(C_{1}\) & pitchfork \\
19 & \(p222\) & 2 & t & \(C_{2}\) & 19 & \(p222\) & \(C_{1}\) & pitchfork \\
25 & \(pba2\) & 2 & t & \(C_{2}\) & 25 & \(pba2\) & \(C_{1}\) & pitchfork \\
34 & \(pb2n\) & 2 & t & \(C_{2}\) & 34 & \(pb2n\) & \(C_{1}\) & pitchfork \\
39 & \(pban\) & 3 & k & \(D_{3}\) & 34 & \(pb2n\) & \(C_{2}\) & transcritical \\
39 & \(pban\) & 5 & k & \(D_{5}\) & 34 & \(pb2n\) & \(C_{2}\) & pitchfork \\
39 & \(pban\) & 7 & k & \(D_{7}\) & 34 & \(pb2n\) & \(C_{2}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 39: Maximal subgroups of \(pban\) (No. 39)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
[MISSING_PAGE_POST]
45 & \(pbma\) & 2 & k & \(C_{2}\) & 45 & \(pbma\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 41: Maximal subgroups of \(pmma\) (No. 41)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
[MISSING_PAGE_POST]
46 & \(pmmn\) & 2 & k & \(C_{2}\) & 46 & \(pmmn\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 40: Maximal subgroups of \(pmam\) (No. 40)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
7 & \(p112/a\) & 2 & t & \(C_{2}\) & 7 & \(p112/a\) & \(C_{1}\) & pitchfork \\
16 & \(p2/b11\) & 2 & t & \(C_{2}\) & 16 & \(p2/b11\) & \(C_{1}\) & pitchfork \\
17 & \(p2_{1}/b11\) & 2 & t & \(C_{2}\) & 17 & \(p2_{1}/b11\) & \(C_{1}\) & pitchfork \\
20 & \(p2_{1}22\) & 2 & t & \(C_{2}\) & 20 & \(p2_{1}22\) & \(C_{1}\) & pitchfork \\
25 & \(pba2\) & 2 & t & \(C_{2}\) & 25 & \(pba2\) & \(C_{1}\) & pitchfork \\
30 & \(pb2b\) & 2 & t & \(C_{2}\) & 30 & \(pb2b\) & \(C_{1}\) & pitchfork \\
33 & \(pb2_{1}a\) & 2 & t & \(C_{2}\) & 33 & \(pb2_{1}a\) & \(C_{1}\) & pitchfork \\
43 & \(pbaa\) & 3 & k & \(D_{3}\) & 33 & \(pb2_{1}a\) & \(C_{2}\) & transcritical \\
43 & \(pbaa\) & 3 & k & \(D_{3}\) & 30 & \(pb2b\) & \(C_{2}\) & transcritical \\
43 & \(pbaa\) & 5 & k & \(D_{5}\) & 30 & \(pb2b\) & \(C_{2}\) & pitchfork \\
43 & \(pbaa\) & 5 & k & \(D_{5}\) & 33 & \(pb2_{1}a\) & \(C_{2}\) & pitchfork \\
43 & \(pbaa\) & 7 & k & \(D_{7}\) & 33 & \(pb2_{1}a\) & \(C_{2}\) & pitchfork \\ \hline \hline \end{tabular}
\end{table}
Table 43: Maximal subgroups of \(pbaa\) (No. 43)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
7 & \(p112/a\) & 2 & t & \(C_{2}\) & 7 & \(p112/a\) & \(C_{1}\) & pitchfork \\
14 & \(p2/m11\) & 2 & t & \(C_{2}\) & 14 & \(p2/m11\) & \(C_{1}\) & pitchfork \\
17 & \(p2_{1}/b11\) & 2 & t & \(C_{2}\) & 17 & \(p2_{1}/b11\) & \(C_{1}\) & pitchfork \\
20 & \(p2_{1}22\) & 2 & t & \(C_{2}\) & 20 & \(p2_{1}22\) & \(C_{1}\) & pitchfork \\
24 & \(pma2\) & 2 & t & \(C_{2}\) & 24 & \(pma2\) & \(C_{1}\) & pitchfork \\
32 & \(pm2_{1}n\) & 2 & t & \(C_{2}\) & 32 & \(pm2_{1}n\) & \(C_{1}\) & pitchfork \\
34 & \(pb2n\) & 2 & t & \(C_{2}\) & 34 & \(pb2n\) & \(C_{1}\) & pitchfork \\
42 & \(pman\) & 3 & k & \(D_{3}\) & 32 & \(pm2_{1}n\) & \(C_{2}\) & transcritical \\
42 & \(pman\) & 3 & k & \(D_{3}\) & 34 & \(pb2n\) & \(C_{2}\) & transcritical \\
42 & \(pman\) & 5 & k & \(D_{5}\) & 34 & \(pb2n\) & \(C_{2}\) & pitchfork \\
42 & \(pman\) & 5 & k & \(D_{5}\) & 32 & \(pm2_{1}n\) & \(C_{2}\) & pitchfork \\
42 & \(pman\) & 7 & k & \(D_{7}\) & 32 & \(pm2_{1}n\) & \(C_{2}\) & pitchfork \\
42 & \(pman\) & 7 & k & \(D_{7}\) & 34 & \(pb2n\) & \(C_{2}\) & pitchfork \\ \hline \hline \end{tabular}
\end{table}
Table 42: Maximal subgroups of \(pman\) (No. 42)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
7 & \(p112/a\) & 2 & t & \(C_{2}\) & 7 & \(p112/a\) & \(C_{1}\) & pitchfork \\
15 & \(p2_{1}/m11\) & 2 & t & \(C_{2}\) & 15 & \(p2_{1}/m11\) & \(C_{1}\) & pitchfork \\
21 & \(p2_{1}2_{1}2\) & 2 & t & \(C_{2}\) & 21 & \(p2_{1}2_{1}2\) & \(C_{1}\) & pitchfork \\
24 & \(pma2\) & 2 & t & \(C_{2}\) & 24 & \(pma2\) & \(C_{1}\) & pitchfork \\
28 & \(pm2_{1}b\) & 2 & t & \(C_{2}\) & 28 & \(pm2_{1}b\) & \(C_{1}\) & pitchfork \\
33 & \(pb2_{1}a\) & 2 & t & \(C_{2}\) & 33 & \(pb2_{1}a\) & \(C_{1}\) & pitchfork \\
45 & \(pbma\) & 3 & k & \(D_{3}\) & 28 & \(pm2_{1}b\) & \(C_{2}\) & transcritical \\
45 & \(pbma\) & 3 & k & \(D_{3}\) & 33 & \(pb2_{1}a\) & \(C_{2}\) & transcritical \\
45 & \(pbma\) & 5 & k & \(D_{5}\) & 28 & \(pm2_{1}b\) & \(C_{2}\) & pitchfork \\
45 & \(pbma\) & 5 & k & \(D_{5}\) & 33 & \(pb2_{1}a\) & \(C_{2}\) & pitchfork \\
45 & \(pbma\) & 7 & k & \(D_{7}\) & 28 & \(pm2_{1}b\) & \(C_{2}\) & pitchfork \\
45 & \(pbma\) & 7 & k & \(D_{7}\) & 33 & \(pb2_{1}a\) & \(C_{2}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 45: Maximal subgroups of \(pbma\) (No. 45)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
7 & \(p112/a\) & 2 & t & \(C_{2}\) & 7 & \(p112/a\) & \(C_{1}\) & pitchfork \\
15 & \(p2_{1}/m11\) & 2 & t & \(C_{2}\) & 17 & \(p2_{1}/b11\) & \(C_{1}\) & pitchfork \\
21 & \(p2_{1}2_{1}2\) & 2 & t & \(C_{2}\) & 21 & \(p2_{1}2_{1}2\) & \(C_{1}\) & pitchfork \\
24 & \(pma2\) & 2 & t & \(C_{2}\) & 24 & \(pma2\) & \(C_{1}\) & pitchfork \\
28 & \(pm2_{1}b\) & 2 & t & \(C_{2}\) & 28 & \(pm2_{1}b\) & \(C_{1}\) & pitchfork \\
33 & \(pb2_{1}a\) & 2 & t & \(C_{2}\) & 33 & \(pb2_{1}a\) & \(C_{1}\) & pitchfork \\
45 & \(pbma\) & 3 & k & \(D_{3}\) & 28 & \(pm2_{1}b\) & \(C_{2}\) & transcritical \\
45 & \(pbma\) & 3 & k & \(D_{3}\) & 33 & \(pb2_{1}a\) & \(C_{2}\) & transcritical \\
45 & \(pbma\) & 5 & k & \(D_{5}\) & 28 & \(pm2_{1}b\) & \(C_{2}\) & pitchfork \\
45 & \(pbma\) & 5 & k & \(D_{5}\) & 33 & \(pb2_{1}a\) & \(C_{2}\) & pitchfork \\
45 & \(pbma\) & 7 & k & \(D_{7}\) & 28 & \(pm2_{1}b\) & \(C_{2}\) & pitchfork \\
45 & \(pbma\) & 7 & k & \(D_{7}\) & 33 & \(pb2_{1}a\) & \(C_{2}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 46: Maximal subgroups of \(pmmn\) (No. 46)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
7 & \(p112/a\) & 2 & t & \(C_{2}\) & 7 & \(p112/a\) & \(C_{1}\) & pitchfork \\
18 & \(c2/m11\) & 2 & t & \(C_{2}\) & 18 & \(c2/m11\) & \(C_{1}\) & pitchfork \\
22 & \(c222\) & 2 & t & \(C_{2}\) & 22 & \(c222\) & \(C_{1}\) & pitchfork \\
26 & \(cmm2\) & 2 & t & \(C_{2}\) & 26 & \(cmm2\) & \(C_{1}\) & pitchfork \\
36 & \(cm2e\) & 2 & t & \(C_{2}\) & 36 & \(cm2e\) & \(C_{1}\) & pitchfork \\
38 & \(pmaa\) & 2 & k & \(C_{2}\) & 38 & \(pmaa\) & \(C_{1}\) & pitchfork \\
41 & \(pmma\) & 2 & k & \(C_{2}\) & 41 & \(pmma\) & \(C_{1}\) & pitchfork \\
43 & \(pbaa\) & 2 & k & \(C_{2}\) & 43 & \(pbaa\) & \(C_{1}\) & pitchfork \\
45 & \(pbma\) & 2 & k & \(C_{2}\) & 45 & \(pbma\) & \(C_{1}\) & pitchfork \\
48 & \(cmme\) & 3 & k & \(D_{3}\) & 36 & \(cm2e\) & \(C_{2}\) & transcritical \\
48 & \(cmme\) & 5 & k & \(D_{5}\) & 36 & \(cm2e\) & \(C_{2}\) & pitchfork \\
48 & \(cmme\) & 7 & k & \(D_{7}\) & 36 & \(cm2e\) & \(C_{2}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 48: Maximal subgroups of \(cmme\) (No. 48)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
6 & \(p112/m\) & 2 & t & \(C_{2}\) & 6 & \(p112/m\) & \(C_{1}\) & pitchfork \\
18 & \(c2/m11\) & 2 & t & \(C_{2}\) & 18 & \(c2/m11\) & \(C_{1}\) & pitchfork \\
22 & \(c222\) & 2 & t & \(C_{2}\) & 22 & \(c222\) & \(C_{1}\) & pitchfork \\
26 & \(cmm2\) & 2 & t & \(C_{2}\) & 26 & \(cmm2\) & \(C_{1}\) & pitchfork \\
35 & \(cm2m\) & 2 & t & \(C_{2}\) & 35 & \(cm2m\) & \(C_{1}\) & pitchfork \\
37 & \(pmmm\) & 2 & k & \(C_{2}\) & 37 & \(pmmm\) & \(C_{1}\) & pitchfork \\
39 & \(pban\) & 2 & k & \(C_{2}\) & 39 & \(pban\) & \(C_{1}\) & pitchfork \\
40 & \(pmam\) & 2 & k & \(C_{2}\) & 40 & \(pmam\) & \(C_{1}\) & pitchfork \\
42 & \(pman\) & 2 & k & \(C_{2}\) & 42 & \(pman\) & \(C_{1}\) & pitchfork \\
44 & \(pbam\) & 2 & k & \(C_{2}\) & 44 & \(pbam\) & \(C_{1}\) & pitchfork \\
46 & \(pmmn\) & 2 & k & \(C_{2}\) & 46 & \(pmmn\) & \(C_{1}\) & pitchfork \\
47 & \(cmmm\) & 3 & k & \(D_{3}\) & 35 & \(cm2m\) & \(C_{2}\) & transcritical \\
47 & \(cmmm\) & 5 & k & \(D_{5}\) & 35 & \(cm2m\) & \(C_{2}\) & pitchfork \\
47 & \(cmmm\) & 7 & k & \(D_{7}\) & 35 & \(cm2m\) & \(C_{2}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 47: Maximal subgroups of \(cmmm\) (No. 47)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
3 & \(p112\) & 2 & t & \(C_{2}\) & 3 & \(p112\) & \(C_{1}\) & pitchfork \\
49 & \(p4\) & 2 & k & \(C_{2}\) & 49 & \(p4\) & \(C_{1}\) & pitchfork \\
49 & \(p4\) & 5 & k & \(C_{5}\rtimes C_{4}\) & 1 & \(p1\) & \(C_{4}\) & pitchfork \\
49 & \(p4\) & 9 & k & \(C_{3}^{2}\rtimes C_{4}\) & 1 & \(p1\) & \(C_{4}\) & transcritical \\
49 & \(p4\) & 49 & k & \(C_{7}^{2}\rtimes C_{4}\) & 1 & \(p1\) & \(C_{4}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 49: Maximal subgroups of \(p4\) (No. 49)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
3 & \(p112\) & 2 & t & \(C_{2}\) & 3 & \(p112\) & \(C_{1}\) & pitchfork \\
49 & \(p4\) & 2 & t & \(C_{2}\) & 49 & \(p4\) & \(C_{1}\) & pitchfork \\
50 & \(p\overline{4}\) & 2 & t & \(C_{2}\) & 50 & \(p\overline{4}\) & \(C_{1}\) & pitchfork \\
50 & \(p4/n\) & 5 & k & \(C_{5}\rtimes C_{4}\) & 1 & \(p1\) & \(C_{4}\) & pitchfork \\
52 & \(p4/n\) & 9 & k & \(C_{3}^{2}\rtimes C_{4}\) & 1 & \(p1\) & \(C_{4}\) & transcritical \\
52 & \(p4/n\) & 49 & k & \(C_{7}^{2}\rtimes C_{4}\) & 1 & \(p1\) & \(C_{4}\) & pitchfork \\ \hline \hline \end{tabular}
\end{table}
Table 50: Maximal subgroups of \(p\overline{4}\) (No. 50)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
6 & \(p112/m\) & 2 & t & \(C_{2}\) & 6 & \(p112/m\) & \(C_{1}\) & pitchfork \\
49 & \(p4\) & 2 & t & \(C_{2}\) & 49 & \(p4\) & \(C_{1}\) & pitchfork \\
50 & \(p\overline{4}\) & 2 & t & \(C_{2}\) & 50 & \(p\overline{4}\) & \(C_{1}\) & pitchfork \\
51 & \(p4/m\) & 2 & k & \(C_{2}\) & 51 & \(p4/m\) & \(C_{1}\) & pitchfork \\
51 & \(p4/m\) & 5 & k & \(C_{5}\rtimes C_{4}\) & 4 & \(p11m\) & \(C_{4}\) & pitchfork \\
51 & \(p4/m\) & 9 & k & \(C_{3}^{2}\rtimes C_{4}\) & 4 & \(p11m\) & \(C_{4}\) & transcritical \\
51 & \(p4/m\) & 49 & k & \(C_{7}^{2}\rtimes C_{4}\) & 4 & \(p11m\) & \(C_{4}\) & pitchfork \\
52 & \(p4/n\) & 2 & k & \(C_{2}\) & 52 & \(p4/n\) & \(C_{1}\) & pitchfork \\ \hline \hline \end{tabular}
\end{table}
Table 51: Maximal subgroups of \(p4/m\) (No. 51)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
7 & \(p112/a\) & 2 & t & \(C_{2}\) & 7 & \(p112/a\) & \(C_{1}\) & pitchfork \\
49 & \(p4\) & 2 & t & \(C_{2}\) & 49 & \(p4\) & \(C_{1}\) & pitchfork \\
50 & \(p\overline{4}\) & 2 & t & \(C_{2}\) & 50 & \(p\overline{4}\) & \(C_{1}\) & pitchfork \\
52 & \(p4/n\) & 5 & k & \(C_{5}\rtimes C_{4}\) & 5 & \(p11a\) & \(C_{4}\) & pitchfork \\
52 & \(p4/n\) & 9 & k & \(C_{3}^{2}\rtimes C_{4}\) & 5 & \(p11a\) & \(C_{4}\) & transcritical \\
52 & \(p4/n\) & 49 & k & \(C_{7}^{2}\rtimes C_{4}\) & 5 & \(p11a\) & \(C_{4}\) & pitchfork \\ \hline \hline \end{tabular}
\end{table}
Table 52: Maximal subgroups of \(p4/n\) (No. 52)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
23 & \(pmm2\) & 2 & t & \(C_{2}\) & 23 & \(pmm2\) & \(C_{1}\) & pitchfork \\
26 & \(cmm2\) & 2 & t & \(C_{2}\) & 26 & \(cmm2\) & \(C_{1}\) & pitchfork \\
49 & \(p4\) & 2 & t & \(C_{2}\) & 49 & \(p4\) & \(C_{1}\) & pitchfork \\
55 & \(p4mm\) & 9 & k & \(C_{3}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & transcritical \\
55 & \(p4mm\) & 25 & k & \(C_{5}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & pitchfork \\
55 & \(p4mm\) & 49 & k & \(C_{7}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 55: Maximal subgroups of \(p4mm\) (No. 55)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
19 & \(p222\) & 2 & t & \(C_{2}\) & 19 & \(p222\) & \(C_{1}\) & pitchfork \\
22 & \(c222\) & 2 & t & \(C_{2}\) & 22 & \(c222\) & \(C_{1}\) & pitchfork \\
49 & \(p4\) & 2 & t & \(C_{2}\) & 49 & \(p4\) & \(C_{1}\) & pitchfork \\
53 & \(p422\) & 2 & k & \(C_{2}\) & 53 & \(p422\) & \(C_{1}\) & pitchfork \\
53 & \(p422\) & 9 & k & \(C_{3}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & transcritical \\
53 & \(p422\) & 25 & k & \(C_{5}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & pitchfork \\
53 & \(p422\) & 49 & k & \(C_{7}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & pitchfork \\
54 & \(p42_{1}2\) & 2 & k & \(C_{2}\) & 54 & \(p42_{1}2\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 53: Maximal subgroups of \(p422\) (No. 53)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
21 & \(p2_{1}2_{1}2\) & 2 & t & \(C_{2}\) & 21 & \(p2_{1}2_{1}2\) & \(C_{1}\) & pitchfork \\
22 & \(c222\) & 2 & t & \(C_{2}\) & 22 & \(c222\) & \(C_{1}\) & pitchfork \\
49 & \(p4\) & 2 & t & \(C_{2}\) & 49 & \(p4\) & \(C_{1}\) & pitchfork \\
54 & \(p42_{1}2\) & 9 & k & \(C_{3}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & transcritical \\
54 & \(p42_{1}2\) & 25 & k & \(C_{5}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & pitchfork \\
54 & \(p42_{1}2\) & 49 & k & \(C_{7}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 54: Maximal subgroups of \(p42_{1}2\) (No. 54)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
22 & \(c222\) & 2 & t & \(C_{2}\) & 22 & \(c222\) & \(C_{1}\) & pitchfork \\
25 & \(pba2\) & 2 & t & \(C_{2}\) & 25 & \(pba2\) & \(C_{1}\) & pitchfork \\
50 & \(p\overline{4}\) & 2 & t & \(C_{2}\) & 50 & \(p\overline{4}\) & \(C_{1}\) & pitchfork \\
60 & \(p\overline{4}b2\) & 9 & k & \(C_{3}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & transcritical \\
60 & \(p\overline{4}b2\) & 25 & k & \(C_{5}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & pitchfork \\
60 & \(p\overline{4}b2\) & 49 & k & \(C_{7}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 60: Maximal subgroups of \(p\overline{4}b2\) (No. 60)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
21 & \(p2_{1}2_{1}2\) & 2 & t & \(C_{2}\) & 21 & \(p2_{1}2_{1}2\) & \(C_{1}\) & pitchfork \\
26 & \(cmm2\) & 2 & t & \(C_{2}\) & 26 & \(cmm2\) & \(C_{1}\) & pitchfork \\
50 & \(p\overline{4}\) & 2 & t & \(C_{2}\) & 50 & \(p\overline{4}\) & \(C_{1}\) & pitchfork \\
57 & \(p\overline{4}2m\) & 9 & k & \(C_{3}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & transcritical \\
57 & \(p\overline{4}2m\) & 25 & k & \(C_{5}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & pitchfork \\
57 & \(p\overline{4}2m\) & 49 & k & \(C_{7}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & pitchfork \\
59 & \(p\overline{4}2m\) & 49 & k & \(C_{7}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 58: Maximal subgroups of \(p\overline{4}2_{1}m\) (No. 58)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
21 & \(p2_{1}2_{1}2\) & 2 & t & \(C_{2}\) & 21 & \(p2_{1}2_{1}2\) & \(C_{1}\) & pitchfork \\
26 & \(cmm2\) & 2 & t & \(C_{2}\) & 26 & \(cmm2\) & \(C_{1}\) & pitchfork \\
50 & \(p\overline{4}\) & 2 & t & \(C_{2}\) & 50 & \(p\overline{4}\) & \(C_{1}\) & pitchfork \\
50 & \(p\overline{4}2m\) & 2 & t & \(C_{2}\) & 50 & \(p\overline{4}\) & \(C_{1}\) & pitchfork \\
58 & \(p\overline{4}2_{1}m\) & 9 & k & \(C_{3}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & transcritical \\
58 & \(p\overline{4}2_{1}m\) & 25 & k & \(C_{5}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & pitchfork \\
58 & \(p\overline{4}2_{1}m\) & 49 & k & \(C_{7}^{2}\rtimes D_{4}\) & 1 & \(p1\) & \(D_{4}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 59: Maximal subgroups of \(p\overline{4}m2\) (No. 59)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
37 & \(pmmm\) & 2 & t & \(C_{2}\) & 37 & \(pmmm\) & \(C_{1}\) & pitchfork \\
47 & \(cmmm\) & 2 & t & \(C_{2}\) & 47 & \(cmmm\) & \(C_{1}\) & pitchfork \\
51 & \(p4/m\) & 2 & t & \(C_{2}\) & 51 & \(p4/m\) & \(C_{1}\) & pitchfork \\
53 & \(p422\) & 2 & t & \(C_{2}\) & 53 & \(p422\) & \(C_{1}\) & pitchfork \\
55 & \(p4mm\) & 2 & t & \(C_{2}\) & 55 & \(p4mm\) & \(C_{1}\) & pitchfork \\
57 & \(p\overline{4}2m\) & 2 & t & \(C_{2}\) & 57 & \(p\overline{4}2m\) & \(C_{1}\) & pitchfork \\
59 & \(p\overline{4}m2\) & 2 & t & \(C_{2}\) & 59 & \(p\overline{4}m2\) & \(C_{1}\) & pitchfork \\
61 & \(p4/mmm\) & 2 & k & \(C_{2}\) & 61 & \(p4/mmm\) & \(C_{1}\) & pitchfork \\
61 & \(p4/mmm\) & 9 & k & \(C_{3}^{2}\rtimes D_{4}\) & 4 & \(p11m\) & \(D_{4}\) & transcritical \\
61 & \(p4/mmm\) & 25 & k & \(C_{5}^{2}\rtimes D_{4}\) & 4 & \(p11m\) & \(D_{4}\) & pitchfork \\
61 & \(p4/mmm\) & 49 & k & \(C_{7}^{2}\rtimes D_{4}\) & 4 & \(p11m\) & \(D_{4}\) & pitchfork \\
62 & \(p4/nbm\) & 2 & k & \(C_{2}\) & 62 & \(p4/nbm\) & \(C_{1}\) & pitchfork \\
63 & \(p4/mbm\) & 2 & k & \(C_{2}\) & 63 & \(p4/mbm\) & \(C_{1}\) & pitchfork \\
64 & \(p4/nmm\) & 2 & k & \(C_{2}\) & 64 & \(p4/nmm\) & \(C_{1}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 61: Maximal subgroups of \(p4/mmm\) (No. 61)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
39 & \(pban\) & 2 & t & \(C_{2}\) & 39 & \(pban\) & \(C_{1}\) & pitchfork \\
48 & \(cmme\) & 2 & t & \(C_{2}\) & 48 & \(cmme\) & \(C_{1}\) & pitchfork \\
52 & \(p4/n\) & 2 & t & \(C_{2}\) & 52 & \(p4/n\) & \(C_{1}\) & pitchfork \\
53 & \(p422\) & 2 & t & \(C_{2}\) & 53 & \(p422\) & \(C_{1}\) & pitchfork \\
56 & \(p4bm\) & 2 & t & \(C_{2}\) & 56 & \(p4bm\) & \(C_{1}\) & pitchfork \\
57 & \(p\overline{4}2m\) & 2 & t & \(C_{2}\) & 57 & \(p\overline{4}2m\) & \(C_{1}\) & pitchfork \\
60 & \(p\overline{4}b2\) & 2 & t & \(C_{2}\) & 60 & \(p\overline{4}b2\) & \(C_{1}\) & pitchfork \\
62 & \(p4/nbm\) & 9 & k & \(C_{3}^{2}\rtimes D_{4}\) & 5 & \(p11a\) & \(D_{4}\) & transcritical \\
62 & \(p4/nbm\) & 49 & k & \(C_{7}^{2}\rtimes D_{4}\) & 5 & \(p11a\) & \(D_{4}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 62: Maximal subgroups of \(p4/nbm\) (No. 62)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
2 & \(p\overline{1}\) & 3 & t & \(C_{3}\) & 2 & \(p\overline{1}\) & \(C_{1}\) & (complex) \\
65 & \(p3\) & 2 & t & \(C_{2}\) & 65 & \(p3\) & \(C_{1}\) & pitchfork \\
66 & \(p\overline{3}\) & 3 & k & \(D_{3}\) & 65 & \(p3\) & \(C_{2}\) & transcritical \\
66 & \(p\overline{3}\) & 4 & k & \(A_{4}\) & 2 & \(p\overline{1}\) & \(C_{3}\) & transcritical \\
66 & \(p\overline{3}\) & 7 & k & \(C_{7}\rtimes C_{6}\) & 1 & \(p1\) & \(C_{6}\) & pitchfork \\
66 & \(p\overline{3}\) & 25 & k & \(C_{5}^{2}\rtimes C_{6}\) & 1 & \(p1\) & \(C_{6}\) & transcritical \\ \hline \end{tabular}
\end{table}
Table 66: Maximal subgroups of \(p\overline{3}\) (No. 66)
\begin{table}
\begin{tabular}{c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
46 & \(pmmn\) & 2 & t & \(C_{2}\) & 46 & \(pmmn\) & \(C_{1}\) & pitchfork \\
48 & \(cmme\) & 2 & t & \(C_{2}\) & 48 & \(cmme\) & \(C_{1}\) & pitchfork \\
52 & \(p4/n\) & 2 & t & \(C_{2}\) & 52 & \(p4/n\) & \(C_{1}\) & pitchfork \\
54 & \(p42_{1}2\) & 2 & t & \(C_{2}\) & 54 & \(p42_{1}2\) & \(C_{1}\) & pitchfork \\
55 & \(p4mm\) & 2 & t & \(C_{2}\) & 55 & \(p4mm\) & \(C_{1}\) & pitchfork \\
58 & \(p\overline{4}2_{1}m\) & 2 & t & \(C_{2}\) & 58 & \(p\overline{4}2_{1}m\) & \(C_{1}\) & pitchfork \\
59 & \(p\overline{4}m2\) & 2 & t & \(C_{2}\) & 59 & \(p\overline{4}m2\) & \(C_{1}\) & pitchfork \\
64 & \(p4/nmm\) & 9 & k & \(C_{3}^{2}\rtimes D_{4}\) & 5 & \(p11a\) & \(D_{4}\) & transcritical \\
64 & \(p4/nmm\) & 25 & k & \(C_{5}^{2}\rtimes D_{4}\) & 5 & \(p11a\) & \(D_{4}\) & pitchfork \\
64 & \(p4/nmm\) & 49 & k & \(C_{7}^{2}\rtimes D_{4}\) & 5 & \(p11a\) & \(D_{4}\) & pitchfork \\ \hline \end{tabular}
\end{table}
Table 64: Maximal subgroups of \(p4/nmm\) (No. 64)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
13 & \(cm11\) & 3 & t & \(D_{3}\) & 1 & \(p1\) & \(C_{2}\) & transcritical \\
65 & \(p3\) & 2 & t & \(C_{2}\) & 65 & \(p3\) & \(C_{1}\) & pitchfork \\
69 & \(p3m1\) & 4 & k & \(S_{4}\) & 1 & \(p1\) & \(D_{3}\) & transcritical \\
69 & \(p3m1\) & 25 & k & \(C_{5}^{2}\rtimes D_{3}\) & 1 & \(p1\) & \(D_{3}\) & transcritical \\
69 & \(p3m1\) & 49 & k & \(C_{7}^{2}\rtimes D_{3}\) & 1 & \(p1\) & \(D_{3}\) & transcritical \\
70 & \(p31m\) & 3 & k & \(D_{3}\) & 65 & \(p3\) & \(C_{2}\) & transcritical \\ \hline \end{tabular}
\end{table}
Table 69: Maximal subgroups of \(p3m1\) (No. 69)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
18 & \(c2/m11\) & 3 & t & \(D_{3}\) & 2 & \(p\overline{1}\) & \(C_{2}\) & transcritical \\
66 & \(p\overline{3}\) & 2 & t & \(C_{2}\) & 66 & \(p\overline{3}\) & \(C_{1}\) & pitchfork \\
67 & \(p312\) & 2 & t & \(C_{2}\) & 67 & \(p312\) & \(C_{1}\) & pitchfork \\
70 & \(p31m\) & 2 & t & \(C_{2}\) & 70 & \(p31m\) & \(C_{1}\) & pitchfork \\
71 & \(p\overline{3}1m\) & 4 & k & \(S_{4}\) & 2 & \(p\overline{1}\) & \(D_{3}\) & transcritical \\
71 & \(p\overline{3}1m\) & 25 & k & \(C_{5}^{2}\rtimes D_{6}\) & 1 & \(p1\) & \(D_{6}\) & transcritical \\
71 & \(p\overline{3}1m\) & 49 & k & \(C_{7}^{2}\rtimes D_{6}\) & 1 & \(p1\) & \(D_{6}\) & pfork + trans \\
72 & \(p\overline{3}m1\) & 3 & k & \(D_{3}\) & 69 & \(p3m1\) & \(C_{2}\) & transcritical \\ \hline \end{tabular}
\end{table}
Table 70: Maximal subgroups of \(p31m\) (No. 70)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
10 & \(c211\) & 3 & t & \(D_{3}\) & 1 & \(p1\) & \(C_{2}\) & transcritical \\
65 & \(p3\) & 2 & t & \(C_{2}\) & 65 & \(p3\) & \(C_{1}\) & pitchfork \\
67 & \(p31m\) & 4 & k & \(S_{4}\) & 1 & \(p1\) & \(D_{3}\) & transcritical \\
70 & \(p31m\) & 25 & k & \(C_{5}^{2}\rtimes D_{3}\) & 1 & \(p1\) & \(D_{3}\) & transcritical \\
71 & \(p\overline{3}1m\) & 49 & k & \(C_{7}^{2}\rtimes D_{6}\) & 1 & \(p1\) & \(D_{6}\) & pfork + trans \\
72 & \(p\overline{3}m1\) & 3 & k & \(D_{3}\) & 69 & \(p3m1\) & \(C_{2}\) & transcritical \\ \hline \end{tabular}
\end{table}
Table 68: Maximal subgroups of \(p321\) (No. 68)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
3 & \(p112\) & 3 & t & \(C_{3}\) & 3 & \(p112\) & \(C_{1}\) & (complex) \\
65 & \(p3\) & 2 & t & \(C_{2}\) & 65 & \(p3\) & \(C_{1}\) & pitchfork \\
74 & \(p\overline{6}\) & 3 & k & \(C_{3}\) & 74 & \(p\overline{6}\) & \(C_{1}\) & (complex) \\
74 & \(p\overline{6}\) & 4 & k & \(A_{4}\) & 4 & \(p11m\) & \(C_{3}\) & transcritical \\
74 & \(p\overline{6}\) & 7 & k & \(C_{7}\rtimes C_{3}\) & 4 & \(p11m\) & \(C_{3}\) & (complex) \\ \hline \end{tabular}
\end{table}
Table 74: Maximal subgroups of \(p\overline{6}\) (No. 74)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
4 & \(p11m\) & 3 & t & \(C_{3}\) & 4 & \(p11m\) & \(C_{1}\) & (complex) \\
65 & \(p3\) & 2 & t & \(C_{2}\) & 65 & \(p3\) & \(C_{1}\) & pitchfork \\
74 & \(p\overline{6}\) & 3 & k & \(C_{3}\) & 74 & \(p\overline{6}\) & \(C_{1}\) & (complex) \\
74 & \(p\overline{6}\) & 4 & k & \(A_{4}\) & 4 & \(p11m\) & \(C_{3}\) & transcritical \\
74 & \(p\overline{6}\) & 7 & k & \(C_{7}\rtimes C_{3}\) & 4 & \(p11m\) & \(C_{3}\) & (complex) \\
74 & \(p\overline{6}\) & 25 & k & \(C_{5}^{2}\rtimes C_{3}\) & 4 & \(p11m\) & \(C_{3}\) & (complex) \\ \hline \end{tabular}
\end{table}
Table 75: Maximal subgroups of \(p\overline{6}/m\) (No. 75)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
18 & \(c2/m11\) & 3 & t & \(D_{3}\) & 2 & \(p\overline{1}\) & \(C_{2}\) & transcritical \\
66 & \(p\overline{3}\) & 2 & t & \(C_{2}\) & 66 & \(p\overline{3}\) & \(C_{1}\) & pitchfork \\
68 & \(p321\) & 2 & t & \(C_{2}\) & 68 & \(p321\) & \(C_{1}\) & pitchfork \\
69 & \(p3m1\) & 2 & t & \(C_{2}\) & 69 & \(p3m1\) & \(C_{1}\) & pitchfork \\
71 & \(p\overline{3}1m\) & 3 & k & \(D_{3}\) & 67 & \(p312\) & \(C_{2}\) & transcritical \\
72 & \(p\overline{3}m1\) & 4 & k & \(S_{4}\) & 2 & \(p\overline{1}\) & \(D_{3}\) & transcritical \\
72 & \(p\overline{3}m1\) & 25 & k & \(C_{5}^{2}\rtimes D_{6}\) & 1 & \(p1\) & \(D_{6}\) & transcritical \\
72 & \(p\overline{3}m1\) & 49 & k & \(C_{7}^{2}\rtimes D_{6}\) & 1 & \(p1\) & \(D_{6}\) & pfork + trans \\ \hline \end{tabular}
\end{table}
Table 72: Maximal subgroups of \(p\overline{3}m1\) (No. 72)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
22 & \(c222\) & 3 & t & \(D_{3}\) & 3 & \(p112\) & \(C_{2}\) & transcritical \\
67 & \(p312\) & 2 & t & \(C_{2}\) & 67 & \(p312\) & \(C_{1}\) & pitchfork \\
68 & \(p321\) & 2 & t & \(C_{2}\) & 68 & \(p321\) & \(C_{1}\) & pitchfork \\
73 & \(p6\) & 2 & t & \(C_{2}\) & 73 & \(p6\) & \(C_{1}\) & pitchfork \\
76 & \(p622\) & 3 & k & \(D_{3}\) & 67 & \(p312\) & \(C_{2}\) & transcritical \\
76 & \(p622\) & 4 & k & \(S_{4}\) & 3 & \(p112\) & \(D_{3}\) & transcritical \\
76 & \(p622\) & 25 & k & \(C_{5}^{2}\rtimes D_{6}\) & 1 & \(p1\) & \(D_{6}\) & transcritical \\
76 & \(p622\) & 49 & k & \(C_{7}^{2}\rtimes D_{6}\) & 1 & \(p1\) & \(D_{6}\) & pfork + trans \\ \hline \end{tabular}
\end{table}
Table 76: Maximal subgroups of \(p622\) (No. 76)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
26 & \(cmm2\) & 3 & t & \(D_{3}\) & 3 & \(p112\) & \(C_{2}\) & transcritical \\
69 & \(p3m1\) & 2 & t & \(C_{2}\) & 69 & \(p3m1\) & \(C_{1}\) & pitchfork \\
70 & \(p31m\) & 2 & t & \(C_{2}\) & 70 & \(p31m\) & \(C_{1}\) & pitchfork \\
73 & \(p6\) & 2 & t & \(C_{2}\) & 73 & \(p6\) & \(C_{1}\) & pitchfork \\
77 & \(p6mm\) & 3 & k & \(D_{3}\) & 69 & \(p3m1\) & \(C_{2}\) & transcritical \\
77 & \(p6mm\) & 4 & k & \(S_{4}\) & 3 & \(p112\) & \(D_{3}\) & transcritical \\
77 & \(p6mm\) & 25 & k & \(C_{7}^{2}\rtimes D_{6}\) & 1 & \(p1\) & \(D_{6}\) & pfork + trans \\ \hline \end{tabular}
\end{table}
Table 77: Maximal subgroups of \(p6mm\) (No. 77)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
35 & \(c2m2\) & 3 & t & \(D_{3}\) & 4 & \(p11m\) & \(C_{2}\) & transcritical \\
67 & \(p312\) & 2 & t & \(C_{2}\) & 67 & \(p312\) & \(C_{1}\) & pitchfork \\
69 & \(p3m1\) & 2 & t & \(C_{2}\) & 69 & \(p3m1\) & \(C_{1}\) & pitchfork \\
74 & \(p\overline{6}\) & 2 & t & \(C_{2}\) & 74 & \(p\overline{6}\) & \(C_{1}\) & pitchfork \\
78 & \(p\overline{6}m2\) & 4 & k & \(S_{4}\) & 4 & \(p11m\) & \(D_{3}\) & transcritical \\
78 & \(p\overline{6}m2\) & 25 & k & \(C_{5}^{2}\rtimes D_{3}\) & 4 & \(p11m\) & \(D_{3}\) & transcritical \\
78 & \(p\overline{6}m2\) & 49 & k & \(C_{7}^{2}\rtimes D_{3}\) & 4 & \(p11m\) & \(D_{3}\) & transcritical \\
79 & \(p\overline{6}2m\) & 3 & k & \(D_{3}\) & 74 & \(p\overline{6}\) & \(C_{2}\) & transcritical \\ \hline \end{tabular}
\end{table}
Table 78: Maximal subgroups of \(p\overline{6}m2\) (No. 78)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
47 & \(cmmm\) & 3 & t & \(D_{3}\) & 6 & \(p112/m\) & \(C_{2}\) & transcritical \\
71 & \(p\overline{3}1m\) & 2 & t & \(C_{2}\) & 71 & \(p\overline{3}1m\) & \(C_{1}\) & pitchfork \\
72 & \(p\overline{3}m1\) & 2 & t & \(C_{2}\) & 72 & \(p\overline{3}m1\) & \(C_{1}\) & pitchfork \\
75 & \(p6/m\) & 2 & t & \(C_{2}\) & 75 & \(p6/m\) & \(C_{1}\) & pitchfork \\
76 & \(p622\) & 2 & t & \(C_{2}\) & 76 & \(p622\) & \(C_{1}\) & pitchfork \\
77 & \(p6mm\) & 2 & t & \(C_{2}\) & 77 & \(p6mm\) & \(C_{1}\) & pitchfork \\
78 & \(p\overline{6}m2\) & 2 & t & \(C_{2}\) & 78 & \(p\overline{6}m2\) & \(C_{1}\) & pitchfork \\
79 & \(p\overline{6}2m\) & 2 & t & \(C_{2}\) & 79 & \(p\overline{6}2m\) & \(C_{1}\) & pitchfork \\
80 & \(p6/mmm\) & 3 & k & \(D_{3}\) & 78 & \(p\overline{6}m2\) & \(C_{2}\) & transcritical \\
80 & \(p6/mmm\) & 4 & k & \(S_{4}\) & 6 & \(p112/m\) & \(D_{3}\) & transcritical \\
80 & \(p6/mmm\) & 25 & k & \(C_{5}^{2}\rtimes D_{6}\) & 4 & \(p11m\) & \(D_{6}\) & transcritical \\
80 & \(p6/mmm\) & 49 & k & \(C_{7}^{2}\rtimes D_{6}\) & 4 & \(p11m\) & \(D_{6}\) & pfork + trans \\ \hline \end{tabular}
\end{table}
Table 80: Maximal subgroups of \(p6/mmm\) (No. 80)
\begin{table}
\begin{tabular}{c c c c c c c c c} \hline Subgroup & HM symbol & Index & Type & Factor group & Core & Core HM & Image & Bifurcation \\ \hline
35 & \(cm2m\) & 3 & t & \(D_{3}\) & 4 & \(p11m\) & \(C_{2}\) & transcritical \\
68 & \(p321\) & 2 & t & \(C_{2}\) & 68 & \(p321\) & \(C_{1}\) & pitchfork \\
70 & \(p31m\) & 2 & t & \(C_{2}\) & 70 & \(p31m\) & \(C_{1}\) & pitchfork \\
74 & \(p\overline{6}\) & 2 & t & \(C_{2}\) & 74 & \(p\overline{6}\) & \(C_{1}\) & pitchfork \\
78 & \(p\overline{6}m2\) & 3 & k & \(C_{3}\) & 78 & \(p\overline{6}m2\) & \(C_{1}\) & (complex) \\
79 & \(p\overline{6}2m\) & 4 & k & \(S_{4}\) & 4 & \(p11m\) & \(D_{3}\) & transcritical \\
79 & \(p\overline{6}2m\) & 25 & k & \(C_{5}^{2}\rtimes D_{3}\) & 4 & \(p11m\) & \(D_{3}\) & transcritical \\
79 & \(p\overline{6}2m\) & 49 & k & \(C_{7}^{2}\rtimes D_{3}\) & 4 & \(p11m\) & \(D_{3}\) & transcritical \\ \hline \end{tabular}
\end{table}
Table 79: Maximal subgroups of \(p\overline{6}2m\) (No. 79)
# Supplement 2: Translationengleiche character tables
This supplement provides the character tables for the factor groups \(G/T\), where \(G\) is a layer group and \(T\) is its normal subgroup of all pure translations. These can be used to understand transitions which retain the lattice of translations (translationengleiche transitions). Each of these character tables is identical to that of the corresponding isogonal point group. The header of each table gives the Seitz symbol of a point-group element for each conjugacy class, although it should be noted that the corresponding coset representative may also involve a translation component. The second row of each table gives the number of elements in the conjugacy class. The remaining rows give the characters of the irreps, with each irrep labelled on the left in Mulliken notation. The far-right column gives the axial isotropy subgroups associated with each irrep.
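As an illustrative cross-check (a minimal sketch, not the code used to generate these tables), the orthogonality relations of any of these character tables can be verified directly, and for a one-dimensional irrep the axial isotropy subgroup in the far-right column is just the kernel of the irrep pulled back to the layer group. The Python sketch below assumes the point group \(C_{2v}\), which is isogonal to layer group \(pmm2\) (No. 23); the class labels and the kernels it prints are for illustration only.

```python
from itertools import combinations

# Conjugacy classes of C_2v (isogonal to pmm2, No. 23); each class has one element.
classes = ["1", "2_z", "m_y", "m_x"]
class_size = [1, 1, 1, 1]
order = sum(class_size)

# Rows of the character table; all irreps of C_2v are one-dimensional.
chars = {
    "A1": [1,  1,  1,  1],
    "A2": [1,  1, -1, -1],
    "B1": [1, -1,  1, -1],
    "B2": [1, -1, -1,  1],
}

# Row orthogonality: sum_C |C| chi_i(C) chi_j(C) = |G| delta_ij.
for (ni, ci), (nj, cj) in combinations(chars.items(), 2):
    assert sum(s * a * b for s, a, b in zip(class_size, ci, cj)) == 0, (ni, nj)
for name, row in chars.items():
    assert sum(s * a * a for s, a in zip(class_size, row)) == order, name

# For a one-dimensional irrep, the isotropy subgroup of a non-zero point is the
# kernel of the irrep: the classes on which the character equals chi(1).
for name, row in chars.items():
    kernel = [c for c, x in zip(classes, row) if x == row[0]]
    print(name, "kernel:", kernel)
```

The two-dimensional \(E\) irreps of the larger point groups are not covered by this kernel shortcut; their axial subgroups are isotropy subgroups with one-dimensional fixed-point spaces rather than kernels.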
\begin{table}
\begin{tabular}{c|c|c} \hline & 1 & axial subgroups \\ \hline size & 1 & \\ \hline \(A\) & 1 & \(p1\) (1) \\ \hline \end{tabular}
\end{table}
Table 1: Character table of \(p1\) (No. 1)
\begin{table}
\begin{tabular}{c|c|c|c} \hline & 1 & \(m_{z}\) & axial subgroups \\ \hline size & 1 & 1 & \\ \hline \(A^{\prime}\) & 1 & 1 & \(p11m\) (4) \\ \(A^{\prime\prime}\) & 1 & \(-1\) & \(p1\) (1) \\ \hline \end{tabular}
\end{table}
Table 4: Character table of \(p11m\) (No. 4)
\begin{table}
\begin{tabular}{c|c|c|c} \hline & 1 & \(\overline{1}\) & axial subgroups \\ \hline size & 1 & 1 & \\ \hline \(A_{g}\) & 1 & 1 & \(p\overline{1}\) (2) \\ \(A_{u}\) & 1 & \(-1\) & \(p1\) (1) \\ \hline \end{tabular}
\end{table}
Table 2: Character table of \(p\overline{1}\) (No. 2)
\begin{table}
\begin{tabular}{c|c c|c} \hline & 1 & \(2_{z}\) & axial subgroups \\ \hline size & 1 & 1 & \\ \hline \(A\) & 1 & 1 & \(p112\) (3) \\ \(B\) & 1 & \(-1\) & \(p1\) (1) \\ \hline \end{tabular}
\end{table}
Table 3: Character table of \(p112\) (No. 3)
\begin{table}
\begin{tabular}{c|c|c|c} \hline & 1 & \(m_{z}\) & axial subgroups \\ \hline size & 1 & 1 & \\ \hline \(A^{\prime}\) & 1 & 1 & \(p11a\) (5) \\ \(A^{\prime\prime}\) & 1 & \(-1\) & \(p1\) (1) \\ \hline \end{tabular}
\end{table}
Table 5: Character table of \(p11a\) (No. 5)
\begin{table}
\begin{tabular}{c|c c c c|c} \hline & 1 & \(2_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & \\ \hline size & 1 & 1 & 1 & \(p112/m\) (6) \\ \(B_{g}\) & 1 & \(-1\) & 1 & \(-1\) & \(p\overline{1}\) (2) \\ \(A_{u}\) & 1 & 1 & \(-1\) & \(-1\) & \(p112\) (3) \\ \(B_{u}\) & 1 & \(-1\) & \(-1\) & 1 & \(p11m\) (4) \\ \hline \end{tabular}
\end{table}
Table 6: Character table of \(p112/m\) (No. 6)
\begin{table}
\begin{tabular}{c|c c c|c} \hline & 1 & \(2_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & \\ \hline size & 1 & 1 & 1 & \(p112/a\) (7) \\ \(B_{g}\) & 1 & \(-1\) & 1 & \(-1\) & \(p\overline{1}\) (2) \\ \(A_{u}\) & 1 & 1 & \(-1\) & \(-1\) & \(p112\) (3) \\ \(B_{u}\) & 1 & \(-1\) & \(-1\) & 1 & \(p11a\) (5) \\ \hline \end{tabular}
\end{table}
Table 7: Character table of \(p112/a\) (No. 7)
\begin{table}
\begin{tabular}{c|c c|c} \hline & 1 & \(2_{x}\) & axial subgroups \\ \hline size & 1 & 1 & \\ \hline \(A_{1}\) & 1 & 1 & \(p211\) (8) \\ \(A_{2}\) & 1 & \(-1\) & \(p1\) (1) \\ \hline \end{tabular}
\end{table}
Table 8: Character table of \(p211\) (No. 8)
\begin{table}
\begin{tabular}{c|c c|c} \hline & 1 & \(2_{x}\) & axial subgroups \\ \hline size & 1 & 1 & \\ \hline \(A_{1}\) & 1 & 1 & \(p2_{1}11\) (9) \\ \(A_{2}\) & 1 & \(-1\) & \(p1\) (1) \\ \hline \end{tabular}
\end{table}
Table 9: Character table of \(p2_{1}11\) (No. 9)
\begin{table}
\begin{tabular}{c|c c|c} \hline & 1 & \(2_{x}\) & axial subgroups \\ \hline size & 1 & 1 & \\ \hline \(A_{1}\) & 1 & 1 & \(c211\) (10) \\ \(A_{2}\) & 1 & \(-1\) & \(p1\) (1) \\ \hline \end{tabular}
\end{table}
Table 10: Character table of \(c211\) (No. 10)
\begin{table}
\begin{tabular}{c|c c c|c|c} \hline & 1 & \(2_{x}\) & \(\overline{1}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & \\ \hline size & 1 & 1 & 1 & \(p2/b11\) (16) \\ \(A_{2g}\) & 1 & \(-1\) & \(1\) & \(-1\) & \(p\overline{1}\) (2) \\ \(A_{1u}\) & 1 & 1 & \(-1\) & \(-1\) & \(p211\) (8) \\ \(A_{2u}\) & 1 & \(-1\) & \(-1\) & \(1\) & \(pb11\) (12) \\ \hline \end{tabular}
\end{table}
Table 16: Character table of \(p2/b11\) (No. 16)
\begin{table}
\begin{tabular}{c|c c|c} \hline & 1 & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 \\ \hline \(A_{1}\) & 1 & 1 & \(p11\) (12) \\ \(A_{2g}\) & 1 & \(-1\) & \(p1\) (1) \\ \hline \end{tabular}
\end{table}
Table 13: Character table of \(cm11\) (No. 13)
\begin{table}
\begin{tabular}{c|c c c|c} \hline & 1 & \(2_{x}\) & \(\overline{1}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & \\ \hline size & 1 & 1 & 1 & \\ \hline \(A_{1g}\) & 1 & 1 & 1 & \(p2/m11\) (14) \\ \(A_{2g}\) & 1 & \(-1\) & \(1\) & \(-1\) & \(p\overline{1}\) (2) \\ \(A_{1u}\) & 1 & 1 & \(-1\) & \(-1\) & \(p211\) (8) \\ \(A_{2u}\) & 1 & \(-1\) & \(-1\) & \(1\) & \(pm11\) (11) \\ \hline \end{tabular}
\end{table}
Table 15: Character table of \(p2_{1}/m11\) (No. 15)
\begin{table}
\begin{tabular}{c|c c|c|c} \hline & 1 & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & \\ \hline \(A_{1}\) & 1 & 1 & \(p11\) (12) \\ \(A_{2}\) & 1 & \(-1\) & \(p1\) (1) \\ \hline \end{tabular}
\end{table}
Table 12: Character table of \(pb11\) (No. 12)
\begin{table}
\begin{tabular}{c|c c c|c} \hline & 1 & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & \\ \hline \(A_{1}\) & 1 & 1 & \(p11\) (12) \\ \(A_{2}\) & 1 & \(-1\) & \(p1\) (1) \\ \hline \end{tabular}
\end{table}
Table 13: Character table of \(cm11\) (No. 13)
\begin{table}
\begin{tabular}{c|c c c c|c} \hline \hline & 1 & \(2_{x}\) & \(2_{y}\) & \(2_{z}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A\) & 1 & 1 & 1 & 1 & \(p2_{1}22\) (20) \\ \(B_{1}\) & 1 & \(-1\) & \(-1\) & 1 & \(p112\) (3) \\ \(B_{2}\) & 1 & \(-1\) & \(-1\) & 1 & \(p112\) (3) \\ \(B_{3}\) & 1 & \(-1\) & \(-1\) & 1 & \(p211\) (9) \\ \hline \hline \end{tabular}
\end{table}
Table 20: Character table of \(p2_{1}22\) (No. 20)
\begin{table}
\begin{tabular}{c|c c c c|c} \hline \hline & 1 & \(2_{x}\) & \(2_{y}\) & \(2_{z}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A\) & 1 & 1 & 1 & 1 & \(p2_{1}2_{1}2\) (21) \\ \(B_{1}\) & 1 & 1 & \(-1\) & \(-1\) & \(p112\) (3) \\ \(B_{2}\) & 1 & \(-1\) & \(-1\) & \(-1\) & \(p112\) (3) \\ \(B_{2}\) & 1 & \(-1\) & \(-1\) & \(-1\) & \(p211\) (9) \\ \(B_{3}\) & 1 & \(-1\) & \(-1\) & 1 & \(p211\) (9) \\ \hline \hline \end{tabular}
\end{table}
Table 21: Character table of \(p2_{1}2_{1}2\) (No. 21)
\begin{table}
\begin{tabular}{c|c c c|c|c} \hline \hline & 1 & \(2_{x}\) & \(\overline{1}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A_{1g}\) & 1 & 1 & 1 & 1 & \(c2/m11\) (18) \\ \(A_{2g}\) & 1 & \(-1\) & 1 & \(-1\) & \(p\overline{1}\) (2) \\ \(A_{1u}\) & 1 & 1 & \(-1\) & \(-1\) & \(c211\) (10) \\ \(A_{2u}\) & 1 & \(-1\) & \(-1\) & 1 & \(cm11\) (13) \\ \hline \hline \end{tabular}
\end{table}
Table 18: Character table of \(c2/m11\) (No. 18)
\begin{table}
\begin{tabular}{c|c c c c|c} \hline \hline & 1 & \(2_{x}\) & \(2_{y}\) & \(2_{z}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A\) & 1 & 1 & 1 & 1 & \(p222\) (19) \\ \(B_{1}\) & 1 & 1 & \(-1\) & \(-1\) & \(p112\) (3) \\ \(B_{2}\) & 1 & \(-1\) & 1 & \(-1\) & \(p211\) (8) \\ \(B_{3}\) & 1 & \(-1\) & \(-1\) & 1 & \(p211\) (8) \\ \hline \hline \end{tabular}
\end{table}
Table 19: Character table of \(p222\) (No. 19)
\begin{table}
\begin{tabular}{c|c c c c|c} \hline \hline & 1 & \(2_{x}\) & \(2_{y}\) & \(2_{z}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A\) & 1 & 1 & 1 & 1 & \(p2_{1}22\) (20) \\ \(B_{1}\) & 1 & \(-1\) & \(-1\) & 1 & \(p112\) (3) \\ \(B_{2}\) & 1 & \(-1\) & 1 & \(-1\) & \(p211\) (8) \\ \(B_{3}\) & 1 & 1 & \(-1\) & \(-1\) & \(p2_{1}11\) (9) \\ \hline \hline \end{tabular}
\end{table}
Table 20: Character table of \(p2_{1}22\) (No. 20)
\begin{table}
\begin{tabular}{c|c c c c|c} \hline & 1 & \(2_{z}\) & \(m_{y}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A_{1}\) & 1 & 1 & 1 & 1 & \(pma2\) (24) \\ \(A_{2}\) & 1 & 1 & \(-1\) & \(-1\) & \(p112\) (3) \\ \(B_{1}\) & 1 & \(-1\) & \(-1\) & \(p112\) (12) \\ \(B_{2}\) & 1 & \(-1\) & \(-1\) & \(1\) & \(p11\) (11) \\ \hline \end{tabular}
\end{table}
Table 24: Character table of \(pma2\) (No. 24)
\begin{table}
\begin{tabular}{c|c c c c|c} \hline & 1 & \(2_{z}\) & \(2_{y}\) & \(2_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A\) & 1 & 1 & 1 & 1 & \(c222\) (22) \\ \(B_{1}\) & 1 & 1 & \(-1\) & \(-1\) & \(p112\) (3) \\ \(B_{2}\) & 1 & \(-1\) & \(1\) & \(-1\) & \(c211\) (10) \\ \(B_{3}\) & 1 & \(-1\) & \(-1\) & \(1\) & \(c211\) (10) \\ \hline \end{tabular}
\end{table}
Table 22: Character table of \(c222\) (No. 22)
\begin{table}
\begin{tabular}{c|c c c|c|c} \hline & 1 & \(2_{z}\) & \(m_{y}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A_{1}\) & 1 & 1 & 1 & 1 & \(pma2\) (23) \\ \(A_{2}\) & 1 & 1 & \(-1\) & \(-1\) & \(p112\) (3) \\ \(B_{1}\) & 1 & \(-1\) & \(-1\) & \(-1\) & \(p112\) (3) \\ \(B_{2}\) & 1 & \(-1\) & \(-1\) & \(1\) & \(pm11\) (11) \\ \hline \end{tabular}
\end{table}
Table 23: Character table of \(pmm2\) (No. 23)
\begin{table}
\begin{tabular}{c|c c c c|c} \hline & 1 & \(2_{z}\) & \(m_{y}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A_{1}\) & 1 & 1 & 1 & 1 & \\ \hline \(A_{1}\) & 1 & 1 & 1 & 1 & \(pma2\) (24) \\ \(A_{2}\) & 1 & 1 & \(-1\) & \(-1\) & \(p112\) (3) \\ \(B_{1}\) & 1 & \(-1\) & \(-1\) & \(1\) & \(p112\) (12) \\ \(B_{2}\) & 1 & \(-1\) & \(-1\) & \(1\) & \(p11\) (11) \\ \hline \end{tabular}
\end{table}
Table 25: Character table of \(pba2\) (No. 25)
\begin{table}
\begin{tabular}{c|c c c c|c} \hline \hline & 1 & \(2_{y}\) & \(m_{z}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A^{\prime}_{1}\) & 1 & 1 & 1 & 1 & \(pm2m\) (27) \\ \(A^{\prime\prime}_{1}\) & 1 & 1 & \(-1\) & \(-1\) & \(p211\) (8) \\ \(A^{\prime\prime}_{2}\) & 1 & \(-1\) & \(-1\) & \(1\) & \(p11m\) (4) \\ \(A^{\prime\prime}_{2}\) & 1 & \(-1\) & \(1\) & \(-1\) & \(pm11\) (11) \\ \hline \hline \end{tabular}
\end{table}
Table 27: Character table of \(pm2m\)(No. 27)
\begin{table}
\begin{tabular}{c|c c c c|c} \hline \hline & 1 & \(2_{y}\) & \(m_{z}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A^{\prime}_{1}\) & 1 & 1 & 1 & 1 & \(pm2_{1}b\) (28) \\ \(A^{\prime\prime}_{1}\) & 1 & 1 & \(-1\) & \(-1\) & \(p2_{1}11\) (9) \\ \(A^{\prime\prime}_{2}\) & 1 & \(-1\) & \(1\) & \(-1\) & \(p11a\) (5) \\ \(A^{\prime\prime}_{2}\) & 1 & \(-1\) & \(-1\) & \(1\) & \(pm11\) (11) \\ \hline \hline \end{tabular}
\end{table}
Table 28: Character table of \(pm2_{1}b\) (No. 28)
\begin{table}
\begin{tabular}{c|c c c|c} \hline \hline & 1 & \(2_{y}\) & \(m_{x}\) & \(m_{z}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A^{\prime}_{1}\) & 1 & 1 & 1 & 1 & \(pm2_{1}m\) (29) \\ \(A^{\prime\prime}_{1}\) & 1 & 1 & \(-1\) & \(-1\) & \(p2_{1}11\) (9) \\ \(A^{\prime\prime}_{2}\) & 1 & \(-1\) & \(-1\) & \(1\) & \(p11m\) (4) \\ \(A^{\prime\prime}_{2}\) & 1 & \(-1\) & \(1\) & \(-1\) & \(pb11\) (12) \\ \hline \hline \end{tabular}
\end{table}
Table 29: Character table of \(pm2_{1}m\) (No. 29)
\begin{table}
\begin{tabular}{c|c c c c|c} \hline \hline & 1 & \(2_{y}\) & \(m_{z}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A^{\prime}_{1}\) & 1 & 1 & 1 & 1 & \(pb2b\) (30) \\ \(A^{\prime\prime}_{1}\) & 1 & 1 & \(-1\) & \(-1\) & \(p211\) (8) \\ \(A^{\prime\prime}_{2}\) & 1 & \(-1\) & \(1\) & \(-1\) & \(p11a\) (5) \\ \(A^{\prime\prime}_{2}\) & 1 & \(-1\) & \(-1\) & \(1\) & \(pb11\) (12) \\ \hline \hline \end{tabular}
\end{table}
Table 30: Character table of \(pb2b\) (No. 30)
\begin{table}
\begin{tabular}{c|c c c|c|c} \hline \hline & 1 & \(2_{y}\) & \(m_{z}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A^{\prime}_{1}\) & 1 & 1 & 1 & 1 & \(pm2a\) (31) \\ \(A^{\prime\prime}_{1}\) & 1 & 1 & \(-1\) & \(-1\) & \(p211\) (8) \\ \(A^{\prime\prime}_{2}\) & 1 & \(-1\) & \(-1\) & \(1\) & \(p11a\) (5) \\ \hline \hline \end{tabular}
\end{table}
Table 31: Character table of \(pm2a\) (No. 31)
\begin{table}
\begin{tabular}{c|c c c c|c} \hline & 1 & \(2_{y}\) & \(m_{x}\) & \(m_{z}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A^{\prime}_{1}\) & 1 & 1 & 1 & 1 & \(pm2_{1}n\) (32) \\ \(A^{\prime\prime}_{1}\) & 1 & 1 & \(-1\) & \(-1\) & \(p2_{1}11\) (9) \\ \(A^{\prime}_{2}\) & 1 & \(-1\) & \(1\) & \(-1\) & \(p11a\) (5) \\ \(A^{\prime\prime}_{2}\) & 1 & \(-1\) & \(-1\) & \(1\) & \(pm11\) (11) \\ \hline \end{tabular}
\end{table}
Table 32: Character table of \(pm2_{1}n\) (No. 32)
\begin{table}
\begin{tabular}{c|c c c|c} \hline & 1 & \(2_{y}\) & \(m_{x}\) & \(m_{z}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A^{\prime}_{1}\) & 1 & 1 & 1 & 1 & \(pm2_{1}n\) (32) \\ \(A^{\prime\prime}_{1}\) & 1 & 1 & \(-1\) & \(-1\) & \(p2_{1}11\) (9) \\ \(A^{\prime\prime}_{2}\) & 1 & \(-1\) & \(-1\) & \(-1\) & \(p11a\) (5) \\ \(A^{\prime\prime}_{2}\) & 1 & \(-1\) & \(-1\) & \(1\) & \(pm11\) (11) \\ \hline \end{tabular}
\end{table}
Table 33: Character table of \(pb2_{1}a\) (No. 33)
\begin{table}
\begin{tabular}{c|c c c|c} \hline & 1 & \(2_{y}\) & \(m_{x}\) & \(m_{z}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A^{\prime}_{1}\) & 1 & 1 & 1 & 1 & \(pb2n\) (34) \\ \(A^{\prime\prime}_{1}\) & 1 & 1 & \(-1\) & \(-1\) & \(p211\) (8) \\ \(A^{\prime}_{2}\) & 1 & \(-1\) & \(-1\) & \(1\) & \(p11a\) (5) \\ \(A^{\prime\prime}_{2}\) & 1 & \(-1\) & \(1\) & \(-1\) & \(pb11\) (12) \\ \hline \end{tabular}
\end{table}
Table 35: Character table of \(cm2m\) (No. 35)
\begin{table}
\begin{tabular}{c|c c c|c|c} \hline & 1 & \(2_{y}\) & \(m_{x}\) & \(m_{z}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & \\ \hline \(A^{\prime}_{1}\) & 1 & 1 & 1 & 1 & \(cm2m\) (35) \\ \(A^{\prime\prime}_{1}\) & 1 & 1 & \(-1\) & \(-1\) & \(c211\) (10) \\ \(A^{\prime\prime}_{2}\) & 1 & \(-1\) & \(-1\) & \(1\) & \(p11m\) (4) \\ \(A^{\prime\prime}_{2}\) & 1 & \(-1\) & \(1\) & \(-1\) & \(cm11\) (13) \\ \hline \end{tabular}
\end{table}
Table 36: Character table of \(cm2e\) (No. 36)
\begin{table}
\begin{tabular}{c|c c c c c c c c|c} \hline & 1 & \(2_{x}\) & \(2_{z}\) & \(2_{y}\) & \(\overline{1}\) & \(m_{x}\) & \(m_{z}\) & \(m_{y}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(B_{1g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \(B_{1g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \(p2/m11\) (14) \\ \(B_{3g}\) & 1 & 1 & 1 & 1 & 1 & 1 & \(-1\) & 1 & \(p222\) (19) \\ \(B_{1u}\) & 1 & 1 & 1 & 1 & 1 & \(-1\) & 1 & \(p222\) (19) \\ \(B_{2u}\) & 1 & 1 & 1 & 1 & 1 & \(-1\) & 1 & \(p22\) (23) \\ \(B_{2u}\) & 1 & 1 & 1 & 1 & \(-1\) & 1 & \(-1\) & 1 & \(p22m\) (27) \\ \(B_{3u}\) & 1 & 1 & 1 & 1 & 1 & 1 & \(-1\) & 1 & \(p22m\) (27) \\ \hline \end{tabular}
\end{table}
Table 37: Character table of \(pmmm\) (No. 37)
\begin{table}
\begin{tabular}{c|c c c c c c c|c} \hline & 1 & \(2_{z}\) & \(2_{y}\) & \(2_{x}\) & \(\overline{1}\) & \(m_{z}\) & \(m_{y}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \(B_{1g}\) & 1 & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & \(-1\) & \(p112/a\) (7) \\ \(B_{2g}\) & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & \(p2/b11\) (16) \\ \(B_{3g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(p2/m11\) (14) \\ \(A_{u}\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(1\) & \(p222\) (19) \\ \(B_{1u}\) & 1 & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & \(1\) & \(p22\) (24) \\ \(B_{3u}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(1\) & \(1\) & \(p22a\) (31) \\ \hline \end{tabular}
\end{table}
Table 38: Character table of \(pmaa\) (No. 38)
\begin{table}
\begin{tabular}{c|c c c c c c c|c} \hline & 1 & \(2_{z}\) & \(2_{y}\) & \(2_{x}\) & \(\overline{1}\) & \(m_{z}\) & \(m_{y}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \(B_{1g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(p112/a\) (7) \\ \(B_{2g}\) & 1 & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & \(-1\) & \(p2/b11\) (16) \\ \(B_{3g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & \(-1\) & \(p222\) (19) \\ \(A_{u}\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p222\) (19) \\ \(B_{1u}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p222\) (19) \\ \(B_{1u}\) & 1 & \(-1\) & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p222\) (19) \\ \(B_{3u}\) & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & \(p22\) (24) \\ \(B_{3u}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(1\) & \(1\) & \(p2a\) (31) \\ \hline \end{tabular}
\end{table}
Table 39: Character table of \(pban\) (No. 39)
\begin{table}
\begin{tabular}{c|c c c c c c c c|c} \hline & 1 & \(2_{y}\) & \(2_{z}\) & \(2_{x}\) & \(\overline{1}\) & \(m_{y}\) & \(m_{z}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & 1 & & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & & \\ \(B_{1g}\) & 1 & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & \(p112/m\) (6) \\ \(B_{2g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & \(p2/b11\) (16) \\ \(B_{3g}\) & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(p2_{1}/m11\) (15) \\ \(A_{u}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p2_{1}22\) (20) \\ \(B_{1u}\) & 1 & \(-1\) & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & 1 & \(pma2\) (24) \\ \(B_{2u}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & 1 & 1 & \(pm2m\) (27) \\ \(B_{3u}\) & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & 1 & 1 & \(-1\) & \(pm2_{1}m\) (29) \\ \hline \end{tabular}
\end{table}
Table 40: Character table of \(pmam\) (No. 40)
\begin{table}
\begin{tabular}{c|c c c c c c c|c} \hline & 1 & \(2_{y}\) & \(2_{z}\) & \(2_{x}\) & \(\overline{1}\) & \(m_{y}\) & \(m_{z}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \(B_{1g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(p112/a\) (7) \\ \(B_{2g}\) & 1 & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & \(-1\) & \(p2/m11\) (14) \\ \(B_{3g}\) & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(p2_{1}/m11\) (15) \\ \(A_{u}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p2_{1}22\) (20) \\ \(B_{1u}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & 1 & 1 & \(pm2_{1}m\) (23) \\ \(B_{2u}\) & 1 & \(-1\) & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & 1 & \(pm2a\) (31) \\ \(B_{3u}\) & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & 1 & 1 & \(-1\) & \(pm2_{1}b\) (28) \\ \hline \end{tabular}
\end{table}
Table 41: Character table of \(pmma\) (No. 41)
\begin{table}
\begin{tabular}{c|c c c c c c c|c} \hline & 1 & \(2_{y}\) & \(2_{z}\) & \(2_{x}\) & \(\overline{1}\) & \(m_{y}\) & \(m_{z}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \(B_{1g}\) & 1 & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & \(p112/a\) (7) \\ \(B_{2g}\) & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & \(p2_{1}/b11\) (17) \\ \(B_{3g}\) & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(p2/m11\) (14) \\ \(A_{u}\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p2_{1}22\) (20) \\ \(B_{1u}\) & 1 & \(-1\) & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & 1 & \(pma2\) (24) \\ \(B_{2u}\) & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & 1 & 1 & \(pm2_{1}m\) (32) \\ \(B_{3u}\) & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & 1 & 1 & \(-1\) & \(pb2n\) (34) \\ \hline \end{tabular}
\end{table}
Table 42: Character table of \(pman\) (No. 42)
\begin{table}
\begin{tabular}{c|c c c c c c c c|c} \hline & 1 & \(2_{x}\) & \(2_{z}\) & \(2_{y}\) & \(\overline{1}\) & \(m_{x}\) & \(m_{z}\) & \(m_{y}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \(B_{1g}\) & 1 & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & \(p112/a\) (7) \\ \(B_{2g}\) & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(p21/b11\) (17) \\ \(B_{3g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & \(p21/b11\) (16) \\ \(A_{u}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p212\) (20) \\ \(B_{1u}\) & 1 & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & 1 & \(pba2\) (25) \\ \(B_{2u}\) & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & \(pb21a\) (33) \\ \(B_{3u}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & 1 & 1 & \(pb2b\) (30) \\ \hline \end{tabular}
\end{table}
Table 43: Character table of \(pbaa\) (No. 43)
\begin{table}
\begin{tabular}{c|c c c c c c c|c} \hline & 1 & \(2_{z}\) & \(2_{y}\) & \(2_{x}\) & \(\overline{1}\) & \(m_{z}\) & \(m_{y}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \(B_{1g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(p112/m\) (6) \\ \(B_{2g}\) & 1 & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & \(-1\) & \(p21_{1}/b11\) (17) \\ \(B_{3g}\) & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(p21_{1}/b11\) (17) \\ \(A_{u}\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p21_{1}2\) (21) \\ \(B_{1u}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & 1 & 1 & \(pba2\) (25) \\ \(B_{2u}\) & 1 & \(-1\) & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & \(p21_{1}/b11\) (29) \\ \(B_{3u}\) & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & 1 & 1 & \(-1\) & \(pm21_{1}m\) (29) \\ \hline \end{tabular}
\end{table}
Table 44: Character table of \(pbaa\) (No. 44)
\begin{table}
\begin{tabular}{c|c c c c c c c|c} \hline & 1 & \(2_{y}\) & \(2_{x}\) & \(2_{z}\) & \(\overline{1}\) & \(m_{y}\) & \(m_{x}\) & \(m_{z}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \(B_{1g}\) & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(p112/a\) (7) \\ \(B_{2g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & \(p21_{1}/m11\) (15) \\ \(B_{3g}\) & 1 & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & \(p21_{1}/b11\) (17) \\ \(A_{u}\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p21_{1}2\) (21) \\ \(A_{u}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & 1 & \(p21_{1}a\) (33) \\ \(B_{3u}\) & 1 & \(-1\) & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & 1 & \(pm21_{1}b\) (28) \\ \hline \end{tabular}
\end{table}
Table 45: Character table of \(pbaa\) (No. 45)
\begin{table}
\begin{tabular}{c|c c c c c c c c|c} \hline & 1 & \(2_{z}\) & \(2_{y}\) & \(2_{x}\) & \(\overline{1}\) & \(m_{z}\) & \(m_{y}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & 1 & & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & & \(cmmm\) (47) \\ \(B_{1g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(p112/m\) (6) \\ \(B_{2g}\) & 1 & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & \(1\) & \(c2/m11\) (18) \\ \(B_{3g}\) & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & \(c2/m11\) (18) \\ \(A_{u}\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(c222\) (22) \\ \(B_{1u}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(1\) & \(1\) & \(cm2\) (26) \\ \(B_{2u}\) & 1 & \(-1\) & 1 & \(-1\) & \(-1\) & \(1\) & \(-1\) & \(c2m\) (35) \\ \(B_{3u}\) & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & \(1\) & \(1\) & \(-1\) \\ \hline \end{tabular}
\end{table}
Table 47: Character table of \(cmmm\) (No. 47)
\begin{table}
\begin{tabular}{c|c c c c c c c|c} \hline & 1 & \(2_{z}\) & \(2_{y}\) & \(2_{x}\) & \(\overline{1}\) & \(m_{z}\) & \(m_{y}\) & \(m_{x}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & 1 & & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & & \\ \(B_{1g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & 1 & 1 & \(cmme\) (48) \\ \(B_{1g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & \(p112/(7)\) \\ \(B_{2g}\) & 1 & \(-1\) & 1 & \(-1\) & 1 & \(-1\) & \(1\) & \(-1\) & \(c2/m11\) (18) \\ \(B_{3g}\) & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & \(1\) & \(c2/m11\) (18) \\ \(A_{u}\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(c222\) (22) \\ \(B_{1u}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(1\) & \(1\) & \(cmm2\) (26) \\ \(B_{2u}\) & 1 & \(-1\) & 1 & \(-1\) & \(-1\) & \(1\) & \(-1\) & \(1\) & \(cma2e\) (36) \\ \(B_{3u}\) & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & \(1\) & \(1\) & \(-1\) & \(cma2e\) (36) \\ \hline \end{tabular}
\end{table}
Table 48: Character table of \(cmme\) (No. 48)
\begin{table}
\begin{tabular}{c|c c c c|c} \hline & 1 & \(2_{z}\) & \(4_{z}\) & \(4_{z}^{-1}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A\) & 1 & 1 & 1 & 1 & \(p4\) (49) \\ \(B\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(p112\) (3) \\ \({}^{1}E\) & 1 & \(-1\) & \(-i\) & \(i\) & \(p1\) (1) \\ \({}^{2}E\) & 1 & \(-1\) & \(i\) & \(-i\) & \(p1\) (1) \\ \hline \end{tabular}
\end{table}
Table 49: Character table of \(p4\) (No. 49)
\begin{table}
\begin{tabular}{c|c c c c c|c} \hline & 1 & \(2_{z}\) & \(4_{z}\) & \(4_{z}^{-1}\) & \(\overline{1}\) & \(m_{z}\) & \(\overline{4}_{z}\) & \(\overline{4}_{z}^{-1}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & \(p4/n\) (52) \\ \(B_{g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & \(p112/a\) (7) \\ \({}^{1}E_{g}\) & 1 & \(-1\) & \(-i\) & \(i\) & \(1-1\) & \(-i\) & \(i\) & \(p\overline{1}\) (2) \\ \({}^{2}E_{g}\) & 1 & \(-1\) & \(i\) & \(-i\) & \(1-1\) & \(i\) & \(-i\) & \(p\overline{1}\) (2) \\ \(A_{u}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p4\) (49) \\ \(B_{u}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(1\) & \(1\) & \(p\overline{4}\) (50) \\ \({}^{1}E_{u}\) & 1 & \(-1\) & \(-i\) & \(i\) & \(-1\) & \(1\) & \(i\) & \(-i\) & \(p11m\) (4) \\ \({}^{2}E_{u}\) & 1 & \(-1\) & \(i\) & \(-i\) & \(-1\) & \(1\) & \(-i\) & \(i\) & \(p11m\) (4) \\ \hline \end{tabular}
\end{table}
Table 50: Character table of \(p\overline{4}\) (No. 50)
\begin{table}
\begin{tabular}{c|c c c c c|c} \hline & 1 & \(2_{z}\) & \(4_{z}\) & \(4_{z}^{-1}\) & \(\overline{1}\) & \(m_{z}\) & \(\overline{4}_{z}\) & \(\overline{4}_{z}^{-1}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \(p4/n\) (52) \\ \(B_{g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & \(p112/a\) (7) \\ \({}^{1}E_{g}\) & 1 & \(-1\) & \(-i\) & \(i\) & \(1-1\) & \(-i\) & \(i\) & \(p\overline{1}\) (2) \\ \({}^{2}E_{g}\) & 1 & \(-1\) & \(i\) & \(-i\) & \(1-1\) & \(i\) & \(-i\) & \(p\overline{1}\) (2) \\ \(A_{u}\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p4\) (49) \\ \(B_{u}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(1\) & \(1\) & \(p\overline{4}\) (50) \\ \({}^{1}E_{u}\) & 1 & \(-1\) & \(-i\) & \(i\) & \(-1\) & \(1\) & \(i\) & \(-i\) & \(p11a\) (5) \\ \({}^{2}E_{u}\) & 1 & \(-1\) & \(i\) & \(-i\) & \(-1\) & \(1\) & \(-i\) & \(i\) & \(p11a\) (5) \\ \hline \end{tabular}
\end{table}
Table 51: Character table of \(p4/m\) (No. 51)
\begin{table}
\begin{tabular}{c|c c c c c c c|c} \hline & 1 & \(2_{z}\) & \(4_{z}\) & \(4_{z}^{-1}\) & \(\overline{1}\) & \(m_{z}\) & \(\overline{4}_{z}\) & \(\overline{4}_{z}^{-1}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \(p4/n\) (52) \\ \(B_{g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & \(p112/a\) (7) \\ \({}^{1}E_{g}\) & 1 & \(-1\) & \(-i\) & \(i\) & \(1-1\) & \(-i\) & \(i\) & \(p\overline{1}\) (2) \\ \({}^{2}E_{g}\) & 1 & \(-1\) & \(i\) & \(-i\) & \(1\) & \(-1\) & \(i\) & \(-i\) & \(p\overline{1}\) (2) \\ \(A_{u}\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p4\) (49) \\ \(B_{u}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(1\) & \(1\) & \(p\overline{4}\) (50) \\ \({}^{1}E_{u}\) & 1 & \(-1\) & \(-i\) & \(i\) & \(-1\) & \(1\) & \(i\) & \(-i\) & \(p11a\) (5) \\ \({}^{2}E_{u}\) & 1 & \(-1\) & \(i\) & \(-i\) & \(-1\) & \(1\) & \(-i\) & \(i\) & \(p11a\) (5) \\ \hline \end{tabular}
\end{table}
Table 52: Character table of \(p4/n\) (No. 52)
\begin{table}
\begin{tabular}{c|c c c c c|c} \hline & 1 & \(2_{z}\) & \(4_{z}\) & \(2_{y}\) & \(2_{xy}\) & axial subgroups \\ \hline size & 1 & 1 & 2 & 2 & 2 & \\ \hline \(A_{1}\) & 1 & 1 & 1 & 1 & 1 & \(p422\) (53) \\ \(A_{2}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(p4\) (49) \\ \(B_{1}\) & 1 & 1 & \(-1\) & 1 & \(-1\) & \(p222\) (19) \\ \(B_{2}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(c222\) (22) \\ \(E\) & 2 & \(-2\) & 0 & 0 & 0 & \(p211\) (8), \(c211\) (10) \\ \hline \end{tabular}
\end{table}
Table 53: Character table of \(p422\) (No. 53)
\begin{table}
\begin{tabular}{c|c c c c|c} \hline & 1 & \(2_{z}\) & \(4_{z}\) & \(2_{y}\) & \(2_{xy}\) & axial subgroups \\ \hline size & 1 & 1 & 2 & 2 & 2 & \\ \hline \(A_{1}\) & 1 & 1 & 1 & 1 & 1 & \(p42_{1}2\) (54) \\ \(A_{2}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(p4\) (49) \\ \(B_{1}\) & 1 & 1 & \(-1\) & 1 & \(-1\) & \(p4\) (49) \\ \(B_{2}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(c222\) (22) \\ \(E\) & 2 & \(-2\) & 0 & 0 & 0 & \(p211\) (9), \(c211\) (10) \\ \hline \end{tabular}
\end{table}
Table 54: Character table of \(p421_{2}\) (No. 54)
\begin{table}
\begin{tabular}{c|c c c c|c|c} \hline & 1 & \(2_{z}\) & \(4_{z}\) & \(2_{y}\) & \(2_{xy}\) & axial subgroups \\ \hline size & 1 & 1 & 2 & 2 & 2 & \\ \hline \(A_{1}\) & 1 & 1 & 1 & 1 & 1 & \(p42_{1}2\) (54) \\ \(A_{2}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(p4\) (49) \\ \(B_{1}\) & 1 & 1 & \(-1\) & 1 & \(-1\) & \(p4\) (49) \\ \(B_{1}\) & 1 & 1 & \(-1\) & 1 & \(-1\) & \(p21_{2}1_{2}\) (21) \\ \(B_{2}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(c222\) (22) \\ \(E\) & 2 & \(-2\) & 0 & 0 & 0 & \(p211\) (9), \(c211\) (10) \\ \hline \end{tabular}
\end{table}
Table 55: Character table of \(p4mm\) (No. 55)
\begin{table}
\begin{tabular}{c|c c c c c|c} \hline & 1 & \(2_{z}\) & \(\overline{4}_{z}\) & \(m_{y}\) & \(2_{xy}\) & axial subgroups \\ \hline size & 1 & 1 & 2 & 2 & 2 & \\ \hline \(A_{1}\) & 1 & 1 & 1 & 1 & 1 & \(p\overline{4}2m\) (57) \\ \(A_{2}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(p\overline{4}\) (50) \\ \(B_{1}\) & 1 & 1 & \(-1\) & 1 & \(c222\) (22) \\ \(B_{2}\) & 1 & 1 & \(-1\) & 1 & \(-1\) & \(pba2\) (25) \\ \(E\) & 2 & \(-2\) & 0 & 0 & 0 & \(p11\) (12), \(c211\) (10) \\ \hline \end{tabular}
\end{table}
Table 57: Character table of \(p\overline{4}2m\) (No. 57)
\begin{table}
\begin{tabular}{c|c c c c c|c} \hline & 1 & \(2_{z}\) & \(\overline{4}_{z}\) & \(2_{y}\) & \(m_{xy}\) & axial subgroups \\ \hline size & 1 & 1 & 2 & 2 & 2 & \\ \hline \(A_{1}\) & 1 & 1 & 1 & 1 & 1 & \(p\overline{4}2m\) (58) \\ \(A_{2}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(p\overline{4}\) (50) \\ \(B_{1}\) & 1 & 1 & \(-1\) & 1 & \(-1\) & \(p21_{2}12\) (21) \\ \(B_{2}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(cmm2\) (26) \\ \(E\) & 2 & \(-2\) & 0 & 0 & 0 & \(p211\) (9), \(cm11\) (13) \\ \hline \end{tabular}
\end{table}
Table 58: Character table of \(p\overline{4}2_{1}m\) (No. 58)
\begin{table}
\begin{tabular}{c|c c c c|c} \hline & 1 & \(2_{z}\) & \(\overline{4}_{z}\) & \(2_{y}\) & \(m_{xy}\) & axial subgroups \\ \hline size & 1 & 1 & 2 & 2 & 2 & \\ \hline \(A_{1}\) & 1 & 1 & 1 & 1 & 1 & \(p\overline{4}2m\) (59) \\ \(A_{2}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(p\overline{4}\) (50) \\ \(B_{1}\) & 1 & 1 & \(-1\) & 1 & \(c222\) (22) \\ \(B_{2}\) & 1 & 1 & \(-1\) & 1 & \(-1\) & \(pmm2\) (23) \\ \(E\) & 2 & \(-2\) & 0 & 0 & 0 & \(p11\) (11), \(c211\) (10) \\ \hline \end{tabular}
\end{table}
Table 59: Character table of \(p\overline{4}m2\) (No. 59)
\begin{table}
\begin{tabular}{c|c c c c c c c c c|c} \hline & 1 & \(2_{z}\) & \(4_{z}\) & \(2_{y}\) & \(2_{xy}\) & \(\overline{1}\) & \(m_{z}\) & \(\overline{4}_{z}\) & \(m_{y}\) & \(m_{xy}\) & axial subgroups \\ \hline size & 1 & 1 & 2 & 2 & 2 & 1 & 1 & 2 & 2 & 2 & \\ \hline \(A_{1g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \(p4/mmm\) (61) \\ \(A_{2g}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & 1 & \(-1\) & \(p4/m\) (51) \\ \(B_{1g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(p4/m\) (51) \\ \(B_{1g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & \(p4/m\) (51) \\ \(B_{1g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & \(p4\)/m (47) \\ \(B_{2g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(p42\)/m 11 (18) \\ \(A_{1u}\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p42\) (53) \\ \(A_{2u}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(1\) & \(p4mm\) (55) \\ \(B_{1u}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p42\) (57) \\ \(B_{2u}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(1\) & \(-1\) & \(p4m2\) (59) \\ \(E_{u}\) & 2 & \(-2\) & 0 & 0 & 0 & \(-2\) & 2 & 0 & 0 & 0 & \(pm2m\) (27), \(cm2m\) (35) \\ \hline \end{tabular}
\end{table}
Table 61: Character table of \(p4/mmm\) (No. 61)
\begin{table}
\begin{tabular}{c|c c c c c c c c c|c} \hline & 1 & \(2_{z}\) & \(4_{z}\) & \(2_{y}\) & \(2_{xy}\) & \(\overline{1}\) & \(m_{z}\) & \(\overline{4}_{z}\) & \(m_{y}\) & \(m_{xy}\) & axial subgroups \\ \hline size & 1 & 1 & 2 & 2 & 2 & 1 & 1 & 2 & 2 & 2 & \\ \hline size & 1 & 1 & 2 & 2 & 2 & 1 & 1 & 2 & 2 & 2 & \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \(p4/nbm\) (62) \\ \(A_{2g}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & 1 & \(-1\) & \(p4/n\) (52) \\ \(B_{1g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & \(p6an\) (39) \\ \(B_{2g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(p42\) (48) \\ \(E_{g}\) & 2 & \(-2\) & 0 & 0 & 0 & 2 & \(-2\) & 0 & 0 & 0 & \(p2/b11\) (16), \(c2/m11\) (18) \\ \(B_{1u}\) & 1 & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p42\) (53) \\ \(E_{u}\) & 2 & \(-2\) & 0 & 0 & 0 & \(-2\) & 2 & 0 & 0 & 0 & \(p21_{1}m\) (29), \(cm2m\) (35) \\ \hline \end{tabular}
\end{table}
Table 63: Character table of \(p4/mbm\) (No. 63)
\begin{table}
\begin{tabular}{c|c c c c c c c c|c} \hline & 1 & \(3_{z}\) & \(3_{z}^{-1}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & & & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & & \\ \({}^{1}E_{g}\) & 1 & \(\omega^{2}\) & \(\omega\) & 1 & \(\omega^{2}\) & \(\omega\) & \(p\overline{1}\) (2) \\ \({}^{2}E_{g}\) & 1 & \(\omega\) & \(\omega^{2}\) & 1 & \(\omega\) & \(\omega^{2}\) & \(p\overline{1}\) (2) \\ \({}^{A}_{u}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(p3\) (65) \\ \({}^{1}E_{u}\) & 1 & \(\omega^{2}\) & \(\omega\) & \(-1\) & \(-\omega^{2}\) & \(-\omega\) & \(p1\) (1) \\ \({}^{2}E_{u}\) & 1 & \(\omega\) & \(\omega^{2}\) & \(-1\) & \(-\omega\) & \(-\omega^{2}\) & \(p1\) (1) \\ \hline \end{tabular}
\end{table}
Table 67: Character table of \(p312\) (No. 67)
\begin{table}
\begin{tabular}{c|c c c c c c c c c|c} \hline & 1 & \(2_{z}\) & \(4_{z}\) & \(2_{y}\) & \(2_{xy}\) & \(\overline{1}\) & \(m_{z}\) & \(\overline{4}_{z}\) & \(m_{y}\) & \(m_{xy}\) & axial subgroups \\ \hline size & 1 & 1 & 2 & 2 & 2 & 1 & 1 & 2 & 2 & 2 & \\ \hline \(A_{1g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \(p4/nmm\) (64) \\ \(A_{2g}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & \(-1\) & \(p4/n\) (52) \\ \(B_{1g}\) & 1 & 1 & \(-1\) & 1 & \(-1\) & 1 & 1 & \(-1\) & \(1\) & \(-1\) & \(pmmn\) (46) \\ \(B_{2g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(1\) & \(cmme\) (48) \\ \(E_{g}\) & 2 & \(-2\) & 0 & 0 & 0 & 2 & \(-2\) & 0 & 0 & 0 & \(p2_{1}/m11\) (15), \(c2/m11\) (18) \\ \(A_{1u}\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p42_{1}2\) (54) \\ \(A_{2u}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(1\) & \(1\) & \(p4mm\) (55) \\ \(B_{1u}\) & 1 & 1 & \(-1\) & 1 & \(-1\) & \(-1\) & \(-1\) & \(1\) & \(-1\) & \(p\overline{4}2_{1}m\) (58) \\ \(B_{2u}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & \(-1\) & \(1\) & \(1\) & \(-1\) & \(p\overline{4}2_{1}m2\) (59) \\ \(E_{u}\) & 2 & \(-2\) & 0 & 0 & 0 & \(-2\) & 2 & 0 & 0 & 0 & \(pm2_{1}n\) (32), \(cm2e\) (36) \\ \hline \end{tabular}
\end{table}
Table 64: Character table of \(p4/nmm\) (No. 64)
\begin{table}
\begin{tabular}{c|c c c c c|c} \hline & 1 & \(3_{z}\) & \(3_{z}^{-1}\) & \(\overline{1}\) & \(\overline{3}_{z}\) & \(\overline{3}_{z}^{-1}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A_{g}\) & 1 & 1 & 1 & 1 & 1 & 1 & \(p\overline{3}\) (66) \\ \({}^{1}E_{g}\) & 1 & \(\omega^{2}\) & \(\omega\) & 1 & \(\omega^{2}\) & \(\omega\) & \(p\overline{1}\) (2) \\ \({}^{2}E_{g}\) & 1 & \(\omega\) & \(\omega^{2}\) & 1 & \(\omega\) & \(\omega^{2}\) & \(p\overline{1}\) (2) \\ \(A_{u}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(p3\) (65) \\ \({}^{1}E_{u}\) & 1 & \(\omega^{2}\) & \(\omega\) & \(-1\) & \(-\omega^{2}\) & \(-\omega\) & \(p1\) (1) \\ \({}^{2}E_{u}\) & 1 & \(\omega\) & \(\omega^{2}\) & \(-1\) & \(-\omega\) & \(-\omega^{2}\) & \(p1\) (1) \\ \hline \end{tabular}
\end{table}
Table 65: Character table of \(p3\) (No. 65)
\begin{table}
\begin{tabular}{c|c c c|c} \hline \hline & 1 & \(3_{z}\) & \(2_{xy}\) & axial subgroups \\ \hline size & 1 & 2 & 3 & \\ \hline \(A_{1}\) & 1 & 1 & 1 & \(p321\) (68) \\ \(A_{2}\) & 1 & 1 & \(-1\) & \(p3\) (65) \\ \(E\) & 2 & \(-1\) & 0 & \(c211\) (10) \\ \hline \end{tabular}
\end{table}
Table 68: Character table of \(p321\) (No. 68)
\begin{table}
\begin{tabular}{c|c c c|c} \hline \hline & 1 & \(3_{z}\) & \(2_{xy}\) & axial subgroups \\ \hline size & 1 & 2 & 3 & \\ \hline \(A_{1}\) & 1 & 1 & 1 & \(p321\) (68) \\ \(A_{2}\) & 1 & 1 & \(-1\) & \(p3\) (65) \\ \(E\) & 2 & \(-1\) & 0 & \(c211\) (10) \\ \hline \end{tabular}
\end{table}
Table 69: Character table of \(p3m1\) (No. 69)
\begin{table}
\begin{tabular}{c|c c c|c} \hline \hline & 1 & \(3_{z}\) & \(m_{xy}\) & axial subgroups \\ \hline size & 1 & 2 & 3 & \\ \hline size & 1 & 2 & 3 & \\ \hline size & 1 & 1 & 1 & 1 & \(p31m\) (70) \\ \(A_{2}\) & 1 & 1 & \(-1\) & \(p3\) (65) \\ \(E\) & 2 & \(-1\) & 0 & \(cm11\) (13) \\ \hline \end{tabular}
\end{table}
Table 70: Character table of \(p31m\) (No. 70)
\begin{table}
\begin{tabular}{c|c c c c|c} \hline \hline & 1 & \(3_{z}\) & \(2_{xy}\) & \(\overline{1}\) & \(\overline{3}_{z}\) & \(m_{xy}\) & axial subgroups \\ \hline size & 1 & 2 & 3 & 1 & 2 & 3 \\ \hline \(A_{1g}\) & 1 & 1 & 1 & 1 & 1 & \(p31m\) (71) \\ \(A_{2g}\) & 1 & 1 & \(-1\) & 1 & \(-1\) & \(p3\) (66) \\ \(E_{g}\) & 2 & \(-1\) & 0 & 2 & \(-1\) & 0 & \(c2/m11\) (18) \\ \(A_{1u}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(p312\) (67) \\ \(A_{2u}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & 1 & \(p31m\) (70) \\ \(E_{u}\) & 2 & \(-1\) & 0 & \(-2\) & 1 & 0 & \(c211\) (10), \(cm11\) (13) \\ \hline \end{tabular}
\end{table}
Table 71: Character table of \(p3\overline{3}1m\) (No. 71)
\begin{table}
\begin{tabular}{c|c c c c|c} \hline \hline & 1 & \(3_{z}\) & \(2_{xy}\) & \(\overline{1}\) & \(\overline{3}_{z}\) & \(m_{xy}\) & axial subgroups \\ \hline size & 1 & 2 & 3 & 1 & 2 & 3 & \\ \hline \(A_{1g}\) & 1 & 1 & 1 & 1 & 1 & \(p3\overline{3}m1\) (72) \\ \(A_{2g}\) & 1 & 1 & \(-1\) & 1 & \(-1\) & \(p3\) (66) \\ \(E_{g}\) & 2 & \(-1\) & 0 & 2 & \(-1\) & 0 & \(c2/m11\) (18) \\ \(A_{1u}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(p321\) (68) \\ \(A_{2u}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & 1 & \(p3m1\) (69) \\ \(E_{u}\) & 2 & \(-1\) & 0 & \(-2\) & 1 & 0 & \(c211\) (10), \(cm11\) (13) \\ \hline \end{tabular}
\end{table}
Table 72: Character table of \(p\overline{3}m1\) (No. 72)
\begin{table}
\begin{tabular}{c|c c c c c c|c} \hline & 1 & \(3_{z}\) & \(3_{z}^{-1}\) & \(2_{z}\) & \(6_{z}^{-1}\) & \(6_{z}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & \\ \hline \(A\) & 1 & 1 & 1 & 1 & 1 & 1 & \(p6\) (73) \\ \(B\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(p3\) (65) \\ \({}^{1}E_{1}\) & 1 & \(\omega^{2}\) & \(\omega\) & \(-1\) & \(-\omega^{2}\) & \(-\omega\) & \(p1\) (1) \\ \({}^{2}E_{1}\) & 1 & \(\omega\) & \(\omega^{2}\) & \(-1\) & \(-\omega\) & \(-\omega^{2}\) & \(p1\) (1) \\ \({}^{1}E_{2}\) & 1 & \(\omega^{2}\) & \(\omega\) & \(1\) & \(\omega^{2}\) & \(\omega\) & \(p112\) (3) \\ \({}^{2}E_{2}\) & 1 & \(\omega\) & \(\omega^{2}\) & \(1\) & \(\omega\) & \(\omega^{2}\) & \(p112\) (3) \\ \hline \end{tabular}
\end{table}
Table 73: Character table of \(p6\) (No. 73)
\begin{table}
\begin{tabular}{c|c c c c c c|c} \hline & 1 & \(3_{z}\) & \(3_{z}^{-1}\) & \(m_{z}\) & \(\overline{6}_{z}^{-1}\) & \(\overline{6}_{z}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \(p6\) (74) \\ \(A^{\prime\prime}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(p3\) (65) \\ \({}^{1}E^{\prime}\) & 1 & \(\omega^{2}\) & \(\omega\) & \(1\) & \(\omega^{2}\) & \(\omega\) & \(p11m\) (4) \\ \({}^{2}E^{\prime}\) & 1 & \(\omega\) & \(\omega^{2}\) & \(1\) & \(\omega\) & \(\omega^{2}\) & \(p11m\) (4) \\ \({}^{1}E^{\prime\prime}\) & 1 & \(\omega^{2}\) & \(\omega\) & \(-1\) & \(-\omega^{2}\) & \(-\omega\) & \(p1\) (1) \\ \({}^{2}E^{\prime\prime}\) & 1 & \(\omega\) & \(\omega^{2}\) & \(-1\) & \(-\omega\) & \(-\omega^{2}\) & \(p1\) (1) \\ \hline \end{tabular}
\end{table}
Table 74: Character table of \(p\overline{6}\) (No. 74)
\begin{table}
\begin{tabular}{c|c c c c c c|c} \hline & 1 & \(3_{z}\) & \(3_{z}^{-1}\) & \(2_{z}\) & \(6_{z}^{-1}\) & \(6_{z}\) & axial subgroups \\ \hline size & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \(p6\) (73) \\ \(B\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p3\) (65) \\ \({}^{1}E_{1}\) & 1 & \(\omega^{2}\) & \(\omega\) & \(-1\) & \(-\omega^{2}\) & \(-\omega\) & \(p1\) (1) \\ \({}^{2}E_{1}\) & 1 & \(\omega^{2}\) & \(\omega\) & \(1\) & \(\omega^{2}\) & \(-1\) & \(-\omega\) & \(-\omega^{2}\) & \(p1\) (1) \\ \({}^{2}E_{2}\) & 1 & \(\omega\) & \(\omega^{2}\) & \(-1\) & \(-\omega\) & \(-\omega^{2}\) & \(-\omega\) & \(p112\) (3) \\ \hline \end{tabular}
\end{table}
Table 75: Character table of \(p6/m\) (No. 75)
\begin{table}
\begin{tabular}{c|c c c c c c|c} \hline & 1 & \(3_{z}\) & \(2_{z}\) & \(6_{z}\) & \(m_{xy}\) & \(2_{3}\) & axial subgroups \\ \hline size & 1 & 2 & 1 & 2 & 3 & 3 & \\ \hline \(A_{1}\) & 1 & 1 & 1 & 1 & 1 & 1 & \(p622\) (76) \\ \(A_{2}\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(p6\) (73) \\ \(B_{1}\) & 1 & 1 & \(-1\) & \(-1\) & \(1\) & \(p321\) (68) \\ \(B_{2}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(1\) & \(p312\) (67) \\ \(E_{1}\) & 2 & \(-1\) & \(-2\) & 1 & 0 & 0 & \(c211\) (10), \(c211\) (10) \\ \(E_{2}\) & 2 & \(-1\) & 2 & \(-1\) & 0 & 0 & \(c222\) (22) \\ \hline \end{tabular}
\end{table}
Table 76: Character table of \(p622\) (No. 76)
\begin{table}
\begin{tabular}{c|c c c c c|c} \hline & 1 & \(3_{z}\) & \(2_{z}\) & \(6_{z}\) & \(m_{xy}\) & \(m_{3}\) & axial subgroups \\ \hline size & 1 & 2 & 1 & 2 & 3 & 3 & \\ \hline \(A_{1}\) & 1 & 1 & 1 & 1 & 1 & 1 & \(p622\) (76) \\ \(A_{2}\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(p6\) (73) \\ \(B_{1}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & \(p321\) (68) \\ \(B_{2}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & 1 & \(p312\) (67) \\ \(E_{1}\) & 2 & \(-1\) & \(-2\) & 1 & 0 & 0 & \(c211\) (10), \(c211\) (10) \\ \(E_{2}\) & 2 & \(-1\) & 2 & \(-1\) & 0 & 0 & \(c222\) (22) \\ \hline \end{tabular}
\end{table}
Table 77: Character table of \(p62mn\) (No. 77)
\begin{table}
\begin{tabular}{c|c c c c c|c} \hline & 1 & \(3_{z}\) & \(m_{z}\) & \(\overline{6}_{z}\) & \(m_{xy}\) & \(2_{3}\) & axial subgroups \\ \hline size & 1 & 2 & 1 & 2 & 3 & 3 & \\ \hline \(A_{1}^{\prime}\) & 1 & 1 & 1 & 1 & 1 & 1 & \(p62m\) (78) \\ \(A_{1}^{\prime\prime}\) & 1 & 1 & 1 & \(-1\) & \(-1\) & 1 & \(p312\) (67) \\ \(A_{2}^{\prime}\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(p6\) (74) \\ \(A_{2}^{\prime\prime}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(p31m\) (69) \\ \(E^{\prime}\) & 2 & \(-1\) & 2 & \(-1\) & 0 & 0 & \(cm2m\) (35) \\ \(E^{\prime\prime}\) & 2 & \(-1\) & \(-2\) & 1 & 0 & 0 & \(cm11\) (13), \(c211\) (10) \\ \hline \end{tabular}
\end{table}
Table 78: Character table of \(p6\overline{6}m2\) (No. 78)
\begin{table}
\begin{tabular}{c|c c c c c|c} \hline & 1 & \(3_{z}\) & \(m_{z}\) & \(\overline{6}_{z}\) & \(2_{xy}\) & \(m_{3}\) & axial subgroups \\ \hline size & 1 & 2 & 1 & 2 & 3 & 3 & \\ \hline \(A_{1}^{\prime}\) & 1 & 1 & 1 & 1 & 1 & 1 & \(p62m\) (79) \\ \(A_{1}^{\prime\prime}\) & 1 & 1 & 1 & \(-1\) & 1 & \(-1\) & \(p321\) (68) \\ \(A_{2}^{\prime}\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(p6\) (74) \\ \(A_{2}^{\prime\prime}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & 1 & \(p31m\) (70) \\ \(E^{\prime}\) & 2 & \(-1\) & 2 & \(-1\) & 0 & 0 & \(cm2m\) (35) \\ \(E^{\prime\prime}\) & 2 & \(-1\) & \(-2\) & 1 & 0 & 0 & \(cm11\) (13), \(c211\) (10) \\ \hline \end{tabular}
\end{table}
Table 79: Character table of \(p6\overline{6}m\) (No. 79)
\begin{tabular}{c|c c c c c c c c c c c|c} \hline & 1 & \(3_{z}\) & \(2_{z}\) & \(6_{z}\) & \(2_{xy}\) & \(2_{3}\) & \(\overline{1}\) & \(\overline{3}_{z}\) & \(m_{z}\) & \(\overline{6}_{z}\) & \(m_{xy}\) & \(m_{3}\) & axial subgroups \\ \hline size & 1 & 2 & 1 & 2 & 3 & 3 & 1 & 2 & 1 & 2 & 3 & 3 & \\ \hline \(A_{1g}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & \(p6/mmm\) (80) \\ \(A_{2g}\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(p6/m\) (75) \\ \(B_{1g}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & 1 & 1 & \(-1\) & 1 & \(-1\) & \(p\overline{3}m1\) (72) \\ \(B_{2g}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(1\) & \(p\overline{3}1m\) (71) \\ \(E_{1g}\) & 2 & \(-1\) & \(-2\) & 1 & 0 & 0 & 2 & \(-1\) & \(-2\) & 1 & 0 & 0 & \(c2/m11\) (18) \\ \(E_{2g}\) & 2 & \(-1\) & 2 & \(-1\) & 0 & 0 & 2 & \(-1\) & 2 & \(-1\) & 0 & 0 & \(cmmm\) (47) \\ \(A_{1u}\) & 1 & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(p622\) (76) \\ \(A_{2u}\) & 1 & 1 & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & \(-1\) & \(-1\) & 1 & 1 & \(p6mm\) (77) \\ \(B_{1u}\) & 1 & 1 & \(-1\) & \(-1\) & 1 & \(-1\) & \(-1\) & 1 & 1 & \(-1\) & 1 & \(p\overline{6}2m\) (79) \\ \(B_{2u}\) & 1 & 1 & \(-1\) & \(-1\) & \(-1\) & 1 & \(-1\) & 1 & 1 & 1 & \(-1\) & \(p\overline{6}m2\) (78) \\ \(E_{1u}\) & 2 & \(-1\) & \(-2\) & 1 & 0 & 0 & \(-2\) & 1 & 2 & \(-1\) & 0 & 0 & \(cm2m\) (35), \(cm2m\) (35) \\ \(E_{2u}\) & 2 & \(-1\) & 2 & \(-1\) & 0 & 0 & \(-2\) & 1 & \(-2\) & 1 & 0 & 0 & \(cmm2\) (26), \(c222\) (22) \\ \hline \end{tabular}
Table 80: Character table of \(p6/mmm\) (No. 80)
# Supplement 3: Character Tables
This supplement provides character tables for some of the small finite groups associated with symmetry-breaking bifurcations. The character tables were calculated using the GAP software with an algorithm for calculating isotropy subgroups described by Matthews [2004].
The first table on each page is a standard character table with an extra column on the right, following a format similar to those in Matthews [2004]. The header of the table ('1a', '2a', etc.) lists the conjugacy classes, where the number indicates the order of the elements in the class. The next row (labelled 'size') gives the number of elements in each conjugacy class. For some of the tables, the following rows give the power map, with rows labelled '2P', '3P', etc.; these give the conjugacy class obtained when an element is raised to the corresponding prime power. The remaining rows give the characters of the irreps, labelled \(R_{1}\), \(R_{2}\), etc. A letter \(F\) is added to denote that the representation is faithful.
The column on the far right gives the axial isotropy subgroups associated with each representation. Where numbers in brackets are given, the quotient group \(N_{\Gamma}(\Sigma)/\Sigma\) is non-trivial and the number gives the order of the quotient group. Here \(N_{\Gamma}(\Sigma)\) is the normalizer, defined by
\[N_{\Gamma}(\Sigma)=\left\{\gamma\in\Gamma:\gamma^{-1}\Sigma\gamma=\Sigma\right\}, \tag{1}\]
\(\Gamma\) is the parent group and \(\Sigma\) is the isotropy subgroup. Isotropy subgroups denoted with (2) are associated with pitchfork bifurcations. Those without a number may be transcritical, but \(N_{\Gamma}(\Sigma)/\Sigma=1\) is not a sufficient condition for being transcritical: \(D_{5}\) provides an example where the bifurcation associated with the faithful irrep is not transcritical even though \(N_{\Gamma}(\Sigma)/\Sigma=1\).
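To make the normalizer and quotient-group computation in Eq. (1) concrete, the following is a minimal brute-force sketch in Python. The character tables themselves were computed with GAP, so this fragment is purely illustrative: it assumes the groups are given as explicit sets of permutations, and the example groups below are our own, not taken from the tables.

```python
# Minimal sketch of Eq. (1): N_Gamma(Sigma) = {g in Gamma : g^-1 Sigma g = Sigma}.
# Groups are represented as sets of permutations (tuples giving the image of 0..n-1).
from itertools import permutations

def compose(p, q):
    # (p . q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def conjugate_subgroup(g, sigma):
    # g^-1 * Sigma * g as a set of permutations
    g_inv = inverse(g)
    return {compose(g_inv, compose(s, g)) for s in sigma}

def normalizer(gamma, sigma):
    return {g for g in gamma if conjugate_subgroup(g, sigma) == sigma}

# Example (illustrative only): Gamma = S_3 on 3 points, Sigma = <(0 1)>.
gamma = set(permutations(range(3)))
sigma = {(0, 1, 2), (1, 0, 2)}           # identity and the transposition (0 1)

n_sigma = normalizer(gamma, sigma)
quotient_order = len(n_sigma) // len(sigma)
print(len(n_sigma), quotient_order)       # N_Gamma(Sigma) = Sigma here, so the quotient is trivial
```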
The second table gives the dimensions of the spaces of polynomial invariants \(I(k)\) of degree \(k\), and the third table gives the dimensions of the spaces of polynomial equivariants \(E(k)\) of degree \(k\). These tables help further classify the types of bifurcation and can be used to constrain the number of free parameters needed in the amplitude equations.
|
2309.12591 | Before Blue Birds Became X-tinct: Understanding the Effect of Regime
Change on Twitter's Advertising and Compliance of Advertising Policies | Social media platforms, including Twitter (now X), have policies in place to
maintain a safe and trustworthy advertising environment. However, the extent to
which these policies are adhered to and enforced remains a subject of interest
and concern. We present the first large-scale audit of advertising on Twitter
focusing on compliance with the platform's advertising policies, particularly
those related to political and adult content. We investigate the compliance of
advertisements on Twitter with the platform's stated policies and the impact of
recent acquisition on the advertising activity of the platform. By analyzing
34K advertisements from ~6M tweets, collected over six months, we find evidence
of widespread noncompliance with Twitter's political and adult content
advertising policies suggesting a lack of effective ad content moderation. We
also find that Elon Musk's acquisition of Twitter had a noticeable impact on
the advertising landscape, with most existing advertisers either completely
stopping their advertising activity or reducing it. Major brands decreased
their advertising on Twitter, suggesting a negative immediate effect on the
platform's advertising revenue. Our findings underscore the importance of
external audits to monitor compliance and improve transparency in online
advertising. | Yash Vekaria, Zubair Shafiq, Savvas Zannettou | 2023-09-22T02:39:53Z | http://arxiv.org/abs/2309.12591v1 | # Before Blue Birds Became X-tinct:
###### Abstract
Social media platforms, including Twitter (now X), have policies in place to maintain a safe and trustworthy advertising environment. However, the extent to which these policies are adhered to and enforced remains a subject of interest and concern. We present the first large-scale audit of advertising on Twitter focusing on compliance with the platform's advertising policies, particularly those related to political and adult content. We investigate the compliance of advertisements on Twitter with the platform's stated policies and the impact of recent acquisition on the advertising activity of the platform. By analyzing 34K advertisements from \(\sim\)6M tweets, collected over six months, we find evidence of widespread non-compliance with Twitter's political and adult content advertising policies suggesting a lack of effective ad content moderation. We also find that Elon Musk's acquisition of Twitter had a noticeable impact on the advertising landscape, with most existing advertisers either completely stopping their advertising activity or reducing it. Major brands decreased their advertising on Twitter, suggesting a negative immediate effect on the platform's advertising revenue. Our findings underscore the importance of external audits to monitor compliance and improve transparency in online advertising.
## Introduction
Social media platforms allow advertisers to promote their products or services to a specific audience on the platform (Meta Advertising; Twitter Advertising). However, to maintain a safe and trustworthy advertising environment [1], social media platforms impose certain advertising policies that all advertisers must adhere to. For example, social media platforms prohibit harmful advertisements that promote scams, phishing schemes, or illegal products and services (Introduction to the Advertising Standards on Facebook; Twitter's Prohibited Content Policies). In addition to these universal policies, different platforms may impose additional ones. For example, Twitter, unlike Facebook, used to generally disallow political advertisements [2].
Given the potential of online advertising to influence large user populations [1], there is a great deal of interest in understanding whether platforms ensure compliance with their stated advertising policies. Some social media platforms have introduced limited transparency by, for example, making available an archive of all ads that run on the platform [13]. However, not all platforms have committed to such transparency, leading to new regulations around the world, such as the Digital Services Oversight and Safety Act of 2022 [1, 2]. Even when they do, research has shown that transparency efforts are incomplete and far from perfect [1]. To ensure compliance and improve transparency, there is a pressing need for external audits that provide an independent evaluation of policy enforcement.
In this paper, we conduct the first large-scale audit of advertising on Twitter (now X).1 First, we investigate the compliance of advertisements with Twitter's stated advertising policies, particularly those related to political and adult advertising. Compliance with political advertising policies is important for a fair and transparent electoral process, especially given recent concerns around political disinformation [14, 15]. Compliance with adult advertising policies is important to avoid exposure of inappropriate content to minors or vulnerable populations [16, 17, 18]. Second, we opportunistically investigate the impact of Twitter regime change on compliance with existing policies. On Twitter, there has been a concern that a significant reduction in its workforce after the regime change, particularly the teams responsible for ensuring compliance, would pose challenges in maintaining the same level of enforcement of policies (Twitter's sacking of content moderators raises concerns). Twitter's acquisition by Elon Musk had pushed advertisers into considering reducing or halting advertising on Twitter due to concerns about the actual or perceived changes in its policies (Growing number of companies are freezing their Twitter ads after Elon Musk takeover). More recently (under X), Twitter has been under strict scrutiny by TAG (Trustworthy Accountability Group), which is considering revoking Twitter's certification due to brand safety concerns (X's brand safety efforts called into question again). Thus, it is crucial to evaluate the existing gaps and new issues that have emerged in regard to advertising policy compliance on Twitter during the regime change.
We aim to answer the following research questions in this work: **RQ1:** What was the level of compliance by advertisers on Twitter with regard to its _political_ and _adult_ advertising policies? and **RQ2:** How did Twitter's advertising landscape change after the regime change (i.e., Elon Musk's acquisition)? To answer these research questions, we collect a dataset of nearly 600 million tweets over the duration of six months. We then identify tweets that are ads by leveraging the tweet metadata made available in Twitter's Streaming API (Platform, 2022). Specifically, we identify 34,000 distinct advertisement tweets by filtering tweets that are created via Twitter Ad Manager (Twitter's Ad Manager Platform). In conjunction with limited manual annotations, we then use Perspective API (Jigsaw, 2017) to identify sexually explicit Twitter advertisements and a political classifier (Wojcieszak et al., 2021) to identify advertisements that are of political nature. Our work yields the following main findings:
* We find that 60.5% of political advertisements on Twitter violate one or more clauses of Twitter's political content advertising policy. A vast majority of the political content violations are for advertisers based outside the United States and are often not in English language. Our findings highlight the challenge in ensuring compliance across different regions and languages.
* We find that 20.35% of advertisements on Twitter violate the adult content advertising policy. We demonstrate that a significant fraction of explicit Twitter advertisements show signs of automation (e.g., follow a fixed template with tweets starting with a period followed by a list of sexually explicit terms/hashtags). We also find a non-negligible percentage of daily ads (2.75%) that include malicious links in their advertisement text.
* We find a significant change in the Twitter advertising landscape right after the regime change (i.e. Elon Musk's Twitter acquisition). After the acquisition, 55% of the advertisers completely stopped advertising on the platform, 5% of the advertisers decreased their advertising activity but did not completely stop, and 10% of the advertisers actually increased their activity. Overall, we find that popular advertisers (with more followers) decreased their Twitter advertising activity, highlighting that Musk's acquisition likely had a negative immediate effect on Twitter's advertising revenue from popular brands. Also, a lack of compliance with advertising policies can not only harm the safety of platform users but also negatively impact the advertising revenue of brands.
**Disclaimer: This paper studies sexual content disseminated on Twitter advertisements, hence we warn the readers that the manuscript includes uncensored mentions of sexually explicit content.**
## Related Work
Social media platforms primarily rely on advertising for monetization. Given potential negative ramifications, social media platforms are concerned about the abuse of advertising on their platforms. Prior research has attempted to understand problematic advertising on the open web (Zeng, Kohno, and Roesner, 2021; Zeng et al., 2021; Vekaria, Nithyanand, and Shafiq, 2022; Braun and Eklund, 2019; Kim, Barasz, and John, 2019; Knoll, 2016) as well as social media platforms. For the literature review, we focus on Facebook and Twitter because these are among the most popular social media platforms.
### Ad Transparency on Social Media
In order to promote transparency into advertising on their platforms and to comply with the regulations, social media companies have made efforts to make advertising related information publicly available. In 2019, Facebook started _Ad Library_(Meta, 2019) - an archive of all the ads that run on their platform. Prior work has studied different issues pertaining to advertising on Facebook using this ad archive (Andreou et al., 2019; Chiu, 2022; Edelson, Lauinger, and McCoy, 2020; Youn and Kim, 2019; Abuhashesh et al., 2021; Marino et al., 2019). Prior research has studied various types of problematic advertising on Facebook such as discriminatory ad delivery (Ali et al., 2019), exploitation of sensitive data for ads (Cabanas, Cuevas, and Cuevas, 2018), Russia-linked targeting of socially divisive ads (Ribeiro et al., 2019) and aggravating ads (Vargo and Hopp, 2020), covid-19 (Mejova and Kalimeri, 2020) and vaccine-related (Jamison et al., 2020) advertising, disparity of harmful ads (Ali et al., 2022), etc.
Unlike Facebook, Twitter does not allow direct access to all the ads that run on the platform through an equivalent2 of Facebook's Ad Library, hindering at-scale analysis of its advertising ecosystem. Twitter also does not have clear guidelines on how to identify ads, nor does it indicate, in the platform data crawled via the API, whether a tweet was a paid ad. Due to these limitations, there has been little to no research focused on studying advertising on Twitter at a large scale. While there is ample work studying ad targeting practices on Twitter (Wei et al., 2020; Hayes et al., 2020; Arora et al., 2019; MR Araujo et al., 2022; Wells et al., 2020; Janson, 2022; Bradley and James, 2019; Clark et al., 2016; Noh et al., 2021), none of it, unlike our research, investigates compliance with the platform's advertising policies.
Footnote 2: Not equivalent to Facebook’s Ad Library, but X started Political Ads “Library” (not a library) that provides details of a specific political ad on a per request basis to its users.
In compliance with the data protection regulations, Facebook and Twitter provide a personal archive of ad-related data that each user can download. Facebook's archival download doesn't provide details about the ads shown to a user - it just shows the list of advertisers who targeted the user and not the exact ad shown, ad content, or targeting parameters. Twitter, on the other hand, does provide exhaustive information - account details, personalization details (like demographics, inferred interests, etc.) and the ad information (ads shown, advertisers, targeting parameters used, etc.). Wei et al. (Wei et al., 2020) collect three months of user-level archival data of all the ads targeted to 231 individuals through their voluntary donations and analyze it to study Twitter's ad targeting mechanism. Other Twitter-focused works have studied different types of ads including food delivery during the pandemic [17], gambling [1], the Super Bowl [18], and vaping [12], but with limited scale and scope.
### Political Advertising on Social Media
Social media platforms have been used to spread political disinformation during the 2016 U.S. Presidential elections [13, 14, 15]. As a result, political advertising has been extensively studied on Facebook across specific countries [16, 17, 18, 19, 20] as well as generally [15, 16, 17]. With the rise of political disinformation, social researchers have made an effort towards auditing Facebook's political advertising policies [14, 15, 16, 17, 18, 19, 20] to understand the gaps and raise awareness about the inefficacy of these policies to prevent problematic advertising. Political disinformation has also been studied on Twitter [12, 19, 20]. Problematic political advertising on Twitter has been mainly studied in reference to elections or for specific country or political figure [16, 17, 18, 19, 21, 22] in a very limited context. Some works have also analyzed the spread of bot-based political propaganda on Twitter [1, 19, 20, 21].
There are differences between the definition and authorization of political advertising on Twitter versus Facebook. Twitter's advertising policy states [14]:3
Footnote 3: Twitter’s political content advertising policy was amended with a clause for political ads run in U.S.A and Canada during Feb’23 and July’23 respectively. However, we do not use this clause because – one, it was added after we completed our data collection and second, the change was incorporated only for U.S.A and Canada and our analysis includes worldwide ads.
* _We define political content as content that references a candidate, political party, elected or appointed government official, election, referendum, ballot measure, legislation, regulation, directive, or judicial outcome._
* _Ads that contain references to political content, including appeals for votes, solicitations of financial support, and advocacy for or against any of the above-listed types of political content, are prohibited under this policy._
* _We also do not allow ads of any type by candidates, political parties, or elected or appointed government officials._
Beyond the political content as defined by Twitter, Facebook's definition of political advertising [12] also includes ads on social issues - _civil and social rights, crime, economy, education, environmental politics, guns, health, immigration, political values and governance, and security and foreign policy_ - as political ads. Unlike Twitter, where political advertising is generally prohibited, Facebook generally allows advertisers to run political ads (including reference to political figures, political parties or elections (even "get out the vote" campaigns), provided that advertiser is approved by an authorization process and advertiser associates "paid for by" information with such ads. Facebook has country-specific requirements for almost all countries under this policy - for example, targeting political ad is not allowed in the state of Washington or in certain regions, ads must not discourage people from voting or call into question the legitimacy of an upcoming election. For both Facebook and Twitter, only Facebook-verified and Twitter-certified news publishers are respectively exempted by their respective political ad policies to promote political content.
### Adult Sexual Advertising on Social Media
Advertising related to adult sexual content is also concerning due to the potential exposure of inappropriate content to minors or vulnerable populations. No work has directly studied adult sexual advertising on Twitter; however, research has attempted to detect abusive Twitter accounts [15, 16, 17, 18], malicious activity [14, 15, 16], and the spread of adult content in general [13, 15, 16, 17]. In order to prevent the distribution of problematic content, extensive research has also been conducted to understand content moderation strategies on Twitter [14, 15, 17, 18, 19, 20].
## Dataset & Methodology
This section presents our dataset and research methodology as depicted in Figure 1. First, we collect a random 1% sample of real-time worldwide tweets (Section ). Next, we identify ads that are observed in the 1% tweet sample (Section ). Then, we present our approach to estimating violations of Twitter's advertising policies (Section ) and our methodology for selecting the political and sexual policies for a deeper investigation (Section ). We describe how we identify and study political ads (Section ) and sexual ads (Section ). Finally, we provide a basic characterization of our dataset (Section ) and discuss our ethical considerations when collecting and processing our dataset (Section ).
### Data collection
Twitter API has been widely used by the research community to curate a representative sample of tweets at scale. We also use Twitter's Streaming API [14], which returns a random sample of 1% of all the tweets made at each given point of time during the day anywhere in the world. To gather metadata related to each tweet, we append all available query parameters to the endpoint URL, particularly, tweet.fields,user.fields,place.fields, and media.fields. We run our data collection from July 2022 to December 2022, collecting a set of 597,889,636 tweets. Then, we identify the tweets that are ads. To do this, we leverage the source attribute as discussed in Section and select all tweets with a source in our ad sources, obtaining a set of 34,606 ad tweets.
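For illustration, a minimal sketch of this collection step is shown below. The endpoint, field names, and bearer-token handling follow Twitter API v2 conventions at the time of the study and are our assumptions about the setup, not the authors' exact pipeline.

```python
# Hypothetical sketch: collect the 1% sampled stream with the metadata fields used in
# the paper. Endpoint/params follow Twitter API v2 conventions (assumption), and the
# environment-variable name is a placeholder.
import os
import requests

SAMPLE_STREAM_URL = "https://api.twitter.com/2/tweets/sample/stream"
PARAMS = {
    "tweet.fields": "id,text,source,created_at,entities,lang,author_id",
    "user.fields": "username,location,verified,public_metrics",
    "place.fields": "country",
    "media.fields": "url,type",
    "expansions": "author_id,attachments.media_keys,geo.place_id",
}
HEADERS = {"Authorization": f"Bearer {os.environ['TWITTER_BEARER_TOKEN']}"}

def collect(outfile="sampled_tweets.jsonl"):
    # Stream the 1% sample and append each raw JSON payload to a JSONL file.
    with requests.get(SAMPLE_STREAM_URL, headers=HEADERS, params=PARAMS, stream=True) as resp:
        resp.raise_for_status()
        with open(outfile, "a", encoding="utf-8") as fh:
            for line in resp.iter_lines():
                if line:  # skip keep-alive newlines
                    fh.write(line.decode("utf-8") + "\n")

if __name__ == "__main__":
    collect()
```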
Each ad or tweet undergoes Twitter moderation, and if Twitter deems a tweet to be unsafe, the account may be temporarily restricted or permanently suspended, making its tweets inaccessible. A tweet creator could also delete the tweet if it receives a warning from Twitter. To retain only ads that are not removed by Twitter, we rehydrate (or refresh) ad tweets two weeks after their initial fetch date using the Tweet Lookup API [14] endpoint: [https://api.twitter.com/2/tweets](https://api.twitter.com/2/tweets). Rehydration yielded only 24,530 ads, corresponding to a drop of about 30% of ad tweets.4 We discard the ad tweets that are removed from Twitter after the rehydration since these tweets are likely already considered in violation of Twitter's policies. In other words, we focus only on those ads that are not removed from Twitter and aim to study whether any of them are problematic.
Footnote 4: We observe that only a small fraction (\(<\)5%) of ads are removed by Twitter even after the 2-week rehydration period.
### Identifying Ads on Twitter
An inherent challenge that exists when studying online advertising is obtaining data from social media platforms pertaining to advertising. Unlike Facebook, Twitter does not provide an ad archive. To overcome this challenge, in this work, we devise a novel methodology that uses the source attribute on Twitter to identify whether a tweet is an ad or not. The source attribute is a metadata field that indicates the application that was used for the posting of the tweet. For instance, if a user posts a tweet using Twitter's Web interface, the source will be set to "Twitter Web App" or if they posted via their iPhone the source will be set to "Twitter for iPhone." In a similar fashion, the source attribute can be used to identify tweets that are created via Twitter's Ads Manager (Twitter's Ad Manager Platform), which assists our purpose of identifying ad tweets.
To advertise on Twitter, an advertiser needs to create a Twitter advertising account, which gives them access to the _Twitter Ads Manager_ to create and manage their ad campaigns. A new ad tweet can be created through the advertiser's approved ad account using either the _Twitter Ads API_ ("Twitter Ads" source) or one of the two options on the Twitter Ads Manager page - _Tweet Composer_ (under Creatives) and _Create Campaign_ (under Campaign). All ads created through Tweet Composer are assigned the "Twitter for Advertisers" or "Twitter for Advertisers (legacy)" source label. Ads created from the _Create Campaign_ option can be either a Simple Campaign ("simpleads-ui" source) or an Advanced Campaign ("advertiser-interface" source). In our work, we use the following _sources_ for identifying ad tweets: _Twitter Ads_, _Twitter for Advertisers_, _Twitter for Advertisers (legacy)_, _simpleads-ui_, and _advertiser-interface_. We refer to these as our _ad sources_.
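A minimal sketch of this source-based identification is shown below; the file and field names are illustrative assumptions about how the sampled tweets are stored.

```python
# Minimal sketch: flag a tweet as an ad if its "source" attribute matches one of the
# ad sources listed above. File/field names are illustrative assumptions.
import json

AD_SOURCES = {
    "Twitter Ads",
    "Twitter for Advertisers",
    "Twitter for Advertisers (legacy)",
    "simpleads-ui",
    "advertiser-interface",
}

def is_ad_tweet(tweet: dict) -> bool:
    return tweet.get("source") in AD_SOURCES

def filter_ads(jsonl_path: str):
    ads = []
    with open(jsonl_path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            # The sampled-stream payload nests the tweet under "data" (assumption).
            tweet = record.get("data", record)
            if is_ad_tweet(tweet):
                ads.append(tweet)
    return ads
```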
**Limitations.** Our methodology for identifying ad tweets has some limitations that are worth mentioning. First, an advertiser can in theory use a custom source to create and promote their tweets, hence such tweets will not be captured by our methodology. Second, a Twitter user can in theory create tweets via Twitter Ads Manager and post them as regular tweets (unpaid) instead of paid ad tweets. Based on our experience with Twitter advertising, we do not expect that this phenomenon is very frequent. Finally, since December 2022, Twitter removed support for the source attribute (i.e., it is discontinued from their API and is also not shown on the platform's user interface), which led us to stop our longitudinal collection of ads.
### Estimating Violations of Twitter's Advertising Policies
As shown in Table 1, Twitter has 17 advertising policies5 that disallow the promotion of certain problematic content. To understand violations of these policies, we perform a preliminary analysis. Each policy description specifies different prohibitions to advertising under the policy along with country-specific exemptions (if any). To capture violations of Twitter's advertising policies, we extract phrases describing different violations from each policy.6 The main idea is to use textual similarity between policy phrases and the ad tweet's text to identify ads related to the prohibited content described in the policies. We use policy text as available during our data collection [1] as Twitter's advertising policies did not change during the period of our data collection. The phrase extraction is performed for 14 policies by excluding _Quality_, _State Media_, and _Prohibited Content
for Minors_ policies as they are either considered similar to other policies or violations are dependent on targeting parameters that are unavailable to us.
Next, to identify the policy being violated in a given ad, we measure semantic similarity between the ad text and the policy phrases. To this end, considering each policy as a document of policy phrases, we generate an embedding vector per policy. Similarly, each ad's textual content is represented in the same embedding space as an embedding vector. We use a pre-trained multilingual BERT model (distiluse-base-multilingual-cased-v2) [10] to generate these embeddings. We select this specific model primarily because it supports 50+ languages and has been shown to perform well in standard semantic similarity tasks. To match an ad tweet with a policy, we compute the cosine similarity of the ad's embedding vector with each policy's embedding vector. Based on manual inspection, we note that the matches with cosine similarity less than 0.1 were generally irrelevant. Table 1 reports the count of ads that violate each policy. It is important to note that these matches do not precisely represent the actual number of ads violating a given policy but should be considered as a heuristic to roughly estimate the prominence of violations of different advertising policies. Next, we describe a methodology and the main intuition to capture accurate counts of two policies we select for deeper analysis.
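The following sketch illustrates this matching step using the sentence-transformers implementation of the model named above. The policy phrases shown are placeholders, and collapsing each policy's phrases into a single document is our simplification of the procedure described in the text.

```python
# Sketch of policy-vs-ad matching with multilingual sentence embeddings and cosine
# similarity. One embedding per policy is obtained by concatenating its phrases into
# a single document (a simplification).
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("distiluse-base-multilingual-cased-v2")

policies = {
    # Placeholder phrases; the real phrases are extracted from the 14 policy documents.
    "Adult Sexual Content": ["sexual services", "adult entertainment", "escort services"],
    "Gambling Content": ["sports betting", "online casino", "lottery tickets"],
}
policy_names = list(policies)
policy_docs = [" ".join(phrases) for phrases in policies.values()]
policy_emb = model.encode(policy_docs, convert_to_tensor=True)

def match_policies(ad_text: str, threshold: float = 0.1):
    """Return (policy, cosine similarity) pairs above the 0.1 relevance cutoff."""
    ad_emb = model.encode(ad_text, convert_to_tensor=True)
    sims = util.cos_sim(ad_emb, policy_emb)[0]
    return [(policy_names[i], float(s)) for i, s in enumerate(sims) if s >= threshold]

print(match_policies("Win big tonight at our online casino - free spins on signup!"))
```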
### Selecting Advertising Policies for Deep-dive
It is difficult to study each policy in detail for the purposes of this paper, hence we decide to focus on the most prominently violated policies. From Table 1, we select the top-5 policies that are violated by the largest number of ads and that have no country-specific exemptions. Since our data collection is not restricted to any country and contains globally created and targeted ads, it is impossible for us to determine which specific geographic region a given ad in our data was targeted to. Hence, to study violations at scale, we select only those policies that impose no country-specific exemptions. The top-5 policies identified under this reasoning are the _Adult Sexual Content_, _Hateful Content_, _Weapons and Weapon Accessories_, _Unauthorized Ticket Sales_, and _Political Content_ policies. Out of these 5 policies, we dropped the _Weapons and Weapon Accessories_ and _Unauthorized Ticket Sales_ policies as we could not find any approach in the literature to reliably capture violations related to them. Using the "severe_toxicity" score from Perspective API and the recommended threshold of 0.7 for identifying toxic content,7 we identified 3397 ads in violation of the _Hateful Content_ policy. However, 98.7% of these ads overlapped with the adult sexual ads identified using the "sexually_explicit" attribute score from the Perspective API as described in Section. This is because the "severe_toxicity" model treats text with sexual references as toxic. In sum, in this work, we do a deep dive and analyze the _political content_ and _adult sexual content_ policies in detail.
Footnote 7: [https://developers.perspectiveapi.com/s/about-the-api-score](https://developers.perspectiveapi.com/s/about-the-api-score)
### Identifying Political Ads
We aim to analyze and audit the compliance of political ads with Twitter's policies, hence we need a method to identify ad tweets of a political nature. To do this, first, we translate all textual content of the ad tweets into English using Google's Translate API [14]. This is a necessary step since our dataset includes ad tweets from all over the world, including various languages (see Figure 17), and most political content classifiers work in a limited number of languages (in most cases, in English). Next, we use the BERT-based classifier made available by Wojcieszak et al. (2021). The classifier was trained on about three thousand English news headlines (i.e., short text) and, as a result, is useful for identifying short tweet text containing political content.
We run the political classifier on all 24,530 ads in our dataset and obtain 4,738 ads classified as political by the classifier. These 4,738 ads are then manually verified to be political or not by checking the topic of discussion in the ad against Twitter's definition in its political content ad policy. This yields 679 political ads based on
Figure 1: Overview of our methodology.
Twitter's policy, which are used for further analysis in Section. To evaluate the classifier's performance, we extract a random sample of 600 unique ads (out of the 24,530 ads in our dataset) and perform a manual annotation. Two researchers independently annotated all the ads in the sample as political or not by referring to the policy documentation. The inter-annotator agreement was 96.5%, and Cohen's Kappa score was observed to be 0.65, suggesting _substantial agreement_. Our annotation, based on a sample of 600 ads, shows that the classifier yields 109 false positives and 0 false negatives. A substantial number of false positives is expected, given the differences in the definition of political content on Facebook and Twitter. Nevertheless, given that we did not find any false negatives in our sample and the fact that we remove false positives by manually vetting all ads flagged by the political classifier, we believe that the use of this political ad classifier is suitable for the purposes of our study.
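The agreement statistics above can be computed with standard tooling; a small sketch is shown below, where the label arrays are dummy placeholders rather than the authors' annotations.

```python
# Sketch: percent agreement and Cohen's kappa for two annotators' political/not-political
# labels. The label arrays here are dummy placeholders.
from sklearn.metrics import cohen_kappa_score

annotator_a = [1, 0, 0, 1, 0, 0, 1, 0]   # 1 = political, 0 = not political
annotator_b = [1, 0, 0, 1, 0, 1, 1, 0]

agreement = sum(a == b for a, b in zip(annotator_a, annotator_b)) / len(annotator_a)
kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"raw agreement = {agreement:.3f}, Cohen's kappa = {kappa:.3f}")
```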
### Identifying Adult Sexual Ads
To identify adult sexual content in Twitter ads, we use _Google's Perspective API_[13]. Perspective API uses machine learning models to identify different types of abusive content in comments and the perceived impact of a comment on the conversation. It provides scores for the _toxicity_, _insult_, _identity attack_, _threat_, _profanity_, and \(sexually\_explicit\) attributes, each ranging between 0 and 1. The score can be interpreted as the probability of the text being of a specific abusive nature. We focus only on the \(sexually\_explicit\) attribute, which captures "_references to sexual acts, body parts, or other lewd content_".
Each ad tweet is first translated to English (if necessary8) and then passed to the Perspective API to obtain the score for \(sexually\_explicit\) attribute between 0 and 1. To decide a suitable threshold for the \(sexually\_explicit\) attribute, we extract a random sample of 200 ads (20 ad tweets from each 0.1 range of the score from 0 to 1). Each ad in this sample is independently annotated to be sexually explicit or not by two researchers by taking into account Twitter's policy. We find a 95% inter-annotator agreement and Cohen's Kappa score of 0.89, which suggests an _almost perfect agreement_. Next, to evaluate the classification performance, we calculate f1-scores for each 0.1 score range, and we select the threshold as 0.3 for which we observe the highest f1-score (f1-score = 0.65). All the ads with \(sexually\_explicit\) score \(\geq 0.3\) are classified as sexual ads. We obtain a total of 4,991 \(sexually\_explicit\) ads from a total of 24,530 ads in our dataset. For further validation, we manually verify that these 4,991 ads indeed include or refer to sexually explicit content and are in violation of Twitter's Adult Sexual Content Advertising Policy; we observe only 118 (\(\sim\)2.36%) false positive ads. This suggests that only 118 ads are falsely classified as sexually explicit, when they are actually not, showing the effectiveness of using a threshold of 0.3. These 118 ads are discarded to obtain 4,873 \(sexually\_explicit\) ads further studied in Section.
Footnote 8: \(\sim\)50% of the ads are not in English.
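A sketch of this scoring step is shown below; the endpoint and payload format follow Perspective API's public documentation (an assumption on our part), the API key is a placeholder, and the 0.3 threshold is the one selected via the F1 analysis above.

```python
# Sketch: request the SEXUALLY_EXPLICIT score for one (translated) ad text.
# Endpoint and payload format follow Perspective API's public documentation (assumption).
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def sexually_explicit_score(text: str) -> float:
    payload = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"SEXUALLY_EXPLICIT": {}},
    }
    resp = requests.post(URL, json=payload, timeout=10)
    resp.raise_for_status()
    scores = resp.json()["attributeScores"]
    return scores["SEXUALLY_EXPLICIT"]["summaryScore"]["value"]

def is_sexual_ad(text: str, threshold: float = 0.3) -> bool:
    # 0.3 is the threshold selected above via the F1 analysis.
    return sexually_explicit_score(text) >= threshold
```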
### URL Extraction and Classification
URLs are often used in ads by advertisers to promote their products or website. However, understanding the type of links shared as part of an ad is important. Malicious links embedded in the ads can negatively affect many people interacting with them. So, we aim to study the security of URL usage in Twitter ads. Malicious links have largely been
\begin{table}
\begin{tabular}{c l c c} \hline \hline \multirow{2}{*}{**Index**} & \multirow{2}{*}{**Policy Name**} & **Any Country-Specific Exemptions?** & **\# Matched Ads** \\ & & **(as of Oct 13th 2022)** & **(\# Ads \(>\) 0.1 cosine)** \\ \hline
1 & Adult Sexual Content & _None_ & **5631 (4883)** \\ \hline
2 & Alcohol Content & Yes (68 countries) & 1805 (1063) \\ \hline
3 & Drugs and Drug Paraphernalia & _Yes (US, Canada)_ & 841 (441) \\ \hline
4 & Financial Products and Services & Yes (82 countries) & 2118 (1240) \\ \hline
5 & Gambling Content & Yes (43 countries) & 2536 (1710) \\ \hline
6 & Hateful Content & _None_ & **2130 (1310)** \\ \hline
7 & Healthcare & Yes (58 countries) & 1760 (1154) \\ \hline
8 & Inappropriate Content & _None_ & 991 (499) \\ \hline
9 & Malware and Software Downloads & _None_ & 995 (431) \\ \hline
10 & Political Content & _None_ & **1148 (764)** \\ \hline
11 & Prohibited Content for Minors & _None_ & NA \\
12 & Quality & _None_ & NA \\
13 & State Media & _None_ & NA \\ \hline
14 & Tobacco and Tobacco Accessories & _None_ & 1012 (618) \\ \hline
15 & Unauthorized Ticket Sales & _None_ & **1241 (754)** \\ \hline
16 & Unacceptable Business Practices & _None_ & 629 (314) \\ \hline
17 & Weapons and Weapon Accessories & _None_ & **1693 (824)** \\ \hline \hline \end{tabular}
\end{table}
Table 1: Description of the number of country-specific exemptions associated with each policy along with the number of violating ads matched against each policy.
observed in tweets with adult content (Szurdi et al., 2021). However, we perform the analysis on broader data, including ad and non-ad tweets, for comparison. We extract embedded links from the tweet metadata obtained from the API response. Often, the embedded URLs in the tweet are shortened URLs (e.g., t.co, bit.ly, etc.) and do not actually represent the final landing page that is actually embedded. Hence, we visit each of the embedded URL and wait for all the intermediate redirects to complete before extracting the final landing page. We check the landing page URL against _VirusTotal_(VirusTotal, 2004) using their API endpoint to obtain counts for the number of anti-virus services that classify the URL as _malicious_ and/or _suspicious_. To compare the security of the URLs embedded in ads with non-ad tweets, we extract a random sample of non-ad tweets from the 1% dump of size equal to the total number ads in our dataset. We curate only those non-ad tweets that have at least one URL embedded in them and extract the same using the URL crawler as described above. The same steps are repeated for all the URLs embedded in the non-ad tweets.
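A condensed sketch of the URL resolution and look-up step is shown below; the VirusTotal v3 endpoint, URL-identifier encoding, and response fields are assumptions based on its public API, and the API key is a placeholder.

```python
# Sketch: resolve an embedded (possibly shortened) URL to its landing page, then query
# VirusTotal for how many engines flag it as malicious/suspicious.
# VirusTotal v3 conventions (URL id = unpadded base64 of the URL) are assumptions.
import base64
import requests

VT_API_KEY = "YOUR_VIRUSTOTAL_API_KEY"  # placeholder

def resolve_landing_url(embedded_url: str) -> str:
    # Follow all redirects (e.g., t.co, bit.ly) to the final landing page.
    resp = requests.get(embedded_url, allow_redirects=True, timeout=15)
    return resp.url

def virustotal_counts(url: str):
    url_id = base64.urlsafe_b64encode(url.encode()).decode().strip("=")
    resp = requests.get(
        f"https://www.virustotal.com/api/v3/urls/{url_id}",
        headers={"x-apikey": VT_API_KEY},
        timeout=15,
    )
    resp.raise_for_status()
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    return stats.get("malicious", 0), stats.get("suspicious", 0)
```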
### Dataset Characteristics
We observe ads from a total of 10,061 distinct advertisers in our dataset. Figure 2 shows the distribution of distinct advertisers advertising on Twitter each day. On average, the number of distinct advertisers per day is 95 in July, more or less constant (around 125) in August, September, October, and December, and higher in November (135). On the other hand, the number of ads per 100 advertisers increased in November due to an increase in the total number of ads. On average, 146 ads from 121 advertisers are observed daily in the 1% sample.
### Ethical Considerations
Our work relies solely on analyzing publicly available datasets obtained via the Twitter API, hence we do not deal with private sensitive data. Due to this, no additional privacy concerns arise from our work beyond those that are also applicable to the vast research work that analyzes large-scale datasets obtained via the Twitter API (Jhaver et al., 2021; Qayyum et al., 2023). Throughout this work, we performed manual annotations to label advertisements and assess whether they were related to sexually explicit or political content. We acknowledge that this manual work might expose the annotators to sensitive and disturbing content. Hence, we elected to do the annotations within our research team rather than with crowdsourcing workers. This allows us to minimize the exposure to potentially disturbing content to a few researchers within our research team.
## RQ1: Compliance with Twitter Ad Policies
### Compliance with Twitter's Political Content Advertising Policy
To understand the compliance of the ads in our dataset with Twitter's political content advertising policy, we check all 679 political ads against each clause included in the policy; we find 411 ads that violate at least one clause included in Twitter's policy. Note that these clauses are obtained from Twitter's political content policy and are stated in the paper exactly as they are mentioned in the policy. We report the number of violating political ads for each clause in Table 3. Overall, ads that include political content are prohibited on Twitter, however, some exemptions allow advertisers to promote political ads. Here, we perform a systematic analysis to assess whether these exemptions are satisfied by the political ads present in our dataset by looking into each clause that exists in Twitter's political content Advertising Policy. Particularly, political ads are allowed when an advertiser is using the official Twitter handle of a news publisher or affiliated journalist provided that the news publisher or the journalist's handle is approved by Twitter to run political ads
\begin{table}
\begin{tabular}{l l l} \hline \hline
**Event** & **Date** & **Description** \\ \hline TwBuyPr & 2022-10-04 & Musk proposes to buy Twitter \\ TwAcqEM & 2022-10-27 & Twitter’s acquisition by Elon Musk is completed \\ Layoff1 & 2022-11-04 & 50\% of Twitter’s employees laid off (15\% of moderation team) \\ Layoff2 & 2022-11-15 & Musk fires employees talking negatively about him \\ Layoff3 & 2022-11-19 & Twitter employees leave the company rather than accept Musk's "hardcore work" ultimatum \\ AmnSup & 2022-12-01 & Decision to grant general amnesty to suspended accounts is executed \\ \hline \hline \end{tabular}
\end{table}
Table 2: Important Twitter events that co-occurred with the period of our data collection (Elon Musk’s Twitter Takeover: A Timeline Of Events).
Figure 2: Distribution of advertisers and ads made per 100 advertisers on Twitter over the period of data collection after rehydrating tweets. _TwAcqEM_ refers to Twitter’s acquisition by Elon Musk as described in Table 2 along with other important events.
(clause B). However, an advertiser should not be any type of political entity as described in clause A2. We initially assume that every advertiser running a political ad is approved for the same and evaluate other clauses of the policy to determine if the assumption holds or if it is in violation of the policy. We observe 13 ads violating clause A2 and being made by some political entity like the Mayor of London - Sadiq Khan (Figure 3: Tweet D), GOP Chairwoman - Ronna McDaniel, U.S. Senator - Cindy Hyde-Smith, etc. An entity is determined as political or not by manually checking their Twitter bio or using Google Search to obtain more information about the entity. The policy also disallows any political ad with advocacy for or against some political topic like financial support, votes, etc., irrespective of whether a Twitter handle has the exemption (clause C). We observe 58 ads in violation of this clause via manual inspection. Some of the ads include advocacy for votes (Figure 3: Tweet A), derogatory remarks on political candidates (Figure 3: Tweets B, C), and criticism on political reforms (Figure 3: Tweet D), among many others.
To obtain an exemption for political advertising, a news publisher can apply for it by providing details described in clauses D1-D8. We visit the news publications' website to make these determinations. We observe various ads where we could not verify one or more of these clauses through stringent manual validation and hence consider the corresponding ad to be a violation as this information is expected to be publicly available. It is surprising that despite Twitter having all the information related to these 8 clauses, it is still not able to effectively moderate violation of these clauses. Clause D8 describes the web-traffic-based eligibility of an advertiser to determine if they shall be permitted to run political ads nationally or globally. We do not have sufficient data from Twitter to ascertain if some advertiser was running their ad campaign nationally or globally during our period of evaluation. However, we overcome this in a conservative manner by tagging an advertiser associated with violations pertaining to this clause by checking if _simlarweb.com_ numbers of the advertiser are greater than 100K or not. If they do not have even 100K traffic then they should not have been allowed to run ads nationally or globally. To avoid false classification, we check the monthly traffic numbers for the past 3 months from the date of ad. Furthermore, we observe that a large number of Twitter accounts making political ads are not certified (clause E2) (Figure 3: Tweets A,C). Twitter's political content advertising policy uses 'certified' that we assume to be the same as 'Twitter verified' and use the information returned by the Twitter API in the tweet metadata to annotate if the advertiser is certified/verified or not. Out of 201 ads that we annotate to be in violation of the certification clause, most of the ads violate one or more of the other clauses as well and only 15 ads violate only the certification clause. Note that we do not verify clause E5 due to lack of complete information and because studying violation of generic Twitter _Terms of Service_ is out-of-scope. Breakdown of Twitter's political content advertising policy into various sub-clauses resulted in inferring a total of 411 violating political ads on the platform. It is important to note that violation of clauses A2, B or C is more severe than D1-D8. Even amongst A2, B, and C, violation of C can be considered more severe than clause A2 or B. However, automated detection of violations pertaining to clause C is also more difficult than capturing clauses of type-D - as the platform needs to rely on human experts to make the final determination about clause C.
To further analyze the distribution of violating ads geographically, we use the country of the advertiser from the advertiser's location metadata. Figure 4 depicts the distribution of Top 20 countries of violating advertisers. Interestingly, we find that a large number of political ad violations originate from advertisers in Turkey; 48.4% of all political ad violations are from Turkish advertisers. Since only 5% of the ads are in Turkish, this finding highlights that the number
Figure 3: Examples of some violating political ads.
of violations in Turkish is disproportionately higher than the number of ads in Turkish. Overall, these results suggest that Twitter's ad moderation and approval procedures are likely less effective in less widely used languages such as Turkish than in English.
Next, we analyze the content of the violating tweets to see if they share common themes of discussion. To capture the semantics of violating political ads, we use a textual clustering approach following Hoseini et al. [12]. First, we generate a 512-dimensional BERT-based embedding for each violating ad using the model described in Section. Next, embeddings are reduced to a 128-dimensional space using UMAP [13] for effective density-based clustering. We use HDBSCAN to cluster these embeddings. We experiment with 1224 combinations of 4 hyperparameters -- \(min\_cluster\_size\), \(min\_samples\), \(metric\), and \(cluster\_selection\_method\) -- and compute the DBCV (Density-Based Clustering Validation) score for each combination to evaluate the quality of the resultant clusters. The DBCV score captures the relative density connection between pairs of points and ranges from -1 to 1, where a higher value suggests better clusters. We discard combinations with DBCV \(<\) 0.1 as we observe poor clustering results for these cases. Figure 5(a) depicts the remaining combinations. We select the combination (\(min\_cluster\_size\)=6, \(min\_samples\)=2, \(metric\)=euclidean, \(cluster\_selection\_method\)=eom) that produces the highest number of clusters (24) with the least noise (97 points).
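A condensed sketch of this clustering pipeline is shown below; the embeddings file is a placeholder, the hyperparameter values are the selected ones, and DBCV is approximated by HDBSCAN's relative_validity_ score, which requires building the minimum spanning tree.

```python
# Sketch: reduce 512-d ad embeddings to 128-d with UMAP, cluster with HDBSCAN, and
# score the clustering with a DBCV-style measure (hdbscan's relative_validity_).
# "embeddings" is assumed to be an (n_ads, 512) numpy array of violating-ad embeddings.
import numpy as np
import umap
import hdbscan

embeddings = np.load("violating_political_ad_embeddings.npy")  # placeholder path

reduced = umap.UMAP(n_components=128, random_state=42).fit_transform(embeddings)

clusterer = hdbscan.HDBSCAN(
    min_cluster_size=6,
    min_samples=2,
    metric="euclidean",
    cluster_selection_method="eom",
    gen_min_span_tree=True,   # needed for the relative_validity_ (DBCV-style) score
)
labels = clusterer.fit_predict(reduced)

n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
n_noise = int(np.sum(labels == -1))
print(f"clusters={n_clusters}, noise={n_noise}, DBCV={clusterer.relative_validity_:.3f}")
```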
We visualize the clusters using a graph where nodes represent distinct clusters obtained above and edges connect similar pairs of clusters if their inter-cluster cosine similarity is greater than 0.9. Next, we run Louvain method for community detection [12] on the generated cluster
Figure 4: Distribution of ads by Top 20 countries of the advertisers who make political tweets with the most violations of Twitter’s political advertising policy.
\begin{table}
\begin{tabular}{c l c} \hline \hline
**ID** & **Political Content Advertising Policy Breakdown into Clauses** & **Violating Ads** \\ \hline A2 & Advertiser is not a candidate, political party, or elected or appointed government official & 13 \\ \hline B & Advertiser is a news publisher or reporter/journalist & 104 \\ \hline C & Content should not include advocacy for or against political topic & 58 \\ \hline D1 & Contact information is available on their website & 31 \\ D2 & “About” information is available on their website & 185 \\ D3 & Dedicated reporter or editorial staff information is available on their website & 90 \\ D4 & The publication has a searchable archive available on their website & 40 \\ D5 & The publication is not primarily a customer-generated or aggregated content platform & 0 \\ D6 & The publication is dedicated to publishing news & 118 \\ D7 & The publication is not dedicated to advocating on a single issue & 7 \\ D8 & To advertise globally, the publication’s website must have at least 3M monthly unique visitors to advertise & 113 \\ D8 & worldwide or at least 100K monthly unique visitors to advertise in a country as per similarweb.com & 201 \\ \hline E1 & Certified Ads Account of news publisher or reporter/journalist & 201 \\ E2 & Profile photo, header photo, and website must be consistent with the @username’s online presence. & 0 \\ E3 & Profile includes journalist or publisher’s name & \\ E3 & Bio must include a website that provides valid contact info. & 10 \\ E4 & Bio discloses affiliation to news publisher and links to the publication’s website. & 0 \\ E4 & If @username name is not related to the certified entity, the bio must include the following disclaimer: & 0 \\ & “Owned by [certified entity name]” & \\ E5 & Comply with Twitter Rules \& Terms of Service. & N/A \\ & Reporter/Journalist must be listed in their parent company’s website. & \\ \hline \hline \end{tabular}
\end{table}
Table 3: Description of different sub-clauses of Twitter’s political content advertising policy and count of violating ads detected for each of these sub-clauses. Total violating political ads were 411 out of 679 in our data.
graph to obtain stronger insights about the clusters for better evaluating them in a qualitative manner. Each cluster is further annotated by using the textual content of all ads within that cluster. The ad text is first pre-processed to remove less relevant characters like punctuations or stop words. The obtained tokens are then used to extract representative 3-gram annotation for each cluster using the BERT-based keyword extraction (KeyBERT) method.
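Continuing the clustering sketch above, the cluster graph, Louvain communities, and KeyBERT labels could be produced roughly as follows; using mean embeddings as cluster centroids, and the variable names reduced, labels, and ad_texts, are our assumptions.

```python
# Sketch: build a graph over clusters (edge if centroid cosine similarity > 0.9),
# detect Louvain communities, and label each cluster with a representative 3-gram.
# Assumes `reduced` and `labels` from the clustering sketch and `ad_texts` (list of ad strings).
import numpy as np
import networkx as nx
from networkx.algorithms.community import louvain_communities
from keybert import KeyBERT

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

cluster_ids = sorted(set(labels) - {-1})
centroids = {c: reduced[labels == c].mean(axis=0) for c in cluster_ids}

G = nx.Graph()
G.add_nodes_from(cluster_ids)
for i, c1 in enumerate(cluster_ids):
    for c2 in cluster_ids[i + 1:]:
        if cosine(centroids[c1], centroids[c2]) > 0.9:
            G.add_edge(c1, c2)

communities = louvain_communities(G, seed=42)

kw_model = KeyBERT()
for c in cluster_ids:
    cluster_text = " ".join(t for t, l in zip(ad_texts, labels) if l == c)
    keywords = kw_model.extract_keywords(cluster_text, keyphrase_ngram_range=(3, 3), top_n=1)
    print(c, keywords)
```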
Figure 6 depicts the resultant clusters with the different Louvain communities. The community represented in green captures violations pertaining to Mexico and South American regions, covering ads focused on issues like the increase in judicial executions in Casanare and the assassination of presidential candidate Villavicencio from Ecuador. The community in blue represents ads related to the AKP party in Turkey, the assassination attempt on former Pakistani PM Imran Khan, and some Armenian protests. The cluster in orange represents the opposition party in Turkey led by Kemal (CHP), with some ads even mentioning AKP leader and current president Erdogan. The other, smaller pink clusters represent country-specific president-related ads. We also observe another ad cluster in yellow, which captures forward-looking events like the future of the CHP party, upcoming elections, propaganda claiming that Biden is dangerous, and campaigns for Donald Trump's re-election; such ads can be clearly observed despite the existing political advertising prohibitions.
Next, we study the temporal distribution of these political violations in Figure 7, representing violations normalized by the total number of ads run on a respective day. On average, 2% of the total ads run on any day violate Twitter's political content ad policy with violations varying mainly from 1% to 5% of the total ads per day on different days. Political ad violations clearly increased to 3% in December 2022. This could be due to the _AmnSup_ event (Table 2) - where suspended Twitter accounts such as that of the former US president Donald Trump were restored. It could be that such restored accounts started engaging in promoting violating political content. However, it is beyond the scope of our research to investigate which accounts were unsuspended, and hence we do not pursue it in detail.
It is important to look at the type of advertisers engaging in political ad violations so as to understand the reach of these violating ads. For this, we compare the popularity (i.e., the follower distributions) of violating political advertisers with the non-violating political advertisers in Figure 8. Up to 70% of the advertisers with less than 1M followers that are violating political ad policy have lesser followers (i.e., are less popular) than advertisers that advertise non-violating political ads. This suggests that Twitter should improve their compliance and periodic checks on the political ads promoted from less popular accounts. Violating and non-violating advertisers with greater than 1M followers follow more or less a similar trend.
### Compliance with Twitter's Adult Sexual Content Advertising Policy
In this section, we study the compliance of ads with Twitter's adult sexual content advertising policy. Using the methodology described in Section 5 we identify 4,991 (\(\sim\)20.35%)
Figure 5: Scatter plot showing number of clusters and noisy data points obtained for different hyperparameter combinations (with DBCV score \(>=0.1\)) from HDBSCAN for (a) political and (b) sexual clustering of violating ads. Red star represents the selected optimal combination.
Figure 6: Clustering of ads in violation of Twitter’s political content advertising policy. The color represents the different Louvain communities detected.
ads that have a _sexually_explicit_ score of 0.3 or above and manually remove 118 false positive ads to obtain 4,873 _sexually_explicit_ ads. We find more than 75% of ads in our data have _sexually_explicit_ score of 0.05 or less, while \(\sim\)10% of ads have scores as high as \(>\)0.9 (Figure 19).
We also look at the language used in sexual ads. Figure 9 shows that, unlike political ad policy violations, sexual ad policy violations are the most common in _English_ (68.4%), followed by _Arabic_ (15.8%), _Indonesian_ (10.1%), and _Japanese_ (3.4%). This highlights the stark differences in violations across policies; while our work shows that many political ad violations are in the Turkish language, that is not the case for the violations related to sexually explicit ads.
Next, we look into how these violations occur over time. Figure 10 shows the number of violations per day in our dataset. We can observe that from late October 2022, the number of violating sexual ads increased significantly. One plausible reason for this could be the layoff of 50% of Twitter employees in multiple rounds after Elon Musk's Twitter acquisition, which also included a layoff of 15% of Twitter's moderation staff on November 4, 2022 (Peters 2022) (_Layoff1_). Twitter reassured that the platform's content moderation would not be impacted; however, our results likely indicate that the moderation of sexually explicit ads is not as effective as before.
To characterize the content of the sexual ads, we first perform clustering of the violating ad content, following exactly the same steps as described for violating political ads in Section. Figure 5b depicts the clustering results for hyperparameter combinations having DBCV score \(>=\) 0.1 out of 1748 total combinations. We select the combination (\(min\_cluster\_size\)=7, \(min\_samples\)=2, \(metric\)=euclidean, \(cluster\_selection\_method\)=eom) that produces the highest number of clusters (188) with least noise (1663 points). Figure 11 represents the output clusters along with the detected Louvain communities. Qualitative analysis of ads corresponding to different clusters clearly reveals grouping of ads based on distinct adult content specific categories - for example, the light purple community in the bottom right represents all the ads that promote _indo-porn_ content.
Upon manually evaluating the ads in the violating clusters, we observe that advertisers follow two main types of content structures. In the first, the textual content of the ad begins with a period (i.e., ".") followed by different space-separated adult-content-related words. Maddocks (Maddocks 2020) observed this type of bot-based text pattern in normal tweets. However, it is surprising to see this pattern in Twitter ads, as it suggests a lack of effective ad moderation. The majority of ads with this structure only contain textual content and do not have any embedded links or images. 3,122 out of 4,873 (i.e., \(\sim\)64%) sexual ads belonged to this category. Upon analyzing the usernames of the advertiser accounts in this sub-type, they all followed the same naming pattern - some hypothetical first name followed by a last name without any spaces and with the first letter of both names capitalized, for instance _MilliexChanax_. Upon mapping different advertisers' _author_id_ to their _username_, we observed that, to avoid detection, an advertiser account (i.e., a given _author_id_) kept changing its _username_ at varying intervals. We observed as many as 49 _usernames_ mapped to a single _author_id_. The remaining ads contain textual content describing the scenario displayed in the linked porn video, with catch text aimed at enticing the viewer's
Figure 8: CDF of the Twitter followers of all political advertisers.
Figure 7: Distribution of the normalized percentage of violating political ads created per day with respect to the total number of ads run on that day. See Table 2 for a description of the annotated events.
Figure 9: Language distribution of the violating sexual ads
This sub-category of ads involved scenarios where the advertisers used some dark patterns and embedded unsafe links in violation of Twitter's quality policy9 for advertising. A standard embedded video within a tweet should always be playable on Twitter itself when its play button is clicked. Some advertisers embed a link in the ad with a preview containing an image with a play button, deceiving the viewers into thinking it is a normal playable video, so that when they click on the play button, it redirects them to the embedded link. Twitter also moderates videos, so to evade detection, some advertisers embed an adversarial playable porn video edited to have a static foreground overlaid on it, such that humans can still perceive the video contents while the moderation framework falsely classifies it as non-violating. These tactics are strictly against Twitter's quality policy. Nevertheless, the specific behavioral insights into non-compliance discussed here can help platform X improve its ad moderation and compliance framework.
Footnote 9: [https://business.twitter.com/en/help/ads-policies/ads-content-policies/quality-policy.html](https://business.twitter.com/en/help/ads-policies/ads-content-policies/quality-policy.html)
Next, we perform a security analysis of the URLs embedded within all the ads in our data. Out of 24,530 ads, 20,362 ads had at least one URL embedded within the tweet. We observed a common usage of URL shortening services to shorten the original URL and then embed it. Besides the benign functionality of reducing the length of long URLs, advertisers could also use them to hide malicious URLs in their ads. As a result, we analyze the security of both the embedded URLs and the final landing URLs reached by clicking on an embedded link. Using the VirusTotal-based classification of each embedded and landing URL, we calculate a score as follows:
\[Score=\max(mal_{e}+sus_{e},\;mal_{l}+sus_{l}) \tag{1}\]
Here, \(mal_{e}\) and \(mal_{l}\) are the counts of VirusTotal services that classified the embedded and landing URL of the current tweet, respectively, as malicious. Similarly, \(sus_{e}\) and \(sus_{l}\) are the counts of VirusTotal services that classified the embedded and landing URL of the current tweet, respectively, as suspicious. Then, we treat a URL as problematic if the above score is greater than or equal to 3. We experimented with different values for this threshold - with a threshold of 2, many benign links were also classified as problematic; raising the threshold from 3 to 7 decreased the number of problematic tweets by up to 11%. As a result, we chose the score threshold as 3 to avoid false negatives. Figure 12 depicts the number of tweets with problematic URLs with respect to the daily total ads after the rehydration. Unlike the significant increase from late October 2023 (as seen in Figure 10), we observe an increase in ads with problematic URLs from December 2022 onwards. This is likely due to the _AnnSup_ event discussed in Section.
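As a concrete reference, the following is a minimal sketch of the per-ad scoring and thresholding described above; the function and variable names are illustrative rather than taken from our analysis code.

```python
def url_risk_score(mal_e, sus_e, mal_l, sus_l):
    """Eq. (1): worst case between the embedded-URL and landing-URL verdicts."""
    return max(mal_e + sus_e, mal_l + sus_l)

def is_problematic(mal_e, sus_e, mal_l, sus_l, threshold=3):
    """Flag an ad when at least `threshold` VirusTotal services mark its
    embedded or landing URL as malicious/suspicious."""
    return url_risk_score(mal_e, sus_e, mal_l, sus_l) >= threshold
```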
It is important to understand how the URL usage of violating sexual ads differs from that of the other ads. As a result, we plot a scatter chart between the sum of (malicious + suspicious) VirusTotal services and _sexually_explicit_ scores of embedded URLs and landing URLs in Figure 13. Let us first look at the non-sexual ads (i.e., \(sexual\_explicit<0.3\)) - surprisingly, for both embedded and landing URLs, we obtain 103 ads whose URL is problematic. For sexual ads (i.e., \(sexual\_explicit>=0.3\)), only 3 ads contain a problematic embedded link, while 345 ads contain benign embedded URLs that lead to unsafe landing pages.
Upon closely analyzing different kinds of landing URLs, we observed a variety of potentially harmful scenarios. First, the landing URL is sometimes reached only after a number of redirects. Moreover, Twitter only expands the embedded URLs which are converted by Twitter's link service and warns its viewers about any harmful page by checking it against a list of potentially dangerous sites it maintains before proceeding [22]. However, it does not currently check the landing URLs. Landing URL pages comprise normal porn sites, fake reCAPTCHA and consent options, system-infected warning screens, anti-virus download prompts, online game sites (including interactive sexual games), sex-baity sites, youtube channel promotions, forex market trading publisher websites, betting websites (like bet365 offers and registration), etc.
**Takeaways.** We summarize the takeaways from our audit of Twitter's political and sexual content advertising policies:
* We observe that \(\sim\)60% of political ads run on Twitter violate their _political content advertising policy_, while roughly 20% of all the ads run on Twitter are sexually explicit and in violation of their _adult sexual content advertising policy_. Political ad violations are more prominent with less popular advertisers than the popular ones.
* Surprisingly, the highest number of violating political ads (\(\sim\)48%) come from Turkish advertisers, although Turkish is not among the top-3 most prevalent advertising languages. Also, the majority of sexual ads are run in English, Arabic, Indonesian and Japanese.
* The simplest political content policy clauses, like the existence of 'about' pages and 'certified accounts', are not properly validated by Twitter before allowing an advertiser to run political ads. More serious violations include ads run by some 'political entity' or ads containing 'advocacy' for or against some political topic.
Figure 10: Daily Distribution of the ads violating Twitter’s _adult sexual content policy_. See Table 2 for a description of the annotated events.
Figure 11: Clustering of sexually explicit ads in violation of Twitter’s adult sexual content policy.
Figure 12: Distribution of the problematic ads with respect to all the ads run on each day on Twitter. Here, problematic refers to classifying either the embedded or the landing URLs of an ad as malicious or suspicious or both by 3 or more VirusTotal services. See Table 2 for a description of the annotated events.
Figure 13: Scatter diagram of (malicious + suspicious) counts of VirusTotal (VT) services for embedded URLs and landing URLs partitioned with respect to the threshold of _sexually_explicit_ score (i.e., 0.3) of the ad tweet containing those URLs.
* A considerable number of sexual ads contain problematic URLs - most of which embed benign shortened URLs which lead to malicious landing pages.
## RQ2: Impact of Regime Change on Twitter Advertising
Twitter's acquisition by Elon Musk was one of the transforming events for Twitter, and it likely impacted the advertising on the platform [14]. Next, we analyze the impact of this acquisition on the Twitter advertising ecosystem.
We start our analysis by looking at the general advertising trend on Twitter pre- and post-acquisition. Figure 14 shows the number of ad tweets per day, before and after the 14-day rehydration, for the duration of our data collection. Overall, we observe that the number of ads increased after Twitter's regime change, from \(\sim\)190 tweets per day (before acquisition) to \(\sim\)290 tweets per day (after acquisition), as shown by the solid black line. Also, we observe that a large majority of ads during the post-Musk period were potentially in violation of Twitter's policies and were hence removed during the 2-week period. Therefore, after rehydrating the tweets, we observe a significant decrease in the number of ads that are still publicly available, to \(\sim\)180 ads per day during November.
To understand the impact of regime change at the advertiser level, we look into the number of ads created by each advertiser in our dataset before and after Musk's Twitter acquisition. Specifically, for each advertiser, we first calculate the average number of ads per day before and after the acquisition; we do this to normalize the raw frequency of ads, since we have fewer days in the post-acquisition period. We refer to these normalized ad values as normalized pre-Musk ads and normalized post-Musk ads. Then, for each advertiser, we calculate the fraction of normalized pre-Musk ads to normalized post-Musk ads; for advertisers that do not have any ads post-Musk, we set this fraction to -1. Figure 15 shows the CDF of the calculated fraction for each advertiser in our dataset.
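The per-advertiser fraction can be computed as in the following minimal sketch; the function name is illustrative, and the conventions of returning -1 for advertisers with no post-Musk ads and 0 for advertisers with no pre-Musk ads follow the description above.

```python
def normalized_fraction(pre_ads, post_ads, pre_days, post_days):
    """Fraction of normalized pre-Musk to post-Musk ads per day for one advertiser.
    Returns -1 when the advertiser ran no ads post-Musk (by convention);
    advertisers with no pre-Musk ads naturally obtain a fraction of 0."""
    pre_rate = pre_ads / pre_days
    post_rate = post_ads / post_days
    return -1.0 if post_rate == 0 else pre_rate / post_rate
```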
We make several interesting observations from Figure 15. First, we observe that 55.57% of distinct advertisers on Twitter have a normalized fraction of -1, suggesting that these advertisers stopped advertising post-Musk's takeover. Some notably reputable brands in our data include Twitter handles like @AppleEDU, @LinkedIn, @Spotify, @IBM, @Nike, @FortuneMagazine, @EASPORTSIFIA, @ericsson, @CHANEL, etc. Second, 28.18% of advertisers have a normalized fraction equal to zero, which indicates that they did not advertise before Musk's Twitter acquisition but started running ads afterward. This category includes Twitter handles like @amazonemex, @sabah, @superhaber, @facebook, @NetflixID, etc. Interestingly, 25% of Twitter handles in this category are the ones that have been advertising sexual content as described in Section. Also, 12.73% of advertisers increased Twitter advertising after Musk's acquisition, while 3.52% of advertisers decreased Twitter advertising after Musk's acquisition. Companies like @YouTube, @Forbes, @WSJ, @Huawei, etc. increased their advertising on Twitter, while @amazon, @BestBuy, @Inc, @ESPNCricinfo, @Citibank, @AppleTVPlus, etc. are the prominent players (in terms of popularity based on Twitter followers) that reduced their Twitter advertising.
Next, we aim to analyze the distribution of Twitter advertisers under the described four groups of advertisers (started advertising, stopped advertising, increased advertising activity, and decreased advertising activity in the post-Musk period), with respect to their popularity. We aim to provide answers to questions like "Did popular advertisers stop or start advertising after Musk's acquisition?" To shed light on
Figure 14: Distribution of ads on Twitter over the period of data collection before and after the rehydration (or refresh). See Table 2 for a description of the annotated events.
Figure 15: CDF of each advertiser’s normalized fraction of the number of ads per day before Musk’s takeover and after Musk’s takeover. -1 refers to advertisers with no ads in the post-Musk period, while 0 refers to advertisers with no ads in the pre-Musk period.
this question, we plot the CDF of the number of followers for each advertiser in Figure 16. We find that, in general, advertisers that are active in both the pre-Musk and post-Musk periods tend to have statistically significantly more followers (42K median followers) compared to the other two groups - advertisers that discontinued ads (KS-stat = 0.2545, p = 4.56e-20) and advertisers that newly started ads (KS-stat = 0.2625, p = 1.21e-20). Advertisers that discontinued ads and started ads have 7.5K and 3.4K median followers, respectively. Also, we find that the popularity (i.e., number of followers) of advertisers that stopped advertising after Musk's acquisition is higher than that of advertisers that started advertising only after Musk's acquisition, with statistical significance (KS-stat = 0.0925, p = 5.94e-15).
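The significance comparisons above rely on two-sample Kolmogorov-Smirnov tests; a minimal SciPy sketch is shown below, with illustrative names (the follower lists are assumed to be collected per advertiser group).

```python
from scipy.stats import ks_2samp

def compare_follower_distributions(followers_group_a, followers_group_b):
    """Two-sample KS test between follower-count distributions of two advertiser
    groups (e.g., 'active in both periods' vs. 'stopped post-Musk')."""
    stat, p_value = ks_2samp(followers_group_a, followers_group_b)
    return stat, p_value
```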
Big brands like Spotify, Apple, Amazon, etc., have a number of Twitter handles for different products and regions. We observed that the change in advertising behavior differs across Twitter accounts of the same company. For example, the handle @Spotify completely discontinued advertising post-Elon Musk's acquisition; however, the handles @SpotifyAfrica, @SpotifyTurkiye, and @SpotifyBrasil, which never advertised on Twitter earlier, started advertising after Musk's takeover. On the other hand, @SpotifyJP and @SpotifyMexico increased Twitter advertising. It is important to note that here we define increase and decrease only based on the time period we collected the data for. It could have been that some account was advertising on Twitter 1 year prior to our data collection but was dormant in recent months until it restarted its ads after Musk's acquisition of Twitter. To rule out the possibility that some specific account only started advertising on Twitter post-Musk because it was created on Twitter post-Musk, we also analyzed the account creation dates of different advertisers with respect to the dates on which they advertised as per our data. For instance, all the Spotify accounts discussed above were created on Twitter between 2008 and 2020, long before our data collection. In general, out of 10,221 distinct advertisers in our data, \(\sim\)81.2% of accounts were created before the start of our data collection and \(\sim\)98% of accounts were created before Twitter's acquisition by Musk. As a result, we can conclude that the advertising trend we discussed with respect to the Twitter event is not biased by the fact that accounts could have been created after the acquisition. Additionally, we observe that 68.9% of the advertising accounts that were created after we started the data collection were mostly spam accounts spreading sexual content, as aforementioned in Section.
**Takeaways.** Twitter's acquisition by Elon Musk was an important event in Twitter's landscape. The main take-away points regarding the regime change on Twitter advertising are:
* We observed Twitter advertising activity increase by 45.76% after the acquisition, and 29.68% of the new ads were removed during the 14-day rehydration period, perhaps due to their problematic nature.
* We observe that \(\sim\)55% of advertisers stopped advertising post-Musk, \(\sim\)28% newly started advertising post-Musk, \(\sim\)13% increased advertising, and \(\sim\)4% reduced Twitter advertising activity.
* Advertisers that newly started advertising after the takeover are mostly the less popular ones, while the more popular brands have continued (i.e., decreased or increased) Twitter advertising.
## Conclusion & Discussion
In this paper, we conducted a large-scale investigation of problematic advertising on Twitter, mainly focusing on identifying ads that violate Twitter's policies regarding political and sexual content. Also, we investigated the effect of Elon Musk's Twitter acquisition on the platform's advertising ecosystem. Among other things, we find that a large percentage of political ads (60%) violate at least one clause in Twitter's political content policy, and we find a large percentage of ads (20%) that share sexually explicit content. Also, we find that most of the political ad violations originate from Turkish advertisers. With regards to the effect of Elon Musk's acquisition, we find that a large percentage (55%) of advertisers did not advertise on Twitter after the acquisition (for the period of our dataset), while at the same time, most popular advertisers remain active on Twitter advertising even after the acquisition.
Our work and findings have several important implications for stakeholders interested in online advertising and content moderation. Below, we discuss our work implications related to data access, content moderation of online advertising, as well as advertising policy violations.
**Data Access.** Our work relied on access to the Twitter API, and to identify ads, we leveraged the source attribute that was made available via the API to indicate the application used to share their tweet. Unfortunately, during our data collection, our free access to the Twitter API and the availability of the source attribute were cut off, mainly due to changes made on the Twitter platform after Elon Musk's acquisition. These changes to data access have detrimental effects on research efforts aiming to understand and mitigate harmful online phenomena and harm transparency efforts.
Figure 16: CDF of advertiser’s Twitter followers. This distribution is shown separately for all advertisers following different cases of advertising patterns with respect to Elon Musk’s takeover. \(N_{frac}\) represents the fraction of normalized pre- to post-Musk ads per day for a given advertiser.
We argue that, as a research community, we need to work closely with policymakers to ensure that the platforms are accountable for harms arising from their services and must provide the necessary means for researchers to audit their services. Indeed, recently, the EU Commission released the Digital Services Act (European Commission 2023), which calls for such audits; we believe this is an essential step in the right direction. At the same time, we believe there is still a long way to go about finding ways to incentivize online platforms to be transparent and allow access of their data to researchers.
**Content Moderation.** Our results show that a substantial percentage of online advertisements violate Twitter's policies, which likely indicates that Twitter's content moderation on advertisements is not adequate or accurate. More worrisome is that our analysis shows that content moderation and enforcement of Twitter policies are inadequate for less popular languages like Turkish. Our findings highlight the challenges of large-scale and diverse content moderation systems; that is, an advertisement that violates Twitter policies is likely to remain on the platform and not get moderated simply because it is written in a language other than English. Overall, this prompts the need to develop more accurate multi-lingual content moderation systems to ensure that the platform's governance is applied fairly and accurately across the entire population of the platform's user base. To address these content moderation issues, social media platforms should employ more content moderators from less popular countries/languages like Turkey to effectively moderate the considerable volume of violating ads from such regions. Moderating violating ads benefits the platform itself, given that the platform's users will not get exposed to violating and potentially harmful content, while at the same time, they will have more trust in the platform and its moderation processes. We believe that our findings related to characteristics of violating advertisers and their creatives will help improve their existing moderating systems by creating heuristics to detect frequent username rotations by fixed \(author\_id\) for advertiser-owned accounts, detecting sexual ads based on the presence of adult content-specific keywords in their text, etc.
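As a starting point for the moderation heuristics suggested above, the following is a minimal sketch of detecting frequent username rotations for a fixed _author_id_; the function name, input format, and the threshold of 5 distinct usernames are illustrative assumptions.

```python
from collections import defaultdict

def username_rotations(ads, min_usernames=5):
    """Map each advertiser's author_id to the set of usernames it has used,
    and flag accounts that rotate through many handles (a possible evasion signal).
    Each ad is assumed to be a dict with 'author_id' and 'username' fields."""
    handles = defaultdict(set)
    for ad in ads:
        handles[ad["author_id"]].add(ad["username"])
    return {aid: names for aid, names in handles.items() if len(names) >= min_usernames}
```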
**Other Advertising Policy Violations.** In our work, we focused on auditing the compliance of ads with Twitter's Political and Adult Sexual Content advertising policies. Twitter has a larger set of policies for other kinds of content, which are not systematically studied in this work. During our work, we noticed several violations of other policies worth mentioning. Specifically, among other violations, we observed cases of violation for _Inappropriate Content Policy_ - we see ads with'misrepresentative content' (clickbaity ads), ads showing violence, use of weapons, and physical assault. Also, we found cases of violations for the _Hateful Content Policy_; e.g., an ad using hateful speech to attack a hospital doctor. Other violations we observed are violations with the _Unacceptable Business Practices Policy_ where misleading information and bad practices were used by advertisers to advertise their products; e.g., an ad used a prohibited practice of showing 'before' and 'after' images to advertise their chest-busting inner. Relating to _Healthcare Content Policy_, we find advertisements regarding supplements for weight loss, which is disallowed by Twitter. As part of our future work, we aim to systematically audit other advertising policies across multiple social media platforms to provide suggestions to platforms aiming to improve their platform governance and content moderation procedures.
|
2309.13857 | Adversarial Attacks on Video Object Segmentation with Hard Region
Discovery | Video object segmentation has been applied to various computer vision tasks,
such as video editing, autonomous driving, and human-robot interaction.
However, the methods based on deep neural networks are vulnerable to
adversarial examples, which are the inputs attacked by almost
human-imperceptible perturbations, and the adversary (i.e., attacker) will fool
the segmentation model to make incorrect pixel-level predictions. This will
raise security issues in highly-demanding tasks because small perturbations
to the input video will result in potential attack risks. Though adversarial
examples have been extensively used for classification, it is rarely studied in
video object segmentation. Existing related methods in computer vision either
require prior knowledge of categories or cannot be directly applied due to the
special design for certain tasks, failing to consider the pixel-wise region
attack. Hence, this work develops an object-agnostic adversary that has
adversarial impacts on VOS by first-frame attacking via hard region discovery.
Particularly, the gradients from the segmentation model are exploited to
discover the easily confused region, in which it is difficult to identify the
pixel-wise objects from the background in a frame. This provides a hardness map
that helps to generate perturbations with a stronger adversarial power for
attacking the first frame. Empirical studies on three benchmarks indicate that
our attacker significantly degrades the performance of several state-of-the-art
video object segmentation models. | Ping Li, Yu Zhang, Li Yuan, Jian Zhao, Xianghua Xu, Xiaoqin Zhang | 2023-09-25T03:52:15Z | http://arxiv.org/abs/2309.13857v1 | # Adversarial Attacks on Video Object Segmentation with Hard Region Discovery
###### Abstract
Video object segmentation has been applied to various computer vision tasks, such as video editing, autonomous driving, and human-robot interaction. However, the methods based on deep neural networks are vulnerable to adversarial examples, which are the inputs attacked by almost human-imperceptible perturbations, and the adversary (i.e., attacker) will fool the segmentation model to make incorrect pixel-level predictions. This raises security issues in highly-demanding tasks because small perturbations to the input video will result in potential attack risks. Though adversarial examples have been extensively used for classification, they are rarely studied in video object segmentation. Existing related methods in computer vision either require prior knowledge of categories or cannot be directly applied due to the special design for certain tasks, failing to consider the pixel-wise region attack. Hence, this work develops an object-agnostic adversary that has adversarial impacts on VOS by attacking the first frame via hard region discovery. Particularly, the gradients from the segmentation model are exploited to discover the easily confused region, in which it is difficult to distinguish the pixel-wise objects from the background in a frame. This provides a hardness map that helps to generate perturbations with a stronger adversarial power for attacking the first frame. Empirical studies on three benchmarks indicate that our attacker significantly degrades the performance of several state-of-the-art video object segmentation models.
Video object segmentation, adversarial attack, perturbation, hard region discovery.
## I Introduction
Driven by the increasing demand for video editing [1] and autonomous driving [2], Video Object Segmentation (VOS) [3, 4, 5, 6] has attracted lots of interest in both academia and industry. Essentially, VOS aims to separate the foreground (i.e., objects) and the background pixels in all video frames. When the target objects are specified in the first frame during inference, the goal of the segmentation model is to estimate the object masks in all remaining frames. Recently, great efforts have been made to investigate VOS models using deep neural networks, which are vulnerable to adversarial examples [7, 8], i.e., the inputs are almost indistinguishable from natural data, easily leading to incorrect predictions. This raises the potential attack risks of VOS models, and the security danger is dramatically increased when these models are deployed in a highly-demanding environment.
To this end, adversarial examples have been extensively investigated in computer vision tasks, such as image classification [9, 10], video classification [11, 12], object detection [13], object tracking [14, 15, 16], and person re-identification [17]. However, few studies have explored the influences of adversarial attacks on video object segmentation. Thus, this work develops a novel adversarial attack method by discovering hard regions of frames, and shows that state-of-the-art (SOTA) VOS models are easily attacked by adversarial examples generated by simply adding some perturbations to the first frame of the video. Usually, the attack effect is evaluated in terms of the segmentation performance degradation.
Though existing adversarial attacks in vision tasks shed some light on VOS models, they are still inappropriate for this scenario due to two-fold reasons. First, video classification and object detection both need prior knowledge about categories, and the model can be attacked by maximizing the class probability with smaller confidence, e.g., fooling the model to make an incorrect prediction with the less-possible class from multiple candidates. This is not applicable to VOS tasks whose frame pixels are either foreground or background, and there is only one candidate class to select, thus making the adversarial attack much more difficult. Second, the attacks on object tracking [14] without known categories are tailored for producing wrong bounding boxes, while the attack on person re-identification [17] fools a discriminant distance metric function. Also, they cannot be directly applied to video object segmentation.
Therefore, this paper develops an adversarial attack method for VOS (see Fig. 1) and concentrates on the semi-supervised setting [18, 19], where the ground-truth mask of the target object is given in the first frame during inference. We consider semi-supervised VOS as it is the most widely explored setting in VOS, with a very cheap annotation cost only on the first frame, and it gains more popularity in practice compared to unsupervised ones (zero-shot VOS [20]). Naturally, this work chooses to attack only the first frame by slightly perturbing its pixel values, thus indirectly attacking the subsequent frames. Particularly, the first frame is fed to a well-trained VOS model to generate the gradient map for obtaining the perturbation, which is used for attacking the first frame to generate adversarial examples. Then, the adversarial example is fed into the VOS model to fool the inference of subsequent frames, and thus more incorrect estimations are made compared to those without the attack. Here, a question may arise as to whether
some pixel areas of the frame should be emphasized more for producing a stronger adversarial example. We argue that the foreground and the background are easily confused in the emphasized pixel area, which is regarded as a _hard region_. Motivated by this, a hard region learner (e.g., ResNet [21]), is placed after the gradient map, to output a hardness map whose entries indicate whether the pixel hardness is high or low. The hardness map is involved in element-wise production with the vanilla noise map derived from mapping the gradients to a set space via a sign function, which results in stronger perturbations. Note that there may be multiple hard regions in one frame according to the emerging objects in a video. Simply, the entire attacking framework proposed in this paper is called the _Adversarial Region Attack_ (ARA) method.
The objective function includes the Cross-Entropy (CE) loss of the VOS model and the \(\ell_{2}\)-norm hardness loss of the hard region learner. The VOS model produces the loss value map, whose entries indicate whether the pixels of the first video frame are difficult to discriminate from the foreground and background. Those pixels with high loss values constitute the hard region. Then, the loss value map is binarized into a hardness pseudo-label map, which provides some supervised knowledge for optimizing the hardness loss function. Hence, the hardness map reflects whether the pixels belong to the hard region or not.
The main contributions of this paper are summarized below:
* To the best of our knowledge, this paper is the first to study adversarial attacks against VOS models and propose a class-agnostic Adversarial Region Attack method to fool the model to make incorrect predictions, by generating small perturbations only on the first frame.
* A newly developed \(\ell_{2}\)-norm based hardness loss function is minimized to obtain the hardness scores of pixels with large confidence, under the guidance of a hardness pseudo-label map.
* Our attacker is evaluated on three VOS benchmarks, including DAVIS2016 [22], DAVIS2017 [23], and YouTube-VOS [24]. Besides the white-box attack, both the black-box attack and the defense are investigated. The experimental results indicate that our attacker significantly degrades the segmentation performance of several SOTA VOS methods and exhibits a stronger attack power compared to a few adaptive alternatives. Meanwhile, the defense performance of our model has been justified.
## II Related Work
### _Semi-supervised Video Object Segmentation_
Semi-supervised Video Object Segmentation (SVOS) [25] aims to distinguish the pixel-wise object region in a video where the object is specified by the annotation of the first frame. Sometimes, it is also called one-shot VOS [26]. Current SVOS approaches can be mainly divided into two categories: 1) _online learning_; 2) _offline learning_. The _online learning_ methods [18, 27, 28, 29] update and optimize model parameters by the historical prediction mask or ground-truth mask to obtain robust appearance representations during inference. However, they require video frames to optimize model parameters during inference. When video frames are attacked by perturbations, it has adverse impacts on the model optimization, thus degrading the segmentation performance.
Different from the online methods requiring model training during inference, the offline methods infer segmentation directly using the trained model, and they can be further divided into _propagation_ methods and _spatio-temporal matching_ methods. The propagation methods [30, 31, 32] propagate the mask of the first frame to other frames sequentially, but they exploit the temporal consistency of nearby frames and easily suffer from drastic object deformations in long videos. By contrast, the spatio-temporal matching methods [33, 34, 35, 19] achieve better segmentation effects by using the memory network to reveal the spatio-temporal relations of the historical frames and the current frames. Thus, this paper takes the spatio-temporal matching method as the target VOS model.
### _Adversarial Attacks_
Adversarial attacks are implemented by adversarial examples [36, 10], i.e., applying small but almost human-imperceptible perturbations to clean data samples. When the sample is an image, its pixels are either partially or fully perturbed, and full perturbation is considered in this paper. Then, the adversarial example is used to fool a model to make
Fig. 1: The entire framework of adversarial region attack on the VOS model. It contains two data flows: one is the adversarial example generation flow with hard region discovery using the gradient map derived from the first frame indicated by the **black** arrows, and the other is the adversarial attack on the VOS inference of the subsequent frames indicated by the yellow arrows. Compared to original masks without attacks, the attacked masks have large error proportions in red.
incorrect estimations, which is called an _attack_. Generally, adversarial attacks can be divided into _white-box_ and _black-box_ attacks. The white-box attack [7, 10] exploits the model knowledge, including the structure, the parameters, and the trainable weights used for computing the gradients to generate the model-aware perturbations. By contrast, the black-box attack [37] has very limited or no knowledge about the model, so it yields the model-agnostic perturbations. Our work mainly focuses on white-box attacks, and provides empirical studies on black-box attacks.
### _Adversarial Attacks in Computer Vision Tasks_
**Image Classification**. In computer vision, Szegedy _et al._[36] first discovered that images with tiny perturbations can be used as adversarial examples to deceive image classifiers for making wrong predictions. Then, Goodfellow _et al._[10] pointed out that due to the naturally linear characteristics, deep neural networks are susceptible to the deception of adversarial examples. The image gradients back-propagated by a model can be used to generate perturbations to deceive the classification network. Besides, Kurakin _et al._[38] iteratively updated the adversarial examples by gradually adding perturbations to the source image. Moreover, Madry _et al._[7] found that adding random perturbations to the source image at the initial iteration can increase the attack ability of adversarial examples. Additionally, most image classification networks adopt convolution operations, and the pixel-level features are easily affected by its adjacent regions. Therefore, Gao _et al._[39] adjusted the image gradients and assigned those exceeding a threshold to the neighbors of each pixel, making adversarial examples more robust. Furthermore, Dong _et al._[9] proposed a translation-invariant attack method, which equips the adversarial example with more transferability by employing image gradients on translated images.
**Image Segmentation**. Adversarial attacks [40, 41, 42] on semantic segmentation [43, 44] are often adapted from the attack methods on image classification. For example, Arnab _et al._[40] found that the segmentation model using a deep neural network is vulnerable to adversarial examples; Xie _et al._[41] investigated the influence of adversarial examples on object detection and semantic segmentation models simultaneously by a high-transferability attack method; Gu _et al._[42] develop a model that creates more effective adversarial examples than PGD [7] under the same number of attack iterations.
**Video Classification**. For this task, Pony _et al._[12] proposed a flickering temporal perturbation to deceive the classifier to generate wrong predictions. Existing anti-attack methods for classifiers usually rely on maximizing the probability of misclassification to obtain the gradient and then convert the gradient into perturbation. To capture more effective gradients, Li _et al._[11] searched for better gradient update directions through the geometric transformation of the input frames, thus generating the desired deviations for improving the attack power. Additionally, Hwang _et al._[45] focused on the structural vulnerability of action recognition, i.e., the influences of modeling temporal information in deep models.
**Object Tracking**. Unlike classification, object tracking [46, 47, 48] aims to capture the trajectory of the moving object in videos and produce a series of object bounding boxes. To this end, several attempts [13, 14, 49, 50] have been made on the adversarial attacks for object tracking. For instance, Guo _et al._[49] proposed an online and incremental sparse perturbation generation scheme in the spatial domain to ensure attack efficiency; Chen _et al._[14] added perturbations only to the object area of the first frame. For multi-object tracking, Jia _et al._[13] introduced a tracking error reduction process to attack the object detection and tracking model at the same time, causing the tracker to lose the object. Additionally, Jia _et al._[15] attempted to generate more effective perturbations by optimizing the IoU (Intersection over Union) scores of the current and previous frames.
However, existing adversarial attack methods in the computer vision field are specially designed for certain tasks or models, so it is difficult to transfer them to the VOS model. For example, an adversarial attack approach for object tracking designs a regression loss against the object bounding box in the tracker to generate perturbations. Such a regression loss fails to handle the pixel-level classification task like VOS. Meanwhile, the video classification model is designed to assign the video with one label from predefined categories, but VOS is a class-agnostic task. Besides, the attack methods against the image classifier or semantic segmentation model aim to deceive the target model from one semantic class to another in predefined categories, which does not hold for VOS. Therefore, it is desirable to develop an adversarial attack method specially for the VOS task.
## III Method
### _Problem Definition_
Our attacker concentrates on the most popular semi-supervised VOS models during inference. Formally, the already-trained VOS model is defined as \(\Phi(\cdot)\) with the parameters \(\theta\). Given a video \(\mathcal{V}=\{\mathbf{I}_{t}\in\mathbb{R}^{H\times W\times 3}|t=1,\dots,T\}\) and the ground-truth mask \(\mathbf{Y}_{1}\in\mathbb{R}^{H\times W}\) of the first frame \(\mathbf{I}_{1}\), the goal of the segmentation model is to produce the prediction mask \(\mathbf{\hat{Y}}_{t}\in\mathbb{R}^{H\times W}\) of all remaining frames. Here, \(H\) is the frame height, \(W\) is the frame width, \(t\) is the frame index, and \(T\) denotes the total number of frames. Each foreground pixel of the mask is set to 1, while the background pixel is set to 0. If there are multiple objects, each object corresponds to a binary classification problem. Thus, VOS is an object class-agnostic task. In this setting, this work manipulates only the first frame by adding a small human-imperceptible perturbation \(\mathbf{\eta}\in\mathbb{R}^{H\times W\times 3}\) to generate the adversarial example, i.e., \(\mathbf{I}_{1}^{adv}=\mathbf{I}_{1}+\mathbf{\eta}\), which is used to mislead the model to degrade the segmentation performance on the subsequent frames. Thus, the inference process produces the attacked mask sequence \(\hat{\mathcal{Y}}^{adv}=\{\mathbf{\hat{Y}}_{t}^{adv}\}_{t=2}^{T}\), whose entry is represented as
\[\mathbf{\hat{Y}}_{t}^{adv}=\Phi(\{\mathbf{I}_{1}^{adv},\mathbf{I}_{2},\dots,\mathbf{I}_{t}\},\mathbf{Y}_{1};\theta)\in\mathbb{R}^{H\times W}. \tag{1}\]
To generate the perturbation \(\mathbf{\eta}\), this paper follows the gradient-based adversary, i.e., the fast gradient sign method [10], which linearizes the cost function \(\mathcal{L}_{1}(\cdot)\) around the current value of \(\theta\) and utilizes the back-propagation gradients to generate the
max-norm constrained perturbation
\[\mathbf{\eta}_{t}=\epsilon\cdot\text{sign}[\nabla_{\mathbf{I}_{t}}\mathcal{L}_{1}( \Phi(\mathbf{I}_{t};\theta),\mathbf{Y}_{t})], \tag{2}\]
where \(\mathcal{L}_{1}(\cdot)\) is the CE loss, \(\nabla_{\mathbf{I}_{t}}\) denotes the gradient with regard to the \(t\)-th frame \(\mathbf{I}_{t}\) obtained by back-propagation according to \(\mathcal{L}_{1}(\cdot)\), the constant \(\epsilon>0\) is the upper-bound perturbation value and is set to \(8/255\), and \(\text{sign}[\cdot]\) denotes a sign function that maps all input gradient elements to a discrete set \(\{-1,0,1\}\). Here, \((-\epsilon,+\epsilon)\) forms a ball under the \(\ell_{\infty}\) norm.
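For reference, Eq. (2) can be realized in a few lines of PyTorch; the sketch below is illustrative and assumes a simplified segmentation model that maps a single frame to per-pixel foreground/background logits (the real VOS model also takes the memory set as input).

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, frame, gt_mask, epsilon=8 / 255):
    """Single-step sign-gradient perturbation in the spirit of Eq. (2).
    frame: (B, 3, H, W) input frame; gt_mask: (B, H, W) integer labels in {0, 1}."""
    frame = frame.clone().detach().requires_grad_(True)
    logits = model(frame)                    # (B, 2, H, W) per-pixel class logits
    loss = F.cross_entropy(logits, gt_mask)  # segmentation (CE) loss L1
    loss.backward()
    return epsilon * frame.grad.sign()       # eta_t = eps * sign(grad)
```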
### _Overview of VOS Model_
In this paper, the spatio-temporal matching-based VOS model is adopted as the target model for attacking, and the early work Space Time Memory (STM) networks [19] of this type is taken as an example. The target model processes from the second frame in the video sequence using the ground-truth mask \(\mathbf{Y}_{1}\) of the first frame, resulting in the prediction masks \(\{\mathbf{\hat{Y}}_{2},\dots,\mathbf{\hat{Y}}_{t-1}\}\in\mathbb{R}^{H\times W}\). When processing the current frame \(\mathbf{I}_{t}\), the past frames with predicted object masks are used to establish a pair-wise memory set, i.e., \(\mathcal{M}=\{(\mathbf{I}_{1},\mathbf{Y}_{1}),(\mathbf{I}_{2},\mathbf{\hat{Y} }_{2}),\dots,(\mathbf{I}_{t-1},\mathbf{\hat{Y}}_{t-1})\}\). The memory set provides long-range spatio-temporal visual semantics, which helps to distinguish the object from the background in a frame.
Meanwhile, the frame-mask pairs in the memory set are fed into one ResNet [21] encoder to produce the past frame feature subspace, and the current frame is fed into the other encoder to produce the current frame feature subspace. Then, spatio-temporal matching is performed in the feature subspace, i.e., the pixel class of the current frame is inferred from that in past frames (the elements of the predicted mask reflect the pixel-wise binary classes) according to the semantic similarity, generating a class feature map of the current frame. Then, this class feature map is exploited to produce the mask of the current frame by a softmax function after several convolution layers with bi-linear interpolation. Thus, the mask sequence is produced by \(\mathbf{\hat{Y}}_{t}=\Phi(\mathcal{M},\mathbf{I}_{t};\theta)\in\mathbb{R}^{H \times W}\).
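To give a flavor of the spatio-temporal matching step, the following toy sketch performs a memory read via key similarity; the tensor shapes and names are illustrative simplifications rather than the exact STM implementation.

```python
import torch

def memory_read(mem_keys, mem_values, query_key):
    """Toy space-time memory read: each query pixel attends to all memory pixels
    by key similarity and aggregates their value (mask/class) features.
    mem_keys: (Ck, T*H*W), mem_values: (Cv, T*H*W), query_key: (Ck, H*W)."""
    affinity = torch.softmax(mem_keys.t() @ query_key / mem_keys.shape[0] ** 0.5, dim=0)
    return mem_values @ affinity  # (Cv, H*W) read-out used to decode the current mask
```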
### _Adversarial Region Attack_
To attack the VOS model, the perturbation (i.e., vanilla noise) in Eq. (2) is weak, as VOS is actually a pixel-wise binary classification problem. The pixel class label is either foreground or background, and the object class remains unknown. Such a binary problem increases the attack difficulty. This is because it is easy to deceive the model for randomly picking one wrong class from several candidates in a multi-class problem, but it becomes more difficult when there is only one candidate. For example, given a _ten_-class problem, the probability of a successful attack is \(0.9\) in theory, which decreases to \(0.5\) for binary classification. Inspired by this, our work develops an ARA method to strengthen the attack power of adversarial examples, and the whole framework is drawn in Fig. 1.
In practice, segmentation errors usually occur in the pixel area where the foreground and the background are easily confused, e.g., similar appearance, vague boundary, and salient background (like large trees). Thus, the possibly-confused pixel area is the hard region that requires more emphasis, and it is often vulnerable to perturbations. That is, perturbing the hard region of the frame can produce a stronger adversarial example. To this end, this paper designs a Hard Region Learner (HRL) to derive a hardness map, which reveals how difficult each pixel is to be correctly segmented. The hardness map and the vanilla noise in Eq. (2) are incorporated by the element-wise product operation to generate a stronger perturbation. The details are described below.
Our attacker adopts the white-box attack and exploits the image gradients obtained from the back-propagation in the model. Both the vanilla noise map and the hardness map are learned from the gradient map of the first frame \(\mathbf{I}_{1}\), and the gradient map \(\mathbf{G}\) is obtained by
\[\mathbf{G}=\nabla_{\mathbf{I}_{1}}\mathcal{L}_{1}(\Phi(\mathcal{M}^{\prime}, \mathbf{I}_{1};\theta),\mathbf{Y}_{1})\in\mathbb{R}^{H\times W\times 3}, \tag{3}\]
where the memory subset \(\mathcal{M}^{\prime}\) contains two nearby frames of the first frame and their prediction masks, i.e., the second and the third frame-mask pairs. Here, using two nearby frames follows the model training process [19], which stacks three frames in the GPU memory for efficiency. If more powerful GPUs are available, more frames can be considered. The \(\mathcal{L}_{1}(\cdot)\) represents the segmentation loss between the prediction mask \(\mathbf{\hat{Y}}_{1}=\Phi(\mathcal{M}^{\prime},\mathbf{I}_{1};\theta)\) and the ground-truth mask \(\mathbf{Y}_{1}\) of the first frame. The gradient values are generally smaller than the original image pixel values, and the layer normalization [51] strategy is adopted to normalize the gradient values along each channel.
By default, the vanilla noise map is denoted as \(\mathbf{\eta}_{1}\in\mathbb{R}^{H\times W\times 3}\), which is calculated by Eq. (2). To capture the hardness map, this paper uses convolutional neural networks to construct the HRL \(\Omega(\cdot)\) and adopts the light ResNet18 [21] as the backbone. The ResNet involves four stages, and each stage has a resolution downsampling ratio, i.e., \(\{1/4,1/8,1/16,1/32\}\), for the learned feature map. To keep a large spatial resolution of the feature map, only the former two stages, followed by one convolution layer with a kernel size of \(3\times 3\), are employed to discover the hard region from the frame. Then, the bilinear upsampling \(\text{Upsample}(\cdot)\) and the sigmoid function \(\sigma(\cdot)\) are applied to the obtained feature map to produce the hardness map, i.e.,
\[\mathbf{\hat{Z}}=\sigma(\text{Upsample}(f(\mathbf{G})))=\Omega(\mathbf{G})\in \mathbb{R}^{H\times W}, \tag{4}\]
where \(f(\cdot)\) represents the modified ResNet module, and the bilinear upsampling is employed to scale the feature map up to the same size as the input frame. The scores in the hardness map fall between 0 and 1. The higher the hardness score, the more difficult the pixel is for segmentation.
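A minimal PyTorch sketch of such an HRL is given below; the use of the first two ResNet18 stages, a \(3\times 3\) convolution, bilinear upsampling, and a sigmoid follows the text, while details the paper does not state (e.g., a single-channel output head and random initialization) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

class HardRegionLearner(nn.Module):
    """Sketch of the HRL of Eq. (4): two ResNet18 stages + 3x3 conv head,
    bilinear upsampling and sigmoid, producing a hardness map in [0, 1]."""
    def __init__(self):
        super().__init__()
        net = resnet18(weights=None)  # randomly initialized backbone (torchvision >= 0.13 API)
        self.stem = nn.Sequential(net.conv1, net.bn1, net.relu, net.maxpool)
        self.stage1, self.stage2 = net.layer1, net.layer2   # 1/4 and 1/8 resolution
        self.head = nn.Conv2d(128, 1, kernel_size=3, padding=1)

    def forward(self, grad_map):                             # grad_map: (B, 3, H, W)
        h, w = grad_map.shape[-2:]
        feat = self.stage2(self.stage1(self.stem(grad_map)))
        score = self.head(feat)                              # (B, 1, H/8, W/8)
        score = F.interpolate(score, size=(h, w), mode="bilinear", align_corners=False)
        return torch.sigmoid(score).squeeze(1)               # (B, H, W) hardness map
```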
**Optimization of the HRL.** To solve the problem that the ground-truth hardness map is unavailable for optimizing the hardness loss function, this paper introduces the hardness pseudo-label map as the supervisor to guide the hard region learning. Particularly, the segmentation loss \(\mathcal{L}_{1}(\cdot)\) calculated in the already trained VOS model is used to generate the hardness pseudo-label map. The rationale is that the segmentation loss of a well-trained model is minimized during training, and a large proportion of the pixels are expected to produce small loss values while only a small set of pixels have large values.
Generally, a large loss value indicates that the corresponding pixel is difficult to classify as foreground or background, and the CE loss is adopted as the segmentation loss.
The pixel class labels are the entries of the ground-truth mask \(\mathbf{Y}\in\{0,1\}^{H\times W}\), which is flattened into a long vector \(\mathbf{y}=[y_{i}]_{i=1}^{n}\in\{0,1\}^{n}\), where \(n=H\times W\), and the index \(i\) specifies the spatial position of each pixel in the mask. Similarly, the prediction mask \(\hat{\mathbf{Y}}\in\mathbb{R}^{H\times W}\) is flattened into a long vector \(\hat{\mathbf{y}}=[\hat{y}_{i}]_{i=1}^{n}\in\mathbb{R}^{n}\), whose entries denote the probability of the pixel belonging to the object class. The CE loss is defined as \(-\mathbf{y}\log\hat{\mathbf{y}}\), and a thresholding strategy is adopted to generate the hardness pseudo label vector \(\mathbf{z}\), whose entries are obtained by
\[z_{i}=\begin{cases}1,&-y_{i}\log(\hat{y}_{i})>\alpha,\\ 0,&\text{others},\end{cases} \tag{5}\]
where \(\alpha>0\) is an empirical threshold, \(\log(\cdot)\) denotes the natural logarithm, and \(\mathbf{z}=[z_{i}]_{i=1}^{n}\in\{0,1\}^{n}\). Then, the obtained pseudo-label vector is reshaped to the hardness pseudo-label map \(\mathbf{Z}\in\{0,1\}^{H\times W}\) of the first frame. The pixels with loss values larger than the threshold indicate they are difficult to segment, and those with loss values smaller than the threshold are relatively easier to segment.
After the hardness pseudo-label map \(\mathbf{Z}\) and the hardness map \(\hat{\mathbf{Z}}\) are obtained, the objective function of the HRL is defined in a vector form as
\[\mathcal{L}_{2}=\frac{1}{n}\|\mathbf{z}-\hat{\mathbf{z}}\|_{2}^{2}, \tag{6}\]
where \(\hat{\mathbf{z}}\in\mathbb{R}^{n}\) is a flattened vector of the matrix \(\hat{\mathbf{Z}}\), and the operator \(\|\cdot\|_{2}\) denotes the \(\ell_{2}\) norm, so Eq. (6) is an \(\ell_{2}\) loss. In this way, the optimal hardness map can be obtained by minimizing the error between the two hardness maps, i.e., \(\mathbf{E}=\mathbf{Z}-\hat{\mathbf{Z}}\).
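The pseudo-label construction of Eq. (5) and the hardness loss of Eq. (6) can be sketched as follows; the tensors are assumed to be per-pixel maps of the first frame, and the function names are illustrative.

```python
import math
import torch
import torch.nn.functional as F

def hardness_pseudo_label(gt_mask, pred_prob, alpha=-math.log(0.4)):
    """Eq. (5): a pixel is labelled hard (1) when its CE term -y*log(y_hat)
    exceeds the threshold alpha. gt_mask, pred_prob: (H, W) float tensors."""
    ce = -(gt_mask * torch.log(pred_prob.clamp_min(1e-8)))
    return (ce > alpha).float()

def hardness_loss(hardness_map, pseudo_label):
    """Eq. (6): mean squared error between the predicted hardness map and
    the pseudo-label map, i.e., (1/n) * ||z - z_hat||_2^2."""
    return F.mse_loss(hardness_map, pseudo_label)
```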
### _Working Mechanism of Our Attacker_
During the VOS inference, only the first frame \(\mathbf{I}_{1}\) is attacked, and its adversarial example \(\mathbf{I}_{1}^{adv}\) is generated by imposing the perturbation \(\mathbf{\eta}^{\prime}\). This perturbation considers the hard pixel areas discovered by the HRL with the vanilla noise in Eq. (2). Meanwhile, an iterative scheme is adopted to generate a strong adversarial example.
In the initial period, the perturbation is the random noise \(\mathbf{\eta}^{\prime}_{0}\in[-\epsilon,\epsilon]^{H\times W\times 3}\), which is added to the first frame for the next iteration. The maximum iteration number is \(K\), which is set to 10 for efficiency. If the perturbation in the \(r\)-th iteration is \(\mathbf{\eta}^{\prime}_{r}\), then the corresponding adversarial example is represented as:
\[\mathbf{I}_{1,r}^{adv}=\text{Clip}_{\mathbf{I}_{1},\epsilon}(\mathbf{I}_{1,r- 1}+\mathbf{\eta}^{\prime}_{r-1}), \tag{7}\]
where \(\text{Clip}_{\mathbf{I}_{1},\epsilon}(\cdot)\) squeezes the numerical range of the input frame to the range \([\min(\mathbf{I}_{1})-\epsilon,\max(\mathbf{I}_{1})+\epsilon]\). Here, \(\min(\cdot)\) and \(\max(\cdot)\) are the minimum and the maximum function of pixel values, respectively.
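In code, this clipping amounts to a single clamp over the whole frame; a minimal sketch (with illustrative names, assuming frames are PyTorch tensors) is:

```python
def clip_adv(frame_adv, frame_clean, epsilon=8 / 255):
    """Clip_{I1,eps} of Eq. (7): keep the attacked frame within
    [min(I1) - eps, max(I1) + eps] of the clean first frame."""
    lo = frame_clean.min().item() - epsilon
    hi = frame_clean.max().item() + epsilon
    return frame_adv.clamp(lo, hi)
```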
**Perturbation Update**. To update the perturbation, the gradient map \(\mathbf{G}_{r}\) of the first frame like Eq. (3) is first calculated according to
\[\mathbf{G}_{r}=\nabla_{\mathbf{I}_{1,r}^{adv}}\mathcal{L}_{1}(\Phi(\mathcal{M }^{\prime},\mathbf{I}_{1,r}^{adv};\theta),\mathbf{Y}_{1}), \tag{8}\]
where the model parameters \(\theta\) are fixed during back-propagation. Then, the gradient map \(\mathbf{G}_{r}\) is taken as the input of Eq. (4) to derive the hardness map \(\hat{\mathbf{Z}}_{r}\), which is replicated across the three channels to form the hardness tensor \(\hat{\mathbf{Z}}_{r}^{\prime}\in\mathbb{R}^{H\times W\times 3}\). Meanwhile, the gradients are projected to the set \(\{-1,0,1\}\) by a sign function, i.e., \(\mathbf{G}_{r}^{\prime}=\text{sign}(\mathbf{G}_{r})\). Finally, an element-wise product is taken between the hardness tensor and the sign gradient tensor, resulting in the updated perturbation
\[\mathbf{\eta}^{\prime}_{r}=\beta\hat{\mathbf{Z}}_{r}^{\prime}\odot\text{sign}( \mathbf{G}_{r}), \tag{9}\]
where the constant \(\beta>0\) governs the numerical range of the perturbation, and \(\odot\) denotes an element-wise product. As the iteration continues, the perturbation ability is strengthened, thus generating a stronger adversarial example. The final adversarial example \(\mathbf{I}_{1,K}^{adv}\) is fed into the VOS model to generate a sequence of attacked masks.
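The per-iteration update of Eq. (9) is a single hardness-weighted sign step; a minimal sketch (assuming channel-first PyTorch tensors) is:

```python
def ara_perturbation(hardness_map, grad_map, beta=8 / 255):
    """Eq. (9): modulate the sign of the gradient by the per-pixel hardness scores.
    hardness_map: (H, W) in [0, 1]; grad_map: (3, H, W) input gradient."""
    hardness = hardness_map.unsqueeze(0).expand_as(grad_map)  # replicate over channels
    return beta * hardness * grad_map.sign()
```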
Besides Eq. (9), other gradient-based perturbation generation schemes [39, 52, 7] can also be adopted by simply taking the element-wise product of the hardness map with their vanilla noise. This enables our attacker to be generalized to more scenarios easily.
### _Adversarial Region Attack (ARA) Algorithm_
Our proposed Adversarial Region Attack (ARA) method is a white-box attacker, and its primary procedure is briefly summarized in Algorithm 1. The ARA attacker is developed for attacking an already well-trained VOS model \(\Phi\) with the parameter \(\theta\).
Given a video sequence \(\mathcal{V}\) with \(T\) frames and the first frame mask \(\mathbf{Y}_{1}\in\mathbb{R}^{H\times W}\), our attacker obtains the gradient map \(\mathbf{G}\in\mathbb{R}^{H\times W\times 3}\) of the first frame. Then, by using the gradient map, the hardness map \(\hat{\mathbf{Z}}_{r}\in\mathbb{R}^{H\times W}\) is obtained via the HRL (ResNet). Next, the vanilla noise and the hardness map are unified, and the attacker produces an adversarial example \(\mathbf{I}_{1,r+1}^{adv}\) by iteration. Finally, by replacing the first frame with the adversarial example as the input of the VOS model, our attacker can fool the model to degrade the segmentation performance.
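Since Algorithm 1 is not reproduced here, the sketch below illustrates one plausible reading of the white-box ARA loop combining Eqs. (7)-(9); `vos_grad_fn` (an assumed helper returning the input gradient of the segmentation loss, Eq. (8)) and `hrl` (the hard region learner of Eq. (4)) are hypothetical callables, and all names are illustrative.

```python
import torch

def ara_attack(vos_grad_fn, hrl, first_frame, max_iters=10, epsilon=8 / 255, beta=8 / 255):
    """Sketch of the white-box ARA loop: start from a randomly perturbed first
    frame, then iteratively clip (Eq. (7)), recompute the gradient map (Eq. (8)),
    predict the hardness map, and apply the hardness-weighted sign step (Eq. (9))."""
    lo = first_frame.min().item() - epsilon
    hi = first_frame.max().item() + epsilon
    adv = first_frame + torch.empty_like(first_frame).uniform_(-epsilon, epsilon)
    for _ in range(max_iters):
        adv = adv.clamp(lo, hi)                                    # Eq. (7)
        grad = vos_grad_fn(adv)                                    # (3, H, W) gradient map, Eq. (8)
        hardness = hrl(grad.unsqueeze(0)).squeeze(0)               # (H, W) hardness map, Eq. (4)
        adv = adv + beta * hardness.unsqueeze(0) * grad.sign()     # Eq. (9)
    return adv.clamp(lo, hi)
```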
Besides the white-box ARA attacker, this paper provides its black-box version since the model structure, parameters, and gradients are unknown to users in many situations. Most of the procedures are the same as those in Algorithm 1 except for the perturbation update. For the black-box attack, the initial perturbation is also the random noise added to the first frame and is updated in iteration without gradients, but the perturbation update is simple:
\[\mathbf{\eta}^{\prime}_{r}=\beta\hat{\mathbf{Z}}_{r}^{\prime}, \tag{10}\]
where the constant \(\beta>0\) governs the numerical range of the perturbation.
In addition, this paper explores the adversarial training of the white-box ARA attacker to investigate its defense performance. For the pre-trained VOS model \(\Phi(\cdot)\) with the parameter \(\theta\), the perturbation derived from the ARA attacker is added to the first frame to obtain the attacked training set. Then, the model \(\Phi(\cdot)\) is trained by using the attacked training set, and the segmentation performance of the updated VOS model \(\Phi^{\prime}(\cdot)\) is investigated.
## IV Experiments
All experiments were performed on a server equipped with two TITAN RTX graphics cards. The code is implemented with PyTorch 1.10, Python 3.9, and CUDA 11.0.
### _Datasets_
**DAVIS2016**[22]1 contains a total of 50 video sequences, and there are 3,455 video frames with ground-truth (GT) annotations. Each video sequence contains only a single object, and there are 50 objects in total. The video contents mainly include animals, sports, vehicles, etc. Here, 30 video sequences are used for training and 20 video sequences are used for validation.
Footnote 1: [https://davischalenge.org/davis2016/code.html](https://davischalenge.org/davis2016/code.html)
**DAVIS2017**[23]2 is expanded on DAVIS2016 by increasing the number of videos to 150, and there are 10,459 video frames with GT annotations. Meanwhile, annotations for multiple objects are added, and there are 376 objects in total. The dataset is split into four subsets, namely, training set, validation set, test-dev set, and test-challenge set. Among them, the training set contains 60 videos, while the validation set contains 30 videos, and GT mask annotations are provided.
Footnote 2: [https://davischalenge.org/davis2017/code.html](https://davischalenge.org/davis2017/code.html)
**YouTube-VOS**[24]3 has three subsets, namely, training set, validation set, and test set. Among them, the training set contains 3,471 videos, and there are 65 object categories, with a total of 6,459 object instances; the validation set contains 507 videos with 1,063 object instances, and 65 of its object categories also appear in the training set, while 26 categories do not; the test set contains 541 videos with 1,092 object instances, and 65 object categories also appear in the training set, while 29 categories do not. Note that each video in the validation set only provides the GT mask of the first frame, and the final evaluation results need to be uploaded to the official server.
Footnote 3: [https://competitions.codalab.org/competitions/20127](https://competitions.codalab.org/competitions/20127)
**A2D Sentences**[54]4 is extended from the Actor-Action Dataset [55] by adding textual descriptions for each video. It contains 3782 videos annotated with 8 action classes performed by 7 actor classes and 6,655 sentences. For each video, there are 3 to 5 frames annotated with pixel-wise segmentation masks. The dataset is split into a training set and a test set with 3,036 and 746 videos, respectively.
Footnote 4: [https://kgarvilvuk.github.io/publication/actor_action/](https://kgarvilvuk.github.io/publication/actor_action/)
Among them, the attack performance for semi-supervised VOS models is examined on the former three datasets, which is the focus of this work. Without loss of generality, all experimental results are reported on the validation set except for A2D Sentences. Additionally, the performance of our attacker for several unsupervised VOS models and referring VOS models is investigated by using the DAVIS2016 [22] validation set and the A2D Sentences [54] test set, respectively.
### _Evaluation Metrics_
Following previous works [19][35], the same evaluation metrics on the benchmarks are adopted in this paper. For DAVIS2016 and DAVIS2017 datasets, _region similarity_\(\mathcal{J}\) and _contour accuracy_\(\mathcal{F}\)[22] are used, where the former measures the IoU ratio between the prediction mask and the ground-truth mask, and the latter measures the F1 score of the predicted and ground-truth masks at the object contour pixels. Overall, \(\mathcal{J}\&\mathcal{F}\) represents the mean of region similarity and contour accuracy, which evaluates the overall segmentation performance of the VOS model.
For YouTube-VOS [24], this paper uses the same \(\mathcal{J}\) and \(\mathcal{F}\) provided by the official server 5. Since the semantic categories of some objects in the validation set do not appear in the training set, the validation set is further divided into two subsets, i.e., seen and unseen sets, where the seen subset contains the videos with seen categories in the training set, and the unseen subset contains the videos with unseen categories in the training set. The average of each metric is calculated on each subset to obtain \(\mathcal{J}_{seen},\mathcal{J}_{unseen},\mathcal{F}_{seen}\), and \(\mathcal{F}_{unseen}\). The global index \(\mathcal{G}\) is the mean of the above four metrics of the seen and unseen subsets.
Footnote 5: [https://youtube-vos.org/dataset/vos/](https://youtube-vos.org/dataset/vos/)
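For reference, \(\mathcal{J}\) and a simplified \(\mathcal{F}\) can be computed as in the sketch below; note that the official benchmark evaluates \(\mathcal{F}\) with a small tolerance around object boundaries, which this exact-match version omits.

```python
import numpy as np
from scipy.ndimage import binary_erosion

def region_similarity(pred, gt):
    """J: intersection-over-union between binary prediction and ground-truth masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 1.0

def contour_accuracy(pred, gt):
    """Simplified F: F1 score computed on object-boundary pixels (no boundary tolerance)."""
    def boundary(mask):
        mask = mask.astype(bool)
        return mask & ~binary_erosion(mask)
    bp, bg = boundary(pred), boundary(gt)
    tp = np.logical_and(bp, bg).sum()
    precision = tp / bp.sum() if bp.sum() else 0.0
    recall = tp / bg.sum() if bg.sum() else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def j_and_f(pred, gt):
    """Overall J&F: mean of region similarity and contour accuracy."""
    return 0.5 * (region_similarity(pred, gt) + contour_accuracy(pred, gt))
```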
### _Experimental Setup_
The first frame of the videos in the validation set is attacked by adding almost human-imperceptible perturbations. The maximum perturbation value \(\epsilon\) in Eq. (7) and the constant \(\beta\) used in Eq. (9) are both set to \(8/255\), while the threshold \(\alpha\) is set to \(-\log(0.4)\). During the perturbation update, the maximum iteration number of the iteration-based attack is set to 10, and the Adam optimizer [56] is adopted to obtain the gradient map with a learning rate of 0.1 and a weight decay of 0.01.
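A minimal sketch of this perturbation-update loop is shown below; the attack objective `loss_fn` (segmentation loss combined with the hardness-weighted term) and the model interface are assumptions for illustration, not the released implementation.

```python
import torch

def attack_first_frame(vos_model, frame, loss_fn, eps=8/255, steps=10):
    """Iterative first-frame attack with an Adam-optimised perturbation.

    `loss_fn(model, adv_frame)` is assumed to return the attack objective
    to be maximised for the given adversarial frame.
    """
    delta = torch.zeros_like(frame, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=0.1, weight_decay=0.01)

    for _ in range(steps):
        adv = (frame + delta).clamp(0.0, 1.0)
        loss = -loss_fn(vos_model, adv)          # minimise the negative objective
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            # Project back into the epsilon-ball and the valid pixel range.
            delta.clamp_(-eps, eps)
            delta.copy_((frame + delta).clamp(0.0, 1.0) - frame)

    return (frame + delta).detach().clamp(0.0, 1.0)
```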
### _Compared Methods_
**Attackers**. To comprehensively evaluate the effect of our ARA attacker against the VOS model, several gradient-based adversarial attack methods are taken for comparison, including FGSM [10], BIM (Basic Iterative Method) [38], PGD (Projected Gradient Descent) [7], TI (Translation-Invariant attack Method) [9], PI (Patch-wise Iterative FGSM) [39], VMI (Variance tuning Momentum Iterative FGSM) [52], AutoAttack [58], SegPGD [42], and ALMA (Augmented Lagrangian Minimal Adversarial perturbation) [59]. Among them, FGSM is a non-iterative attack method; it was proposed by Goodfellow _et al._[10], who pointed out that the vulnerability of deep neural networks comes from their linear characteristics. Note that our method directly uses the already-trained VOS models, while the other attackers require modification to attack these models.
**Semi-supervised VOS models**. This paper compares the attack power of different adversarial attack methods on several spatio-temporal matching based semi-supervised video object segmentation (SVOS) models, including STM [19], HMMN [35], STCN [33], and AOT (Associating Objects with Transformers) [57]. Note that AOT Large version with ResNet50 backbone (AOTL-R) and the Large version with Swin Transformer [60] backbone (AOTL-S) are adopted here.
**Unsupervised VOS models**. They segment the objects in a video without any user annotation [61][62], and our attacker is also applied to several unsupervised VOS models, including COSNet (CO-attention Siamese Network) [63], MATNet (Motion-Attentive Transition Network) [20], and FSNet (Full-duplex Strategy Network) [64]. Among them, COSNet adopts a global co-attention mechanism to capture the inherent correlation across all video frames in the video, and it only utilizes the appearance feature, thus avoiding the time-consuming optical flow extraction like MATNet and FSNet.
**Referring VOS models**. They segment the objects in a video with textual descriptions [65][66], and the referring VOS models examined in this paper include RefVOS (Referring video object segmentation) [67], MTTR (Multimodal Tracking Transformer) [68], and ReferFormer [69]. Among them, RefVOS utilizes a semantic segmentation model along with a language processing model to segment the language referred target object frame by frame, but it fails to incorporate rich spatial temporal features of the video. MTTR handles both text and frames in a single transformer. It not only captures rich spatial-temporal features but also associates the language feature and video feature at both word and pixel levels. Similar to MTTR, ReferFormer is also a transformer-based approach with a better feature backbone and more complex network architecture that requires more training data, and it has three versions, i.e., ReferFormer-T/S/L, where the "T/S/L" indicates a tiny, small, and large version of video Swin Transformer [70].
### _Quantitative Results_
#### IV-E1 Semi-supervised VOS Setting
The experimental results on DAVIS2016 and DAVIS2017 are reported in Table I, while those on YouTube-VOS are presented in Table II. In the tables, "Origin" refers to the VOS model without attack, and "Random" refers to the adversarial attack with random noise. The best attack records are highlighted in bold.
**Results on DAVIS2016**. As shown on the left of Table I, the VOS models are robust to the random noise shown in the second row. However, the segmentation performance degrades significantly when an attacker is applied to the model. Among the competitive attackers, our attacker exhibits the strongest perturbation ability, as indicated by the bottom row, e.g., it reduces the segmentation performance by 7.2%, 8.4%, 6.9%, 11.6%, and 10.6% on STM [19], HMMN [35], STCN [33], AOTL-R [57], and AOTL-S [57], respectively. The results demonstrate the superiority of our ARA method, which is attributed to the HRL capturing a hardness map that strengthens the perturbation. Among the five VOS models, STCN is the most robust against our attacker, while AOTL-R is the most vulnerable and is the most seriously attacked.
Except for our attacker, SegPGD [42] and ALMA [59] perform better than the remaining attackers, e.g., on the STM model, SegPGD is better than FGSM [10], BIM [38], TI [9], PI [39], and VMI [52] by 2.7%, 2.0%, 4.3%, 0.5%, and 0.6%, respectively. Meanwhile, TI performs the worst; the reason is that TI generates a perturbation over an ensemble of translated images to increase transferability, but the image translation operation possibly makes the gradient deviate from that without translation. Among these attackers, FGSM, BIM, and PGD all degrade the segmentation performance, but with obvious differences. Taking the attack on STCN as an example, FGSM causes a performance drop of 1.1%, and its successor BIM decreases the evaluation metric to 87.6%, i.e., a large drop of 4.0%, verifying the effectiveness of the iterative strategy BIM adopts for generating adversarial examples. Based on BIM, PGD adds random noise initialization and further degrades the performance to 86.9%, which validates the necessity of noise initialization. Moreover, it can be seen that recently proposed attackers like TI, PI, and VMI fail to improve the attack ability. This is because they focus on improving the transferability of the attacker across different target models, resulting in inferior performance. Among these three attackers, PI shows a rather strong attack power as it applies patch-wise perturbations to the video frame.
From the table, AutoAttack performs slightly better than PGD but has weaker attack ability compared to our ARA attacker, since it is an improved PGD that adaptively changes the attack step for image classification rather than for the VOS task. In addition, SegPGD and ALMA have stronger attack ability than PGD but are inferior to ours. This might be because SegPGD tends to attack correctly-classified pixels while neglecting challenging pixels with more uncertainty, and ALMA incurs expensive costs to seek attacks with small perturbations (e.g., less than \(\epsilon=8/255\)). However, these attackers cannot be directly applied to VOS without modification, and they perform unsatisfactorily when they are not designed specifically for the segmentation task. Instead, this paper designs an attack framework that explicitly considers the easily-confused pixel areas by introducing the HRL, which helps to generate adversarial examples with stronger attack power.
**Results on DAVIS2017**. The right of Table I shows similar behaviors of the attackers on the VOS models. The overall segmentation performance is lower than that on DAVIS2016. This is because there are multiple objects to be segmented in the videos of DAVIS2017, which is more challenging. Our attacker ranks first consistently across the VOS models, indicating the effectiveness of the proposed adversarial region attack. Among the VOS models, STCN is the most robust to adversarial attacks, which may be because the external examples used in training improve its robustness.
**Results on YouTube-VOS**. As shown in Table II, our attacker largely and consistently outperforms the other alternatives on SOTA segmentation models. For instance, our method degrades the evaluation metric by 6.5% on the STM model. The attackers behave on these VOS models similarly to how they behave on DAVIS2016. However, the overall attack performance is the lowest among the three benchmarks; this is because the target object does not always appear in the first frame of videos in YouTube-VOS.
#### IV-E2 Unsupervised VOS Setting
Table III shows the segmentation performance of several unsupervised VOS models attacked by our attacker on the DAVIS2016 [22] validation set. The perturbation is added to the first frame and to all frames of each video, respectively. As shown in the table, the overall performance (\(\mathcal{J\&F}\)) drops by 2.7%, 2.8% and 2.3% for COSNet [63], MATNet [20], and FSNet [64], respectively. These performance drops are much smaller than those for semi-supervised VOS models under our ARA attacker, i.e., 6.9% to 11.6% in Table I. This is because common unsupervised models mainly utilize both appearance features and motion features to capture the target object, while semi-supervised models rely on the first frame to provide the prior knowledge about the object. Thus, unsupervised models are more robust to our first-frame attacker than semi-supervised models. However, if perturbations are added to all of the video frames, the segmentation performance drops significantly by 40.9%, 43.5%, and 34.6% on COSNet [63], MATNet [20], and FSNet [64], respectively, in terms of \(\mathcal{J}\&\mathcal{F}\). This demonstrates the superiority and good transferability of our ARA attacker in an unsupervised setting.
#### IV-E3 Referring VOS Setting
Table IV shows the segmentation performance of several referring VOS models attacked by our attacker on the A2D Sentences [54] test set. This setting requires natural language to guide the segmentation, and perturbations are added to all the video frames.
According to the table, the segmentation performance drops greatly by 23.4%, 27.4%, 32.3%, 33.2%, and 30.7% respectively for RefVOS [67], MTTR [68], and ReferFormer-T/S/L [69] in terms of the mean IoU. This suggests that our ARA attacker is also strong on referring VOS models, as the perturbed video frames successfully fool the model to make incorrect predictions on a large number of pixels.
### _Ablation Study_
All ablation studies are conducted by attacking the semi-supervised VOS model, i.e., STCN [33], on the DAVIS2017 validation set.
**HRL Backbone.** To investigate the influence of the backbone on the HRL, several popular deep neural networks are compared in Table V, including VGG16 [71], MobileNetV3-large [72], InceptionV3 [73], DenseNet121 [74], and ResNet-18/34/50 [21], which are pre-trained on the ImageNet [75] database. The last fully-connected layer and pooling layer of these neural networks are removed to derive the feature map of the input frame. Besides the common evaluation metrics, some statistics of these backbones are shown, such as the model parameters in #Params (Million), the computational complexity in GFLOPs (Giga Floating Point Operations) with a \(480\times 854\times 3\) gradient map as the input, and the average time to attack a video in seconds. Among the backbones, ResNet18 achieves the best attack performance with the fastest speed (5.8 s per video) and a modest number of parameters. The larger backbones like InceptionV3 and ResNet50 are more difficult to optimize because there are too many parameters to learn, leading to inferior attack results.
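For reference, the backbone truncation described above can be obtained as follows; this is a sketch, and the exact layer handling in the released code may differ.

```python
import torch.nn as nn
from torchvision.models import resnet18

def build_hrl_backbone():
    """ResNet18 with the final pooling and fully-connected layers removed,
    so that it outputs a spatial feature map instead of a class vector."""
    net = resnet18(pretrained=True)  # ImageNet weights
    # Keep everything up to (and including) the last residual stage.
    return nn.Sequential(*list(net.children())[:-2])

# A 480x854x3 gradient map then yields a coarse (stride-32) feature map.
```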
**Attack Variant.** Table VI presents the attack performance of the adversarial example generated by the HRL with several gradient-based attack methods, including PI [39], VMI [52], and PGD [7]. Our attacker takes advantage of unifying both PGD and the HRL, and it degrades the segmentation performance the most by 4.1% in terms of \(\mathcal{J}\&\mathcal{F}\), as indicated by the bottom row in Table VI. Meanwhile, when using the HRL for
PI and VMI, the attack performances are improved by 1.6% and 1.9%, respectively, which justifies the good transferability of our HRL.
**Hardness Loss**. This paper adopts the Mean Squared Error (MSE) as the hardness loss function, and Table VII shows the attack performance of other loss functions, such as Mean Absolute Error (MAE) and CE. Among the three losses, MSE leads to the worst segmentation results when attacking the VOS model, which indicates it is the best choice for computing the hardness loss of the gradient map. Compared with MAE, the MSE function is smoother and easier to optimize.
**Norm of Gradient Map**. Table VIII shows the impacts of different norms applied to the gradient map \(\mathbf{G}_{r}\), including \(\ell_{\infty}\)-norm, i.e., the \(\epsilon\text{sign}(\cdot)\) function in Eq. (9), \(\ell_{2}\)-norm, and \(\ell_{1}\)-norm. It can be seen that the \(\ell_{\infty}\)-norm has the most powerful attack ability on the segmentation model, and \(\ell_{2}\)-norm is slightly better than \(\ell_{1}\)-norm by perturbing the first video frame.
**Parameter Sensitivity**. When the threshold \(\alpha\) of the hardness pseudo-label map varies from \(-\log(0.1)\) to \(-\log(0.9)\), the attack results are illustrated in Fig. 2(a). The attacker performs best when \(\alpha\) takes \(-\log(0.4)\). Meanwhile, the sensitivity of the perturbation to \(\epsilon\) and \(\beta\) is investigated by increasing their values from \(1/255\) to \(128/255\), and the performances are presented in Fig. 2(b). Although the segmentation performance drops quickly once \(\epsilon\) exceeds \(32/255\), the perturbation becomes more easily visible to human eyes. Thus, following [10], this paper sets \(\epsilon\) to \(8/255\). Since the performance saturates once \(\beta\) reaches \(16/255\), \(\beta\) is set to the same value as \(\epsilon\), i.e., \(8/255\).
**Attacked Region and Frame**. To further explore our attacker, Fig. 3 illustrates the VOS performance by adding perturbations to different regions of the first frame and different numbers of frames. It can be seen from Fig. 3(a) that the segmentation performance degrades gradually when the percentage of the attacked region increases from 10% to 100%. Compared with other alternatives, such as FGSM [10], BIM [38], TI [9], and VMI [52], our ARA attacker exhibits more powerful attacking ability as indicated by the much steeper curve. As shown in Fig. 3(b), our attacker greatly degrades the VOS performance with the increasing number of frames attacked, e.g., the \(\mathcal{J\&F}\) decreases from 82.5% to 67.2% with a margin of 15.3%. Besides, BIM [38] and PGD [7] have stronger attacking power than other alternatives.
**Attack Transferability**. To explore the transferability of the attack, we choose the STCN [33] model as the surrogate, which generates adversarial examples to attack other VOS models, including STM [19] and HMMN [35]. The transferring attack results are shown in Table IX. From the table, we see that the attack transferability to other segmentation models seems weak, which might be because white-box attack methods usually depend on model gradients, and different VOS models propagate distinct gradients, leading to inferior transferability. Moreover, compared to PGD [7] and SegPGD [42], the model transferability of our ARA attacker is better. This is because our attacker focuses on the hard region area where the foreground and the background are easily confused.
Fig. 2: Parameter sensitivity analysis. (a) The label threshold; (b) The perturbation parameter.

Fig. 3: Attacked region and frames. (a) The percentage of the attacked region; (b) The number of attacked frames.

### _Black-box Attack Results_

To investigate the performance of our black-box version, this paper compares several SOTA alternatives, including OP [76], SimBA [77], and DE (Differential Evolution) [78], to attack the first video frame against the VOS models. Among them, OP generates one-pixel adversarial perturbations based on differential evolution and requires less adversarial information, SimBA randomly samples a vector from a predefined orthonormal basis and either adds it to or subtracts it from the target frame, and DE is an approximated gradient sign method that uses differential evolution to solve the black-box adversarial attack problem by searching for the gradient sign rather than the perturbation. The results on the three benchmarks are presented in Table X.
It can be seen from Table X that our black-box ARA attacker generates consistently powerful perturbations to the VOS models by attacking only the first video frame, e.g., the VOS performance degrades by at most 6.5%, 5.8%, and 4.7%, on DAVIS2016 [22], DAVIS2017 [23], and YouTube-VOS [24], respectively. Meanwhile, our ARA algorithm leads to larger performance drops than the compared methods, demonstrating its stronger attack power.
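For intuition on how such query-based black-box baselines operate, the following is a minimal sketch in the spirit of SimBA (pixel basis, without the original query-budget bookkeeping); the `score_fn` interface and parameter values are assumptions for illustration and do not correspond to the paper's black-box ARA attacker.

```python
import torch

def simba_style_attack(score_fn, frame, eps=8/255, max_queries=10000):
    """Query a black-box model and greedily perturb single pixels.

    `score_fn(frame)` is assumed to return a scalar the attacker wants to
    minimise (e.g. the mean foreground probability of the VOS prediction).
    """
    adv = frame.clone()
    best = score_fn(adv)
    # Standard (pixel) basis directions, visited in random order.
    dims = torch.randperm(frame.numel())[:max_queries]
    for d in dims:
        for sign in (eps, -eps):
            candidate = adv.flatten().clone()
            candidate[d] = (candidate[d] + sign).clamp(0.0, 1.0)
            candidate = candidate.view_as(adv)
            score = score_fn(candidate)
            if score < best:          # keep the change only if it helps the attack
                adv, best = candidate, score
                break
    return adv
```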
### _Defense Results_
To investigate the defense performance of adversarial training, our ARA attacker and the PGD [7] attacker are taken as examples to generate attacked frames for training robust semi-supervised VOS models, STM [19] and STCN [33]. The results are presented in Table XI. In detail, perturbations are added to the first frame of the videos from the training sets of DAVIS2017 [23] and YouTube-VOS [24], and the perturbed frames are used for adversarial training to obtain robust VOS models. Note that the original pre-training with clean videos is completed before the adversarial training with attacked videos, and the perturbation parameters are kept the same as those of the attacker.
According to Table XI, the top group shows the VOS performance under adversarial training or the white-box attack, using PGD and our attacker individually. The first row shows the original segmentation performance as a baseline without any attack or defense. Rows 2 and 3 show that our attacker degrades the performance more significantly than PGD, by 2.2%; Rows 4 and 5 indicate that the VOS model performance is only slightly affected by adversarial training with either PGD or our attacker. The bottom group lists the defense results against attacks by PGD and our attacker. Rows 6 and 7 show that adversarial training with our method makes the VOS model robust against the PGD attack, and adversarial training with our attacker yields higher model robustness than training with PGD when the video frames are attacked by our ARA method, as indicated by the last two rows. Therefore, the overall defense performance of our ARA method is promising.
### _Qualitative Results_
To illustrate the attack performance, several videos were randomly chosen from DAVIS2017 [23], and the first frame of each video was attacked against the VOS model, i.e., STCN [33], which adopts ResNet as the backbone. As illustrated in Fig. 4, where each color indicates one object, it is almost impossible for human eyes to perceive the perturbation added to the adversarial example (Row 2), indicating the good imperceptibility of our attacker.
Fig. 4: The visualization of the attack results on the STCN [33] model.

For the test examples in Row 3, the VOS model obtains satisfactory segmentation results (Row 4), while the PGD [7] attacker and our ARA attacker (both white-box and black-box) successfully fool the model into making incorrect pixel predictions, indicated by the red areas in Rows 5 to 7. For the PGD attacker, the proportion of prediction-error pixels is smaller than that of our attacker, demonstrating that the developed ARA attacker produces adversarial examples with stronger attacking power. Also, for our ARA attacker, the white-box variant is stronger than the black-box one, i.e., the red error area is larger. This is because the white-box setting can employ the gradients during optimization, whereas the gradients are unavailable in the black-box setting. Moreover, our method has a satisfactory defense effect against the ARA attacker, as the red error area is greatly reduced compared to the results in the rows above.
## V Conclusion
This work explores the effects of adversarial attacks on video object segmentation. An adversarial region attacker is developed to generate adversarial examples by adding almost human-imperceptible perturbations to the first frame of the video. Meanwhile, to improve the attack power of the adversarial example, a hard region learner is introduced to derive the hardness map by using the gradients derived from the model back-propagation mechanism. This makes the perturbation emphasize the pixel areas where the foreground and the background are easily confused. Moreover, the iterative strategy is adopted to update the perturbation, thus improving the attack ability of the adversarial example. Finally, extensive experiments are conducted on three benchmarks to verify the superiority of our attacker to other adversarial attack methods on several state-of-the-art VOS models. In the future, we will investigate attacking the frames with large uncertainty to enhance the attack power.
|
2310.20326 | Erato: Automatizing Poetry Evaluation | We present Erato, a framework designed to facilitate the automated evaluation
of poetry, including that generated by poetry generation systems. Our framework
employs a diverse set of features, and we offer a brief overview of Erato's
capabilities and its potential for expansion. Using Erato, we compare and
contrast human-authored poetry with automatically-generated poetry,
demonstrating its effectiveness in identifying key differences. Our
implementation code and software are freely available under the GNU GPLv3
license. | Manex Agirrezabal, Hugo Gonçalo Oliveira, Aitor Ormazabal | 2023-10-31T10:06:37Z | http://arxiv.org/abs/2310.20326v1 | # Erato: Automatizing Poetry Evaluation
###### Abstract
We present Erato, a framework designed to facilitate the automated evaluation of poetry, including that generated by poetry generation systems. Our framework employs a diverse set of features, and we offer a brief overview of Erato's capabilities and its potential for expansion. Using Erato, we compare and contrast human-authored poetry with automatically-generated poetry, demonstrating its effectiveness in identifying key differences. Our implementation code and software are freely available under the GNU GPLv3 license.4
Footnote 4: [https://www.github.com/manexagirrezabal/erato](https://www.github.com/manexagirrezabal/erato)
Keywords:Evaluation Poetry Automatic Poetry Evaluation.
## 1 Introduction
Poem composition typically exploits several levels of language, from lexical to semantics, pragmatics, and aesthetics in general. Therefore, the evaluation of poetry is subjective and poses many challenges. However, when it comes to computer-generated poetry, shortcuts need to be taken to reach conclusions on the quality of results, i.e., how well the produced poems actually employ poetic features, how they reflect the input parameters, including the desired message (e.g., given in the form of a topic, a theme, a prompt), and how they compare to human-written poetry. Relevant aspects include the presence of a regular metre and rhymes, fluency and meaning, among others, like novelty towards an inspiration set or other creations by the same system.
The challenges of poetry evaluation have been acknowledged [5, 16], and researchers typically end up resorting to human opinions. Given the subjective nature of the goal, this is a fair decision. This adds to experiments where low correlation between human assessments and automatic metrics was noted [11]. Still, we argue that automatic metrics, depending on how they are interpreted, can at least support the creation of automatic poetry generation models.
This paper describes Erato, a framework that aims to make the evaluation of poetry easier. Having in mind that such evaluation cannot rely on a single
aspect or metric, Erato offers a set of Python scripts for assessing different complementary aspects. Some are language-specific and others are not. Some analyze a single poem independently and others are based on a set of poems. Also, the scripts can be classified according to the type of aspect of study, namely: poetic, novelty-related, lexico-semantic and fluency-related features. We further make a distinction between analyzing poems and evaluating them. While the former does not need expectations, the latter checks whether certain expectations are satisfied (e.g., are stanzas organized in a specific way? Does the poem follow a rhythmic pattern?).
Erato is open-source with a number of already implemented and ready to use scripts. The inclusion of new features is made to be straightforward, easing the addition of language/culture dependent features that one may want to analyze. Towards their adaptation to different purposes, underlying resources (e.g., lexicons or semantic models) can be changed, and the provided interfaces can be re-implemented following a set of guidelines. For illustrating what we can do with Erato, the paper further describes its usage in the analysis of human-written poems and poems automatically generated by two computational systems, in two different languages.
The paper is structured as follows: some related work on poetry evaluation is reviewed; based on previous research, we attempt to characterize good poems; we present Erato and its architecture, together with implementation details; we describe a case study involving the application of Erato to human-authored poetry and poetry by two computational systems; and we conclude by discussing possible future directions.
## 2 Related Work
Many authors in the Computational Creativity community have acknowledged the difficulty of evaluating creative outcomes [27, 21]. When assessing an artifact, one can look at its quality based on pre-established conditions, unexpectedness, reactions of the public, and so on. Researchers in this community have proposed several methods for this. Some emphasized the evaluation of creativity [24, 4, 15, 1, 11], others went more into detail, and proposed methods for evaluating poetry, specifically [18, 25, 8, 11].
Supported by the low correlation between human judgments and automatic metrics, many authors resorted to human judges [25, 13, 12], while others combined them with automatic evaluation. Perplexity was employed as a sanity check, followed by BLEU [29, 28], both computed on some reference text. Another string similarity metric, ROUGE, was used for computing novelty in generated poems [8, 7]; concepts have been assessed with the master-apprentice method [10]; and, in order to assess the impact of an input theme, semantic similarity was computed between the used theme and the titles given by humans to generated poems [7].
To the best of our knowledge, there is no framework for evaluating poetry in an automatic way. Erato aims to fill this gap, with inspiration in previous
work [8], but extending it and further releasing the scripts, so that future researchers of the field can benefit from it.
## 3 What characterizes a good poem?
Poetry is a form of literature that uses different elements of language to convey a message and a feeling. The elements of language that typically characterize poetry are rhythm, rhyme and different types of figures of speech. These usually form recurring patterns, caused by the deliberate way in which poets arrange their information. The question of whether a poem is good or not does not have a trivial answer. We believe, though, that it is possible to define features to make this question more quantifiable, to some degree. We depart from well-established features [18], and propose a similar set that we believe could be employed to assess a poem.
It is widely accepted that generated poetry should satisfy the properties of meaningfulness, grammaticality, and poeticness [18]. We address these three aspects from a practical perspective, and following more recent work [8], include a new aspect, novelty.
**Poetic features**, similar to _poeticness_[18]: Poetry is commonly arranged in a different way to prose. Common aspects to consider include the number of stanzas and their shape, often regarding the number of syllables. Apart from that, as there is a number of recurring patterns that poems follow, the analysis of rhythm, in particular stresses and feet, and rhymes constitute two valued aspects. These elements, though, should be considered with a grain of salt, as different cultures and traditions have their own aspects of interest.
**Lexico/semantic features**, related to meaningfulness [18]: Semantic features have different levels of granularity and complexity. Poems should convey a certain message. Thus, if we randomly combine a set of lines from different poems and compose a new one out of that, chances are low that a coherent and understandable message is conveyed, with a negative impact on quality. Apart from abstract semantic aspects, word choice plays a crucial role in poetry, as writers commonly resort to unusual words, often to satisfy sound related constraints. The deviation of word usage in comparison to regular language could be used as another measure of quality of a poem. This aspect would be related to the.
**Poetic fluency**, similar to _grammaticality_[18]: Checking the correctness of utterances in poetry is important, especially because the conveyed message might be affected if no proper morphology or syntax is used, but the control of poetic licenses is not straightforward. Therefore, we suggest to control this aspect by checking whether the text does "sound like poetry".
**Novelty features**: Also mentioned as _imagination_[3], we argue that novelty1 is very influential in the assessment of a poem. In poetry, it can be regarded at different levels. We may consider it within a poem, where we check whether
there is variation across lines or it may be analyzed across poems by the same author or system. If an author writes a very good poem and, every year they publish it, we can safely state that they are not creating new poems. Novelty can also consider poems in the world, i.e., if an author writes the same as another, it could be seen as plagiarism.
## 4 Erato: A framework for poetry evaluation
Erato is a framework for the automatic evaluation of poetry, having in mind poetry generators, but also applicable to human-authored poetry. It implements some ideas of previous work [8], in order to offer the evaluation of a range of relevant aspects in poems. This is useful, for instance, for developers of poetry generators, which may use Erato for assessing the results by their systems, before resorting to human evaluation. It includes the implementation of some aspects for the analysis of poetry, but its modular architecture makes the inclusion of new ones straightforward. Included aspects can be divided into four groups, described in the previous section: poetic features, novelty features, lexico/semantics, and poetic fluency.
Erato is a software package that can be called from the terminal,6 and be used to analyze or evaluate a single poem, or to analyze several poems by the same author. When one analyzes a poem, there is no specific expectation, but, for evaluation, there should be a target goal (either a specific value, or a range of acceptable values). Erato is designed in a way that, once the analyzer function is written in a script, the implementation of the evaluator is very easy. Already implemented scripts for analysis are organized in two main groups: Single poem analyzers, which analyze a poem as a single element; and global poem analyzers, which require a collection of poems. Each of these types of analyzer may then be divided into the four aforementioned aspects. Finally, some scripts are language/culture dependent, while others are not.
Footnote 6: We have an experimental version that can be used as a web application.
### General structure
When we start Erato, we are given the option of analyzing a single poem or a collection of poems. Before starting any analysis, all relevant modules are loaded. The relevant modules are specified in the modules package in the __init__.py file. In that file, two dictionaries are defined, one for single poem analyzers and another one for poem collection analyzers. The keys of each dictionary are actual aspects: poetic_features, novelty_features, fluency_features and lexsem_features and each of those would contain a list of actual Python files that perform one specific analysis. For instance, "models/lindep/lineCounter.py", "models/lindep/stanzaCounter.py" and "models/en/syllableCounter.py" are examples of already implemented poetic features.
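A minimal sketch of how this registry might look is shown below; the file names listed under the novelty, fluency, and lexico-semantic keys are illustrative assumptions, while the line, stanza, and syllable counters are the ones mentioned above.

```python
# modules/__init__.py (sketch; paths beyond those mentioned in the text are illustrative)
single_poem_analyzers = {
    "poetic_features":  ["models/lindep/lineCounter.py",
                         "models/lindep/stanzaCounter.py",
                         "models/en/syllableCounter.py"],
    "novelty_features": ["models/lindep/intraPoemNovelty.py"],
    "fluency_features": [],
    "lexsem_features":  [],
}

poem_collection_analyzers = {
    "poetic_features":  [],
    "novelty_features": ["models/lindep/interPoemNovelty.py"],
    "fluency_features": [],
    "lexsem_features":  ["models/lindep/topicRetrieval.py"],
}
```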
Each of these files should have the following structure. There has to be a class called evaluator. This class must contain two static methods: analyze and evaluate. The analyze function should return a tuple of two elements: a name for the analyzer and the actual result. The evaluate function should call the internally defined analyze function and compare its result to a given expected output.7 In the evaluate function it would be possible to define some evaluation criteria; for instance, in the previous example of line counting, we could return 1 if the number of lines is 14, and 0 if it is not.
Footnote 7: We are currently working on a generic evaluate class, especially because the evaluate function is very similar in many cases, but it is still in trial period.
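Putting this together, a minimal single-poem analyzer module might look like the sketch below; apart from the `evaluator` class and its `analyze`/`evaluate` static methods described above, the details (argument names, the default expected value) are illustrative assumptions.

```python
# Sketch of a single-poem analyzer module (e.g. a line counter).
class evaluator:

    @staticmethod
    def analyze(poem):
        """Return a (name, result) tuple for the analyzed poem."""
        n_lines = len([line for line in poem.splitlines() if line.strip()])
        return ("line-count", n_lines)

    @staticmethod
    def evaluate(poem, expected=14):
        """Compare the analysis result with an expected value (e.g. a sonnet's 14 lines)."""
        _, n_lines = evaluator.analyze(poem)
        return 1 if n_lines == expected else 0
```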
### Available modules
Erato currently includes scripts for checking some poetic features, novelty features and semantic features. We are planning to extend the fluency detector.
#### 4.2.1 Poetic features
We include a stanza, line and syllable counter, a scansion model and a rhyme checker. The syllable counter is currently implemented for English8 and a few other languages. The scansion model is only available for English.9 We perform rhyme analysis using an existing tool [22]. For each poem, we calculate: (1) the number of rhyme patterns10; (2) the ratio of rhyming lines, or rhyme richness.
Footnote 8: It is an implementation that relies on the CMU pronunciation dictionary [26].
Footnote 9: Simple model relying on lexical stress from CMU dictionary.
Footnote 10: A rhyme pattern is counted if it appears at least two times in the poem.
#### 4.2.2 Novelty features
Novelty is based on the _structure variation_ method [8]. It is analyzed through overlapping n-grams, using ROUGE [17], a common metric that evaluates how much two sentences overlap in terms of n-grams. ROUGE is computed within the poem, to inform about possible repetition inside it, but also across poems by the same author/system. We are thus able to detect whether poems are very similar to each other (i.e., if ROUGE scores are high), or if they are novel (i.e., if the scores are low). We call these two aspects intra-poem novelty and inter-poem novelty, respectively.
When we analyze novelty internally, we attempt to find whether patterns are repeated within a poem. When we do it across poems, the goal is to check how repetitive the poems are with respect to each other. We calculate the novelty of a single poem as the average ROUGE score (f1-score) of all line pairs in a single poem, except a line with itself. Following the details from [8], we calculate novelty across poems in three different ways. (1) Single string,11 (2) line by line,12 and (3) all lines.13
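As an illustration, intra-poem novelty can be computed as in the sketch below with the `rouge-score` package; the exact ROUGE variants and aggregation used in the released implementation may differ.

```python
from itertools import combinations
from rouge_score import rouge_scorer

_scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=False)

def intra_poem_novelty(poem_lines):
    """Average ROUGE-1 F1 over all distinct line pairs of a single poem.

    High values indicate repetition inside the poem; low values indicate
    higher intra-poem novelty.
    """
    pairs = list(combinations(poem_lines, 2))
    if not pairs:
        return 0.0
    scores = [_scorer.score(a, b)["rouge1"].fmeasure for a, b in pairs]
    return sum(scores) / len(scores)
```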
#### 4.2.3 Semantic features
Semantic evaluation relies on semantic textual similarity and has some resemblance to _topicality_[8]. For this, Erato expects poems to be associated with a specific topic and it performs an information retrieval task where the top-\(k\) poems for the target topic are predicted. This is evaluated in terms of the F1-score, and the main assumption is that, if the text is indeed related to a specific topic, the poem retriever should be able to perform perfectly. Therefore, the greater performance we get, the better the poems are. In the current implementation, we encode each topic and each poem with sentence-BERT (multilingual) [23],14 and then, we compute the similarity between these topics and the poems.
Footnote 14: This Transformer-model, sentence-transformers/distiluse-base-multilingual-cased-v1, performed best in a similar experiment on a subset of: [https://www.kaggle.com/datasets/michalerman/poemsdataset](https://www.kaggle.com/datasets/michalerman/poemsdataset). It may, however, be changed in the future.
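A minimal sketch of this retrieval step with the `sentence-transformers` library is shown below; the function name and the value of \(k\) are illustrative.

```python
from sentence_transformers import SentenceTransformer, util

def retrieve_top_k(topics, poems, k=10):
    """Rank poems for each topic by cosine similarity of sentence-BERT embeddings."""
    model = SentenceTransformer(
        "sentence-transformers/distiluse-base-multilingual-cased-v1")
    topic_emb = model.encode(topics, convert_to_tensor=True)
    poem_emb = model.encode(poems, convert_to_tensor=True)
    sims = util.cos_sim(topic_emb, poem_emb)      # (num_topics, num_poems)
    k = min(k, len(poems))
    return sims.topk(k, dim=1).indices            # top-k poem indices per topic
```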
### Extending Erato for specific purposes
One of the main advantages of using Erato as a framework is how simple it is to extend it for specific purposes. Suppose that we want to adopt a more elaborate syllable counter for English. To incorporate it, the first step is to get a template of a module, available in a provided file15. In that file, we need to implement the function analyze, and the produced output should be returned as a tuple, where the first element includes a string with the performed analysis (e.g., _syllable-count_) and the second element contains the output (in this case, the number of syllables). Finally, the current file should be linked, as mentioned before, in the __init__.py file from the modules package.
Footnote 15: [https://github.com/manexagirrezabal/erato/tree/master/models/modeltemplate.py](https://github.com/manexagirrezabal/erato/tree/master/models/modeltemplate.py)
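For example, a drop-in replacement module with a crude heuristic syllable counter could look like the following sketch; a dictionary-based counter, as in the built-in English module, would be more accurate, and the heuristic here is only an illustrative assumption.

```python
import re

class evaluator:
    """Replacement English syllable-counter module following the template."""

    @staticmethod
    def analyze(poem):
        # Crude heuristic: count vowel groups per word.
        words = re.findall(r"[a-zA-Z']+", poem.lower())
        n_syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w))) for w in words)
        return ("syllable-count", n_syllables)

    @staticmethod
    def evaluate(poem, expected):
        _, n = evaluator.analyze(poem)
        return 1 if n == expected else 0
```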
## 5 Case Study: Human and Machine Poetry
As Erato can be used to analyze poetry, we can use its results as a method for understanding the differences between different types of poetry. To illustrate what we can do with Erato, we conducted a simple experiment, where we use it for analyzing and comparing poetry produced by humans and by machines, in English and Spanish. The following subsections introduce the setup of this experiment and the result of the analysis.
### Computer-generated poetry
For computer-generated poems, we resorted to two available APIs: PoeTryMe [6], for a system that generates poems in Portuguese, Spanish and English; and OpenAI's GPT316[2], a large language model that can be used for generating text given specific prompts. We created poems using the same seed words as in
previous work [8],17 and added three new seeds (virus, pandemic and facemask) to see how the models behave with current topics. From each API, we generated 10 poems for each seed word and for each language.
Footnote 17: The seeds were: _love_, _artificial_, _blue_, _sing_, _computer_, _build_, _football_, _read_, _new_, _poetry_.
For PoeTryMe, we used a surprise factor of 0.005 and requested always a poem with the structure of a sonnet, using the target seed. For OpenAI, we used the Davinci engine18 with a temperature of 0.7, and we set the maximum number of tokens to 300, which was more or less what we expect a sonnet to have. As the GPT3 model is not trained to generate poetry, we used a prompt requesting a sonnet in English or Spanish, respectively "_Write a sonnet about_" and "_Escribe un soneto sobre_", followed by the seed word.
Footnote 18: [https://beta.openai.com/docs/engines/gpt-3](https://beta.openai.com/docs/engines/gpt-3)
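For illustration, the generation call can be sketched as follows using the legacy (pre-1.0) OpenAI completion API that was available at the time; the engine identifier, key handling, and error handling are simplified assumptions.

```python
import openai  # legacy (<1.0) completion API

# openai.api_key must be set beforehand, e.g. from an environment variable.

def generate_sonnet(seed, language="en"):
    prompt = {"en": "Write a sonnet about ",
              "es": "Escribe un soneto sobre "}[language] + seed
    response = openai.Completion.create(
        engine="davinci",
        prompt=prompt,
        temperature=0.7,
        max_tokens=300,
    )
    return response["choices"][0]["text"].strip()
```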
### Human-written poetry
Poems by well-known authors were also used in the experiment. For English, we created a corpus with poems by William Shakespeare, Emily Dickinson and Edgar Allan Poe. For Spanish, we selected a number of poems from the Spanish Golden Age. This subset was based on previously obtained author clusters [19], but only a small number was selected, to have a comparable size.
### Analysis
We used Erato for analyzing the computer-generated and the human-written poems. Considering what is currently implemented, we discuss the poetic, the novelty, and the semantic features below. The included visualizations refer only for the English data.
On the stanza structure, PoeTryMe seems to follow the sonnet pattern exactly, meaning that each poem has three stanzas with four lines each, except the last one with two lines, as can be seen in Figure 1(a). The same happens for both Spanish and English. As metre is not explicitly controlled by GPT3, and as the model itself is not specifically designed for metrical poetry generation, these numbers vary greatly in GPT3's output. Some poems contain a single stanza with several lines, while others are composed of a number of independent lines (as if each stanza had a single line). Human poems in Spanish are sonnets, so the stanzas follow the exact same structure (i.e., two stanzas with four lines and two with three lines). English poems by Shakespeare (only sonnets, 14 lines) and by Emily Dickinson (generally 3-6 stanzas with 4 lines each) have a stable stanza structure, while Poe's work is more variable in this regard. This can be observed in Figure 1(b).
On the number of syllables, in Figure 1(c), we can observe that the majority of lines produced by PoeTryMe follow a very strict metre in terms of the number of syllables. Human poets and GPT3 produce freer verse.
We also analyzed the differences in rhymes by checking their number in each poem. Figure 2(a) shows how many different rhyme patterns appear per poem on average, and Figure 2(b) shows the ratio of rhyming lines.19 Based on rhyme richness, GPT3 poems seem to be the poorest of all, with a distribution skewed towards 0.0.
Footnote 19: 1.0 means that all lines rhyme with each other, 0.0 means that none rhyme.
We compute ROUGE to measure the overlap of the poems within themselves (intra-poem20) and with other poems made by the same system/author (inter-poem). The main clear conclusion is that human authors are the least repetitive, both inside poems and across poems. We further observe that PoeTryMe in Spanish produces plenty of repetition within a poem, in comparison to other methods or languages. The average ROUGE-1 score is 0.17, higher than for English (0.05) (Figure 3). This makes sense because the size of PoeTryMe's grammar and semantic network for Spanish is much smaller [8]. A common observation is that GPT-3 tends to generate repetitive content across poems compared to human authors or PoeTryMe. However, adjusting the temperature parameter could potentially reduce this effect, although it may also impact the overall quality of the generated poems. Additionally, further refinement of the prompt engineering process could lead to more diverse and unique outcomes.
Footnote 20: For illustration, Figure 3 shows what the intra-poem ROUGE results look like. Footnote 21: Poems about each topic are organized in folders and topics are given as text.
With regards to semantic evaluation, we retrieved poems for each topic, given a predefined list.21 Macro F1-scores were between 0.7 and 0.8 for both GPT3 and PoeTryMe. For the former, the semantic model was especially good at distinguishing poems about "blue" (F1-score=0.95). For the same word, in the Spanish
PoeTryMe, the model did not guess a single case. With this evaluation, we would be able to see that PoeTryMe has limitations for generating poems on certain topics. It produces generally well-sounding poems, but this can come at the expense of less accurate semantics. Something similar happens with the word _facemask_ in the English version of PoeTryMe. Further analysis of precision and recall could shed more light on the underlying reasons for this behaviour. We additionally performed some basic analysis at the type/token level and observed the following Type/Token Ratios for the three poetry sources: 0.130, 0.257 and 0.237 for GPT3, PoeTryMe and humans, respectively. We can say that when GPT3 is required to write a sonnet, it resorts to similar words. PoeTryMe compares well to humans in this aspect.
## 6 Conclusion and Future directions
We present Erato,22 a framework for the automatic analysis and evaluation of poetry. It comes with a number of already implemented modules, and the addition of new ones is straightforward. We invite researchers working in the automatic generation of poetry to use Erato as a midway step to check how their systems work before resorting to human evaluators. In a case study, we mentioned the output of some of the metrics. We argue that there is no perfect metric, but the more metrics we employ, the better understanding of the poems we get. Thus, a sufficiently large number of automatic metrics should provide a sufficiently good understanding of quality in poetry.
Footnote 22: [https://github.com/manexagirrezabal/erato](https://github.com/manexagirrezabal/erato)
Figure 3: Boxplot of Intra ROUGE scores for each of the three different authors.

Now that Erato is available with different metrics, it is possible to analyze how different sets of parameters affect the poems in different dimensions. For instance, a possibility is to change the temperature parameter in GPT3 or the surprise factor in PoeTryMe and to look at how different metrics, such as novelty, rhymes or semantics, are affected. Besides, some of the metrics presented here can be used as fitness functions for an evolutionary poetry generation model.
Many aspects could be further developed. The current implementation of Erato does not include any visualization mechanism, but we are planning to include this as part of the first release so that results are more interpretable. Besides, at the current stage, when evaluation is performed, Erato only accepts equality as a condition. For example, in some poetic traditions, the number of syllables in a line does not need to be an exact number, but it needs to be within a range of numbers. We expect to soon accommodate this type of issue. Furthermore, we have an experimental version that allows using Erato as a web application, which allows us to reach a wider audience (e.g. people without programming experience).
We are also planning to implement a fluency detector, based on a Large Language Model. We are very aware that this will be very dependent on the type of corpus we use for fine-tuning, and because of that we intend to use a corpus of poetry that is as varied as possible, for instance [14, 20].
When computing novelty metrics, as there might be several files, the computation can become extremely resource-intensive, since we compare all poems with all others. To make this more efficient, we are considering undersampling methods which, instead of going through all lines and all files, will focus on a random selection of them.
## Acknowledgements
This work was partially supported by the EU-funded Marie Sklodowska-Curie Action project EA-Digifolk, Grant agreement ID 101086338.
|
2309.07822 | CATfOOD: Counterfactual Augmented Training for Improving Out-of-Domain
Performance and Calibration | In recent years, large language models (LLMs) have shown remarkable
capabilities at scale, particularly at generating text conditioned on a prompt.
In our work, we investigate the use of LLMs to augment training data of small
language models~(SLMs) with automatically generated counterfactual~(CF)
instances -- i.e. minimally altered inputs -- in order to improve
out-of-domain~(OOD) performance of SLMs in the extractive question
answering~(QA) setup. We show that, across various LLM generators, such data
augmentation consistently enhances OOD performance and improves model
calibration for both confidence-based and rationale-augmented calibrator
models. Furthermore, these performance improvements correlate with higher
diversity of CF instances in terms of their surface form and semantic content.
Finally, we show that CF augmented models which are easier to calibrate also
exhibit much lower entropy when assigning importance, indicating that
rationale-augmented calibrators prefer concise explanations. | Rachneet Sachdeva, Martin Tutek, Iryna Gurevych | 2023-09-14T16:16:40Z | http://arxiv.org/abs/2309.07822v3 | # CATFOOD: Counterfactual Augmented Training for Improving Out-of-Domain Performance and Calibration
###### Abstract
In recent years, large language models (LLMs) have shown remarkable capabilities at scale, particularly at generating text conditioned on a prompt. In our work, we investigate the use of LLMs to augment training data of small language models (SLMs) with automatically generated counterfactual (CF) instances - i.e. minimally altered inputs - in order to improve out-of-domain (OOD) performance of SLMs in the extractive question answering (QA) setup. We show that, across various LLM generators, such data augmentation consistently enhances OOD performance and improves model calibration for both confidence-based and rationale-augmented calibrator models. Furthermore, these performance improvements correlate with higher diversity of CF instances in terms of their surface form and semantic content. Finally, we show that CF augmented models which are easier to calibrate also exhibit much lower entropy when assigning importance, indicating that rationale-augmented calibrators prefer concise explanations.1
Footnote 1: We make our code available at: github.com/CATFOOD
## 1 Introduction
Since their introduction to the field of NLP, large language models (LLMs) have shown exceptional performance across a wide array of applications Devlin et al. (2019); Brown et al. (2020); Wei et al. (2022); _inter alia_). LLMs have frequently been utilized to enhance reasoning capabilities of smaller models Li et al. (2022), generate counterfactuals (CF) - minimally perturbed input instances - for data augmentation Fryer et al. (2022); Paranjape et al. (2022), and have shown remarkable generalization capabilities, performing well on a wide variety of tasks such as question answering (QA), complex reasoning, and code generation Wei et al. (2022); Black et al. (2022); Touvron et al. (2023). On the other hand, comparatively small language models (SLMs) such as BERT Devlin et al. (2019) perform well on task specific data but their performance drops with a change in the data distribution Koh et al. (2021); He et al. (2023) and they are frequently poorly calibrated, exhibiting under- or overconfidence in their predictions Desai and Durrett (2020); Kong et al. (2020); Guo et al. (2021); Jiang et al. (2021). In our paper, we examine how data augmentation with CFs of varying diversity improves out-of-domain (OOD) performance and model calibration of SLMs. For comparability to previous work, we perform our experiments in the extractive QA domain, but we believe our findings could generalize to other tasks, given the remarkable versatility exhibited by LLMs Wei et al. (2022).
To alleviate the issue of poor OOD performance for QA, recent works have resorted to augmenting training data with _counterfactual_ instances automatically generated by LLMs Paranjape et al. (2022). Training on CF augmented data reduces model reliance on spurious features, which in turn improves generalizability Sen et al. (2021). While Paranjape et al. (2022) fine-tune a T5-based model to generate counterfactual instances with
Figure 1: An illustration of the counterfactual samples (purple) for the input question (green) produced by the RGF baseline and our approaches using LLMs. While RGF produces a question closely related to the input, LLMs generate more diverse questions with respect to surface form and semantic content.
their Retrieve-Generate-Filter (RGF) approach, we leverage a range of more powerful LLMs such as Flan-UL2 (Tay et al., 2022) and LLaMA (Touvron et al., 2023), which we prompt in one-shot manner to generate CFs. Owing to the extensive training of these LLMs on diverse data, coupled with their enhanced generative capabilities, we hypothesize they will produce counterfactual instances _more diverse_ with respect to their surface form and semantic content, covering a broader part of the input space, further improving robustness of the base model to spurious correlations and bridging the data distribution gap to OOD datasets. A demonstration of diverse CF instances produced by our approach is shown in Figure 1, showing variations in focus, temporality, specificity, and domain knowledge.
In other work, Ye and Durrett (2022) leverage features from _rationales_, explanations of the inner decision making process of the model, to train a calibrator model - a simple classifier to predict whether the base model is correct or not. We hypothesize that CF augmented models possess more precise explanations of their decisions, as they are forced to consolidate the more complex discrepancies between instances and their CFs, which should in turn provide better information to the calibrator model. To better investigate the connection between model explanations and calibrator performance, we introduce semantic features - dense representations of the most important tokens from explanations - to calibrator models, consider a wider range of explainability methods, and measure whether characteristics of explanations - such as _comprehensiveness_ and _sufficiency_(Chrysostomou and Aletras, 2022) are indicative of the models' calibration performance.
In our work, we present the first systematic and comprehensive study on the effect of diverse CFs for augmenting SLMs with respect to their OOD performance, explanation quality and calibration performance. Our experiments show that:
1. _More diverse CFs improve OOD performance and model calibration in extractive QA by a large margin;_
2. _Introducing rationale semantics from CF augmented models to calibrators improves calibration performance;_
3. _Rationale augmented calibrators prefer concise and informative explanations._
## 2 Related Work
### Counterfactual Generation
Counterfactual instances have demonstrated their importance in evaluating the OOD generalization capabilities of LLMs (Bowman and Dahl, 2021) and in augmenting training data (Longpre et al., 2021). One major downside of works which tackle CF generation (Kaushik et al., 2020; Khashabi et al., 2020; Ribeiro et al., 2020) has been the prohibitive requirement for human annotators, which would manually perturb data instances to generate CFs - a setup both expensive and difficult to scale.
With the improvements brought forward by LLMs, the idea of automatically generating CFs with generative models has gained significant traction. In the QA scenario, Ye et al. (2021) and Longpre et al. (2021) generate counterfactuals by substituting entity names with other plausible entity names. However, this approach requires heuristic methods or human re-labeling to derive the resulting label changes. More recent work (Paranjape et al., 2022) focuses on creating fluent, and automatically labeled CFs with minimal human supervision. However, their method requires fine-tuning models for both question generation and answering which restricts the diversity of generated CFs to only what exists within the fine-tuning dataset. On the other hand, our methodology utilizes LLMs pretrained on vast array of datasets that enables us to generate CFs with a broader range of knowledge and linguistic nuances, surpassing the limitations posed by fine-tuning on specific datasets. In summary, no previous work explores the relationship between diversity of generated CFs and OOD performance, which we investigate in the scope of our work.
### Model Calibration
Estimating the uncertainty of SLMs is challenging due to limited training data available, especially under OOD settings (Desai and Durrett, 2020; Guo et al., 2021). While prior approaches to model calibration have used "meta-features" based on model confidence (Kamath et al., 2020) and input representations (Zhang et al., 2021), these techniques do not incorporate features from explanations which is the central focus of our work. In the OOD calibration scenario, recent works have explored the use of explanations during training (Li et al., 2022), and data augmentation (Park and Caragea, 2022). However, these works mostly focus on calibration
techniques, whereas token importance scores from explanations are only used for selecting data samples that improve model generalization. More recently, Ye and Durrett (2022) have studied how to improve a black box model's calibration on OOD settings by leveraging handcrafted features from model explanations Ribeiro et al. (2016); Lundberg and Lee (2017). However, the computation of handcrafted features first maps tokens to linguistic features such as POS tags, a process in which the meaning of individual tokens is lost in favor of their part of speech. In our work, we analyse the connection between explanation quality of CF augmented models and calibration performance. The questions we set out to answer are: (1) does the content of the explanation matter to the calibrator? (2) which explainer is the best at producing calibration features? and (3) do better explanations improve model calibration?
## 3 Preliminaries
### Datasets
We adopt the OOD evaluation methodology of Paranjape et al. (2022) and test our CF generation methods on seven extractive question answering datasets: SQuAD Rajpurkar et al. (2016), SQuAD-Adversarial Jia and Liang (2017), TriviaQA Joshi et al. (2017), HotpotQA Yang et al. (2018), Natural Questions (NQ) Kwiatkowski et al. (2019), NewsQA Trischler et al. (2017), BioASQ Tsatsaronis et al. (2015). For all datasets except SQuAD, we directly use the pre-processed version of the dataset from the MRQA Shared Task Fisch et al. (2019). Detailed descriptions of the datasets can be found in Appendix A.
### Setup and Base Models
Following the setup of Ye and Durrett (2022), we train a RoBERTa-base model Liu et al. (2019) on the SQuAD dataset and evaluate its OOD performance on five datasets (SQuAD-Adversarial, TriviaQA, HotpotQA, BioASQ, and NQ) and OOD calibration performance on three datasets (SQuAD-Adversarial, TriviaQA, and HotpotQA). To improve the OOD performance of our base model, we augment SQuAD data with CFs automatically generated using the following LLMs: GPT-JT (6B) and GPT-NeoxT (20B), instruction tuned versions of GPT-J Wang and Komatsuzaki (2021) and GPT-Neox Black et al. (2022), respectively; LLaMA (13B) Touvron et al. (2023), Alpaca Taori et al. (2023), Flan-T5-xxl (11B) Wei et al. (2022), and Flan-UL2 (20B) Tay et al. (2022). We obtain the Alpaca model by Low-Rank Adaptation (LoRA) Hu et al. (2022) fine-tuning the LLaMA (13B) model on the Alpaca dataset Taori et al. (2023) for 10 epochs. We select representative publicly available models from both decoder-only and encoder-decoder families and their instruction-tuned variants based on their performance, efficiency, and diversity of training data. We omit detailed model descriptions for brevity and refer the reader to Appendix B for more details.
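For context, a minimal sketch of LoRA fine-tuning with the Hugging Face peft library is shown below; the checkpoint identifier, rank, and target modules are illustrative assumptions rather than our exact configuration.

```python
# Sketch: wrap a LLaMA-style causal LM with LoRA adapters; only the adapters are trained.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("huggyllama/llama-13b")  # assumed checkpoint id
lora_cfg = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections only
)
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # only a small fraction of the 13B weights is trainable
# The wrapped model is then fine-tuned on the Alpaca instruction data for 10 epochs.
```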
Figure 2: Our proposed methodology for generating CFs from LLMs. The Solo-QAG approach (top) generates counterfactual QA pairs in a single pass while the Duo-QAG approach (bottom) first generates the question, and then the answer.
### Generating Counterfactuals
#### 3.3.1 Retrieve-Generate-Filter (RGF)
Introduced in Paranjape et al. (2022), RGF is used to create counterfactual evaluation and training instances with minimal human supervision. RGF leverages the REALM retrieval-augmented language model (Guu et al., 2020) to produce a ranked list of contexts and answers within those contexts, given a question as input. Based on this set of contexts and answers, RGF then generates questions using a T5-3B question generation model fine-tuned on the NQ dataset. Each generated question, along with its corresponding context and answer, constitutes a _counterfactual_ instance. The generated counterfactual instances are then filtered for _minimality_, to ensure they are not direct duplicates, and for _noise_, to ensure they are answerable. To improve the CF generation pipeline of the baseline RGF approach, we use an additional filtering step based on _data maps_: a model-based method which groups instances of a dataset into three categories - _easy-to-learn_, _ambiguous_, and _hard-to-learn_ (Swayamdipta et al., 2020) - based on model confidence and variability across epochs. We believe that training the base model with the filtered _hard_ samples should help in improving OOD performance since these examples are the most challenging for the model (Shrivastava et al., 2016). For further details on the filtering step, we refer the reader to Appendix D.
Footnote 2: Due to compute limitations, we use a T5-large model consisting of \(770\) million parameters.
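A minimal sketch of the data-map-based filtering, assuming per-epoch gold-answer probabilities have already been logged; the thresholds are illustrative assumptions rather than the exact values used by Swayamdipta et al. (2020).

```python
import numpy as np

def data_map_categories(epoch_probs, conf_hi=0.75, conf_lo=0.40, var_hi=0.20):
    """Split instances into easy / ambiguous / hard from their per-epoch gold-answer
    probabilities (shape: [num_instances, num_epochs]). Thresholds are illustrative."""
    confidence = epoch_probs.mean(axis=1)   # mean gold probability across epochs
    variability = epoch_probs.std(axis=1)   # spread of gold probability across epochs
    labels = np.full(len(epoch_probs), "ambiguous", dtype=object)
    labels[(confidence >= conf_hi) & (variability < var_hi)] = "easy-to-learn"
    labels[(confidence <= conf_lo) & (variability < var_hi)] = "hard-to-learn"
    return labels

# Example: keep only the hard counterfactuals for augmentation.
probs = np.random.rand(1000, 5)  # placeholder for logged per-epoch probabilities
hard_mask = data_map_categories(probs) == "hard-to-learn"
```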
#### 3.3.2 Solo-QAG and Duo-QAG
We follow the RGF methodology up to the context retrieval stage - that is, given a question, we use the REALM model to generate the candidate contexts and then use the _context selector_ to select the context for which the T5-large model generates the closest question based on the Levenshtein distance (Levenshtein, 1966). Then, given the chosen context \(\hat{c}_{i}\), our _LLM QA generator_ generates the counterfactual \((\hat{q},\hat{a})\) pair using an LLM prompted in a \(1\)-shot manner with a prompt containing the original \((q,a)\) pair. As our preliminary experiments have shown that some LLMs are better at jointly generating the question and answer, while others perform better at sequential generation, we propose the following two approaches to generating counterfactual instances:
Solo-QAG. For every context \(\hat{c}_{i}\) chosen by the _context selector_, we prompt the LLM to produce a question-answer pair (\(\hat{q}_{i}\), \(\hat{a}_{i}\)) in a single generative step. We name this approach Single-Phase Question-Answer Generation (Solo-QAG).
Duo-QAG. In this approach, we split LLM QA generation into two phases. We first generate the question \(\hat{q}_{i}\) that can be answered based on the given context \(\hat{c}_{i}\), and then use the question-context pair (\(\hat{q}_{i}\), \(\hat{c}_{i}\)) to generate an answer \(\hat{a}_{i}\). We name this approach Dual-Phase Question-Answer Generation (Duo-QAG).
We illustrate our proposed approaches in Figure 2. In both generation approaches, we prompt the LLM a maximum of three times with different random seeds until a satisfactory instance is produced (e.g. one which is not empty or excessively short). Our approach is computationally efficient, requiring a single LLM query per instance, while maintaining a high quantity of accepted generated instances, as seen in Table 5. We detail the prompts used for CF generation in Appendix C.
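The two generation modes can be sketched as follows. This is a simplified illustration: the prompt wording, the model identifier, and the retry criterion are assumptions, and the exact prompts are given in Appendix C.

```python
# Sketch of Solo-QAG (single pass) vs. Duo-QAG (question first, then answer).
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="EleutherAI/gpt-j-6b")  # placeholder LLM

def solo_qag(context, demo_q, demo_a):
    prompt = (f"Context: {context}\nExample question: {demo_q}\nExample answer: {demo_a}\n"
              f"Write one new question about this context and its answer.\nQuestion:")
    out = generator(prompt, max_new_tokens=64, do_sample=True)[0]["generated_text"]
    return out[len(prompt):].strip()  # raw "question ... Answer: ..." text, still to be parsed

def duo_qag(context, demo_q):
    q_prompt = f"Context: {context}\nExample question: {demo_q}\nNew question:"
    q = generator(q_prompt, max_new_tokens=32, do_sample=True)[0]["generated_text"][len(q_prompt):].strip()
    a_prompt = f"Context: {context}\nQuestion: {q}\nAnswer:"
    a = generator(a_prompt, max_new_tokens=16, do_sample=True)[0]["generated_text"][len(a_prompt):].strip()
    return q, a

def generate_with_retries(fn, *args, max_tries=3):
    # Re-prompt with different seeds until a non-trivial generation is produced.
    for seed in range(max_tries):
        set_seed(seed)
        out = fn(*args)
        if out and len(str(out)) > 10:
            return out
    return None
```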
As the LLM-based CF generation approaches are still prone to generating open-ended questions which cannot be answered based on information provided in the input context, we introduce filtering steps designed to ensure the high quality of generated CF instances. The first step leverages _context relevance filtering_ to identify CF questions where the corresponding input context does not provide sufficient information for an answer. Since context relevance filtering may also discard some complex but answerable questions, we further employ the round-trip consistency approach (Alberti et al., 2019; Fang et al., 2020) to retrieve incorrectly discarded samples, using an ensemble of three language models initialized with different seeds to answer the LLM-generated questions. If the answers from two or more language models agree with the LLM-generated target answer, the CF sample is retained. For the sake of space, we elaborate on the details of our approach in Appendix D.
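A sketch of the round-trip consistency filter, assuming three QA models trained with different random seeds are available locally; the model paths and the normalization-based match criterion are assumptions of this illustration.

```python
# Round-trip consistency: keep a generated (question, answer) pair only if at least
# two of three independently seeded QA models reproduce the target answer.
from transformers import pipeline

# Placeholder paths to RoBERTa QA models fine-tuned with different random seeds.
qa_ensemble = [pipeline("question-answering", model=f"./roberta-squad-seed{i}") for i in range(3)]

def normalize(text):
    return " ".join(text.lower().strip().split())

def round_trip_consistent(question, context, target_answer, min_votes=2):
    votes = sum(
        normalize(qa(question=question, context=context)["answer"]) == normalize(target_answer)
        for qa in qa_ensemble
    )
    return votes >= min_votes
```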
#### 3.3.3 Quantifying Diversity of counterfactuals
We quantitatively evaluate the _diversity_ of generated counterfactual questions with respect to the original questions along two axes: (1) _surface form variation_, measured by self-BLEU (Zhu et al., 2018) and Levenshtein edit distance, as proposed in Wu et al. (2021); and (2) _semantic variation_, measured by SBERT (Reimers and Gurevych, 2019) embedding similarity and semantic uncertainty (Kuhn et al., 2023). _Surface form variation_ metrics quantify the surface form difference between the original question and its counterfactual counterpart through n-gram and character-level overlaps. A lower self-BLEU and, conversely, a higher edit distance indicate greater surface form diversity between a question and its corresponding CF. As surface form diversity does not necessarily imply semantic difference, we also estimate _semantic variation_ through two methods. We first measure semantic similarity between a question and its counterfactual counterpart as the cosine similarity between their respective SBERT embeddings (EmbSim). We complement EmbSim by adapting a novel method of measuring semantic uncertainty (Kuhn et al., 2023). For this, we leverage a pretrained natural language inference model, in our case DeBERTa-large (He et al., 2021), and compute the bidirectional entailment (equivalence) probability between the original question and its corresponding CF. Herein, a lower equivalence score indicates lower confidence of entailment between the pair, which in turn corresponds to greater semantic variation. In Table 1, we highlight semantic variations for randomly sampled counterfactuals generated by our approach. Even in this random sample, we can observe a variety of semantic changes such as paraphrasing, metonymy, presupposition, clarification, and expansion, to name a few.
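The four diversity measures can be computed per (question, counterfactual) pair roughly as follows; the SBERT and NLI checkpoints named here are commonly used public models and stand in for the exact configuration, and pipeline behavior may differ slightly across transformers versions.

```python
# Per-pair diversity between an original question q and its counterfactual q_cf.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rapidfuzz.distance import Levenshtein
from sentence_transformers import SentenceTransformer, util
from transformers import pipeline

sbert = SentenceTransformer("all-MiniLM-L6-v2")                      # assumed SBERT checkpoint
nli = pipeline("text-classification", model="microsoft/deberta-large-mnli")

def entailment_prob(premise, hypothesis):
    scores = nli({"text": premise, "text_pair": hypothesis}, top_k=None)
    return next(s["score"] for s in scores if s["label"].upper() == "ENTAILMENT")

def diversity(q, q_cf):
    bleu = sentence_bleu([q.split()], q_cf.split(), smoothing_function=SmoothingFunction().method1)
    edit = Levenshtein.normalized_distance(q, q_cf)                  # 1.0 = maximally different
    emb = sbert.encode([q, q_cf])
    emb_sim = float(util.cos_sim(emb[0], emb[1]))
    equivalence = entailment_prob(q, q_cf) * entailment_prob(q_cf, q)  # bidirectional entailment
    return {"self_bleu": bleu, "levenshtein": edit, "emb_sim": emb_sim, "equivalence": equivalence}
```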
### Model Calibration
As the prediction probabilities of language models are often poorly calibrated, practitioners resort to model calibration - training simpler models which detect when the underlying model is faulty by producing a score that overrides the model confidence and conveys whether the original prediction is correct. Apart from the base model confidence, these models usually leverage diverse heuristic features. Following previous work (Kamath et al., 2020; Ye and Durrett, 2022), we use a random forest classifier as our calibration model. We train each calibrator model on \(500\) training data samples, changing only the input feature sets, while the correctness of the base model prediction is used as the output label. For further training details, please refer to Appendix F.2. To evaluate the quality of model calibration, we use the Macro-average Calibration Error (MacroCE) metric (Si et al., 2022), a recently proposed improvement over the Expected Calibration Error (ECE) (Guo et al., 2017). For a more in-depth discussion motivating the metric choice, see Appendix G.
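A minimal sketch of the calibration setup with synthetic placeholder features; the MacroCE helper follows our simplified reading of Si et al. (2022) and is not their reference implementation.

```python
# A random forest calibrator predicting whether the base QA model's answer is correct.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X_train = rng.random((500, 12))            # placeholder features (confidence, explanation features, ...)
y_train = rng.integers(0, 2, 500)          # 1 = base model prediction was correct
X_test, y_test = rng.random((200, 12)), rng.integers(0, 2, 200)

calibrator = RandomForestClassifier(n_estimators=200, random_state=0)
calibrator.fit(X_train, y_train)
scores = calibrator.predict_proba(X_test)[:, 1]   # calibrated correctness score per prediction

def macro_ce(confidence, correct):
    """Mean of the calibration error over correct and incorrect predictions
    (simplified reading of MacroCE, Si et al., 2022)."""
    correct = correct.astype(bool)
    ice_pos = np.mean(1.0 - confidence[correct]) if correct.any() else 0.0
    ice_neg = np.mean(confidence[~correct]) if (~correct).any() else 0.0
    return 0.5 * (ice_pos + ice_neg)

print("MacroCE:", macro_ce(scores, y_test))
```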
#### 3.4.1 Baseline
Ye and Durrett (2022) focus on calibrating black-box models with explainers based on local perturbation techniques: LIME (Ribeiro et al., 2016) and SHAP (Lundberg and Lee, 2017). Due to the large scale of our experiments and the high computational complexity of LIME, we only use their SHAP feature-based calibration technique. SHAP
[Table 1: Examples of original questions and their generated counterfactuals with the corresponding type of semantic change (columns: Model, Example (Question, Counterfactual), Semantic change); the example text is not legible in the extracted source.]
assigns an importance score to each input token reflecting its influence on the model prediction. Ye and Durrett (2022) map input tokens to linguistic features, such as POS tags, and then aggregate importance scores across all tokens assigned specific feature values, e.g. nouns. These aggregated scores are then used as input features of the calibrator model, augmenting it with the information from the explanation. A detailed overview of commonly used calibrator features is given in Appendix F.1.
#### 3.4.2 Improving Explanations for Calibrators
The calibration approach of Ye and Durrett (2022) has two main drawbacks: (1) the quality of explanations generated by explainability methods varies significantly Jain and Wallace (2019); Neely et al. (2022), and as a result, the explanations obtained from two different methods might not faithfully represent the true behavior or decision-making process of the model; (2) when relying solely on the cumulative importance of tokens sharing a specific linguistic feature (POS tag) as calibrator features, the explanation fails to convey the semantics of the important tokens to the calibrator model.
To tackle the first issue, we drop the restrictive black-box scenario and extend the scope of our evaluation to attention- and gradient-based white-box explainers, which provide a broader overview of how explanations affect calibration performance. We employ normalized attention scores (\(\alpha\)) Jain et al. (2020) and gradient-scaled attention scores (\(\alpha\nabla\alpha\)) from the attention-based family, while we consider InputXGradients (\(x\nabla x\))Kindermans et al. (2016) and integrated gradients (IG) Sundararajan et al. (2017) from gradient-based approaches. To address the second drawback, we augment calibrator models with semantic features computed from dense representations of input tokens assigned high importance by explanation methods. For each set of input tokens \(\{x_{i}\}_{i=1}^{T}\) and their corresponding importance scores \(\{p_{i}\}_{i=1}^{T}\), we select the top 10% and 20% subset of tokens from the _context_ and _answer_ tokens, respectively, based on their importance scores and then average their token representations. We take a higher percentage of answer tokens since our initial experiments indicate high correlation with calibration performance. Consequently, we exclude explanation-based features from the _question_ as we observed diminishing calibrator performance on their inclusion. We hypothesize that this decrease is due to the noise or irrelevant information introduced by the _question_ features. Such an approach would yield \(h\) new features, where \(h\) is the model dimension. As such a large number of features is a problem for the simple calibrator model, we leverage Principal Component Analysis Shlens (2014) to reduce the dimensionality to the top ten principal components.3
Footnote 3: The selection of the principal components was determined through a series of non-exhaustive experiments. We experimented with \(10\) and \(100\) features, and found that using \(10\) features yields better results.
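A sketch of how the dense rationale features could be assembled, assuming token embeddings and importance scores have already been extracted; array shapes and the instance layout are illustrative.

```python
# Dense rationale features: average the embeddings of the most important context / answer
# tokens per instance, then reduce the stacked features to 10 principal components.
import numpy as np
from sklearn.decomposition import PCA

def rationale_feature(token_embs, importance, frac):
    """Mean embedding of the top-`frac` fraction of tokens ranked by importance."""
    k = max(1, int(len(importance) * frac))
    top = np.argsort(importance)[-k:]
    return token_embs[top].mean(axis=0)

def dense_features(instances, n_components=10):
    feats = []
    for inst in instances:
        ctx = rationale_feature(inst["context_embs"], inst["context_scores"], 0.10)
        ans = rationale_feature(inst["answer_embs"], inst["answer_scores"], 0.20)
        feats.append(np.concatenate([ctx, ans]))
    feats = np.stack(feats)                                    # [num_instances, 2 * hidden_dim]
    return PCA(n_components=n_components).fit_transform(feats)

# Toy example with random embeddings and importance scores (hidden size 768 as in RoBERTa-base).
rng = np.random.default_rng(0)
toy = [{"context_embs": rng.random((50, 768)), "context_scores": rng.random(50),
        "answer_embs": rng.random((5, 768)), "answer_scores": rng.random(5)} for _ in range(20)]
calibrator_inputs = dense_features(toy)                         # shape [20, 10]
```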
We are further interested in which characteristics of explanations are indicative of the rationale-augmented calibrators' performance. To this end, we measure the _comprehensiveness_ and _sufficiency_ (DeYoung et al., 2020) of generated explanations, two metrics used to determine the influence of the rationale on a prediction. Given input tokens \(\{x_{i}\}_{i=1}^{t}\), _comprehensiveness_ masks the \(n\%\) of input tokens assigned the highest importance scores. The comprehensiveness score is then determined as the change in the prediction probability of the model for the same answer, where a large difference in the prediction score indicates that the masked rationale tokens were influential for the prediction. To estimate the degree to which extracted rationales are _sufficient_ for the model's prediction, given input tokens \(\{x_{i}\}_{i=1}^{t}\), _sufficiency_ retains only the \(n\%\) of tokens assigned the highest importance scores, masking out the rest. The sufficiency score is then determined as the change in the prediction probability of the model for the same answer. Following Carton et al. (2020) and Chrysostomou and Aletras (2022), we constrain sufficiency between \(0\) and \(1\) and report \(1-\text{suff}\) so that higher is better.
In the case of extractive QA, we do not mask (for comprehensiveness) and explicitly keep (for sufficiency) the question and answer tokens, so that the model is able to answer the input question. We report average sufficiency and comprehensiveness scores when retaining (for sufficiency) or masking (for comprehensiveness) the top \(n\in\{2\%,10\%,20\%,50\%\}\) most important tokens.
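The two faithfulness metrics can be sketched as follows; `predict_prob` stands for any function returning the model's probability of the originally predicted answer and is an assumed helper.

```python
# Comprehensiveness and sufficiency for a single instance, given token importance scores.
import numpy as np

def top_token_mask(importance, frac):
    k = max(1, int(len(importance) * frac))
    mask = np.zeros(len(importance), dtype=bool)
    mask[np.argsort(importance)[-k:]] = True
    return mask

def comp_suff(tokens, importance, keep_idx, predict_prob, frac=0.2, mask_token="[MASK]"):
    """keep_idx marks question/answer tokens that are never masked;
    predict_prob(tokens) returns the probability of the originally predicted answer."""
    base = predict_prob(tokens)
    top = top_token_mask(importance, frac) & ~keep_idx
    removed = [mask_token if m else t for t, m in zip(tokens, top)]                   # rationale removed
    kept = [t if (m or k) else mask_token for t, m, k in zip(tokens, top, keep_idx)]  # rationale only
    comprehensiveness = base - predict_prob(removed)
    sufficiency = 1.0 - float(np.clip(base - predict_prob(kept), 0.0, 1.0))           # report 1 - suff
    return comprehensiveness, sufficiency

# Toy usage with a dummy scorer that rewards unmasked tokens.
toy_prob = lambda toks: sum(t != "[MASK]" for t in toks) / len(toks)
toks = "in what year did the dam open".split()
imp = np.random.rand(len(toks))
keep = np.zeros(len(toks), dtype=bool)
print(comp_suff(toks, imp, keep, toy_prob))
```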
## 4 Experiments
### Generating Counterfactual Instances
We report an overview of the models used to generate CFs, their parameter sizes and the resulting number of generated (usable) CFs in Appendix Table 5. The Duo-QAG approach yields a significantly higher number of usable samples (~70k) compared to Solo-QAG (~50k), indicating that
the two-step approach might produce CF instances with higher fidelity. We hypothesize that the better generative abilities of the Duo-QAG approach arise from the extensive pre-training of Flan-based LLMs on question generation and question answering tasks, whereas these models are not trained specifically to produce questions and answers concurrently.
In Table 2, we report the diversity of the generated CFs with respect to both surface form and semantic variation (see Section 3.3.3). Our reference baseline quantifies the diversity of the SQuAD dataset by comparing every data sample with another random sample from the dataset, giving an upper bound for diversity under each evaluation metric. The RGF approach produces the least diverse CFs, which is expected considering that its methodology aims to generate and select CFs which deviate minimally from the input samples. In contrast to RGF, our methodology utilizes the extensive knowledge of LLMs to produce counterfactual instances that are semantically and contextually more diverse - emphasized by the low self-BLEU, SBERT similarity, and semantic equivalence scores and the high Levenshtein distance from the original instances. We hypothesize that counterfactual instances which differ more from the originals provide more valuable information to the model by improving its input space coverage, which should in turn improve OOD performance and model calibration. We experimentally verify this hypothesis in the following sections.
### Generalization of CF Augmented Models
We report the exact-match scores of the CF augmented RoBERTa-base model on six OOD datasets in Table 3. Although the F1 scores follow a similar trend, for the sake of space we defer those results to Appendix E. Models augmented with CFs generated by our approach outperform all baselines across all OOD datasets, with the exception of NewsQA.
| Approach | Model | Self-BLEU (↓) | Levenshtein (↑) | SBERT Sim. (↓) | Semantic Equivalence (↓) |
|---|---|---|---|---|---|
| Reference | – | 0.11 | 1.00 | 0.11 | 0.54 |
| RGF | T5-large | 0.30 | 0.61 | 0.55 | 0.52 |
| Solo-QAG | GPT-JT | 0.26 | 0.67 | 0.48 | 0.46 |
| Solo-QAG | LLaMA | 0.28 | 0.65 | 0.50 | 0.51 |
| Solo-QAG | Alpaca | 0.27 | 0.67 | 0.50 | 0.55 |
| Solo-QAG | GPT-NeoxT | 0.24 | 0.68 | 0.45 | 0.46 |
| Duo-QAG | Flan-T5-XXL | **0.19** | **0.71** | **0.41** | 0.41 |
| Duo-QAG | Flan-UL2 | **0.19** | **0.71** | **0.41** | **0.40** |

Table 2: Quantitative evaluation of the diversity of generated counterfactuals with respect to the original questions. The metrics are complementary – diverse CFs are expected to be further away from original instances in both surface form and meaning. To contextualize semantic and surface form variation of CFs, we contrast them to a **reference** baseline – diversity of an instance compared to a randomly selected other instance from the dataset.
| Exact Match | SQuAD | SQuAD-Adv. | TriviaQA | HotpotQA | NQ | NewsQA | BioASQ | \(G_{\text{ood}}\) |
|---|---|---|---|---|---|---|---|---|
| Base | 84.30 | 66.60 | 41.27 | 48.21 | 43.18 | 41.43 | 48.80 | – |
| RGF | **85.46** | 66.43 | 43.00 | 52.67 | 44.26 | 42.09 | 48.45 | 1.24 |
| - Easy CFs | 85.00 | 64.69 | 44.83 | 51.61 | 44.90 | 42.60 | 48.42 | 1.26 |
| - Amb. CFs | 85.14 | 65.71 | 45.16 | 51.19 | 45.15 | 42.90 | 48.87 | 1.58 |
| - Hard CFs | 85.20 | 65.70 | 45.42 | 52.48 | 46.13 | **43.28** | 49.31 | 2.14 |
| GPT-JT | 84.74 | 67.19 | 47.40 | 51.21 | 47.08 | 42.12 | 52.59 | 3.02 |
| LLaMA | 84.85 | 67.57 | **48.13** | 51.68 | 48.80 | 42.35 | 51.68 | 3.45 |
| Alpaca | 85.42 | 66.59 | 41.79 | 51.88 | 44.79 | 42.48 | 49.56 | 1.27 |
| GPT-NeoxT | 84.80 | 68.07 | 46.96 | 53.14 | 47.80 | 41.99 | **53.19** | **3.61** |
| Flan-T5-XXL | 85.41 | 67.15 | 42.91 | 53.52 | 48.05 | 42.70 | 49.29 | 2.36 |
| Flan-UL2 | 85.38 | **68.09** | 45.40 | **53.70** | **48.88** | 42.99 | 51.33 | 3.48 |

Table 3: Exact Match results for the RoBERTa-base model trained on the SQuAD dataset (Base) and augmented with counterfactual data. All results are averaged over 3 runs with different random seeds. The last column (\(G_{\text{ood}}\)) shows the average gain of models over the Base model on out-of-domain datasets. Numbers in **bold** represent the highest score for the particular dataset.
We hypothesize that this is due to the higher reasoning complexity required by NewsQA, which involves synthesizing information from multiple sentences (Trischler et al., 2017), and that the LLMs might not be able to generate such complex CF questions based on instances from the simpler SQuAD dataset. All of the CF augmented models maintain comparable performance on the in-domain SQuAD dataset, implying that training with diverse data improves OOD generalization while preserving in-domain performance.
Figure 3: Calibration results for a RoBERTa-base model trained on SQuAD and augmented with LLM generated counterfactuals in out-of-domain settings using features based on probability (conf) and heuristics from three explanation methods: shap, scaled attention and integrated gradients. All of the results are evaluated across three metrics: accuracy, AUC and inverse MacroCE. The results for conf (rows #1 and #5) are reported on the base and CF augmented models which do not use explanations – as the conf baseline uses only the model confidence as input. In the remaining experiments (other rows), along with base and fig, we report results of dense-feature augmented calibrators, as they consistently outperform their counterparts. For completeness, we provide the full results in tabular form in Appendix H.
We find that LLaMA and GPT-NeoxT, based on the Solo-QAG approach, perform best on the TriviaQA and BioASQ datasets, while the Flan-UL2 model performs best on the SQuAD-Adversarial, HotpotQA, and NQ datasets. These results show that there is no _one-model-fits-all_ strategy and that even less diverse CFs may be better suited for some datasets, while others benefit from instances spread further across the input data distribution. In addition, the size and training data of the CF generation models may also play an influential role - the larger-scale LLaMA, GPT-NeoxT and Flan-UL2 models are also the best performers, likely due to their robust generation capabilities. However, this aspect should not limit the applicability of our approach, since even the smaller GPT-JT model provides significant gains (~3 EM points over the base model) on OOD datasets.
Overall, the GPT-NeoxT CF augmented model has the highest average gain across all OOD datasets, with Flan-UL2 and LLaMA closely behind. This is largely attributed to its strong performance on the BioASQ dataset, which we hypothesize is due to its pre-training on medical data from the large-scale PubMed Central dataset (Gao et al., 2021), which comprises nearly five million biomedical publications. Our findings show that although all models consistently outperform the baselines, the best augmentation approach depends on the concrete OOD dataset, suggesting that alignment between the domain expertise of the LLMs used to generate CFs and the data distribution of the OOD datasets could be important. One potential way to achieve this alignment is to choose an LLM or a retrieval system fine-tuned on a domain closer to the target domain.
To further highlight the relationship between CF diversity and OOD performance, we compute the Spearman correlation between individual metrics capturing the surface form and semantic variation of CFs (Table 2) and the OOD performance gain after CF augmentation. The OOD performance gain is calculated by averaging over the accuracy gain of CF augmented models over the Base model (Table 3). Our analysis shows a high average correlation of \(0.55\) between the diversity metrics and OOD performance gain. Specifically, we notice a strong negative correlation between the OOD performance and self-BLEU (\(-0.52\)), SBERT similarity (\(-0.58\)), and semantic equivalence (\(-0.59\)) scores and a positive correlation between OOD performance and levenshtein distance (\(+0.49\)), showing that training with diverse CFs indeed helps the model in learning robust features that help improve its OOD performance.
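The reported correlations can be reproduced in spirit as follows; the lists below are the rounded per-model values from Tables 2 and 3, so the toy output illustrates the computation rather than the exact reported coefficients.

```python
# Spearman correlation between CF diversity metrics and the average OOD exact-match gain.
from scipy.stats import spearmanr

#             GPT-JT LLaMA Alpaca GPT-NeoxT Flan-T5-XXL Flan-UL2
ood_gain  = [3.02, 3.45, 1.27, 3.61, 2.36, 3.48]   # G_ood column of Table 3
self_bleu = [0.26, 0.28, 0.27, 0.24, 0.19, 0.19]   # Table 2
sbert_sim = [0.48, 0.50, 0.50, 0.45, 0.41, 0.41]   # Table 2

for name, metric in [("self-BLEU", self_bleu), ("SBERT sim.", sbert_sim)]:
    rho, p = spearmanr(metric, ood_gain)
    print(f"{name}: rho = {rho:.2f} (p = {p:.2f})")
```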
### Calibration
We present our calibration results on six OOD datasets in Figure 3. We compare our models against two baselines: (1) conf, where the calibrator model only uses the thresholded probability of the predicted class to assess whether the prediction is trustworthy, and (2) shap. On the conf baseline, when only the probability of the underlying model is used as input to the calibrator, our CF augmented models still enhance calibration accuracy across all OOD datasets with an average increase of \(\sim\)\(4\%\), and up to \(\sim\)\(8\%\) on the TriviaQA dataset, compared to training on SQuAD only. These results suggest that augmenting a model with counterfactual instances improves the model's capability to capture nuanced shifts in the data distribution, resulting in better-calibrated prediction probabilities. Furthermore, the improved robustness of CF augmented models is evident from the high inverse MacroCE scores on the conf baseline, where, even without features from explanations, CF augmented models exhibit the best calibration scores (\(\Delta+0.03\)) across all datasets.
On the shap baseline, the CF augmented models which incorporate features from SHAP explanations improve calibration accuracy by an average of ~\(2.5\%\) on four out of six OOD datasets, the exceptions being TriviaQA and NQ datasets where the accuracy decreases marginally. Nevertheless, the CF augmented models achieve superior AUC scores on all OOD datasets with an average improvement of \(\sim\)\(4\%\) compared to the base model without data augmentation utilizing SHAP explanation features. We attribute these improvements to the diverse CFs which improve input space coverage, enhancing the capacity of the base model by forcing it to discern nuanced differences between original instances and their counterfactual counterparts, thus also contributing to improving explanations. For completeness, we also report results based on explanations produced by attention and InputXGradients in the Appendix H.
Overall, the CF augmented models coupled with dense rationale features improve calibration over the baselines across all explanation methods and
OOD datasets, particularly on the SQuAD-Adversarial dataset. Our results show that augmenting training data with counterfactual instances improves model calibration in all scenarios, and that calibrator models benefit from the semantic content of the most important tokens from explanations.
We also compute the Spearman correlation between the diversity metrics and calibration gain. The calibration gain is calculated by averaging over the AUC gain of CF augmented models over the Base model (Figure 3). We observe a significant correlation between diversity and calibration gain, especially in the Conf baseline and Sc. Attn. explanation scenarios, where we observe a perfect linear relationship. This suggests that the model becomes more trustworthy in its predictions after CF augmentation. Further addition of our rationale features also retains the strong correlation (\(0.80\)) between the diversity and calibration performance, emphasizing the importance of diverse CFs in enhancing calibration performance.
### Desiderata of Explanations for Calibration
As we have seen that incorporating dense features from explanations improves performance of calibrator models, we are now interested to evaluate whether some underlying characteristics of explanations are indicative of their usefulness to calibrators. In Table 4, we report two metrics commonly used to estimate faithfulness of explanations - _sufficiency_ and _comprehensiveness_. The RGF approach produces the most _comprehensive_ explanations across all OOD datasets when compared to CFs generated by LLMs, while in terms of _sufficiency_, all LLM augmented CFs report higher scores compared to the RGF baseline. As _comprehensiveness_ is higher when importance scores are distributed evenly across a larger number of tokens (higher entropy), while higher _sufficiency_ means that the
\begin{table}
\begin{tabular}{l l c c c c c|c c c c c} \hline \hline & \multirow{2}{*}{Model} & \multicolumn{6}{c}{**Comprehensiveness**} & \multicolumn{6}{c}{**Sufficiency**} \\ \cline{3-11} & & \(\alpha\) & \(\alpha\nabla\alpha\) & x\(\nabla\)x & IG & SHAP & \(\alpha\) & \(\alpha\nabla\alpha\) & x\(\nabla\)x & IG & SHAP \\ \hline \multirow{4}{*}{\begin{tabular}{l} \end{tabular} } & Base & 0.33 & 0.35 & 0.37 & **0.38** & 0.34 & 0.39 & 0.39 & 0.39 & 0.39 & 0.40 \\ & RGF & **0.35** & **0.40** & **0.39** & **0.38** & **0.35** & 0.34 & 0.35 & 0.34 & 0.34 & 0.35 \\ & LLaMA & 0.34 & 0.36 & 0.36 & 0.36 & 0.33 & 0.40 & 0.41 & 0.40 & 0.40 & 0.41 \\ & GPT-Nex & 0.30 & 0.35 & 0.36 & 0.36 & 0.32 & **0.41** & **0.42** & **0.42** & **0.42** & **0.43** \\ & Flan-UL2 & 0.32 & 0.36 & 0.37 & 0.36 & 0.33 & 0.39 & 0.39 & 0.39 & 0.39 & 0.40 \\ \hline \multirow{4}{*}{\begin{tabular}{l} \end{tabular} } & Base & 0.35 & 0.35 & 0.37 & 0.39 & 0.35 & 0.56 & 0.55 & 0.56 & 0.56 & 0.54 \\ & RGF & **0.38** & **0.45** & **0.45** & **0.48** & **0.40** & 0.41 & 0.42 & 0.43 & 0.43 & 0.42 \\ & LLaMA & 0.29 & 0.33 & 0.33 & 0.34 & 0.29 & 0.57 & 0.59 & 0.58 & 0.59 & 0.58 \\ & GPT-Nex & 0.26 & 0.29 & 0.29 & 0.30 & 0.25 & **0.63** & **0.64** & **0.64** & **0.64** & **0.63** \\ & Flan-UL2 & 0.32 & 0.36 & 0.37 & 0.39 & 0.33 & 0.51 & 0.51 & 0.52 & 0.53 & 0.51 \\ \hline \multirow{4}{*}{\begin{tabular}{l} \end{tabular} } & Base & 0.29 & 0.29 & 0.32 & 0.34 & 0.31 & 0.52 & 0.52 & 0.53 & 0.53 & 0.52 \\ & RGF & **0.36** & **0.40** & **0.40** & **0.42** & **0.36** & 0.34 & 0.35 & 0.34 & 0.35 & 0.36 \\ & LLaMA & 0.30 & 0.31 & 0.31 & 0.29 & 0.51 & 0.52 & 0.51 & 0.51 & 0.51 & 0.53 \\ & GPT-Nex & 0.28 & 0.29 & 0.28 & 0.30 & 0.27 & **0.56** & **0.57** & **0.57** & **0.57** & **0.58** \\ & Flan-UL2 & 0.35 & 0.34 & 0.35 & 0.36 & 0.33 & 0.43 & 0.43 & 0.43 & 0.43 & 0.44 \\ \hline \multirow{4}{*}{\begin{tabular}{l} \end{tabular} } & Base & 0.35 & 0.34 & 0.36 & 0.37 & 0.34 & 0.55 & 0.55 & 0.55 & 0.55 & 0.55 \\ & RGF & **0.40** & **0.40** & **0.43** & **0.41** & **0.39** & 0.43 & 0.43 & 0.43 & 0.43 & 0.43 \\ & LLaMA & 0.36 & 0.35 & 0.34 & 0.35 & 0.32 & **0.56** & **0.56** & 0.55 & 0.55 & **0.55** \\ & GPT-Nexox & 0.35 & 0.32 & 0.33 & 0.34 & 0.31 & **0.56** & **0.56** & **0.56** & **0.56** & **0.55** \\ & Flan-UL2 & **0.40** & 0.37 & 0.38 & 0.38 & 0.35 & 0.49 & 0.50 & 0.49 & 0.49 & 0.49 \\ \hline \multirow{4}{*}{\begin{tabular}{l} \end{tabular} } & Base & 0.34 & 0.35 & 0.37 & 0.40 & 0.43 & 0.55 & 0.55 & 0.56 & 0.57 & 0.54 \\ & RGF & **0.39** & **0.44** & **0.43** & **0.44** & **0.51** & 0.46 & 0.47 & 0.47 & 0.47 & 0.45 \\ & LLaMA & 0.30 & 0.32 & 0.32 & 0.33 & 0.37 & 0.61 & 0.62 & 0.61 & 0.62 & 0.60 \\ & GPT-Nex & 0.28 & 0.29 & 0.30 & 0.31 & 0.35 & **0.63** & **0.64** & **0.64** & **0.64** & **0.62** \\ & Flan-UL2 & 0.32 & 0.37 & 0.37 & 0.40 & 0.45 & 0.52 & 0.54 & 0.53 & 0.54 & 0.52 \\ \hline \multirow{4}{*}{
\begin{tabular}{l} \end{tabular} } & Base & 0.32 & 0.38 & 0.38 & 0.40 & 0.35 & 0.50 & 0.51 & 0.51 & 0.51 & 0.51 & 0.50 \\ & RGF & **0.35** & **0.47** & **0.43** & **0.43** & **0.39** & 0.41 & 0.45 & 0.43 & 0.43 & 0.41 \\ & LLaMA & 0.30 & 0.34 & 0.33 & 0.34 & 0.32 & **0.57** & **0.58** & **0.57** & **0.58** & **0.57** \\ & GPT-Nex & 0.24 & 0.33 & 0.32 & 0.34 & 0.30 & 0.56 & **0.58** & **0.57** & **0.58** & **0.57** \\ & Flan-UL2 & 0.29 & 0.38 & 0.38 & 0.39 & 0.35 & 0.49 & 0.50 & 0.50 & 0.50 & 0.50 \\ \hline \hline \end{tabular}
\end{table}
Table 4: Comprehensiveness and sufficiency scores of explanations generated by baseline and counterfactual augmented models. Numbers marked in green, **bold** and red colours represent the highest and lowest scores, respectively, for the particular dataset with a corresponding model and explanation.
model assigns more importance to a smaller subset of tokens (lower entropy), we posit that higher sufficiency is preferred by calibrator models when using features from explanations. This finding is intuitive when considering the data augmentation setup - the RGF approach generates counterfactual instances which are minimally different from the originals, in which case it is expected that the same input tokens will be important. On the contrary, counterfactual instances generated by Solo- and Duo-QAG are more diverse while maintaining surface form overlap, forcing the base model to discern the key input features, thus better understanding the nuances of the task - which naturally results in improved OOD and calibration performance.
## 5 Conclusion
In our paper, we present a novel approach for automatic data augmentation with LLM generated counterfactual instances that are diverse in surface form and semantic content. Our results show that augmenting the training data of smaller models with LLM generated CFs consistently improves the generalization capabilities of the underlying models across six OOD extractive QA datasets. We further show that models trained on CF augmented data are easier to calibrate, both in the standard confidence-based setup and in the explanation-augmented calibration setup. Finally, we show that rationale-augmented calibrator models prefer concise explanations rather than comprehensive ones. By highlighting the fact that more diverse CF instances improve the quality of the models' internal representations by covering a broader part of the input space, we pave the way for future work exploring the relation between the surface form and semantic diversity of data used for augmentation and the models' generalization performance.
## Limitations
Our work concentrates only on the QA task and can be extended to other generative tasks in the future. In addition, our approach to generating CFs can be computationally expensive for very large models, and we therefore constrained ourselves to a maximum model size of 20B parameters. In the future, smaller and more efficient LLMs could make our methods even more widely applicable. For model calibration, we use SHAP explanations as baselines from prior work, which are also compute-intensive since they require many perturbations of the data. However, these compute-based limitations should not restrict the applicability of our methods, since we also show that efficient explanations based on attention and gradients can perform on par with, and sometimes even better than, SHAP.
## Ethics and Broader Impact Statement
The core of our work is based on the ability of LLMs to generate reasonable explanations, but prior works have shown that these models hallucinate and are not free from biases captured from large-scale web data. These hallucinations and biases might trickle down to SLMs as we augment them with LLM generated CF data. To overcome these issues, we design our approaches with hard and soft filtering stages that try to eliminate such noisy and biased data, and we still achieve significant improvements over existing baselines.
|